
Blood vessel network formation

Bachelor thesis

Supervisor: R.M.H. Merks

Date of Bachelor exam: 20-04-2016

Mathematical Institute, University of Leiden

(2)

Contents

Abstract

1 Introduction
1.1 PDE model of Gamba et al.
1.2 Cellular Potts model
1.3 Comparison Gamba et al. with the CPM
1.4 Goal

2 The model
2.1 Explanation of equations
2.2 Results from the paper

3 1D model - Euler Forward
3.1 Discretization
3.2 The method of Euler Forward
3.3 Simulating equation 1c
3.4 Simulating equations 1a and 1b

4 1D model - Runge-Kutta 4
4.1 The method of Runge-Kutta 4
4.2 Boundary and initial conditions
4.3 Determining the parameters
4.4 Results

5 2D model - Runge-Kutta 4
5.1 Simulating equation 1c
5.2 Simulating equation 1a
5.3 Simulating equation 1b
5.4 Results

6 Extension of the 2D model
6.1 Simulating φ(n)
6.2 Comparison model with and without pressure term
6.3 Results

7 Discussion

Bibliography

Appendix A: 1D RK4 code
Appendix B: 1D different initial conditions
Appendix C: 2D RK4 code
Appendix D: 2D different initial conditions


Abstract

This bachelor thesis is about blood vessel network formation and is based on a model from the paper 'Percolation, Morphogenesis, and Burgers Dynamics in Blood Vessels Formation' by A. Gamba et al. (March 21, 2003) [3]. In that paper, a model consisting of three partial differential equations (PDEs) is said to describe the formation of a stable blood vessel network.

There are several models that predict the formation of a blood vessel network, and it is important to keep improving them and comparing them to each other. When these comparisons are done properly, biologists know which models are most accurate for which biological aspects and can best predict the outcome of a certain experiment. They can also compare the results of their experiments to the simulations given by the different models to check whether their experiment gives proper results.

According to the paper, the cells form stable structures in the long term. We are interested in the long-term behaviour of this model because we would like to see whether these stable structures do indeed form.

To simulate the given model of PDEs, we used the program Matlab and the integration method Euler Forward (EF). Once this worked in a one-dimensional case, we extended the code and used the more complex integration method Runge-Kutta 4 (RK4), first in one dimension and later in the final two-dimensional case.

After adding a density-dependent pressure term to the model, as was done in the paper [3], the results were not stable enough to draw any hard conclusions. The cells did seem to move towards each other to form a network, but blew up within the simulated 500 seconds. This is probably due to our use of RK4, which in this setting is not an accurate enough method of numerical integration.

We can therefore conclude that the given model, extended with the pressure term, could give a credible prediction of the formation of a blood vessel network. We are, however, not certain, because of instabilities most likely caused by our integration method. Therefore, more research needs to be done with simulations in more advanced programs, using more accurate techniques than RK4.


1 Introduction

A proper blood vessel network is necessary for the existence of almost every multicellular living being. In the embryonic state, this network is formed from endothelial cells; the formation of such structures is called vasculogenesis.

Angiogenesis describes the formation of blood vessels out of existing vessels, for example when the network is damaged [7].

For science, it is interesting to look at the formation of blood vessel networks and to try to fully understand it, because it is an essential part of life without which humans could not exist. Understanding angiogenesis helps us understand tumour blood vessels and how they are formed [7]. Knowing how blood vessels are properly formed (vasculogenesis) helps in figuring out what goes wrong with deformed blood vessels.

In order to figure out how vasculogenesis, the topic of this thesis, works, scientists have come up with different models to simulate the formation of blood vessels. Depending on the model, some assumptions are required for it to simulate the process properly. Generally speaking, the more assumptions are made, the less accurate the results are.

1.1 PDE model of Gamba et al

In this thesis, we will discuss the partial differential equation (PDE) model of Gamba et al. [3]. We have chosen this particular model because we want to see whether we reach the same conclusion as the paper, namely that stable blood vessel networks are formed.

This model makes some assumptions about the biology it describes. The first of these is that cells cannot be compressed into a single point, which means there has to be a force preventing this from happening. The second assumption is the so-called inertial movement of cells: the cells keep the motion caused by an earlier stimulus [2]. A third assumption is that the cells produce a chemoattractant, which is a substance that attracts other cells. The fourth and final assumption is that the chemoattractant seeps away from the cells and is slowly inactivated by the surrounding liquids [8].

1.2 Cellular Potts model

Another model to simulate the formation of networks is the cellular Potts model (CPM) [2]. This model treats cells as tiny drops of liquid with well-defined boundaries. The simulation takes place on a grid, and each cell covers a number of grid sites. Multiple cell types can be included in the system, and the compatibility between the different kinds of cells can be assigned. This means that a cell of one type may, for example, readily sit next to a similar cell, but preferably not next to a cell of a different type.

The movement of cells, which the simulations are all about, is actually a series of small, straightforward steps. In this model, each cell is made up of a few sites in the lattice. At each step, a randomly chosen lattice site that is part of the boundary of a cell is copied to a randomly selected site directly attached to it. Of course, the real model is far more complex and there are rules governing which parts of cells will be copied and which will not.

One example is that a cell cannot simply disappear, so not all of its grid points can be covered by other cells, and it will tend to keep an approximately constant number of lattice sites. This short description does, however, give some idea of how the system works [4].

1.3 Comparison Gamba et al. with the CPM

There are different methods to simulate the blood vessel network formation and those exist for a reason. Each model simulates the consequences of a different set of assumptions, helping scientists to choose what model to use according to the goals of their simulations. Furthermore, using multiple models for the same case can help to compare the outcomes with each other.

One of the advantages of the PDE model is that the formulas are fairly easy to understand for scientists with a mathematical background. It can also be implemented in the commonly used program Matlab, with which many scientists have worked already.

An advantage of the CPM is that the cells remain distinct during the simulations and retain their own boundaries. Different kinds of cells can also be implemented, which is impossible in the PDE model. This method does, however, require a specific program for the simulations, to which the scientist needs to get used.

1.4 Goal

The simulation of the PDE model of Gamba et al. will be the main focus of this thesis. The main question is: will the PDE model give a stable blood vessel network in the long term?

This mathematical question is relevant for biological research, for many experiments are first simulated before they are executed. The models need to be as accurate as possible, so that no biological experiments are cancelled due to wrong numerical outcomes of the models. Also, being able to compare experiments to different mathematical models can help improve both.

To answer the main question we will first present the PDE model and explain the given equations. Next, we will explain how the simulation was done in one and in two dimensions. After that, we add an extra term which is introduced in the paper. Finally, we will show and discuss the results and draw some conclusions.


2 The model

The model of Gamba et al. [3] consists of three PDEs. The first describes the cell density n, simulated as a continuous density field. The second expresses the velocity ~v of the cells. The last PDE describes the amount of chemoattractant c, a soluble factor produced by the cells. The model is given by equations 1a, 1b and 1c:

\frac{\partial n}{\partial t} = -\nabla \cdot (n\vec{v})    (1a)

\frac{\partial \vec{v}}{\partial t} = -\vec{v} \cdot \nabla \vec{v} + \mu \nabla c    (1b)

\frac{\partial c}{\partial t} = D \nabla^2 c + \alpha n - \tau^{-1} c.    (1c)

2.1 Explanation of equations

Equation 1a is based on the continuity equation. This means the total amount of cells is conserved: no cells are lost or produced.

Furthermore, the cells cannot disappear in one place and instantly reappear somewhere else. This does of course depend on the boundary conditions, but the idea still holds. In the one-dimensional model the domain is a line, in the two-dimensional model a square Petri dish.

Here n is the cell density, or the amount of cells, and n~v in equation 1a is the flux of n [5]: ~v gives the direction and extent of the movement, and n is the amount of cells present at each point that can be moved. The operator ∇· means that the divergence of n~v is taken. In one dimension this reduces to ∇ · (n~v) = ∂(n~v)/∂x. There is no extra term added to the continuity equation, meaning there are no sources or sinks, so the total amount of cells stays equal.

The second PDE, 1b, describes the velocity of the cells and is based on the general Navier-Stokes equation:

\rho \left( \frac{\partial \vec{v}}{\partial t} + \vec{v} \cdot \nabla \vec{v} \right) = -\nabla p + \nabla \cdot \vec{T} + \vec{f}.    (2)

In this equation, the terms on the left-hand side form the inertia, meaning there is some delay in the change of the velocity compared to what happens in the system. The terms on the right-hand side are external forces or sinks [6]. With ρ = 1, equation 2 can be written as:

\frac{\partial \vec{v}}{\partial t} = -\vec{v} \cdot \nabla \vec{v} - \nabla p + \nabla \cdot \vec{T} + \vec{f}.    (3)

This is comparable to equation 1b, with ∇ · ~T = 0 and ~f = 0. In the Navier-Stokes equation, −∇p expresses the tendency of a liquid, or any substance, to move to places where little of the substance is present. In our case, however, we want the velocity to be aimed towards the 'crowded' places, where a relatively large number of cells is already present. Therefore, the term +µ∇c in 1b has the opposite sign of −∇p, because of their opposing effects in the equation.

Equation 1c describes the chemoattractant production of the cells and is based on the heat equation, also referred to as the diffusion equation. In one dimension, this equation reads [1]:

\frac{\partial u}{\partial t} = \alpha^2 \frac{\partial^2 u}{\partial x^2}.

The notation for any dimension is:

\frac{\partial u}{\partial t} = \alpha^2 \nabla^2 u.    (4)

When, in equation 4, u is replaced by c and α² by D, the first part of 1c is obtained. This is valid because c is, mathematically speaking, just a different variable depending on space and time. Furthermore, both α² and D are constants and therefore just another notation for the same number. This term describes the diffusion of the chemoattractant. The term +αn, one of the two additional terms in 1c, simulates the production of the chemoattractant, for it is only produced at places with a positive cell mass n. The last term, −τ⁻¹c, is added to simulate the decay of the chemoattractant, so that there is a limit to the concentration of c.

2.2 Results from the paper

The results of the simulation of equations 1a, 1b and 1c, as shown in the paper [3], are given in figure 1.

Figure 1: Numerical simulations, on a system of size L = 2 mm, obtained starting from four different values of the initial cell density: 1a 50 cells/mm², 1b 100 cells/mm², 1c 200 cells/mm², 1d 400 cells/mm². Figure obtained from ref. [3].


3 1D model - Euler Forward

Before we can start numerically integrating the PDEs 1a to 1c, we first need to discretize the time and space of the model. In reality, time and space are continuous. This is, however, impossible to simulate, whereas discrete time and space are much easier to work with.

3.1 Discretization

Before we are able to discretize the space, we need to know the shape and size of the area we look at. In the paper [3], this was a square Petri dish with sides of length 2 mm. By discretizing this area, we divide the space into small squares of equal size.

In the one-dimensional case, the area is a line of 2 mm, so discretization means dividing that line up into equally small pieces. The smaller each part is, the more accurate the simulation will be.

Given the computing power of the available computers, we could not take the steps infinitely small. Therefore, the one-dimensional line was divided into 100 pieces, each representing 0.02 mm, and the two-dimensional field into 10000 squares of 4.0 × 10⁻⁴ mm².

Since it is hard to know how large our simulations will be exactly when compared to the real experiments, we will not stick to the described scale of 0.02 mm. For example, we do not know which cell size was used in the paper. Furthermore, we start by looking at only one cell, so we might want to zoom in a little. Therefore, we let our axis go from 0 mm to 100 mm in 100 steps, keeping in mind that this is not a realistic scale.

The time discretization can be handled in the same way as the space discretization, except for the fact that we do not know in advance how long we will need to run the model in order to get proper results. In the paper, the authors say that the modelling was stopped after the simulated network was stable, but it is unclear after how many iterations this stability occurred [3]. Additionally, the sizes of the time and space steps, the number of dimensions and the chosen method are related, as will be discussed in paragraph 4.3.

3.2 The method of Euler Forward

The easiest method for numerical integration is Euler Forward (EF). It is also one of the most inaccurate techniques, so later on we will apply a somewhat more complex method. But to start with and get to know numerical integration, this technique is sufficient. EF is defined as follows [10]:

For y' = f(t, y) and y(t_0) = y_0, where t_{n+1} = t_n + h:

y_{n+1} = y_n + h f(t_n, y_n).    (5)

The space and time in equation 5 are discrete, which can be seen by the subscript n after each t and y, meaning the nth time step or the nth point in the one-dimensional space. Note that, since h is a constant independent of n, we apply equidistant numerical integration.
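To make the method concrete, here is a minimal Matlab sketch of Euler Forward applied to a single ODE. The test problem y' = -y, the step size and the final time are chosen purely for illustration and are not taken from the thesis code.

% Euler Forward for y' = f(t, y), y(t0) = y0 (illustrative example only).
f = @(t, y) -y;           % example right-hand side, assumed for illustration
h = 0.1;                  % step size (equidistant, like h in equation 5)
t = 0:h:5;                % discrete time points t_n
y = zeros(size(t));
y(1) = 1;                 % initial condition y(t0) = y0

for k = 1:numel(t)-1
    y(k+1) = y(k) + h * f(t(k), y(k));   % equation 5
end

plot(t, y);               % numerical solution, here approximating exp(-t)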

3.3 Simulating equation 1c

The PDE we start with is a simpler form of equation 1c, namely:

\frac{\partial c}{\partial t} = \nabla^2 c.    (6)

First, we will explain how a PDE like ∂c/∂t = ∇c := ∂c/∂x is discretized, after which we will continue with equation 6.

To discretize ∂c/∂t = ∇c, we use equation 5 to calculate the slope of c(x, t) twice and take the average of the two. This is because the value of the gradient at point x depends on both of the neighbouring points. Normal use of EF for calculating ∇c(x, t) only takes into account either c(x + ∆x) or c(x − ∆x). To get a more accurate answer, we use both c(x + ∆x) and c(x − ∆x) and take the average of the two separate formulas. This is shown in the following equations, where equation 9 contains the average of the slopes of 7 and 8:

\frac{\partial c(x,t)}{\partial x} = \frac{c(x + \Delta x, t) - c(x, t)}{\Delta x}    (7)

\frac{\partial c(x,t)}{\partial x} = \frac{c(x, t) - c(x - \Delta x, t)}{\Delta x}    (8)

c(x, t + \Delta t) = c(x, t) + \frac{\text{equation 7} + \text{equation 8}}{2} \cdot \Delta t
                   = c(x, t) + \frac{\frac{c(x+\Delta x,t) - c(x,t)}{\Delta x} + \frac{c(x,t) - c(x-\Delta x,t)}{\Delta x}}{2} \cdot \Delta t
                   = c(x, t) + \frac{c(x + \Delta x, t) - c(x - \Delta x, t)}{2\Delta x} \cdot \Delta t.    (9)

From this point, we will continue with equation 6, which contains ∇²c := ∂²c/∂x² in one dimension. Here, Euler has to be applied three times, in the same way as demonstrated twice in equation 9: first to calculate the slope using c(x + ∆x, t) (equation 7), then to find the gradient using c(x − ∆x, t) (equation 8), and then we use those two slopes as the values of two neighbouring points to apply Euler for the third time to obtain the second derivative. This is shown below:

c(x, t + \Delta t) = c(x, t) + \frac{\partial^2 c(x,t)}{\partial x^2} \cdot \Delta t
                   = c(x, t) + \frac{\text{equation 7} - \text{equation 8}}{\Delta x} \cdot \Delta t
                   = c(x, t) + \frac{\frac{c(x+\Delta x,t) - c(x,t)}{\Delta x} - \frac{c(x,t) - c(x-\Delta x,t)}{\Delta x}}{\Delta x} \cdot \Delta t
                   = c(x, t) + \frac{c(x + \Delta x, t) + c(x - \Delta x, t) - 2c(x, t)}{(\Delta x)^2} \cdot \Delta t.    (10)


Now that we are able to discretize equation 6, we can continue with formula 1c, which has a few additional elements in it: αn and τ⁻¹c, where α and τ are constants. Strictly speaking, in the one-dimensional case, n depends on t and x. However, we only look at the equation for c for now and thus exclude the PDE for n, and therefore any changes in the field of n, so we can assume n is a constant as well.

Under these assumptions, αn is a constant and therefore easy to implement in the Matlab code. The term τ⁻¹c = τ⁻¹c(x, t) can also be added directly, because τ is a strictly positive constant and c(x, t) is calculated in the time step prior to the time step in which it is used.

This yields the following discretization of equation 1c, using formula 10:

c(x, t + \Delta t) = c(x, t) + \left( D \cdot \frac{c(x + \Delta x, t) + c(x - \Delta x, t) - 2c(x, t)}{(\Delta x)^2} + \alpha n - \tau^{-1} c(x, t) \right) \cdot \Delta t.    (11)

Once n(x, t) is considered a variable instead of a constant parameter, we can simply replace αn by αn(x, t).
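As an illustration, the update in equation 11 can be vectorized in Matlab in a few lines. This is only a sketch of the idea, not the thesis code from appendix A; the periodic wrap-around (introduced in paragraph 4.2) and the parameter values (the ones chosen later in paragraph 4.3) are included here only so the snippet runs.

% One Euler Forward step of equation 11 for c on a 1D grid with periodic wrap.
dx = 1; dt = 0.5;                  % illustrative step sizes (see table 3)
D = 4/9; alpha = 0.01; tau = 3600; % illustrative parameter values (see table 3)
nx = 100;
c = zeros(1, nx);                  % chemoattractant field c(x, t)
n = 0.01 * ones(1, nx);            % cell density field, here taken constant

lap = (circshift(c, -1) + circshift(c, 1) - 2*c) / dx^2;  % discrete second derivative
c = c + (D*lap + alpha*n - c/tau) * dt;                   % equation 11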

3.4 Simulating equations 1a and 1b

The simulation of the first two PDEs of the model can be done in the same way as the third equation, 1c, was handled in equation 11. First, we will explain how they are numerically integrated with EF. After that, RK4 is applied, as explained in the next chapter. The final one-dimensional code can be found in appendix A.

To start with PDE 1a, it contains the term ∇ · (n~v). This first derivative in the one-dimensional case can be calculated in the same way as in equation 9, where c is replaced by n~v. This leads to the following expression:

n(x, t + \Delta t) = n(x, t) - \frac{n(x + \Delta x, t)\, v(x + \Delta x, t) - n(x - \Delta x, t)\, v(x - \Delta x, t)}{2\Delta x} \cdot \Delta t.    (12)

The same can be done for equation 1b, which becomes

v(x, t + \Delta t) = v(x, t) + \left( \mu \cdot \frac{c(x + \Delta x, t) - c(x - \Delta x, t)}{2\Delta x} - v(x, t) \cdot \frac{v(x + \Delta x, t) - v(x - \Delta x, t)}{2\Delta x} \right) \cdot \Delta t.    (13)
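Putting equations 11, 12 and 13 together, one Euler Forward time step for the coupled 1D system could look as follows in Matlab. This is a simplified sketch: periodic boundaries are handled here with circshift for brevity, whereas the thesis code in appendix A copies boundary values explicitly (see paragraph 4.2).

% One Euler Forward step for the coupled 1D system (equations 11, 12, 13).
function [n, v, c] = ef_step_1d(n, v, c, dx, dt, D, mu, alpha, tau)
    % shifted copies of the fields: values at x + dx and x - dx
    nR = circshift(n, -1); nL = circshift(n, 1);
    vR = circshift(v, -1); vL = circshift(v, 1);
    cR = circshift(c, -1); cL = circshift(c, 1);

    dn = -(nR.*vR - nL.*vL) / (2*dx);                 % equation 12
    dv = mu*(cR - cL)/(2*dx) - v.*(vR - vL)/(2*dx);   % equation 13
    dc = D*(cR + cL - 2*c)/dx^2 + alpha*n - c/tau;    % equation 11

    n = n + dn*dt;
    v = v + dv*dt;
    c = c + dc*dt;
end

Calling this function repeatedly advances all three fields by ∆t per call.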


4 1D model - Runge-Kutta 4

Now that we have seen how numerical integration works by using Euler Forward, we can continue with the more accurate method Runge-Kutta 4 (RK4). It is a commonly used method and it is based on Euler Forward, which makes implementing it easier than it would have been if it were a completely different method.

4.1 The method of Runge-Kutta 4

The method of RK4 is defined as follows [10]:

For y' = f(t, y) and y(t_0) = y_0:

y_{n+1} = y_n + \frac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4)    (14)

with the predictors k_1 to k_4 given by

k_1 = h f(t_n, y_n)    (15a)
k_2 = h f\left(t_n + \tfrac{1}{2}h,\; y_n + \tfrac{1}{2}k_1\right)    (15b)
k_3 = h f\left(t_n + \tfrac{1}{2}h,\; y_n + \tfrac{1}{2}k_2\right)    (15c)
k_4 = h f(t_n + h,\; y_n + k_3).    (15d)

In the Matlab code based on this method, first all k_i, i = 1, ..., 4, are calculated.

The basis for this code is the application of EF, which is the main result of chapter 3.

The first step in RK4 is to apply the formulas we found for EF (equation 12 for the cell distribution, 13 for the velocity and 11 for the distribution of the chemoattractant) to define k1 for the three different equations (for n, v and c). We use the notation k1n for k1 over the cell distribution field n, etc.

k1n(x) is equal to the time derivative of n, so

k_{1n}(x) = \frac{\partial n(x,t)}{\partial t} = -\frac{n(x + \Delta x, t)\, v(x + \Delta x, t) - n(x - \Delta x, t)\, v(x - \Delta x, t)}{2\Delta x},

which can be found in equation 12. It only depends on the space coordinate x and not on the time t, since k1n is defined on every point of the (in this case one-dimensional) Petri dish, and it is only used in one time step. Therefore it is unnecessary to store k1n for every time step. The same formula and explanation hold, of course, also for k1v and k1c.

Next, we create a new field for n, namely n1, by n1(x) = n(x, t) + k1n(x) · ½∆t. This field is what EF would give when the gradient k1n is followed over half a time step.


Then

k_{2n}(x) = \frac{\partial n_1(x)}{\partial t} = -\frac{n_1(x + \Delta x)\, v_1(x + \Delta x) - n_1(x - \Delta x)\, v_1(x - \Delta x)}{2\Delta x},

which can be calculated similarly to k1n, using the field n1(x) instead of n(x, t). We again create a new field for the cell distribution, namely n2(x) = n(x, t) + k2n(x) · ½∆t. In this case, again a field at time t + ½∆t is calculated, but now with a more accurate gradient than k1n. Now

k_{3n}(x) = \frac{\partial n_2(x)}{\partial t} = -\frac{n_2(x + \Delta x)\, v_2(x + \Delta x) - n_2(x - \Delta x)\, v_2(x - \Delta x)}{2\Delta x},

and n3(x) = n(x, t) + k3n(x) · ∆t describes the cell distribution at time t + ∆t using the gradient k3n. This gives us our final gradient:

k_{4n}(x) = \frac{\partial n_3(x)}{\partial t} = -\frac{n_3(x + \Delta x)\, v_3(x + \Delta x) - n_3(x - \Delta x)\, v_3(x - \Delta x)}{2\Delta x}.

Now that we have calculated all kin(x), i = 1, ..., 4, we can apply equation 14, which results in:

n(x, t + \Delta t) = n(x, t) + \frac{1}{6}\left(k_{1n}(x) + 2k_{2n}(x) + 2k_{3n}(x) + k_{4n}(x)\right) \cdot \Delta t.

Of course, this method only works if the 'new fields' are calculated for all three variables at once, meaning that first n1, v1 and c1 have to be calculated before n2, v2 and c2 can be calculated, and so on. As the calculations for v and c are similar to those for n, we do not elaborate on them.

The code for RK4 in one dimension can be found in appendix A.
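For reference, the RK4 stepping described above can be written compactly on top of the EF right-hand side from chapter 3. The sketch below is a condensed illustration, not the appendix A code: it assumes periodic wrap-around via circshift and collects the discretized right-hand sides of equations 1a-1c in a helper function rhs.

% One RK4 time step for the coupled 1D system, following equations 14-15.
function [n, v, c] = rk4_step_1d(n, v, c, dx, dt, D, mu, alpha, tau)
    f = @(n, v, c) rhs(n, v, c, dx, D, mu, alpha, tau);

    [k1n, k1v, k1c] = f(n, v, c);
    [k2n, k2v, k2c] = f(n + 0.5*dt*k1n, v + 0.5*dt*k1v, c + 0.5*dt*k1c);
    [k3n, k3v, k3c] = f(n + 0.5*dt*k2n, v + 0.5*dt*k2v, c + 0.5*dt*k2c);
    [k4n, k4v, k4c] = f(n + dt*k3n,     v + dt*k3v,     c + dt*k3c);

    n = n + dt/6 * (k1n + 2*k2n + 2*k3n + k4n);   % equation 14
    v = v + dt/6 * (k1v + 2*k2v + 2*k3v + k4v);
    c = c + dt/6 * (k1c + 2*k2c + 2*k3c + k4c);
end

% Discretized right-hand sides of equations 1a-1c (central differences,
% periodic wrap), as derived in chapter 3.
function [dn, dv, dc] = rhs(n, v, c, dx, D, mu, alpha, tau)
    nR = circshift(n, -1); nL = circshift(n, 1);
    vR = circshift(v, -1); vL = circshift(v, 1);
    cR = circshift(c, -1); cL = circshift(c, 1);
    dn = -(nR.*vR - nL.*vL) / (2*dx);                 % equation 1a
    dv = mu*(cR - cL)/(2*dx) - v.*(vR - vL)/(2*dx);   % equation 1b
    dc = D*(cR + cL - 2*c)/dx^2 + alpha*n - c/tau;    % equation 1c
end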

4.2 Boundary and initial conditions

The paper uses periodic boundary conditions, so we will use these as well, to make the final comparison as fair as possible. Periodic boundary conditions are not really realistic in this case, for cells that move to one side and 'slide off' the Petri dish cannot re-enter the space from the other side; in fact, they cannot even leave the one-dimensional line or two-dimensional square. The reason why periodic boundaries are used is probably that they are fairly easy to simulate.

Now, let L_X be the total length of the space, divided into L_X/∆x + 1 points with mutual distance ∆x. Then all points from x = ∆x up to x = L_X − ∆x have two direct neighbours. The two end points, x = 0 and x = L_X, each have only one neighbour, and thus it is hard to calculate any of the derivatives at these points. In order to tackle this problem, we define our actual field as the line between x = ∆x and x = L_X − ∆x, so the derivatives at all actual points can be calculated straightforwardly.

To satisfy the boundary conditions, we define x = ∆x and x = L_X − ∆x to be neighbouring points. Therefore, at x = 0 we copy the values of x = L_X − ∆x, and at x = L_X the values of x = ∆x. See figure 2 for an illustration of how this works.

Figure 2: Illustration of the partition of the space, showing the neighbours of each point in space.
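In code, this copying of boundary values amounts to two assignments per field at every time step. A minimal sketch, with an illustrative field layout that is not taken from appendix A:

% Periodic boundary conditions via ghost points, as illustrated in figure 2.
LX = 100; dx = 1;
c = rand(1, LX/dx + 1);   % field on x = 0, dx, ..., LX (indices 1 .. LX/dx + 1)

% the actual field lives on x = dx .. LX - dx; the two end points are ghosts:
c(1)   = c(end-1);        % x = 0 takes the value of x = LX - dx
c(end) = c(2);            % x = LX takes the value of x = dx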

Now that we have determined the boundary conditions of the model [3], we can continue with the initial conditions.

At the beginning of the experiment, the cells have not yet excreted the chemoattractant nor have they developed any velocity. Therefore, the initial conditions for these two are: c(x, 0) = 0 and v(x, 0) = 0.

The initial cell distribution is slightly more complicated. First, we start with a cell distribution similar to a normal distribution. This somewhat mimics the shape of a cell. Later on, we can obviously expand the distribution to more than one cell. Something else we do is give a single cell (normal distribution) a constant speed. We also simulate a homogeneous distribution and see what happens when we disturb it with a sinusoid. The code for the different types of initial conditions in one dimension can be found in appendix B.
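The exact code is in appendix B; as an indication, the initial conditions described above could be set up along the following lines. The grid, the value 0.01 for the homogeneous case and the 0.001 perturbation amplitude are taken from the text (paragraphs 4.3 and 4.4); the remaining numbers are illustrative guesses.

% Illustrative 1D initial conditions (the actual code is in appendix B).
LX = 100; dx = 1;                    % space from 0 to 100 mm in steps of 1 mm
x  = 0:dx:LX;
sigma = 6;                           % mean cell radius used in the thesis (table 3)

% (1) one cell: a normal distribution centred in the middle of the space
n_single = exp(-(x - LX/2).^2 / (2*sigma^2)) / (sigma*sqrt(2*pi));

% (2) homogeneous distribution (the value 0.01 is used in paragraph 4.4)
n_hom = 0.01 * ones(size(x));

% (3) homogeneous distribution perturbed by a small sine of amplitude 0.001;
%     the baseline n0 is an assumed value, not taken from the thesis
n0 = 0.1;
n_sine = n0 + 0.001 * sin(2*pi*x/LX);

% chemoattractant and velocity start at zero in all cases
c0 = zeros(size(x));
v0 = zeros(size(x));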

4.3 Determining the parameters

The values of the parameters according to the paper [3] can be found in table 1.

Parameter        Value           Value in standard units
µ                1               1
D                10⁻⁷ cm²/s      10⁻⁵ mm²/s
α                1               1
τ                1 h             3600 s
σ                30 µm           3 · 10⁻² mm
r₀ = √(Dτ)       200 µm          2 · 10⁻¹ mm
∆t               unknown         unknown
∆x               unknown         unknown

Table 1: Values of the parameters in the paper [3].

For most parameters, the unit is given and can thus be converted into the standard units: seconds for time and millimetres for space. Two parameters, however, are determined experimentally and have no given unit. These are the parameters µ and α, which are both set equal to 1. Since the other parameters use different units for space and time, it is very hard to figure out the unit of µ and α. Therefore we use the ratios of table 1 and run our Matlab code (see appendix A) with different values of α and µ to find which combination gives the best results.

We have chosen ∆x = ∆y = 1 mm and ∆t = 1/2 s. These need to satisfy the inequality D∆t/(∆x)² ≤ 1/4, which is a requirement for equation 1c to be stable in two dimensions [5]. We need this because we want to extend this one-dimensional model to two dimensions in chapter 5. The stability condition in one dimension is D∆t/(∆x)² ≤ 1/2 [5], and this automatically holds when the parameters satisfy the two-dimensional condition. The 2D inequality, for which ∆x = ∆y is required, becomes:

\frac{D\Delta t}{(\Delta x)^2} = \frac{\tfrac{4}{9} \cdot \tfrac{1}{2}}{1^2} = \frac{2}{9} \leq \frac{1}{4},

and thus our choices for ∆t, ∆x and ∆y are sufficient. In this chapter, focusing on the one-dimensional model, ∆y is not used.

Since our space division is quite coarse, with steps of size 1 mm, we want to choose the mean cell radius σ larger than the 3 · 10⁻² mm from the paper. We set σ = 6 mm to be able to see changes in the cells.

We decide to leave τ = 3600 s, since this parameter stands for the characteristic degradation time of soluble mediators, which does not change because the chemoattractant is still the same.

Now we determine the value of r₀. In the paper, the ratio r₀/σ = (2 · 10⁻¹)/(3 · 10⁻²) = 20/3, and we want to preserve this ratio in our choice of parameters. Therefore, we get r₀ = (20/3) · σ = (20/3) · 6 = 40 mm. Next, we use the equation r₀ = √(Dτ) to find the parameter D. This gives us D = r₀²/τ = 40²/3600 = 4/9 mm²/s.

All these values can be found in table 3.

µ        α        Moment the cell distribution becomes negative    Moment the cell mass is no longer constant
1.5      1.5      t = 11.0 s       t = 20.0 s
1        1        t = 15.0 s       t = 30.5 s
0.5      0.5      t = 24.5 s       t = 68.5 s
0.1      0.1      t = 80.5 s       t = 641.0 s
0.05     0.05     t = 138.0 s      t = 939.0 s
0.01     0.01     t = 522.5 s      t = 2428.0 s
0.005    0.005    t = 961.5 s      t = 4235.0 s
0.001    0.001    t = 4272.0 s     t = 17993.5 s

Table 2: Results of the implementation of different values of µ and α.

We finally need to determine values for µ and α. To do this, we first set all the other parameters as described above. Then we try different values for µ and α, always taking µ = α, since that was also the case in the paper [3]. We then run the Matlab code from appendix A and record, for two events, the moment at which each happens.

The first is the moment the cell distribution, which started as one normal distribution in the middle of the space with σ = 6 mm, gets a negative value at any place. We store this moment in time, since from this moment on the simulation is no longer reliable, as the cell distribution can never become negative. This happens for all values we tried for µ and α, for according to equations 1a-1c the cell tends to implode, causing an increasingly high peak which makes the simulation numerically unstable.

The second moment to keep track of is when the cell mass is no longer constant. Because of the above mentioned instability of the simulation, this is likely to happen for any choice of µ and α.

These two moments in time can be found in table 2 for different values of µ and α, where L_X was set to 102 mm.

Based on the results shown in table 2, we choose µ = α = 0.01, even though this choice does not give optimal results. This is because later on, in chapter 6, we will add an extra term to the dynamical system to improve it. In order to see improvement reasonably quickly, we do not want to have to wait for almost 36000 time steps to be calculated.

Parameter        Value
µ                0.01
D                4/9 mm²/s
α                0.01
τ                3600 s
σ                6 mm
r₀ = √(Dτ)       40 mm
∆t               1/2 s
∆x               1 mm

Table 3: Values of the parameters implemented in our Matlab code.
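As a quick sanity check, the parameter choices of table 3 and the stability condition from the start of this paragraph can be verified in a few lines of Matlab (a throwaway script, not part of the appendix code):

% Parameter values from table 3 and the 2D stability check D*dt/dx^2 <= 1/4.
mu = 0.01; alpha = 0.01; tau = 3600;   % s
sigma = 6; r0 = (20/3) * sigma;        % mm, preserving the ratio r0/sigma from the paper
D  = r0^2 / tau;                       % = 4/9 mm^2/s
dx = 1; dt = 0.5;                      % mm, s

assert(D*dt/dx^2 <= 1/4);              % 2/9 <= 1/4, so the choice is stable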

The one-dimensional Matlab code with the right parameters can be found in appendix A.

4.4 Results

The first initial condition to try is one cell, displayed by a normal distribution in the middle of the space (here: a line) with σ = 6 mm. This initial condition is shown in appendix B.

Our prediction was that the normal distribution would become narrower and higher, because the chemoattractant is only produced where the cells are (this is illustrated in figure 3c). This substance directs the velocity towards this point in space; see figure 3b, where a positive velocity points to the right and a negative one to the left. The cell distribution indeed accumulates, as is shown in figure 3a. It accumulates to such an extent that the simulation becomes unstable, causing negative points in the cell distribution, which is of course impossible in real experiments.

Something else to verify, to see how well this program simulates the real world, is the sum of all cells at each time step. Since it is impossible for cells to appear or disappear, this should be a constant line. In figure 3d we can see that for the first 600 seconds this is the case. It is not necessary to check whether the same holds for a longer period of time, because the cell distribution already has some negative parts at that time, making the simulation unrealistic anyway.

Figure 3: Initial condition for the cell distribution is a normal distribution. 3a The cell distribution for several time steps, 3b the velocity, 3c the amount of chemoattractant, 3d the integral of the cell distribution n over time. (Curves are shown for t = 0, 100, 250, 500 and 600 s.)

The next initial condition is a cell distribution consisting of three normal distributions placed next to each other, with their means at one third, one half and two thirds of the space. See appendix B for the Matlab code. Again they have no initial velocity, and the amount of chemoattractant is zero at the start.


According to the paper, when there are more peaks of cells they should move towards and interact with each other. Therefore we expect the cells to move towards each other. This is indeed what happens: the amount of chemoattractant forms one big pile over time (see figure 4c). This causes the velocity to be aimed at the centre cell, which is shown in figure 4b. Therefore, the three cells tend to move slightly towards each other. They each implode, however, before they can collide (see figure 4a).

Finally, figure 4d shows that the total cell mass stays the same, even though the graph does not show a straight line. When we take a look at the y-axis of the graph, the only number shown is 3. Furthermore, for every second between t = 0 s and t = 1018 s, Matlab reports a total cell mass equal to 3.0000. Therefore, we can neglect the shape of the graph and interpret it as a straight line in the range of graph 4d.

Figure 4: Initial condition for the cell distribution is three normal distributions. 4a The cell distribution for several time steps, 4b the velocity, 4c the amount of chemoattractant, 4d the integral of the cell distribution n over time. (Curves are shown for t = 0, 100, 250, 500 and 600 s.)

To find out how the numerical integrator handles a constant movement, we simulate a normal distribution of cells with a constant velocity (see appendix B for how this was done). We expect the shape of the normal distribution to be retained. For this simulation, we use a constant velocity of 0.02 mm/s, because it is a speed also reached in the other simulations. If we made the cells go much faster, the results would be disturbed, since the speed would make cells 'skip' certain space segments from one time step to the next. If we took a lower speed, no real movement would be visible. Also, the PDE concerning the velocity, formula 1b, is removed from the code in appendix A, so no changes in speed are allowed.

Figure 5: Initial condition for the cell distribution is a normal distribution. The initial velocity is a constant speed, which is kept during the entire simulation. 5a The cell distribution for several time steps, 5b the velocity, 5c the amount of chemoattractant, 5d the integral of the cell distribution n over time. (Curves are shown for t = 0, 100, 250, 500 and 600 s.)

As time passes, the cell distribution seems to retain its original form at a different place, as can be seen in figure 5a. The velocity (figure 5b) is of course constant. The amount of chemoattractant (figure 5c) takes the shape of the cell distribution, grows over time and remains positive, which is what we expected.

Finally, the cell mass (figure 5d) does not look like a constant line. This trend continues after 600 seconds: at t = 2500 s, the total surface covered by the cells is 0.9057, which is less than the 1 we started with.

Therefore, the Matlab simulation does not seem to be able to handle a constant movement of the cells for a longer period of time. Fortunately, this is not what happens in our final simulations, where the velocity is subject to change, and therefore we will continue with the same code we have been working with so far (see appendix A).

The next simulation tells us a lot about the accuracy of our simulations and of the equations themselves. It starts with a homogeneous cell distribution, zero velocity and no chemoattractant (see appendix B for the initial conditions). We expect the cells to stay in place and to develop no velocity, but to produce a constant amount of chemoattractant.

Figure 6: Initial condition for the cell distribution is a homogeneous distribution. 6a The cell distribution for several time steps, 6b the velocity, 6c the amount of chemoattractant, 6d the integral of the cell distribution n over time. (Curves are shown for t = 0, 100, 250, 500 and 600 s.)


To explain the results found in figure 6, we first take a look at equation 1c. Since n is constant, namely n(x, t) = 0.01 for every position x and every time step t, we can replace n in the equation by 0.01. Because the initial condition is c(x, 0) = 0 and the cell distribution is homogeneous, c(x, t) will be homogeneous at each time step as well. Therefore D∇²c(x, t) = 0 always holds. Filling in the parameters α = 0.01 and τ = 3600 s, as determined in paragraph 4.3, equation 1c becomes:

\frac{\partial c(x,t)}{\partial t} = D \nabla^2 c(x,t) + \alpha n(x,t) - \tau^{-1} c(x,t)
                                   = 0 + 0.01 \cdot 0.01 - \frac{c(x,t)}{3600}
                                   = 0.0001 - \frac{c(x,t)}{3600}.    (16)

To find an equilibrium for c(x, t), we set equation 16 equal to zero. This leads to

0 = 0.0001 - \frac{c(x,t)}{3600}, \quad \text{so} \quad c(x,t) = 0.36.

Figure 6c shows that the amount of chemoattractant is indeed homogeneous and increasing at each time step. This growth will stagnate when the value of c reaches 0.36, and c will remain constant from that moment on.

Next, consider equation 1b. The velocity starts at a value of zero, so ∇~v = 0 at time t = 0. Furthermore, as c is constant in space at any time, ∇c = 0. So all terms on the right-hand side are zero at t = 0, meaning that ∂~v/∂t = 0. This means that v(x, ∆t) = 0, from which we can again conclude that ∂~v/∂t = 0. By induction this leads to v(x, t) = 0 for every position x and all times t. This can be seen in figure 6b.

Finally, take a look at PDE 1a, the equation for the cell density at each point. The only term on the right-hand side of this PDE is −∇ · (n~v). Because v(x, t) = 0 at every place for all t, this term is zero as well. Therefore, ∂n/∂t = 0 for all t, so the graph stays the same constant line as the initial condition. This can be seen in figure 6a.

Figure 6d, the cell mass over time, is a constant line, which means no cells appear or disappear. This is exactly what we expect to happen.

The last initial condition we look at is a perturbed homogeneous condition for the cell distribution. The starting point is again a homogeneous distribution, but now perturbed with a small sine with an amplitude of 0.001. The code for this initial cell distribution can be found in appendix B. The amount of chemoattractant and the velocity start at zero again. We expect the minima and maxima in the cell distribution to stay at their original positions and to become more extreme: the minima get lower and the maxima higher. This is likely to happen, since they are positioned at equal distances from each other, so on both sides of a minimum the velocity is equal and aimed away from this minimum. At a maximum the same holds: on both sides an equal speed is aimed towards the maximum. The velocity is shown in figure 7b. The differences between minima and maxima in the cell distribution indeed also grow (see figure 7a). The amount of chemoattractant copies the shape of the cell distribution over time, which can be seen in figure 7c and which was also the case with the other cell distributions we have tried so far.

Figure 7: Initial condition for the cell distribution is a homogeneous distribution perturbed with a sine. 7a The cell distribution for several time steps, 7b the velocity, 7c the amount of chemoattractant, 7d the integral of the cell distribution n over time. (Curves are shown for t = 0, 100, 250, 500 and 600 s.)

Our final graph in figure 7, graph 7d, is not according to our predictions, nor does it meet the requirements. Even though the scale on the y-axis does not seem to display more than one number, the shape of the graph suggests that the cell mass decreases as time passes. According to Matlab, the cell mass is equal to 10.1001 for the first 909 seconds of the simulated time. After that, it slowly decreases. This suggests that this simulation is only reliable for the first 909 simulated seconds. This, however, does not seem like a big problem, because in the simulation with one cell displayed by a normal distribution (see figure 3), the results already become unstable within the shown time span of 600 seconds.

So far, we have simulated a few 'simple' initial conditions, including a cell distribution of one and of three normal distributions, a constant speed for a normal distribution, a homogeneous distribution and a homogeneous distribution perturbed with a sine. The results do not all meet our expectations, but to a large extent the graphs do show the behaviour we expect from the model they are based on [3]. Actually, only the fact that the cell mass is not always a straight line differs from our expectations (see figures 4d, 5d and 7d). The instability of some cell distributions (figures 3a and 4a) is, though physically incorrect, what we did expect to happen. Especially the accuracy of the results for the homogeneous distribution (see figure 6) makes us continue with the two-dimensional simulations in the same way as we have done the one-dimensional ones.


5 2D model - Runge-Kutta 4

Simulating the PDE’s 1a, 1b and 1c defining the model [3] in one dimension is a good way to test the Matlab code, but does not give any results that can be compared to real experiments executed on a circular or squared Petri dish. In paragraph 4.4 we have seen that the results of the simulations in one dimension are plausible, where we applied Runge-Kutta 4 (RK4) to equations 12, 13 and 11 (EF), as was described in paragraph 4.1. Now it is time to extend the code into two dimensions.

The simulation of the three equations will be done in approximately the same way as was done with the one-dimensional code. Especially formula 1c is fairly easy to extend, for it is independent of the vector ~v. The difficulty is in simulat- ing the velocity ~v, which was a vector of one component and therefore treated as a scalar in a one-dimensional case, but now becomes a two-dimensional vec- tor depending on a velocity in both the horizontal and the vertical directions, adding up to the real speed.

The same periodic poundary conditions are taken as for the one-dimensional model. Furthermore, a similar scale is used, as for now we are only checking if our application for RK4 is right and there is no use into using the same scale as in the paper [3].

5.1 Simulating equation 1c

Equation 1c is the only one of the equations describing the model that is independent of the vector ~v, so we will start by simulating this PDE. First, we will show how this formula can be discretized using Euler Forward (EF). Then we will apply RK4 in the same way as was done in paragraph 4.1, where EF is used as a basis for RK4.

PDE 1c can be written in two dimensions as follows:

\frac{\partial c}{\partial t} = D \nabla^2 c + \alpha n - \tau^{-1} c
                              = D \begin{pmatrix} \partial_x \\ \partial_y \end{pmatrix} \cdot \begin{pmatrix} \partial_x \\ \partial_y \end{pmatrix} c + \alpha n - \tau^{-1} c
                              = D \left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right) c + \alpha n - \tau^{-1} c
                              = D \left( \frac{\partial^2 c}{\partial x^2} + \frac{\partial^2 c}{\partial y^2} \right) + \alpha n - \tau^{-1} c.    (17)

In equation 10 we saw how to numerically calculate the first and second derivative of a variable, so we can directly write equation 17 in a form that can be implemented in the Matlab code (see appendix C for the code):

\partial_t c(x, y, t) = D \left( \frac{c(x + \Delta x, y, t) + c(x - \Delta x, y, t) - 2c(x, y, t)}{(\Delta x)^2} + \frac{c(x, y + \Delta y, t) + c(x, y - \Delta y, t) - 2c(x, y, t)}{(\Delta y)^2} \right) + \alpha n(x, y, t) - \tau^{-1} c(x, y, t).    (18)
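In Matlab, equation 18 maps naturally onto matrix operations. A minimal sketch, with periodic wrap-around via circshift and illustrative grid sizes (this is not the appendix C code itself), is:

% Discrete right-hand side of equation 18 on a 2D grid with periodic wrap.
% c and n are ny-by-nx matrices; dimension 1 is y, dimension 2 is x.
nx = 100; ny = 100; dx = 1; dy = 1;        % illustrative grid
D = 4/9; alpha = 0.01; tau = 3600;         % values from table 3
c = zeros(ny, nx);
n = 0.01 * ones(ny, nx);

lap = (circshift(c, [0 -1]) + circshift(c, [0 1]) - 2*c) / dx^2 ...
    + (circshift(c, [-1 0]) + circshift(c, [1 0]) - 2*c) / dy^2;
dcdt = D*lap + alpha*n - c/tau;            % equation 18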

5.2 Simulating equation 1a

In the two-dimensional case, we can split the velocity vector ~v into a horizontal and a vertical component, an x- and a y-component respectively, giving

\vec{v} = \begin{pmatrix} v_x \\ v_y \end{pmatrix}.

Equation 1a can then be written in two dimensions as follows:

\frac{\partial n}{\partial t} = -\nabla \cdot (n\vec{v})
                              = -\begin{pmatrix} \partial_x \\ \partial_y \end{pmatrix} \cdot \begin{pmatrix} n v_x \\ n v_y \end{pmatrix}
                              = -\left( \frac{\partial}{\partial x}(n v_x) + \frac{\partial}{\partial y}(n v_y) \right).    (19)

In a discretized form rather than the continuous form, equation 19 becomes:

\partial_t n(x, y, t) = -\left( \frac{n(x + \Delta x, y, t)\, v_x(x + \Delta x, y, t) - n(x - \Delta x, y, t)\, v_x(x - \Delta x, y, t)}{2\Delta x} + \frac{n(x, y + \Delta y, t)\, v_y(x, y + \Delta y, t) - n(x, y - \Delta y, t)\, v_y(x, y - \Delta y, t)}{2\Delta y} \right).    (20)

To write equation 20 we used the discretization of the first derivative in one dimension from equation 9, since both the ∂/∂x and the ∂/∂y derivative can be handled in the same way. This code can also be found in appendix C.

5.3 Simulating equation 1b

Equation 1b, the hardest equation of the three from Gamba et al. [3] to simulate, is:

\frac{\partial \vec{v}}{\partial t} = \mu \nabla c - \vec{v} \cdot \nabla \vec{v}.

Especially the term ~v · ∇~v is hard to work out. In order to be able to calculate this term, we use [9]:

\vec{v} \times \vec{\omega} = \frac{1}{2}\nabla(\vec{v} \cdot \vec{v}) - \vec{v} \cdot \nabla \vec{v},    (21)

where

\vec{\omega} = \nabla \times \vec{v}.    (22)

Since the cross product is only defined in three dimensions, we will assume ~v also has a z-direction, which we later on set equal to zero, because the velocity is only defined in the two-dimensional lattice.

First we will calculate ~ω as defined in equation 22, after which we will substitute it into equation 21.

\vec{\omega} = \nabla \times \vec{v}
             = \begin{pmatrix} \partial_x \\ \partial_y \\ \partial_z \end{pmatrix} \times \begin{pmatrix} v_x \\ v_y \\ v_z \end{pmatrix}
             = \begin{pmatrix} \partial_y v_z - \partial_z v_y \\ \partial_z v_x - \partial_x v_z \\ \partial_x v_y - \partial_y v_x \end{pmatrix}.    (23)

Next we take a closer look at equation 21. The desired term is ~v · ∇~v, so all other terms need to be calculated. We start with ~v × ~ω:

\vec{v} \times \vec{\omega} = \begin{pmatrix} v_x \\ v_y \\ v_z \end{pmatrix} \times \begin{pmatrix} \partial_y v_z - \partial_z v_y \\ \partial_z v_x - \partial_x v_z \\ \partial_x v_y - \partial_y v_x \end{pmatrix}
= \begin{pmatrix} v_y(\partial_x v_y - \partial_y v_x) - v_z(\partial_z v_x - \partial_x v_z) \\ v_z(\partial_y v_z - \partial_z v_y) - v_x(\partial_x v_y - \partial_y v_x) \\ v_x(\partial_z v_x - \partial_x v_z) - v_y(\partial_y v_z - \partial_z v_y) \end{pmatrix}
= \begin{pmatrix} v_y \partial_x v_y - v_y \partial_y v_x - v_z \partial_z v_x + v_z \partial_x v_z \\ v_z \partial_y v_z - v_z \partial_z v_y - v_x \partial_x v_y + v_x \partial_y v_x \\ v_x \partial_z v_x - v_x \partial_x v_z - v_y \partial_y v_z + v_y \partial_z v_y \end{pmatrix}.    (24)


The next part of equation 21 to calculate is ½∇(~v · ~v):

\frac{1}{2}\nabla(\vec{v} \cdot \vec{v}) = \frac{1}{2} \begin{pmatrix} \partial_x \\ \partial_y \\ \partial_z \end{pmatrix} \left( \begin{pmatrix} v_x \\ v_y \\ v_z \end{pmatrix} \cdot \begin{pmatrix} v_x \\ v_y \\ v_z \end{pmatrix} \right)
= \frac{1}{2} \begin{pmatrix} \partial_x \\ \partial_y \\ \partial_z \end{pmatrix} \left( v_x^2 + v_y^2 + v_z^2 \right)
= \frac{1}{2} \begin{pmatrix} 2 v_x \partial_x v_x + 2 v_y \partial_x v_y + 2 v_z \partial_x v_z \\ 2 v_x \partial_y v_x + 2 v_y \partial_y v_y + 2 v_z \partial_y v_z \\ 2 v_x \partial_z v_x + 2 v_y \partial_z v_y + 2 v_z \partial_z v_z \end{pmatrix}
= \begin{pmatrix} v_x \partial_x v_x + v_y \partial_x v_y + v_z \partial_x v_z \\ v_x \partial_y v_x + v_y \partial_y v_y + v_z \partial_y v_z \\ v_x \partial_z v_x + v_y \partial_z v_y + v_z \partial_z v_z \end{pmatrix}.    (25)

Now that we have all elements of formula 21 except for the desired part ~v · ∇~v, we can determine this part. We get:

\vec{v} \cdot \nabla \vec{v} = \frac{1}{2}\nabla(\vec{v} \cdot \vec{v}) - \vec{v} \times \vec{\omega}
= \text{equation 25} - \text{equation 24}
= \begin{pmatrix} v_x \partial_x v_x + v_y \partial_x v_y + v_z \partial_x v_z \\ v_x \partial_y v_x + v_y \partial_y v_y + v_z \partial_y v_z \\ v_x \partial_z v_x + v_y \partial_z v_y + v_z \partial_z v_z \end{pmatrix} - \begin{pmatrix} v_y \partial_x v_y - v_y \partial_y v_x - v_z \partial_z v_x + v_z \partial_x v_z \\ v_z \partial_y v_z - v_z \partial_z v_y - v_x \partial_x v_y + v_x \partial_y v_x \\ v_x \partial_z v_x - v_x \partial_x v_z - v_y \partial_y v_z + v_y \partial_z v_y \end{pmatrix}
= \begin{pmatrix} v_x \partial_x v_x + v_y \partial_y v_x + v_z \partial_z v_x \\ v_x \partial_x v_y + v_y \partial_y v_y + v_z \partial_z v_y \\ v_x \partial_x v_z + v_y \partial_y v_z + v_z \partial_z v_z \end{pmatrix}.    (26)

When we set the third direction z equal to zero in equation 26, we get the following two-dimensional term, which we can directly insert into formula 1b:

\vec{v} \cdot \nabla \vec{v} = \begin{pmatrix} v_x \partial_x v_x + v_y \partial_y v_x \\ v_x \partial_x v_y + v_y \partial_y v_y \end{pmatrix}.    (27)


Now we can finally work out equation 1b, inserting equation 27:

\frac{\partial \vec{v}}{\partial t} = \mu \nabla c - \vec{v} \cdot \nabla \vec{v}
= \mu \begin{pmatrix} \partial_x \\ \partial_y \end{pmatrix} c - \vec{v} \cdot \nabla \vec{v}
= \begin{pmatrix} \mu \partial_x c \\ \mu \partial_y c \end{pmatrix} - \begin{pmatrix} v_x \partial_x v_x + v_y \partial_y v_x \\ v_x \partial_x v_y + v_y \partial_y v_y \end{pmatrix},

so

\begin{pmatrix} \frac{\partial v_x}{\partial t} \\ \frac{\partial v_y}{\partial t} \end{pmatrix} = \begin{pmatrix} \mu \partial_x c - v_x \partial_x v_x - v_y \partial_y v_x \\ \mu \partial_y c - v_x \partial_x v_y - v_y \partial_y v_y \end{pmatrix}.    (28)

This gives, when equation 28 is discretized and split into a horizontal and a vertical part:

\partial_t v_x(x, y, t) = \mu \frac{c(x + \Delta x, y, t) - c(x - \Delta x, y, t)}{2\Delta x}
 - v_x(x, y, t) \frac{v_x(x + \Delta x, y, t) - v_x(x - \Delta x, y, t)}{2\Delta x}
 - v_y(x, y, t) \frac{v_x(x, y + \Delta y, t) - v_x(x, y - \Delta y, t)}{2\Delta y}    (29)

and

\partial_t v_y(x, y, t) = \mu \frac{c(x, y + \Delta y, t) - c(x, y - \Delta y, t)}{2\Delta y}
 - v_x(x, y, t) \frac{v_y(x + \Delta x, y, t) - v_y(x - \Delta x, y, t)}{2\Delta x}
 - v_y(x, y, t) \frac{v_y(x, y + \Delta y, t) - v_y(x, y - \Delta y, t)}{2\Delta y}.    (30)

These two equations, 29 and 30, together with the two-dimensional numerical integration of the cell density (20) and the chemoattractant (18), form the basis of the two-dimensional Matlab code, which can be found in appendix C. These equations are based on the numerical method of EF, but can be used to implement RK4 as was shown in paragraph 4.1.
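To illustrate how equations 18, 20, 29 and 30 fit together, a sketch of the complete 2D right-hand side is given below. As before, periodic boundaries are handled with circshift for brevity, so this is an indicative reconstruction rather than the appendix C code; it can be plugged into the RK4 stepping of paragraph 4.1.

% Discretized right-hand sides of the 2D model (equations 18, 20, 29, 30).
% All fields are ny-by-nx matrices; dimension 1 is y, dimension 2 is x.
function [dn, dvx, dvy, dc] = rhs_2d(n, vx, vy, c, dx, dy, D, mu, alpha, tau)
    right = @(A) circshift(A, [0 -1]);  left = @(A) circshift(A, [0 1]);
    up    = @(A) circshift(A, [-1 0]);  down = @(A) circshift(A, [1 0]);
    ddx   = @(A) (right(A) - left(A)) / (2*dx);   % central difference in x
    ddy   = @(A) (up(A)    - down(A)) / (2*dy);   % central difference in y

    % equation 20: continuity equation
    dn  = -(ddx(n.*vx) + ddy(n.*vy));

    % equations 29 and 30: velocity components
    dvx = mu*ddx(c) - vx.*ddx(vx) - vy.*ddy(vx);
    dvy = mu*ddy(c) - vx.*ddx(vy) - vy.*ddy(vy);

    % equation 18: chemoattractant
    lap = (right(c) + left(c) - 2*c)/dx^2 + (up(c) + down(c) - 2*c)/dy^2;
    dc  = D*lap + alpha*n - c/tau;
end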

5.4 Results

For the simulations, we use the same five situations as in the one-dimensional case (paragraph 4.4) and the same parameters as in table 3. The first describes how one normal distribution of cells behaves over time, the second simulates a number of normal distributions, the third again starts with one normal distribution as the cell distribution, but now with a constant velocity, the fourth simulates a homogeneous distribution of cells and the fifth shows how a homogeneous distribution disturbed by a sinusoid behaves.
