
Self-organized criticality and synchronisation in an interest rate swap market model

Dexter Drupsteen

August 28, 2019

Abstract

The interest rate swap (IRS) market is, by notional amount, one of the biggest over-the-counter (OTC) markets in the world¹. The discovery of the Black-Scholes formula in 1973 spawned the creation of many OTC markets, including the IRS market [4, 12]. Financial markets have gained attention in the field of complex systems for their complexity, emergent behaviour and the effects their crashes produce on everyday life. In other financial markets, behaviour such as power law distributions of market returns or of default cascade sizes, frequently attributed to self-organized criticality (SOC), has been found [1, 20, 13]. Our research question is whether the IRS market is an SOC system. We studied a slowly driven agent-based model of the IRS market, in which agents attempt to optimize a time dependent balance (hedge risk) by creating temporary links (IRSs) with other agents. We are interested in the form of the default cascade size distribution, whether there are signs of SOC, and to what extent this behaviour is tuneable. We find that the default cascade distribution does follow a power law. Moreover, the system can be pushed into a high risk state, which increases the chance of near system size events. This behaviour indicates that the model can be tuned towards a synchronisation regime in which extreme risks are more likely [20]. Our results show that in a simple model of a financial market, extreme events occur without large external influences such as interest rate shocks: by slowly accumulating stress the model creates the right conditions for large crises. Since the large crises in our model are not caused by an externally induced shock, the model adds an endogenous perspective on financial crises.

1 Introduction

Systems consisting of interacting fields or units can self-organize into a critical state. The critical state, better known from the field of phase transitions, is characterized by diverging correlations, either spatial, temporal or spatiotemporal. Small perturbations of the system can then lead to system wide events. The difference between phase transitions and SOC systems is that phase transitions have to be carefully tuned into the critical state, whereas an SOC system has the critical state as an attractor.

¹ According to the Bank for International Settlements, https://www.bis.org/statistics/


Since the end of the twentieth century a lot of research into self-organized critical systems has been done [18]. It has been suggested that earthquakes, solar flares, sandpiles and many more systems [18] show signs of self-organized criticality (SOC). Besides natural phenomena, there has also been research into the existence of SOC in financial systems and stock markets [21, 1, 13, 20, 16]. Furthermore, Per Bak mentioned SOC as a possible underlying mechanism for economic collapses [2].

Research into SOC in financial systems initially focused on the paradigmatic SOC models, such as percolation in (multidimensional) lattices [21], and more recently on network models [6]. Contrary to our approach, percolation and herd-behaviour models [13] primarily focus on price modelling and stock market returns as an SOC phenomenon [20]. Network centered models, on the other hand, primarily focus on systemic risk [6, 16, 15] and usually introduce a form of exogenous shock into the system to test systemic stability.

We therefore propose a new kind of model: a combination of the slowly driven nature of SOC models such as the Bak, Tang and Wiesenfeld sandpile model [3] and the interdependent nature of network models. We develop this model by looking at the largest over-the-counter derivatives market of the world, the interest rate swap (IRS) market. An interest rate swap is a two-sided instrument in which one party pays a floating interest rate over a notional amount to the other party, which in turn pays a fixed interest rate over the same notional amount. A swap is thus a financial instrument to exchange a fixed interest rate for a floating one, and can be used to hedge against uncertain interest rates.

The IRS market model we researched takes some characteristics of the IRS market and applies these to a pool of agents. Agents use either side of the IRS to hedge an internal “risk” balance. In this way the IRS market model still captures the hedging and connectivity characteristics of an IRS market, but leaves out the external feature of the interest rate and the accounting part of the market, namely the exchange of notional amounts.

The agents in the model can default if their internal “risk” balance deviates too much from the balanced state and hits a certain threshold. As a consequence, other agents can default due to the risk that spreads when the contracts of the initially defaulting node are cancelled, and so a default cascade arises.

In search of self-organized criticality in this particular model, we concentrated our research on the default cascade size distribution. Using simulations we investigate the model's dynamics under different configurations of its parameters. Numerous authors from various fields have stated that for a system to be SOC, the energy release of the system (in our case the default cascade size) must at least follow a power law distribution [25, 3, 24, 18]. If the model is SOC we expect it to show a power law distribution with a critical exponent as seen in other SOC models. If the model diverges from this prediction, we ask whether there are parameter configurations under which our prediction holds, and what influences it. How does the deviation from our predicted power law arise from the configurations? What are the exponents of the power laws found in our model and what influences them most? And, of course, can we conclude from the results that the system shows SOC?

To answer whether the system is SOC we look at a few key features named by Christensen and Watkins. From Christensen we learn that “Slowly driven non-equilibrium systems with threshold dynamics self-organise into a steady state, in which events of all sizes are caused by the same mechanism and decrease in frequency as a power law with size” [9]. Watkins tells us that for a system to be SOC we need three ingredients: finite size scaling, spatio-temporal power law correlations and apparent self-tuning towards the critical point [9].

Results show power laws in the default cascade size distribution. Furthermore, an even more heavy-tailed distribution (hereinafter a humped power law) arises when the model is tuned to create a dense network and to hold on to its IRSs. This humped power law distribution can be approximated by the weighted sum of a power law and a normal distribution. The power law governs small default cascades, but larger, near system wide cascades are governed by the normal distribution, implying larger risks for large cascades than for smaller ones. Furthermore, we found that the distribution is tuneable and depends heavily on the amount of risk stored in the system (in the form of IRSs).

The humped power law distribution we obtained in the default cascade size distribution is by no means unknown to science. Other systems displaying similar behaviour are the Landau-Ginzburg sandpile model [20] and a network model created by Lorimer et al. [17]. Didier Sornette calls this supra-SOC state the state of synchronisation, believes that the extreme risks come from a combination of extreme interaction (similar to Lorimer's network model) or a high homogeneity in the system, and poses the state of synchronisation as an underlying theory for financial bubble creation [20].

Since the extreme behaviour differs from what we expected, we decided to investigate under which configurations this behaviour arises. What are the driving factors behind these high risk situations?

2 Related work

The literature concerning complexity and self-organized criticality dates back to the end of the eighties, with the first paper published by Bak, Tang and Wiesenfeld [3]. SOC has since been a popular topic of interest in many fields, including physics, matter theory, sociology, astrophysics and neurology [23]. The original paper discusses self-organized criticality with the help of a one-dimensional sandpile to which grains of sand are slowly added. When this is repeated long enough, the sandpile reaches a critical state in which the average outflux of sand grains equals the average influx. Cascading events in the form of sand avalanches, occurring because the pile has become too steep, are not only local and small, but can be large, even system wide. The sandpile self-organizes into a system “where the average response diverges with system size” [9].

Regarding the workings of SOC, and as an inspiration for the model presented in this paper, we rely on Complexity and Criticality by Christensen and others [9]. A general overview of the phenomenon and some extensions are laid out in this book. We mainly focused on the random neighbour BTW model as an example for the IRS market model, since in the random neighbour BTW model sites are spatially uncorrelated, which closely matches the IRS market model with its temporary random connections.

The papers we encountered with a focus on financial markets as complex systems can be divided into two groups. The first group focuses on market returns and explains them with the help of a system displaying self-organized criticality. A good example is the work of Eguiluz and Zimmerman, which presents a self-organized model for herding behaviour and information propagation [13]. Interestingly, this paper obtains results similar to ours (i.e. a humped power law distribution of returns), through a different model. Another author, Sornette [20], focuses on the prediction of stock market crashes, and states that these can be the effect of self-organized cooperation and feedback loops in combination with herding. Both papers view stock market crashes (or financial market crashes) as having endogenous causes.

The other group focuses primarily on the network in a financial market to research systemic risk, with no particular focus on SOC. For example, the work of Borovkova [6] researches the effect of a central clearing party (CCP) in an OTC derivatives market, focusing on systemic risk and comparing the effect of introducing CCPs in a few network structures. The work of Haldane [15] also focuses on systemic risk in financial markets, and concludes that the topology of the network between the institutions has “fundamental implications for the state and dynamics of systemic risk.” In both papers the focus is primarily on the effect of the network structure, and their method for creating a crisis is introducing a shock into that network.

Our model borrows methods from all of the papers mentioned above. First of all, it uses the slowly driven nature we find in the papers regarding self-organized criticality. Secondly, we borrow the idea of [13] and [20] that financial crises may have an endogenous rather than an exogenous cause. Our model differs from the SOC focused papers in that we do not focus on SOC with regard to stock market returns, but rather on the systemic risk of collapse itself.

We also deviate from the network oriented papers. First of all, we do not enforce a network structure on the model, but rather leave it to the market (or, to be more precise, the model). Furthermore, we keep to our endogenous cause for crises and do not introduce external shocks; only a small drift of exposure to interest rates is added exogenously.

3 The model

In this Section we present our model for an IRS market. We note, however, that the model is applicable to any system in which agents have a temporary, mutually beneficial relationship, for example an abstract ecological connection between species whose stability we can then test. The IRS market is of particular interest though, because of its size and importance in the financial infrastructure, and because it demonstrates the applicability of our model to a financial system.

The model we present here is a slowly driven dynamical network of nodes (banks): it combines the slowly driven nature of traditional sandpile models with the dynamical nature of complex (financial) networks. In the random neighbour sandpile model of Christensen a sand grain is added to a random site every time step. When a site reaches a threshold it topples, transferring sand grains to randomly chosen neighbouring sites, which can lead to further topplings. This way the model is slowly driven, one sand grain at a time, through different states.

In the IRS market model the slow driving mechanism is the perturbation of the balances of the nodes in the system. In our model, the balance of a node is a measure of the node's exposure to interest rate fluctuations, not a balance in the accounting sense. The financial position represented by the balance is a risk in the portfolio of the node: having too many instruments that depend on a floating interest rate is represented by a large negative number, and having too many instruments that depend on a fixed interest rate by a large positive number. The balanced state is a perfect combination of the two; in real world markets that would represent a riskless portfolio.

The perturbation of the balance is a combination of the financial activity of the institutions, be it giving out or taking in loans, and fluctuations in the interest rate. The balance of a node is initiated at zero, the balanced state. Every time step, for every node, the balance is perturbed by ε_i ∼ N(0, σ), where we use a default σ = 1. The balance is thus a random walk whose distribution at time t is N(0, σ²t). Balances can grow large in both directions, positive or negative, but in the long run their average is zero, the balanced state. Note also that the total perturbation of the system per time step is distributed as N(0, Nσ²), where N is the number of nodes in the system.
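A minimal sketch of this driving mechanism (ours, not part of the original implementation), checking numerically that the spread of an unhedged balance grows as σ√t:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
sigma, t, n_nodes = 1.0, 2_500, 1_000

# Each node's balance is the sum of t i.i.d. N(0, sigma) perturbations.
balances = rng.normal(0.0, sigma, size=(n_nodes, t)).sum(axis=1)

print(balances.mean())  # ~ 0: the balanced state is the long-run average
print(balances.std())   # ~ sigma * sqrt(t) = 50: the spread grows as sqrt(t)
```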

In order to keep their balance at zero, nodes enter into contracts with one another to hedge their excess. These contracts or swaps have two sides: a floating side and a fixed side. Nodes who have an excess of fixed interest rate on their balance are in need of the floating side of such a swap, and the other way around. Every time step the model tries to randomly link nodes with opposite needs. A swap is created between them with a constant value I_v and the balances of the nodes are corrected by this value. Nodes do not want to gain an excess in the opposite direction when entering a swap, thus the balance must have a value larger than or equal to the swap value.

For example, take node n_1 with balance −4 and node n_2 with balance 3, and suppose a swap “hedges” a value of 3. Both nodes have an excess on opposite sides of their balance, so they decide to enter into a swap: n_1 takes the fixed side and n_2 the floating side of the instrument. The balances of n_1 and n_2, corrected by the value of the swap, are now −1 and 0 respectively. Note that after each time step we have two lists of nodes in need of an IRS. We randomly link these two lists with IRSs, which also means that if one list is longer than the other, some nodes may not get an IRS. A possible future extension of the model could be adding preferential attachment to the IRS creation step.

However, a swap is not forever: it has a certain maturity, a day when the swap ends. When a swap matures, it is removed from the system and the balances are updated accordingly. In the simulation this is done after the perturbation of the balances but before the creation of new swaps, to give nodes a chance to recover from ending swaps.

When a node's balance grows beyond a specified default threshold T, the node defaults and all of the contracts it was in are deemed void. This can happen when the number of nodes in need of one side of a swap is larger than the number of nodes in need of the other side. The removal of swaps influences the balances of neighbouring nodes; these balances might in turn exceed the default threshold, so that another node goes into default, repeating the process. This is how a default cascade spreads through the dynamical network.

Summarizing, the model has four parameters:

1. N, the number of nodes in the system.

2. τ, the term to maturity: the number of steps a single IRS remains in the model.

3. T, the threshold: the maximum balance deviation permitted before a node defaults.

4. I_v, the value of a single IRS, and also the minimum balance deviation needed to obtain an IRS.

We have implemented this model in a program written in Python; a minimal sketch of one simulation step is given below. Results from simulations are described in Section 5. In the next Section we take an analytical approach to the model.
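The following is an illustrative, self-contained sketch of the mechanics described above, not the code used for our results; the class and method names (Market, step, net) are ours:

```python
import random

class Market:
    """Minimal sketch of the IRS market model."""

    def __init__(self, n=100, tau=400, threshold=15.0, iv=6.0, sigma=1.0):
        self.n, self.tau, self.T, self.iv, self.sigma = n, tau, threshold, iv, sigma
        self.balance = [0.0] * n   # internal (gross) balance of every node
        self.swaps = []            # (fixed_side_node, floating_side_node, expiry)
        self.time = 0

    def net(self, i):
        """Gross balance of node i corrected by all IRSs it is in."""
        b = self.balance[i]
        for fixed, floating, _ in self.swaps:
            if fixed == i:
                b += self.iv       # fixed side corrects a negative excess
            elif floating == i:
                b -= self.iv       # floating side corrects a positive excess
        return b

    def step(self):
        self.time += 1
        # 1. Slow driving: perturb every balance by N(0, sigma).
        for i in range(self.n):
            self.balance[i] += random.gauss(0.0, self.sigma)
        # 2. Remove matured swaps (before creation, so nodes can recover).
        self.swaps = [s for s in self.swaps if s[2] > self.time]
        # 3. Randomly link nodes with opposite excesses of at least I_v.
        need_fixed = [i for i in range(self.n) if self.net(i) <= -self.iv]
        need_floating = [i for i in range(self.n) if self.net(i) >= self.iv]
        random.shuffle(need_fixed)
        random.shuffle(need_floating)
        for a, b in zip(need_fixed, need_floating):   # unmatched nodes get no IRS
            self.swaps.append((a, b, self.time + self.tau))
        # 4. Resolve defaults: cancel the contracts of every node whose net
        #    balance exceeds T, repeating until the cascade stops spreading.
        defaulted = set()
        while True:
            fresh = [i for i in range(self.n)
                     if i not in defaulted and abs(self.net(i)) > self.T]
            if not fresh:
                break
            defaulted.update(fresh)
            self.swaps = [s for s in self.swaps
                          if s[0] not in defaulted and s[1] not in defaulted]
        for i in defaulted:        # defaulted nodes restart in the balanced state
            self.balance[i] = 0.0
        return len(defaulted)      # size of this step's default cascade
```

Running Market().step() repeatedly then yields a cascade size time series whose nonzero entries form the cascade size distributions studied in Section 5.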

4 Analytical approach

In the analytical approach to the IRS market model we consider a large but finite number of nodes N. Each node has an internal balance B_i, driven every time step by a perturbation of size ε ∼ N(0, σ). The balance of a node at time t therefore follows a normal distribution N(0, σ²t).

In the simulation a node defaults when its net balance crosses the threshold T. The net balance is the internal balance B_i corrected by all the IRSs the node holds at that moment. For the sake of simplicity we omit the net balance in the analytical treatment and assume that a node defaults if its gross balance reaches a threshold of T̂ or −T̂. This new threshold accounts for the effects of IRS creation on the balance, which means that T̂ ≥ T. A node is in danger if it is near the threshold T̂. To be more precise, if a node has on average k neighbours, it is in danger of collapsing if its balance lies in the interval [−T̂, −(1 − 1/k)T̂] or, on the positive side, [(1 − 1/k)T̂, T̂], i.e. within one IRS correction (roughly T̂/k) of the threshold. This is intuitive, as there are two possible scenarios that could cause a default when a node is in these ranges: when one of the k IRSs expires there is a 50% chance that this pushes the node's balance into default (the other half of the cases pushes the node away from the threshold); the other option is not the expiration of an IRS, but the removal of an IRS due to the default of a neighbouring node. It is fair to assume a node holds an equal number of both sides of the IRSs: although we omit net balances here, a node with a balance of 0 deviates with equal chance to either side, so on average it acquires both sides of the IRSs equally often.

We can calculate the chance that a node is in either range by adding both densities, but since on average half of the nodes have a positive balance and the other half a negative one, it suffices to calculate one of the two. We do have to take into account that the balance of a node cannot grow past the threshold value T̂, for above this value it defaults. Therefore the chance of a node being in danger has to be normalized by the range of values the balance can take:

$$P(\text{node } i \text{ in danger}) = \frac{P\big((1 - \tfrac{1}{k})\hat{T} < B_i < \hat{T}\big)}{P\big(B_i < \hat{T}\big)}.$$

While k is not a free parameter in our model, we explain at the end of this section how k can be approximated using the term to maturity τ and the value of an IRS I_v. Expanding the numerator of the previous equation gives:

$$P(\text{node } i \text{ in danger}) = \frac{P\big(B_i \leq \hat{T}\big) - P\big(B_i \leq (1 - \tfrac{1}{k})\hat{T}\big)}{P\big(B_i < \hat{T}\big)}.$$

In these equations B_i behaves like a normal random variable with distribution N(0, σ²t) (with σ = 1), so we can rewrite this using the error function, correcting for the fact that we only consider half of the distribution:

$$P\big(0 \leq B_i \leq (1 - \tfrac{1}{k})\hat{T}\big) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{(1 - \frac{1}{k})\hat{T}}{\sqrt{2t}}\right)\right] - \frac{1}{2},$$

$$P\big(0 \leq B_i \leq \hat{T}\big) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{\hat{T}}{\sqrt{2t}}\right)\right] - \frac{1}{2},$$

$$P(\text{node } i \text{ in danger}) = \frac{\frac{1}{2}\operatorname{erf}\!\left(\frac{\hat{T}}{\sqrt{2t}}\right) - \frac{1}{2}\operatorname{erf}\!\left(\frac{(1 - \frac{1}{k})\hat{T}}{\sqrt{2t}}\right)}{\frac{1}{2}\operatorname{erf}\!\left(\frac{\hat{T}}{\sqrt{2t}}\right)} = 1 - \frac{\operatorname{erf}\!\left(\frac{(1 - \frac{1}{k})\hat{T}}{\sqrt{2t}}\right)}{\operatorname{erf}\!\left(\frac{\hat{T}}{\sqrt{2t}}\right)}.$$

Over time this function converges to 1/k, which is intuitive: when t gets large and we normalize the normal distribution over the range [0, T̂], it starts to look uniform over that range. Note that this is less than what one expects for a barrier hit of a Brownian motion (which is 1, given enough time); we lose this property because we are looking at a mean and not an individual particle.
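A quick numerical check of the expression above (the values of T̂ and k are arbitrary assumptions, chosen for illustration):

```python
from math import erf, sqrt

T_hat, k = 15.0, 8   # assumed gross threshold and average number of neighbours

def p_danger(t):
    """1 - erf((1 - 1/k) T_hat / sqrt(2 t)) / erf(T_hat / sqrt(2 t))."""
    return 1.0 - erf((1 - 1 / k) * T_hat / sqrt(2 * t)) / erf(T_hat / sqrt(2 * t))

for t in (10, 100, 10_000, 1_000_000):
    print(t, p_danger(t))   # approaches 1/k = 0.125 as t grows
```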

A default cascade behaves like a branching process: when we look at a cascade we can define a root, the first node that defaults, and subsequent branches stemming from that root in the form of other nodes that default. Of interest is the average branching ratio ⟨b⟩, which gives the average number of defaults induced by a single default; if a default induces on average at least one other default, the system can produce an infinite cascade of defaults. We know that the chance that a node defaults because of the default of one of its siblings converges to 1/k. The chance of a node inducing b other defaults is therefore given by a binomial distribution:

$$P(b) = \binom{k}{b} \left(\frac{1}{k}\right)^{b} \left(1 - \frac{1}{k}\right)^{k-b}.$$

With this we can compute the average branching ratio, which is simply the mean of a binomial distribution with k trials and success probability 1/k:

$$\langle b \rangle = \sum_{b=0}^{k} b \binom{k}{b} \left(\frac{1}{k}\right)^{b} \left(1 - \frac{1}{k}\right)^{k-b} = k \cdot \frac{1}{k} = 1.$$

This gives us ⟨b⟩ = 1, the critical branching ratio, for k > 1. This means that the system is capable of creating infinite cascades, given enough time and a (near) infinite number of nodes. This is in line with Christensen [9], where the critical control parameter is calculated for the random neighbour BTW sandpile.
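The consequence of ⟨b⟩ = 1 can be probed directly with a small Monte Carlo sketch (ours, with an assumed k and a size cap standing in for the finite system) of a branching process with Binomial(k, 1/k) offspring:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
k, runs, cap = 8, 50_000, 10_000   # cap truncates the (near) infinite cascades

sizes = []
for _ in range(runs):
    size = active = 1
    while active and size < cap:
        # every defaulting node independently drags down Binomial(k, 1/k) others
        active = int(rng.binomial(k, 1.0 / k, size=active).sum())
        size += active
    sizes.append(size)

# At <b> = 1 the process is critical: cascades of all sizes occur, with the
# standard power law tail of a critical branching process, P(s) ~ s^(-3/2).
print(np.quantile(np.array(sizes), [0.5, 0.9, 0.99, 0.999]))
```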

All of the terms in this reasoning are either time, come from the underlying distributions, or are other parameters of the system, except for k. The average number of neighbours can be estimated though. We explained that a node enters into an IRS when its balance has an excess on either side, and that a node does not want to create an excess on the opposite side of its balance due to the IRS it entered. So it will only enter into an IRS with another node when its balance has a value B_i ≥ I_v or B_i ≤ −I_v. On the other hand, an IRS has a term to maturity, so there is a finite number of neighbours a node can have. To calculate the average number of neighbours we rely on the first passage time density (FPTD) [5]. The FPTD gives a probability density through time for a Brownian particle to pass a given threshold. Since acquiring a neighbour depends on the passing of the threshold I_v or −I_v, we can use the FPTD. Instead of looking at the chance that a node needs to go into an IRS (which is what the FPTD with respect to I_v describes), we look at the typical time it takes to reach this threshold, approximated by the location of the maximum of the FPTD, which scales as x², where x is the value of the threshold.

This means that we can expect a node to be in need of an IRS every I_v² steps, so the per-step chance of the creation of an IRS is 1/I_v², while the chance of a given IRS reaching its maturity is 1/τ. The change in the number of IRSs a node has is then

$$\Delta = \frac{1}{I_v^2} - \frac{N_{\mathrm{IRS}}}{\tau},$$

where N_IRS is the number of IRSs the node holds. A stable point is reached when Δ = 0, leading to

$$N_{\mathrm{IRS}} = \frac{\tau}{I_v^2}.$$


Assuming that in a large enough system the number of neighbours equals the number of IRSs, we arrive at

$$k \sim \frac{\tau}{I_v^2},$$

which gives us the estimate for the number of neighbours of a node. We have measured the degree of nodes during simulations, but the average degrees differ from what this equation predicts. The reason for the discrepancy between the theoretical average and the average measured during simulation is that the calculation of k considers neither the limited number of available nodes nor the defaults that happen during simulation.
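The steady-state argument itself is easy to check in isolation: treating the number of IRSs of a single node as a birth-death process with creation probability 1/I_v² and per-IRS maturation probability 1/τ (our simplification, ignoring defaults and counter party scarcity) reproduces the predicted mean τ/I_v²:

```python
import random

random.seed(2)
iv, tau, steps = 3.0, 400, 200_000   # assumed example configuration

n_irs, total = 0, 0
for _ in range(steps):
    if random.random() < 1.0 / iv**2:   # a new IRS is needed every ~Iv^2 steps
        n_irs += 1
    # every outstanding IRS matures with probability 1/tau per step
    n_irs -= sum(random.random() < 1.0 / tau for _ in range(n_irs))
    total += n_irs

print(total / steps)   # measured stationary mean
print(tau / iv**2)     # predicted tau / Iv^2 = 44.4...
```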

Figure 1: Average number of IRSs per node with configuration N = 100, T = 15, τ = 850. We have used sampling to counteract the effects of large intermittent defaults and the rebuild periods after large defaults. Although there is some similarity with k ∼ τ/I_v², system size and the effects of defaulting neighbours cause a lower number of IRSs than expected.

Although we have found reasons to expect our model to have infinite (or system size) cascades, we made a few assumptions that might influence this result when looking at the simulations. First of all, we abstracted away the creation of IRSs and its influence on the balances of nodes (as stated at the start of this section), and therefore had to assume an estimate for the number of neighbours. This estimate is influenced by defaults and by the finite number of nodes in the system. Simulation results therefore differ from what the results in this section would suggest.

5 Results

The results we present in this Section, obtained by simulation, are used to research the dynamics of the model presented in Section 3.

From the perspective of a single node, the model looks simple and clear. Figures 2 and 3 show the balance of two nodes through time. The node in Figure 2 does not default, but it does show moments of IRS creation. The node only creates a swap when its balance is at or above the IRS value I_v, after which the balance is corrected towards the stable point zero. Between swap creations the node's balance behaves as one would expect from a Brownian motion. Figure 3 shows similar behaviour, but with respect to defaulting: the balance of the node fluctuates between the two thresholds, B_n ∈ [−T, T]. When it hits the threshold −T or T, the node defaults and is reset to the balanced state, i.e. B_n = 0.

Figure 2: The balance of a node, with moments of IRS creation depicted by a green dot. The node never creates a swap when its balance is beneath the value of a swap, since it does not want to create an excess in the opposite position. Note that this figure was generated for illustrative purposes only; this specific run was not part of the simulations we have done.

Figure 3: The balance of a node, with moments of default depicted by a red dot. After a node defaults it is reset to the balanced state and treated as a new node in the model; this way the number of nodes stays the same throughout the simulation. Note that this figure was generated for illustrative purposes only; this specific run was not part of the simulations we have done.


But the balance of a single node does not give us much information on the behaviour of the system as a whole. When investigating model behaviour concerning default cascades, we are interested in the amount of risk contained in the system. The time series of the total amount of risk, i.e. the sum of the absolute gross balances of all nodes, shows a few characteristics of the system under different parameters. First and foremost, the model has a definite build-up phase, characterized by a sharp increase in the absolute risk contained in the system. In addition, large default cascades, when present, appear as sudden declines in the stable part of the time series.

(a) Low risk system (b) High risk system

Figure 4: The sum of the absolute gross balances of a system of 100 nodes for two different configurations: (a) I_v = 1, τ = 100 and T = 15; (b) I_v = 6, τ = 400 and T = 15. In (a) we see a build-up without significant release of risk, i.e. no large defaults, whereas in (b) we see two releases, one around t = 2000 and one around t = 5000, implying the default of a large number of nodes.


In Figure 4 we see the time series for two different configurations of the model. Both show a build-up phase towards a stable regime, but the difference between the two is easy to spot: the first time series, 4(a), shows no large collapse, while the second, 4(b), shows two large collapses around t = 2000 and t = 5000. Note that the sum of the absolute gross balances is the sum of the absolute values of the balances of all nodes before they are corrected by the IRSs they are in. For example, if a node has three IRSs with a value of five each to hedge its gross balance of −15, its absolute gross balance is 15.

The most notable difference between the two systems is the sharp declines of absolute risk in the high risk system around t = 2000 and t = 5000 in Figure 4(b). These sharp declines are the result of a large default cascade in which a large part of the system collapses. After the collapse the high risk system behaves like before and starts accumulating risk again, resulting in a cyclic behaviour of risk build-up and release. The residual risk in the system after a large collapse is the risk of nodes that were either not connected to the defaulting part of the network or survived the default cascade.

By measuring the frequency of different cascade sizes we obtain a cascade size distribution. We have categorized the cascade size distributions measured in this model into five types:

1. A steep power law with no systemic risk, only small cascade sizes.

2. A steep power law with outliers.

3. A power law reaching near system size, with a cutoff at the end.

4. A power law with a connected hump, indicating a high risk of system size cascades.

5. A power law with a hump, but discontinuous: small cascades are governed by the power law, large cascades by the hump.

We have tried different methods for categorizing these five types, looking at three characteristics: the exponent α, the divergence from the power law, and the weight between power law and hump. To calculate these metrics we needed an estimate of the slope of the power law α and of the mean and standard deviation of the hump. We estimated which data points belonged to the power law using the derivative of the smoothed distribution, finding where that derivative crosses zero; with that we can fit the power law part, ignoring the hump. Next, using the estimates gathered for α, µ and σ, we calculated the weight between the power law part and the hump by minimizing the following function with the help of the Basin Hopping algorithm [22]:

$$f(x) = \hat{y} - \big(p\,c\,x^{-\alpha} + (1 - p)\,\mathcal{N}(\mu, \sigma)\big),$$

where the value of p is the weight. Secondly, we tried to get better estimates of the power law exponent using the Python powerlaw package [10], and tried to split off the hump better by smoothing the function before checking the derivative, to mitigate some of the noise in the data. While the fit of the power law was good, the fit of the hump was unsatisfactory and the weights obtained from the fit were poor measures of type. Another variant we tried was inspired by Lorimer, Gomez and Stoop [17]: they calculate the relative error of the measured result with respect to a power law over the whole x range of the distribution. This method performed badly on outliers and on discontinuous distributions with a hump. Lastly, we reverted to merely splitting the distribution into a power law part and a hump part (using the methods described above) and calculating the total probability in the hump part to determine the “humpiness” of a specific configuration.
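A sketch of the weighted fit described above, using scipy.optimize.basinhopping; the synthetic data and the starting values are our assumptions, standing in for the measured distributions:

```python
import numpy as np
from scipy.optimize import basinhopping
from scipy.stats import norm

# Synthetic stand-in for a measured humped cascade size distribution.
x = np.arange(1, 101, dtype=float)
y_hat = 0.8 * x**-2.0 + 0.2 * norm.pdf(x, 80.0, 5.0)

def objective(params):
    """Squared residual of the weighted power-law-plus-Gaussian model."""
    p, c, alpha, mu, sigma = params
    model = p * c * x**-alpha + (1.0 - p) * norm.pdf(x, mu, abs(sigma))
    return float(np.sum((y_hat - model) ** 2))

# x0 carries the initial estimates for p, c, alpha, mu and sigma obtained
# from the power-law/hump split described in the text.
result = basinhopping(objective, x0=[0.5, 1.0, 2.5, 75.0, 8.0], niter=200)
p_weight = result.x[0]   # fitted weight between the power law part and the hump
print(p_weight, result.x)
```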

(a) Non-synchronized system (b) High risk system

Figure 5: Cascade size distributions for a system with 100 nodes: accumulated distributions of 10 runs of 3 × 10⁵ time steps each. The panels show the different types of distributions encountered in the system, from pure power law, to power law with cutoff, to power law with a (discontinuous) hump.


The sample distributions of the types, depicted in Figure 5, show that there are two (arguably three) components to the distributions. First there is a power law part in the regime of smaller cascades. Towards system size events we have either a cut-off (type 3) or a hump (types 4 and 5). Lastly, the distribution for the type 5 system is discontinuous, leaving a gap between the power law part and the hump part.

Apart from the different components, the distributions also show different slopes in their power law part. Generally speaking, types 1 and 2 have the highest values of α in the power law f(x) ∼ kx^(−α), and thus the steepest slopes. Figure 6 gives an overview of the ranges of α for the different types. Since the ranges overlap, the value of α alone is not enough to determine the type of distribution for a given configuration.

Figure 6: Box plots showing the distribution of α for the different distribution types. The difference between types one and two is minimal; from type three onwards, averages, minima and maxima increase. Note that the distributions have been manually categorized into one of the five types.

The different values of α come from different configurations of our variables (term to maturity, IRS value, threshold and number of nodes), which have different effects on the system. First of all, the IRS value determines the time it takes to create an IRS. The combination of the IRS value and the term to maturity influences the number of IRSs in the system and thus the connectivity of the network; since risk is hedged (or stored) using IRSs, this combination also determines the maximum amount of hedged risk the system can contain. Furthermore, the relative risk nodes are willing to take is defined by the combination of the IRS value and the default threshold (how close a node is willing to come to the threshold before looking for a counter party). And lastly, the number of nodes in the system influences the chance of finding a suitable counter party.

(a) N = 100, T = 10 (b) N = 100, T = 15 (c) N = 100, T = 20
(d) N = 200, T = 10 (e) N = 200, T = 15 (f) N = 200, T = 20

Figure 7: Heatmaps of α, the exponent of the power law, over the term to maturity (x-axis) and the value of an IRS (y-axis); (a), (b) and (c) for 100 nodes, and (d), (e) and (f) for 200 nodes.

Figure 8: Box plots showing the distribution of the number of defaults per timestep for the different values of the threshold. We can clearly see a decreasing trend when the value of the threshold increases.


Figure 7 shows the effect of the configuration on the exponent of the power law. The first thing that stands out is that high values of α occur at short terms to maturity. A steeper power law distribution tells us that there is more weight in the small cascade sizes than in the larger ones. A shorter term to maturity theoretically puts the nodes in risky situations more often, as hedged risk is added back to the balance of the node more quickly than with longer terms to maturity.

Furthermore, we see that the value of α increases when the threshold increases: in the heatmap for a threshold of T = 10 we see α < 5, whereas for T = 20 the maximum value of α has increased to between 6 and 7. Figure 8 clearly shows that an increase in the threshold decreases the number of defaults, so a higher threshold lowers the chance of a large scale event. Experiments show, however, that large scale events still occur at larger values of the threshold.

The steeper power law and the lower chance of large scale cascades lead us to the cause of the discontinuity of some of the distributions (i.e. type 5 distributions). Figure 9 shows an increase in gap size when the threshold increases. Since both the exponent of the power law α and the size of the gap between the power law and the hump increase with the threshold, we conclude that the threshold is the main driver of discontinuity. We tested both the IRS value and the term to maturity as drivers of the gap: the IRS value did not give an increasing trend but rather oscillated around a mean, and while the term to maturity did give a trend, this is explained by humps being more frequent at longer terms to maturity (see Figure 10).

Figure 9: Box plots showing the maximum gap size of the distributions for a given threshold. Only gaps in distributions that showed cascade sizes of at least 80% of the system size were taken into account. We see an increasing mean gap size with threshold, leading us to believe that the discontinuity of the distributions is primarily caused by an increase in the threshold.


Although the threshold explains the increase in α and causes the distribution to be discontinuous in certain ranges, it does not explain the emergence of the high risk situations characterized by the hump. Figure 10 shows the weight in the hump of the distribution for different parameter configurations.

To calculate this weight we take the cumulative probability for X greater than the last point of the power law, as explained before. To do so we need to separate the power law part from the hump: the raw dataset from the simulation aggregate is smoothed with a one-dimensional Gaussian filter, and the power law part ends either where there is a gap greater than an estimated threshold (estimated by manually inspecting discontinuous distributions) or where the derivative of the smoothed function crosses zero, as explained before.

(a) N = 100, T = 10 (b) N = 100, T = 15 (c) N = 100, T = 20
(d) N = 200, T = 10 (e) N = 200, T = 15 (f) N = 200, T = 20

Figure 10: Heatmaps showing the weight in the hump part of the distribution for different configurations; (a), (b) and (c) for 100 nodes, and (d), (e) and (f) for 200 nodes. High risk situations emerge when the term to maturity (x-axis) is long and the IRS value (y-axis) is neither too small nor too large with respect to the default threshold. The difference between the upper row (100 nodes) and the bottom row (200 nodes) is also apparent: the larger system of 200 nodes shows less weight in the hump. Figure 11 will make clear that larger systems do still show a hump, but with lower weight in the hump part.

We can see multiple things in Figure 10. First, there is a dependence between the value of an IRS (the amount of risk hedged by entering an IRS) and the threshold at which a node defaults. As said before, the value of an IRS determines how long it takes for a node to be in need of an IRS, but it also expresses a willingness to carry a certain amount of risk before hedging it: how much of its maximum risk (the threshold) a node hedges with a single counter party, and what kind of risk deficit it is comfortable holding, are both determined by the IRS value. The creation time of an IRS also depends on this value, as a node only hedges risk when its deficit exceeds the value of the IRS. Figure 10 shows that high risk situations arise when the threshold lies around two and a half times the value of a single IRS, both for the system of one hundred nodes and for that of two hundred nodes. We have no analytical explanation for this.

Furthermore, there is a strong dependence on the term to maturity when looking at high risk situations. The duration of a single IRS is a large driver of high risk situations as it, together with the value of an IRS, determines the total amount of risk that can be stored in the system, as well as the density of the network and thus the exposure of nodes to one another. Recall that the estimate for the number of neighbours in an infinite system was ⟨k⟩ ∼ τ/I_v².

When a node enters into an IRS with a counter party, its absolute risk is reduced by the value of the IRS. In the ideal situation, where the total amount of risk is reduced to zero when a node enters into an IRS, its balance has equal chances of drifting to the negative or the positive side and surpassing the IRS value on either side. When the term to maturity is long enough (i.e. longer than the average time for IRS creation), the number of IRSs in the system grows, and with it the density as well as the gross absolute risk in the system. While a node will surely default eventually in this model (Brownian motion will always push a node across a boundary given enough time), system wide cascades only happen when the gross absolute risk, and thus the density, can become large enough.

Figure 11: Cascade size distributions for different system sizes with the same configuration (T = 10, I_v = 7, τ = 400). The hump is missing for the smallest system sizes, implying that the number of nodes in the system does matter for high risk situations. Furthermore, the hump decreases in size with system size, which could mean that the hump is a finite system size effect.


We can also see a difference between the two system sizes depicted in Figure 10: the heatmap for two hundred nodes contains fewer large humps, or rather, the humps are smaller compared to the smaller system size. In Figure 11 we see a hump that decreases and moves with system size; in fact the whole distribution moves with system size. This implies that the high risk situation is related to the finite size of the system, which is intuitive: if there were no limit to the number of nodes, risk could accumulate indefinitely, as for each node in a certain position there would be a node with the opposite position willing to go into an IRS.

Figure 12: The average density of the network just before the cascade, for different cascade sizes, with the same system sizes and parameters as in Figure 11. Although density grows with cascade size, larger system sizes show a lower density. This shows that there is no critical density independent of system size, which is intuitive, since the number of IRSs needed in a system of 500 nodes to reach the density reached in a system of 100 nodes scales with N².

Since the hump decreases and moves with system size, we conclude that it takes longer for the system to become saturated. We investigated whether the existence of the hump correlates with a fixed density of the network, but Figure 12 shows that this is not the case. The absence of a fixed network density for high risk situations can be explained by comparing the growth of the density with the growth of risk accumulation: the number of possible connections grows with N², so for a given number of IRSs the density scales with 1/N², making it harder to reach the same density at larger system sizes. The total amount of risk with which the system is perturbed, on the other hand, scales with √N, due to the Brownian nature of the model; recall that the total amount of risk follows a random walk whose distribution at time t is N(0, Nσ²t).


The scale at which the system is perturbed relative to the system size also shows why the hump decreases in size when the system size increases. The minimum amount of risk needed to topple a system of N nodes scales with N: for a total system collapse we need at least all nodes to be connected, which requires ½N·I_v of risk to create the connections, and then N·T + c of risk to drive all nodes into default. Near system size cascades thus require a total amount of risk that scales with N, while the perturbation of the system only scales with √N. It therefore takes longer to acquire the amount of risk needed for near system size cascades.
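As a back-of-the-envelope version of this argument (our arithmetic, using the scalings stated above), the typical waiting time t* for a near system size cascade follows from equating the accumulated risk with the risk needed to topple all N nodes:

```latex
% accumulated risk after t steps (random walk):  sigma * sqrt(N t)
% risk needed to connect and topple all N nodes: (1/2) N I_v + N T
\sigma\sqrt{N t^{*}} \sim \tfrac{1}{2} N I_v + N T
\quad\Longrightarrow\quad
t^{*} \sim \frac{N \left(T + I_v/2\right)^{2}}{\sigma^{2}}.
```

The waiting time thus grows linearly with N, so larger systems spend relatively more time in the small cascade regime, consistent with the shrinking hump in Figure 11.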

The absence of the hump at smaller system sizes, seen in Figure 11, points us in the direction of how high risk of system size events arises: it tells us that the system was unable to gather risk efficiently enough to make system size default cascades more probable than smaller default cascades. Larger systems do show a significant hump, however, from which we conclude that not only the IRS value, the term to maturity and the threshold, but also the system size influences high risk situations. This is intuitive when we keep in mind that with more nodes in the system, the probability of a node finding a counter party to hedge with rises, which leads to a higher gross risk in the system and more connections between nodes.

(a) N = 100, T = 10 (b) N = 100, T = 15 (c) N = 100, T = 20
(d) N = 200, T = 10 (e) N = 200, T = 15 (f) N = 200, T = 20

Figure 13: Manual classification of distribution types for a range of configurations; (a), (b) and (c) for 100 nodes, and (d), (e) and (f) for 200 nodes. These results show the same high risk areas as Figure 10, but the distinction between type 4 and type 5 (orange and dark red in this graph) is more obvious.

In Figure 13 we present the different distribution types with their corresponding configurations. The figure resembles the hump weight heatmap of Figure 10, but the difference between type 4 (continuous hump) and type 5 (discontinuous hump) is more visible. Interestingly, the model with 200 nodes has a tendency towards forming a discontinuous hump. Using Figures 6, 8 and 9 we have shown that discontinuity comes from the stability of the system, or rather its efficiency in hedging risk; the tendency towards discontinuity at a larger number of nodes supports this conclusion, since a larger number of nodes increases the supply of nodes in need of an IRS. We can also see that the distinction between type 1 and type 2 (pure power law and power law with outliers) is minimal. Other than that, we see distinct regions for the types presented at the start of this Section. The spread of high risk types towards larger IRS values and terms to maturity, as seen in Figure 10, is also present in Figure 13, confirming that it is the efficiency of the system in hedging risk that ultimately creates high risk situations.

The results presented up to this point show that every parameter of the model influences a different part of the cascade size distribution. An increase in the threshold, the maximum deficit a node can have, increases the exponent α of the distribution. The exponent α and the number of nodes N together cause the discontinuity in the distributions. The combination of the IRS value (relative to the threshold) and the term to maturity is the main driver in creating high risk situations, since these directly determine how much risk can be stored in the network. The size of the network also plays a role in the latter, as we have seen: a small network limits the number of hedge candidates and thus the amount of risk stored in the system, while for configurations that do show high risk situations, increasing the system size decreases the total weight in the hump, because the minimum amount of risk needed to topple the whole network increases faster with N than the incoming risk per time step. The extra time it takes to accumulate enough risk to topple the whole network allows smaller cascades to become more frequent, reducing the weight in the hump.

Network shape and properties

In the remainder of this section we discuss the shape and properties of the network that the model creates, in order to compare with real world networks and to see whether characteristics of the distributions we have seen can be explained by these properties.

While we are aware that the ability of the system to hedge the inflow of risk in the form of temporary links can lead to high risk situations, we have not yet discussed the form and properties of the network the model creates. For an over-the-counter market, public information contains neither network shapes nor an accurate number of market participants, but academics have put effort into estimating the network size and form [19].

One of the properties of a financial network is its density. We have already shown some density results in Figure 12, but only in the context of cascade sizes. Since we are missing figures such as total market size in some currency and notional amounts altogether, and we do not know the real market structure for certain, comparing the estimated density of real world markets with the density of our model allows us to relate our results to the real world. In the next Section we will compare the densities found in our results with the estimated densities found by van Lelyveld et al. [19].


Figure 14: Heatmap of the network density over different values of the IRS value and the term to maturity. The network density is calculated by dividing the number of unique neighbours by the number of possible unique connections. The highest densities are found at the lowest IRS values and the longest terms to maturity, which is to be expected: when the IRS value is low, more IRSs are created, and when an IRS lives longer, more IRSs exist simultaneously, resulting in a higher density.

Figure 14 depicts the effects of the IRS value and the term to maturity on the density of the network. The density is measured by taking the sum of all degrees in the network (note that this differs from the number of IRSs in the network) and dividing it by the maximum total degree possible given the number of nodes. The density is by far the highest at the lowest IRS value and the longest term to maturity, which is to be expected, as that region has the fastest creation of IRSs with the longest retention.
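One plausible implementation of this density measure (the text describes both a unique-neighbour and a degree-sum variant; this sketch uses the unique-neighbour reading of Figure 14):

```python
def network_density(swaps, n):
    """Share of possible unique links that exist, given IRSs as (a, b) pairs.

    Multiple IRSs between the same two nodes count as one link, matching
    the 'unique neighbours' reading; this is why the density differs from
    the raw number of IRSs in the network.
    """
    links = {frozenset(pair) for pair in swaps if pair[0] != pair[1]}
    return len(links) / (n * (n - 1) / 2)

# Two unique links among 100 nodes: density = 2 / 4950
print(network_density([(0, 1), (1, 0), (2, 3)], n=100))
```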

While the density shows how many of the total possible connections in the network exist, it does not tell us how the degrees are distributed over the network. Do we have a core-periphery network, in which some central nodes are connected to every other node while the remaining nodes lie sparsely connected on the periphery, or do we have a random network, in which the degree distribution follows a binomial distribution?

The problem with asking questions about the degree distribution is that the answer can vary over time. Due to the dynamical nature of the model the number of connections (IRSs) constantly changes: IRSs are destroyed because they mature or because a counter party defaulted.


(a) I_v = 1, τ = 100 (b) I_v = 1, τ = 250 (c) I_v = 1, τ = 400
(d) I_v = 3, τ = 100 (e) I_v = 3, τ = 250 (f) I_v = 3, τ = 400

Figure 15: Run-averaged degree distributions for two values of the IRS value and three values of the term to maturity.

Using a mean over time results in Figure 15, where we can clearly see the effects of a longer term to maturity: for both I_v = 1 and I_v = 3, increasing the term to maturity results in a peak at larger degree values. It also results in a lower peak, as this is an average taken over a full run. In the left-hand tail of the distributions (between the peak and zero) we see some small values, caused by the start of the simulation (when there are no IRSs yet) and by nodes that have defaulted and are in the process of acquiring new IRSs. The effect of a higher IRS value is also clear: it results in a lower average degree. This is intuitive, as an IRS with a higher value hedges more risk, so fewer of them are needed for the same inflow of risk.

Figure 15 does not give conclusive results on the “actual” degree distribution of the network (if one can speak of an actual degree distribution in an evolving network), and thus on its shape. But the bell-shaped curves towards the end of the graphs for I_v = 1 point in the direction of the network being random at stationary moments (no large default cascades and no rising risk). Since the network consists of directed edges, the IRSs, we can also look at the similarity between the in-degree and the out-degree. For a dynamic random network we would expect a bell-shaped curve around k_in = k_out. Figure 16 shows the results of such an experiment: there is a strong similarity between k_in and k_out, as we would expect when the chance of holding the fixed side of an IRS equals the chance of holding the floating side.


Figure 16: Normalized heatmap of in- versus out-degree of nodes, averaged over 50 runs of a model configuration that produces a loose hump (type 5). Note that the point (0, 0) has been omitted for scaling reasons.

Figures 15 and 16 make it plausible that the network behaves as a random network during stationary periods (i.e. the periods between building up the network and large cascades), the more so when we look at the basic dynamics of the model. As mentioned before, a node with a net balance of zero behaves like a new node, whether its net balance is zero because the simulation just started, because it defaulted, or because it hedged its balance perfectly. It is equally likely to have a deficit or a surplus on its balance, and is thus equally likely to need either end of an IRS.

Since the network behaves in a random manner during stationary periods, we want to see whether cascade size and the degree of the first failing node are correlated beyond randomness. It would be intuitive if large cascades were only induced by the failure of a node with a high degree. From Figure 17, however, we conclude that there is no strong correlation between large cascade sizes and the degree of the inducing node. Further analysis gave a correlation coefficient of 0.19 for the distribution shown in Figure 17(a), but with a p-value of 0.11; for Figure 17(b) we measured a correlation coefficient of 0.07 with a p-value of 0.36. For neither distribution could we find a significant correlation, in line with what we expected from Figures 15 and 16.
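The quoted coefficients and p-values are consistent with a standard Pearson test; a sketch, with hypothetical data standing in for the measured pairs:

```python
from scipy.stats import pearsonr

# Hypothetical paired observations: degree of the first defaulting node and
# the size of the cascade it initiated (illustrative numbers, not our data).
degrees = [4, 2, 9, 3, 7, 5, 1, 8, 6, 2]
sizes = [3, 1, 45, 2, 88, 5, 1, 60, 4, 2]

r, p_value = pearsonr(degrees, sizes)
print(r, p_value)   # a p-value above 0.05 indicates no significant correlation
```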


(a) Type 5 (b) Type 4

Figure 17: Heatmap of the degree of the first defaulting node against measured cascade sizes. Note that this Figure differs from Figure 12: here we depict the spread of degrees of nodes that initiate a default cascade of a certain size, whereas Figure 12 takes the whole network into account.

6 Conclusion and discussion

In this paper we presented a model to research the behaviour of a market hedging risk with interest rate swaps, the largest over-the-counter financial product worldwide².

² According to the Bank for International Settlements, https://www.bis.org/statistics/

The IRS market model shows self-organized criticality. We have looked at the default cascade distributions of several model configurations, and they produce power law distributions in most configurations without tuning the system after setting its parameters. The exponent of the power law, as we have seen, depends on the threshold at which a node defaults relative to the portion of risk the nodes hedge with a single IRS. The exponent ranged from high values (α up to 9) to lower values (α around 2).

Next to self-organized criticality, we have found another state the model converges to under certain parameters, namely a state of high risk, creating a hump in the cascade size distribution at larger (i.e. near system size) cascade sizes. The meaning of the hump is that a full scale crisis (near system size collapse) is more likely than a somewhat smaller crisis in which only part of the system collapses. The high risk situation occurs when the system is stable enough to accumulate the amount of risk needed for a near system size cascade. We have seen that these situations arise when the term to maturity of an IRS is long. Furthermore, IRSs with low values have a lower chance of defaulting a node when they expire or are removed due to the default of a counter party, whereas IRSs with a large value compared to the default threshold do not allow for the densely connected network that is necessary for near system size cascades. The density arises from the nodes being able to create IRSs with each other, and thus shows how capable the system is of hedging risk. When the system is efficient at hedging risk it will accumulate risk until it reaches its limits, and thus produce large cascades.

We have also seen that the system size itself plays a role. Under certain configurations smaller systems do not create high risk situations, while larger systems, under the same parameter configuration, do. High risk situations can only emerge when risk is stored efficiently in the model. Although smaller systems need less total risk to be toppled, the lack of sufficient counterparties when a node wants to hedge risk limits the number of edges in the system, and thus the total amount of stored risk.

The network that results from the model is random. We have seen that nodes tend to hold as many of one side of an IRS as of the other. This is in line with what one would expect from the model: intuitively there is no difference between a node with a fully balanced state and no IRSs and a node with a fully balanced state because of its IRSs. The balances of both nodes have an equal chance of drifting either way. Therefore we do not expect a node that holds more of one side of an IRS to have a lower chance of needing the other end of an IRS.

Without analytical results on the cascade distribution of the model we cannot conclude with full certainty what drives the hump, but comparing runs with the same parameters while increasing system size does show the characteristic hump moving towards the limit of the system. This suggests that the hump is a finite size effect. The model accumulates risk until the nodes are satiated with IRSs; after that, when more risk is put into the system, the system is unable to hedge it, leading to defaults, or to cascades in highly connected systems. Increasing the number of nodes allows more risk to be stored in the system, moving and lowering the hump.
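A simple, hypothetical way to quantify this movement is to locate the hump as the highest bin in the tail of the empirical cascade size histogram and track its position as the system size N grows. Everything in the sketch below, including the placeholder data, is an assumption for illustration.

```python
# Sketch: locating the near-system-size hump for different system sizes N.
import numpy as np

def hump_location(cascade_sizes, n_bins=30):
    """Cascade size at the highest histogram bin in the upper half of the range."""
    counts, edges = np.histogram(cascade_sizes, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    upper = centers > 0.5 * cascade_sizes.max()
    return centers[upper][np.argmax(counts[upper])]

rng = np.random.default_rng(2)
for N in (100, 200, 400):
    # Placeholder sample: a power-law body plus a bump near 0.9 * N.
    sizes = np.concatenate([rng.zipf(2.0, 900).clip(max=N),
                            rng.normal(0.9 * N, 5, 100)])
    print(N, f"-> hump near cascade size {hump_location(sizes):.0f}")
```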

This line of reasoning explains not only the lack of high risk situations when the IRS value is low: with a low IRS value, the removal of one or more IRSs from a node might not cause a loss large enough to make it default, i.e. the default risk per removed IRS is lower. It also explains the absence of high risk situations at high IRS values: with a high IRS value the unweighted connectivity within the model stays low. The same amount of risk might initially be stored, but a default cannot spread effectively through a less connected network. From the point of view of the term to maturity, only the short-term case needs to be considered, which likewise limits the density of the network.

If we continue with the reasoning that it is the limit on the amount of risk the system can hold, increasing the system size would stretch that limit. Taken to the extreme, in an infinite system no amount of risk would be too high; the bubble could grow endlessly and the hump would disappear. Unfortunately there is no real world example of an infinite financial market, and thus the risk of a hump remains.

The high risk or synchronous state produced by the model presented here is the result of both the system size and the combination of high homogeneity and strong interactions described in Sornette's work Dragon-Kings, Black Swans and the Prediction of Crises [20].


Comparability to the real world

There are a few ways in which we can compare the IRS market model to the real world. First there is the density of the network. We have seen that the density of the network in the model depends on the term to maturity and the value of the IRSs. We found a large range of average density values, from 8% to 64%, for networks of up to 200 nodes. Estimates for real world networks, under the assumption of a core-periphery structure, range from 0.4% in Germany (based on 1800 market participants) to around 8% in the Netherlands and 12% in Italy (both based on around 100 market participants) [19]. We did not test network density for larger networks, but both the Dutch and Italian markets fall within the range our model covers.
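For reference, the density figures above are plain unweighted graph densities. The snippet below shows the computation on a random placeholder network; the graph itself is illustrative, not output of our model.

```python
# Sketch: unweighted density of an undirected network, 2E / (N(N-1)).
import networkx as nx

G = nx.gnm_random_graph(n=100, m=400, seed=3)  # placeholder IRS network
n, e = G.number_of_nodes(), G.number_of_edges()
density = 2 * e / (n * (n - 1))
print(f"density = {density:.1%}")  # about 8%, the order of the Dutch estimate
assert abs(density - nx.density(G)) < 1e-12  # matches networkx's built-in
```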

Network structure is important in the case of financial markets, all the more when we are dealing with default cascades. As established earlier, the network produced by the model has random characteristics, whereas real world financial markets appear to have a core-periphery structure [19, 6, 11] or a power law or otherwise heavy tailed degree distribution [8, 7, 15]. The actual structure of the IRS market is not publicly available, since interest rate swaps are not traded on an exchange but privately, over the counter.

Other shortcomings compared to the real world can be summarized as follows: the hedging strategy of the nodes lacks intelligence; a large amount of risk does not necessarily mean default for a financial institution, as defaulting is a matter of solvency; in the same vein, an IRS should have a notional value, which is completely left out of the equation here; and lastly, compared to the real financial market, the model lacks a diversity of maturities and swap contracts (different values et cetera).

On the other hand, when we look at other examples of models used for researching SOC in systems, simplification of the problem is not unknown. The most basic example is of course the sandpile model [3], where the details of how sandpiles work are simplified. Such simplifications are also made in the field of financial modelling. For example, the modelling of stock market returns through percolation [21], and an extension of that work using herding [13], ignore (bounded) rationality and instead use simple mechanics to explain distributions found in real world markets. In short, simplification can be a useful tool for investigating real world mechanics, above all because of its ability to isolate and explain specific behaviour.

The simplicity of the model allows for further extension to better depict real world markets, and is in itself a reason to have a broader discussion on structural, endogenous causes of financial crises. In conclusion, the model presented here, a slow driven, simplistic model of a financial market, shows self-organized criticality and some interesting features, and should be researched further. In the next section we make some recommendations for future work.

7 Future work

Although the beginning of an analytical solution is discussed in Section 4, it is incomplete. An analytical solution for the model should give better insight into its behaviour. Our attempt started from a single node and abstracted away IRS creation, net balances and single node defaults (other than large cascades). We now feel that these abstractions may have removed some essential behaviours of the model, especially with regard to single node defaults. Future attempts should include these aspects and could perhaps start from a contagion point of view, as others have done [14].

Furthermore, from a finance perspective, the addition of notional amounts and balances to the nodes would lead to a more realistic model.

From the same perspective, dissipation should be added as a variable to the model. When a financial institution defaults, not all of its assets are immediately worthless; some are recoverable. Therefore, when a node defaults, its neighbours should be able to retrieve some assets and thus lower their exposure. Simulations with dissipation applied to risk exposure in the current model resulted in the disappearance of default cascades larger than two market participants. Dissipation should instead be applied to balance sheets and notional amounts to represent reality better.
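A minimal sketch of this mechanism, under the assumption of a single recovery-rate parameter (all names here are hypothetical):

```python
# Hypothetical sketch: with dissipation, a defaulted IRS propagates only part
# of its value to the surviving counterparty.
def propagated_loss(irs_value: float, recovery_rate: float) -> float:
    """Loss a counterparty takes on one IRS when the other side defaults."""
    assert 0.0 <= recovery_rate <= 1.0
    return (1.0 - recovery_rate) * irs_value

# Example: with a 40% recovery rate, a defaulted IRS worth 10 units
# inflicts a loss of 6 units instead of 10.
print(propagated_loss(10.0, 0.4))
```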

Lastly, a more realistic network structure should be constructed. This can be done by varying the variance of the perturbations per node, as sketched below. When some nodes have to deal with larger perturbations of their balances, they have a greater need for IRSs and will thus become nodes in the core of the network. Other nodes, with a smaller perturbation variance, will live in the periphery3. Preferential attachment could also play a part in the network formation, for example by imposing some form of locality on the nodes.

3Specifically, the IRS market has become popular among smaller financial institutions, including universities [site in Dutch]: https://www.groene.nl/artikel/het-maagdenhuis-als-profit-center, last accessed 19-08-2019.
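A hypothetical sketch of such heterogeneous driving, assuming per-node perturbation scales drawn from a lognormal distribution (the distribution choice and all names are ours, for illustration):

```python
# Hypothetical sketch: heterogeneous balance perturbations as a route to a
# core-periphery structure. Nodes with a large sigma need to hedge often
# (candidate core); nodes with a small sigma rarely do (periphery).
import numpy as np

rng = np.random.default_rng(4)
n_nodes = 200
sigma = rng.lognormal(mean=0.0, sigma=1.0, size=n_nodes)  # per-node shock scale

balance = np.zeros(n_nodes)
balance += rng.normal(0.0, sigma)  # one driving step: node i drifts with scale sigma[i]
```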

References

[1] Agata Aleksiejuk, Janusz A Holyst, and Gueorgi Kossinets. Self-organized criticality in a model of collective bank bankruptcies. International Journal of Modern Physics C, 13(3):333–341, 2001.

[2] Per Bak and Kan Chen. Self-organized criticality. Scientific American, 264(1):46–53, 1991.

[3] Per Bak, Chao Tang, and Kurt Wiesenfeld. Self-organized criticality. Physical Review A, 38(1):364–374, 1988.

[4] Fischer Black and Myron Scholes. The pricing of options and corporate liabilities. Journal of Political Economy, 81(3):637–654, 1973.

[5] Ian F Blake and William C Lindsey. Level-Crossing Problems for Random Processes. IEEE Transactions on Information Theory, 19(3):295–315, 1973.

[6] Svetlana Borovkova and Hicham Lalaoui El Mouttalibi. Systemic Risk and Centralized Clearing of OTC Derivatives: A Network Approach. 2013.

[7] Michael Boss, Helmut Elsinger, Martin Summer, and Stefan Thurner. An Empirical Analysis of the Network Structure of the Austrian Interbank Market. pages 77–87, 2003.


[8] Michael Boss, Martin Summer, and Stefan Thurner. Contagion Flow Through Banking Networks.

[9] Kim Christensen and Nicholas R Moloney. Complexity and Criticality. Imperial College Press, 2005.

[10] Aaron Clauset, Cosma Rohilla Shalizi, and M E J Newman. Power-law distributions in empirical data. SIAM Review, 51:661–703, 2009.

[11] Rama Cont, Amal Moussa, and Edson B Santos. Network structure and systemic risk in banking systems. 2012.

[12] Georges Dionne. Risk Management: History, Definition and Critique. Risk Management and Insurance Review, 16(September):147–166, 2013.

[13] Victor M Eguiluz and Martin G Zimmermann. Transmission of Information and Herd Behavior: An Application to Financial Markets. Physical Review Letters, 85(26):5659–5662, 2000.

[14] Prasanna Gai and Sujit Kapadia. Contagion in financial networks. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 466(February):2401–2423, 2010.

[15] Andrew G Haldane and Robert M May. Systemic risk in banking ecosys-tems. Nature, 469(7330):351–355, 2011.

[16] Dirk Helbing. Systemic risks in society and economics. Understanding Complex Systems, (October):261–284, 2012.

[17] Tom Lorimer, Florian Gomez, and Ruedi Stoop. Two universal physical principles shape the power-law statistics of real-world networks. Scientific Reports, 5:12353, 2015.

[18] R. T. James McAteer, Markus J. Aschwanden, Michaila Dimitropoulou, Manolis K. Georgoulis, Gunnar Pruessner, Laura Morales, Jack Ireland, and Valentyna Abramenko. 25 Years of Self-organized Criticality: Numerical Detection Methods. Space Science Reviews, 2015.

[19] Iman van Lelyveld et al. Finding the core: Network structure in interbank markets. Journal of Banking & Finance, 49:27–40, 2014.

[20] Didier Sornette. Dragon-Kings, Black Swans and the Prediction of Crises. Swiss Finance Institute Research Paper, 2009.

[21] D Stauffer and D Sornette. Self-organized percolation model for stock market fluctuations. Physica A, 271(3-4):496–506, 1999.

[22] David J Wales and Jonathan PK Doye. Global optimization by basin-hopping and the lowest energy structures of Lennard-Jones clusters containing up to 110 atoms. The Journal of Physical Chemistry A, 101(28):5111–5116, 1997.

[23] Nicholas W Watkins, Gunnar Pruessner, Sandra C Chapman, Norma Bock Crosby, and Henrik J Jensen. 25 Years of Self-organized Criticality: Concepts and Controversies. Space Science Reviews, 198(1-4):3–44, 2016.


[24] Marco J Van De Wiel and Tom J Coulthard. Self-organized criticality in river basins: Challenging sedimentary records of environmental change. Geology, (January):1–5, 2010.

[25] Gregory A Worrell, Stephen D Cranstoun, Javier Echauz, and Brian Litt. Evidence for self-organized criticality in human epileptic hippocampus. Neuroreport, 13(16):2017–2021, 2002.
