Master's Thesis Econometrics

A comparison of interbank network reconstruction

methods to assess systemic risk

Author

Isabel de Heus*

Student number 10002993

Supervisor

dr. Marco van der Leij

Second reader

prof. dr. Cees Diks

Specialisation

Mathematical economics

Date

23 June 2016

* Special thanks to prof. dr. Iman van Lelyveld for the support and the data.

Abstract

Accurate estimation of systemic risk is limited by the availability of information on interbank credit linkages. The resilience of the financial sector depends on the structure of the interbank network, but bilateral exposures are mostly unavailable, even to supervisors. Reconstruction methods estimate these bilateral exposures of banks based on the available information: the aggregate interbank exposures. Maximum entropy, probably the most used reconstruction method so far, spreads the exposures as evenly as possible. Recent studies (e.g. Upper, 2011 and Mistrulli, 2011) show that maximum entropy is not able to replicate empirical market structures. Instead, this thesis focuses on 7 alternative reconstruction methods and compares the systemic risk in terms of DebtRank and a Default Cascade. The reconstruction of the Dutch interbank network quarterly from 1998 through 2008 shows that when the interbank liabilities are concentrated, these exposures are more informative than all reconstruction methods to estimate systemic risk.

Statement of Originality

This document is written by Isabel de Heus, who declares to take full responsibility for the contents of this document.

I declare that the text and the work presented in this document is original and that no sources other than those mentioned in the text and its references have been used in creating it.

The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.

1 Introduction

Globalization of the financial sector has long been considered a positive development since it enables economic agents to diversify portfolios and to spread out risks. However, this development may well have reached its inflection point, where the negative consequences outweigh the positive ones. Cecchetti and Kharroubi (2012) showed that, for advanced economies, financial development is good only up to a threshold, after which it has negative effects on growth. The financial crisis that started in 2007 highlighted once more that the highly interconnected financial sector can lead to disruptions to the real economy. After the fall of Lehman Brothers, American 'too big to fail' banks were bailed out to prevent the start of, or slow down, the domino effect that would result in these disruptions. The years it took curators and regulators of defaulted financial institutions to untangle the underlying credit network illustrate the complexity of the system.

The structure of interbank networks is argued to be "robust-yet-fragile": the probability of a systemic event (an event that is detrimental to the system as a whole) is low, but when it happens its severity is widespread (Acemoglu et al., 2013). In order to reduce the possibility and severity of potential future crashes, the Basel Committee tightened the capital requirements for banks. Nonetheless, at the start of 2016 financial market indices returned to the same price levels as before the crisis. This shows that the financial sector continues to grow, and therefore researchers increasingly search for accurate systemic risk metrics. A substantial part of the systemic risk research focuses on the interbank credit exposure network (e.g. Halaj and Kok, 2013 and Upper, 2011), as most risks are created or transferred via these financial institutions. Bhattacharya and Gale (1987) argue that the interbank market offers participants insurance against exogenous liquidity shocks. On the other hand, the interbank linkages can reinforce a snowballing effect of shocks when banks are highly interconnected. Hence there exists a trade-off for the degree of interconnectedness: contagion effects versus resilience of the system. Allen and Gale (2000) showed that the vulnerability of the network depends strongly on its dependence structure. Therefore, the bilateral exposures between banks should be more informative than the aggregate network exposures to estimate systemic risk.

However, the fundamental problem here is that information on the interbank dependence structure of financial networks is usually not available. Data on bilateral exposures and underlying contracts are not reported to regulators due to privacy issues, which complicates the measurement of contagion effects in financial networks. This led to the introduction of estimation techniques that use the size of aggregate interbank exposures of banks to reconstruct bilateral exposures. Systemic risk measures capture the possibility and severity of contagion effects in the extreme event of shocks and crises. Therefore the estimation method should be robust and efficient, especially in these tail events.

Information theory provides the most frequently used reconstruction method: maximum entropy (ME). This method is based on the assumption that all exposures are evenly distributed over the banks relative to the size of a bank's aggregate interbank exposures. However, Mistrulli (2011) and Upper (2011) argue that ME estimates are biased, since they have a tendency to overestimate the severity of contagion and to underestimate the possibility of contagion. Other methods, such as the minimum density method of Anand et al. (2014) and the high concentration method of Drehmann and Tarashev (2013), may offer improvements in this respect. Anand et al. (2014) base their method on the assumption that forming relationships between banks is costly, while Drehmann and Tarashev (2013) propose a method that reconstructs networks with concentrated exposures.

“The Missing Links” study of Anand et al. (2016) is the first study that conducts a horse race between seven reconstruction methods, compared across 25 different global financial networks. In this thesis, I will extend the study of Anand et al. (2016) for one specific network over eleven years: the Dutch interbank network. Almost all techniques that were applied in the horse race are based on the maximum entropy methodology. The main difference between these entropy models is the assumed prior information on the financial network. The methods minimize the relative entropy between the estimated network and the prior network, subject to the aggregate interbank exposures. The prior network is structured such that it includes all known elements and the structure of the network. Each method defines this prior matrix differently. This optimization problem is solved by the iterative RAS algorithm.

In addition to, and in combination with, the entropy models, some methods are based on a fitness model. This model assigns a fitness to each bank, which represents the ability to receive or send linkages to other banks. Each method bases the fitness on a specific (non-)topological characteristic. The fitnesses of two banks translate into the probability that those two banks are connected. Using these probabilities, the methods randomize the structure of linkages. Subsequently, the sizes of the bilateral exposures are calculated using a configuration model or the RAS algorithm.

In the horse race of Anand et al. (2016), the reconstructed networks are not compared in terms of systemic risk but on similarity measures such as the Jensen score and the Hamming distance. These measures capture whether the reconstructed networks assign the same links between banks, and whether the size of a reconstructed exposure corresponds to that in the original network. However, these network statistics and similarity measures point in different directions, so the authors are not able to simply rank the methods.
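To make the link-based comparison concrete, a normalized Hamming distance can be sketched as follows (an illustrative implementation; the function name and the normalization over all possible off-diagonal links are my own choices, not necessarily the exact definition used by Anand et al.):

```python
import numpy as np

def hamming_distance(adj_true, adj_est):
    """Fraction of possible off-diagonal links on which the original
    and reconstructed adjacency matrices disagree (0 = identical)."""
    diff = adj_true != adj_est
    np.fill_diagonal(diff, False)   # self-exposures are excluded
    n = adj_true.shape[0]
    return diff.sum() / (n * (n - 1))
```

For two 3-bank networks that disagree on a single link, the distance is 1/6.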

Unambiguously, interbank networks are reconstructed in order to measure systemic risk. Especially when the estimated networks are used in stress testing, the method should estimate the systemic importance of a bank accurately. Therefore, this thesis focuses on the systemic risk of a reconstructed network relative to the systemic risk of the original network. Intuitively, systemic risk can be measured as the number of defaults resulting from an economic shock. This Default Cascade lets banks default iteratively whenever they cannot absorb the loss incurred on their interbank assets when a counterparty defaults. However, DebtRank (Battiston et al., 2012) additionally takes into account the contagious feedback effects. DebtRank is based on feedback centrality, which accounts for the impact that defaulting counterparties have on the resilience of a bank. DebtRank is defined as the economic value that is affected by an exogenous shock. The performance of a method not only depends on its ability to reproduce the systemic risk in the network; more important is that the measured risk is neither too volatile nor too insensitive to changes in the network compared to the risk in the original network. Also, the systemic risk estimates should be consistent over time.
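The Default Cascade described above can be sketched as follows (a minimal illustration assuming a loss given default of 100% and treating a bank as failed once its losses reach its capital buffer; the function and variable names are my own):

```python
import numpy as np

def default_cascade(X, capital, initial_defaults):
    """Iterate defaults: a bank defaults once its losses on interbank
    assets held against defaulted counterparties reach its capital.
    X[i, j] is the exposure (asset) of bank i to bank j."""
    defaulted = np.zeros(len(capital), dtype=bool)
    defaulted[initial_defaults] = True
    while True:
        losses = X[:, defaulted].sum(axis=1)       # total loss per bank
        newly = (losses >= capital) & ~defaulted   # banks that now fail
        if not newly.any():
            return defaulted
        defaulted |= newly
```

In a 3-bank chain where each bank's exposure exceeds its counterparty's capital, shocking the last bank defaults all three.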

This thesis provides insight into reconstruction methods in two dimensions not yet studied by Anand et al.: (1) can the reconstruction methods assess systemic risk more efficiently than the aggregate interbank exposures only, and (2) are the systemic risk estimates consistent in terms of their time dimension? These problems are addressed using quarterly data on Dutch interbank exposures ranging from January 1998 through December 2008. The frequency of the data makes it possible to estimate the same network at each point in time and to study its stability and consistency.

The comparison of reconstructed Dutch networks in terms of systemic risk as measured by DebtRank and a Default Cascade shows that estimating bilateral exposures can be inefficient. Especially when interbank liabilities are concentrated, the DebtRank of a bank is more correlated with its aggregate liability than with the DebtRank of the reconstructed network. Four out of seven methods estimate systemic risk consistently, and their estimates co-move with the original network. One method (Anand et al., 2014) underestimates systemic risk throughout the whole period, and its correlation with the original network is not as high as for the other methods. The methods of Mastrandrea et al. (2014) and Musmeci et al. (2013) estimate systemic risk most accurately.

The rest of this thesis is organized as follows: Section 2 gives an overview and discussion of relevant literature, Section 3 explains the methodology of each method and the risk metrics, Section 4 describes the data, Section 5 sets out the results, Section 6 tests the results for robustness, and Section 7 concludes.

2 Literature Review

The identification of systemic risk in the financial sector is one of the main concerns of supervisors and regulators. Researchers increasingly turn to estimation of financial networks, as the structure of a network reveals the systemic risks of the financial sector. Systemic risk emerges via both a direct and an indirect contagion channel. The direct channel runs via credit exposures; the indirect channel is the result of consumers' beliefs about a bank's health. The focus of this thesis is on the direct channel because it is at the root of contagion and the resulting domino effects. In this light, regulators and supervisors ask large financial institutions to regularly publish financial figures. However, for reasons of confidentiality, banks usually do not report the bilateral exposures of their individual lending; they only publish the aggregates of those interbank exposures. Instead of using this aggregate information directly to estimate systemic risk, reconstruction methods use it to estimate the bilateral interbank exposures. The reconstruction technique shows similarities with tomography, which is used to reconstruct 3-dimensional objects from 2-dimensional projections. Network theory, originating from the field of information theory, provides useful network reconstruction methods and indicators to analyze the structure of a network. Initially, network theory became known via social network studies, but it has become more prevalent in research on financial networks.

Sheldon and Maurer (1998) were among the first to apply network theory to a financial network. They analyzed the Swiss interbank lending network in order to gain insight into the systemic risk in the Swiss financial sector. The scarcity of data on bilateral exposures urged them to use methods that reconstruct networks based only on aggregates of each individual bank's interbank liabilities and assets. Sheldon and Maurer reconstructed the network using maximum entropy (ME), the leading reconstruction method over the last 20 years. The only inputs this method requires are the aggregate interbank assets and liabilities of each bank. The method then assigns a size to each possible link between banks by dividing the aggregate exposure equally over all other banks, relative to the interbank assets and liabilities of the two banks. This idea corresponds to maximizing the unpredictability of information in the system, i.e. the entropy.

The basic assumption underlying ME, that banks spread their exposures as evenly as possible, does not impose a specific structure on the network. Recent studies agree that empirical analysis of the characteristics of global financial networks shows that this basic assumption is not supported by the data. For an in-depth analysis of the network structure of a national interbank market, see e.g. Boss et al. (2004), Craig and Von Peter (2014) and In 't Veld and Van Lelyveld (2014). Taking these studies together, interbank networks can be characterized by the following points:

- Core-periphery structure: exposures and links are concentrated at a core group of banks.

- Sparseness: only a small percentage of potential links are actual links.

- Tiering: subsets of banks differ in the number and nature of links they have.

- Loyalty: banks form lasting relationships with each other.

- Low clustering coefficient: banks are not clustered together.

- Short average path length: banks are highly connected via indirect links.

Theoretical analysis of interbank networks relates these characteristics to systemic risk. Acemoglu et al. (2013) conclude that interbank networks are robust-yet-fragile. There is thus a sharp distinction between the probability of contagion (the likelihood that a systemic event happens) and the severity of systemic risk (the proportion of the system that is affected by the systemic event). This explains why the literature relating ME to systemic risk is ambiguous as to whether ME under- or overestimates systemic risk.

For each bank that has non-zero aggregate exposures, ME will assign a link to every other bank, meaning that this method tends to create complete networks. This contradicts the fact that interbank markets are usually not complete but rather sparse (Upper, 2011). Mistrulli (2011) concludes that ME-reconstructed networks are biased and typically underestimate the probability of contagion. However, both Nier et al. (2007) and Degryse and Nguyen (2007) show that, comparing ME-formed networks with actual networks, ME overestimates the severity of contagion. The theoretical analysis of Acemoglu et al. (2013) studies both dimensions of contagion and argues that highly connected networks enhance the resilience of the network, but complete networks amplify the severity of contagion. Besides, ME reconstructs the network such that, in relative terms, all banks hold the same portfolio of credit exposures. However, interbank portfolios are usually not perfectly diversified. Therefore ME will underestimate the likelihood of a systemic event due to this unrealistic assumption.

In the wake of the financial crisis of 2008, systemic risk studies increasingly focused on alternatives to ME that account for the aforementioned stylized facts about interbank networks. The study of Anand et al. (2016) compares 7 recently introduced reconstruction methods by applying them to 25 different global markets. Their study focuses on the similarity between the reconstructed networks and the original networks. However, as the main reason for reconstructing networks is to quantify systemic risk, this thesis compares the reconstruction methods in terms of systemic risk.

The High-Concentration method of Drehmann and Tarashev (2013) postulates the core-periphery characteristic of interbank networks. The method is similar to maximum entropy, but the main assumption is now that banks do not spread their exposures as evenly as possible; instead, the exposures are concentrated at specific banks. Drehmann and Tarashev (2013) compare this method with the ME method and conclude that their reconstructed networks have a higher probability of joint defaults of banks, i.e. systemic risk, than the reconstructed networks of ME. Related literature shows that the impact of a core-periphery structure on systemic risk is ambiguous. Degryse and Nguyen (2007) argue that a core-periphery structure reduces systemic risk, especially when the core banks are well capitalised. Nier et al. (2007) conclude that when a core bank is connected with many peripheral banks, systemic risk increases due to the increased reach of contagion.

Baral and Fique (2012) introduced a reconstruction method that is based on the Gumbel bivariate copula. Using the aggregate exposures, the bivariate copula estimates the dependence structure of the bilateral network. The Gumbel copula assumes that the exposures follow a core-periphery structure and that the interbank assets and liabilities are asymmetrically distributed. The asymmetric distribution of exposures, where peripheral banks lend more to the core banks than vice versa, is also demonstrated by Cocco et al. (2009) for the Portuguese interbank market. Baral and Fique (2012) compared their estimated exposures with ME-reconstructed networks and found that their method estimates the exposures more accurately for the core-core, core-periphery and periphery-core relations, but less accurately for the exposures between peripheral banks.

The method of Cimini et al. (2015) accounts for the fact that banks differ in the number of linkages they have: the in- and out-degree. The motivation of the authors is that reconstructed networks based solely on aggregate exposures (strength) are inaccurate. Models that use only the strength of each bank tend to estimate more dense networks, because these models prefer to distribute the exposures over the network. In contrast, Cimini et al. argue that the accuracy of the reconstructed networks improves when both the strengths and degrees are known. The degree distribution of a network has an ambiguous effect on contagion: the higher the degree, the more counterparties can be affected by the default of that bank, while a low degree implies that the whole exposure is placed with a few counterparties, so the loss per counterparty will be larger. The study of Cimini et al. (2015) shows that their reconstructed networks, based on the degree distribution of the system, estimate systemic risk as measured by DebtRank efficiently and close to that of the real network.

The estimation procedure of Mastrandrea et al. (2014) is similar to the method of Cimini et al. (2015), since both methods use topological characteristics to determine the probability that two banks are linked. The method assigns the probability that two banks are linked based on the size of the aggregate exposures. In line with the horse race of Anand et al. (2016), no prior information on the degree distribution is assumed for this method. Mastrandrea et al. applied their method to social and financial networks and argue that their method is able to accurately mimic the clustering and assortativity properties.

Similar to the reconstruction method of Mastrandrea et al., the method of Musmeci, Battiston et al. (2013) assumes that the likelihood of a link between two banks depends on the sizes of these banks. Musmeci et al. assume that the size of a bank corresponds to the ratio of total assets of that bank to the sum of total assets of all banks in the network. This idea corresponds to the disassortativity characteristic of interbank networks: in disassortative networks, small banks tend to connect to larger banks rather than to other small banks. Musmeci et al. reconstruct the inter-country World Trade Web based on the Gross Domestic Product (GDP) of the countries. The study shows that the reconstructed network accurately estimates systemic risk as measured by DebtRank, with an average error of 5% compared to the original network. In addition, the authors argue that the size of a country's GDP corresponds to its individual DebtRank, although the two are not perfectly correlated.

Anand et al. (2014) proposed a reconstruction method that is based on the idea that forming interbank links is costly. Therefore this method produces networks that have minimum density: the minimum number of undirected links relative to the potential number of links. Anand et al. argue that their method overestimates contagion because the exposures are concentrated onto fewer links, which implies that the transmitted losses will be larger and are more likely to exceed the capital buffer. The authors also show that ME underestimates contagion, since the completeness of the reconstructed network allows shocks to propagate easily through the network. Anand et al. (2014) propose that, taking the two methods together, they can serve as the upper and lower bound of contagion. Lorenz and Battiston (2007) argue that the density of a network has a non-monotonic effect on systemic risk: increasing the density reduces systemic risk only up to a specific point, beyond which systemic risk increases.

The horse race of Anand et al. (2016) applies the aforementioned 7 methods (excluding Cimini et al. (2015)) to 25 different networks and compares the reconstructed network statistics. These similarity measures are either link-based or exposure-based. The methods that most accurately reconstruct the links that are also present in the original network are those of Baral and Fique (2012), Drehmann and Tarashev (2013) and ME. These methods tend to create complete networks and therefore maximize the number of true positives. On the other hand, the minimum density method of Anand et al. (2014) minimizes the number of false positives and therefore best identifies the links that are absent in the original network. For the exposure-based similarity measures, Musmeci, Battiston et al. (2013), Baral and Fique (2012) and ME produce the most accurate size estimates. All together, Baral and Fique (2012) and ME are the only methods that perform well on both the exposure-based and the link-based similarity measures.

The focus of the study of Anand et al. (2016) is primarily on the similarity between the reconstructed networks. In order to compare the reconstruction methods in terms of systemic risk, I could use a hypothetical network as a base network. One of the first hypothetical network models was introduced by Erdős and Rényi (1959). They generate a network by sampling from a binary distribution with a given probability that two banks are connected, later known as the Erdős-Rényi probability. Studies of random networks (e.g. Iori et al., 2006) can be easily generalized and applied to different markets. However, constructing hypothetical networks requires assumptions on the network, which makes the results sensitive to those assumptions. In particular, the structure and characteristics of a real interbank network are required to be able to make valid inference. Using real network data addresses this assumption problem and makes it possible to interpret and analyse the outcomes in line with the economic environment. Altogether, for this study mainly the characteristics of the network are needed, and hence the dependence of the outcomes on the quality of the data is limited.
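The Erdős-Rényi generation step can be sketched as follows (a generic illustration for a directed network, not the exact procedure of any cited study):

```python
import numpy as np

def erdos_renyi(n, p, seed=0):
    """Directed Erdős-Rényi network: each possible link between two
    distinct banks exists independently with probability p."""
    rng = np.random.default_rng(seed)
    adj = (rng.random((n, n)) < p).astype(int)
    np.fill_diagonal(adj, 0)   # no self-exposures
    return adj
```

The realized density of a generated network fluctuates around the chosen probability p.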

3 Methodology

All reconstruction methods estimate the size of each bilateral exposure between all banks in the network based on the aggregate interbank exposures. At time t, the dependence structure of the network to be estimated can be characterized by the matrix X_t:

X_t = ( x_{ij,t} ), i, j = 1, ..., N_t,

where the elements of the matrix, x_{ij,t}, represent the directional bilateral exposure of bank i to bank j at period t. The first two paragraphs of this chapter describe the methodology of each reconstruction method that estimates the elements of the matrix X_t. The inputs of all methods are the aggregates of the interbank assets and of the interbank liabilities of each bank i, respectively A_{i,t} and L_{i,t}:

A_{i,t} = \sum_{j=1}^{N_t} x_{ij,t},    (1)

L_{i,t} = \sum_{j=1}^{N_t} x_{ji,t}.    (2)
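In code, these aggregates are simply the row and column sums of the exposure matrix (a toy 3-bank example with made-up numbers):

```python
import numpy as np

# Hypothetical exposure matrix X_t: X[i, j] = exposure of bank i to bank j
X = np.array([[0., 4., 6.],
              [2., 0., 3.],
              [5., 1., 0.]])

A = X.sum(axis=1)   # aggregate interbank assets A_{i,t} (row sums)
L = X.sum(axis=0)   # aggregate interbank liabilities L_{i,t} (column sums)
```

Note that the total interbank assets and total interbank liabilities in the system coincide by construction.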

If a reconstruction method allows for more prior information than the aggregate interbank exposures only, this information should be incorporated into the prior matrix. The prior matrix contains all prior knowledge on bilateral exposures, the structure of the network and other relevant information, and serves as the starting point for the estimations. The procedure is illustrated in Figure 1, where the square at step one represents the exposure matrix X_t and the orange bars represent the aggregate interbank exposures.

The names of the methods, in line with the definitions of Anand et al. (2016), are listed in Table 1. Most methods leave it up to the user to adjust the prior information. So, in order to be able to relate the results to the horse race of Anand et al. (2016), the same assumptions are made here. These assumptions are specified in the methodology of each method.

Figure 1: Process of comparison of the methods. (1): The original network, (2): Calculate the aggregate exposures, (3): Remove the original network, (4): Reconstruct the network with each method and (5): Compare the original network with the reconstructed network. Source: Anand et al. (2016).

Name   Description                           Author
Maxe   Maximum Entropy                       Upper (2011)
Bara   Bivariate Copula                      Baral and Fique (2012)
Dreh   High Concentration Matrices           Drehmann and Tarashev (2013)
Batt   Bootstrap Fitness Model               Musmeci et al. (2013)
Anan   Minimum Density                       Anand et al. (2014)
Mast2  Degree Weighted Configuration Model   Mastrandrea et al. (2014)
Cimi   Fitness Induced Configuration Model   Cimini et al. (2015)

Table 1: Reconstruction method terminology

The last paragraph of this chapter focuses on the methodology of the systemic risk measures. The Default Cascade and DebtRank are both network-based risk metrics and use the bilateral exposures to estimate the resilience of the network.

3.1 Reconstruction methods

3.1.1 Maximum Entropy - Maxe

The most frequently used reconstruction method is maximum entropy. This method forms the basis for the other four RAS methods considered in this study. The term entropy originates from information theory and can be defined as the unpredictability of information in the system. To illustrate this definition, consider an example from cryptography. When deciphering an encrypted text, the different characters carry different amounts of information. The character 'x' is unexpected, as only few words contain this character. So the appearance of an 'x' rules out many possible words and therefore carries much information. In contrast, a frequently used character such as 'e' or 'n' carries less information, since it leaves many possibilities open. Hence, the letter 'x' has larger entropy than the letter 'e' because it concerns unpredictable information.

In the case of financial networks, consider the bilateral exposures to be estimated, x_{ij}, as the encrypted text and the aggregate interbank assets and liabilities as the carriers of information. The question is how to divide these liabilities and assets over the remaining banks. If the total amount of interbank liabilities is assigned to one counterparty, then the remaining exposures are perfectly predictable, i.e. the entropy is minimized. In contrast, if the interbank exposures are spread as evenly as possible relative to the interbank exposures of the counterparties, the entropy is maximized, since the remaining exposures are still to be divided over the rest of the counterparties. The unpredictability in the system is then maximized because no specific structure is imposed on the network.

The interbank assets and liabilities can be seen as realisations of their marginal distributions f_A(A_t) and f_L(L_t). Then the bilateral exposure x_{ij,t} is the realisation of the joint distribution f(A_t, L_t). Wells (2004) shows that maximizing the entropy using no further information then simply results in the exposure estimate at period t:

x_{ij,t}^{ME} = \frac{A_{i,t} L_{j,t}}{\sum_k A_{k,t}}.

This implies that bank i has an exposure to itself, x_{ii,t}^{ME} > 0, whenever its aggregate exposures are greater than zero. To rule out the possibility that banks have exposures assigned to themselves, Wells (2004) modifies the problem, which the author refers to as cross entropy. The maximum entropy matrix is redefined as \tilde{X}_t by imposing the restriction that self-exposures are zero:

\tilde{x}_{ij,t} = \begin{cases} 0 & \text{if } i = j, \\ x_{ij,t}^{ME} & \text{otherwise}. \end{cases}

But as the aggregate exposure constraints (1) and (2) are not satisfied now, the cross entropy finds the matrix that minimizes the deviation from \tilde{X}_t while satisfying the aggregate exposures. The optimization problem of Maxe then becomes:

\min_{x_{ij,t} \ge 0} \sum_{i=1}^{N_t} \sum_{j=1}^{N_t} x_{ij,t} \ln\!\left( \frac{x_{ij,t}}{\tilde{x}_{ij,t}} \right) \quad \text{subject to (1) and (2),}

with the convention that x_{ij,t} = 0 whenever \tilde{x}_{ij,t} = 0.

The exposure matrix has N_t² elements, of which the N_t diagonal elements are fixed at zero; together with the 2N_t − 1 independent aggregate constraints, this leaves N_t² − 3N_t + 1 degrees of freedom. The RAS algorithm provides solutions to this underidentified optimization problem. RAS uses the aggregate liabilities constraint and the aggregate assets constraint to iteratively converge to the optimal exposure matrix. The starting value of the algorithm is the prior matrix, and each iteration s+1 first scales the rows to satisfy the assets constraint and then scales the columns to satisfy the liabilities constraint:

x_{ij,t}^{s+1/2} = x_{ij,t}^{s} \, \frac{A_{i,t}}{\sum_k x_{ik,t}^{s}}, \qquad x_{ij,t}^{s+1} = x_{ij,t}^{s+1/2} \, \frac{L_{j,t}}{\sum_k x_{kj,t}^{s+1/2}}.

For this thesis, the number of iterations is set at 10,000. See Miller and Blair (1985) and Blien and Graef (1991) for a more detailed description of the RAS algorithm.
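The RAS iteration with the Maxe prior can be sketched as follows (an illustrative implementation; the convergence check and variable names are my own, and the zero diagonal of the prior encodes the cross-entropy restriction on self-exposures):

```python
import numpy as np

def ras(prior, A, L, iters=10_000, tol=1e-12):
    """Alternately rescale rows to the asset totals A and columns to
    the liability totals L, starting from the prior matrix."""
    X = prior.astype(float).copy()
    for _ in range(iters):
        r = X.sum(axis=1)
        X *= np.divide(A, r, out=np.zeros_like(r), where=r > 0)[:, None]
        c = X.sum(axis=0)
        X *= np.divide(L, c, out=np.zeros_like(c), where=c > 0)[None, :]
        if (np.abs(X.sum(axis=1) - A).max() < tol and
                np.abs(X.sum(axis=0) - L).max() < tol):
            break
    return X

# Maxe prior: proportional to A_i * L_j, with self-exposures excluded
A = np.array([10., 5., 5.])
L = np.array([8., 6., 6.])
prior = np.outer(A, L)
np.fill_diagonal(prior, 0.0)
X = ras(prior, A, L)
```

The returned matrix satisfies the aggregate constraints (1) and (2) while keeping a zero diagonal.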

The resulting exposure matrix divides the exposures as evenly as possible over the whole system. This implies that all banks that have non-zero aggregate interbank exposures are connected to each other, which is defined as a complete network. Anand et al. (2016) show that Maxe, together with Bara and Dreh, reconstructs on average the most dense networks. This implies that Maxe is able to identify links that are also present in the original network, but does not identify the links that are absent in the original network.

3.1.2 Dreh

The method Dreh of Drehmann and Tarashev (2013) is based on the same idea as maximum entropy, but imposes a specific structure on the prior matrix. In this case, 50 random matrices R_t are generated such that the elements r_{ij,t} are drawn from the uniform distribution over the interval [0, 2A_{i,t}L_{j,t}]. Thus the expected value of the random matrices is equal to the Maxe matrix. Since the random draws allow for an unequal distribution of the exposures over the banks, the resulting networks consist of a core with larger exposures and a periphery with smaller exposures. These randomized matrices are therefore referred to as High Concentration matrices. Subsequently, each of these 50 high concentration matrices R_t is used as the prior matrix for the RAS algorithm, which minimizes the relative entropy.

Dreh produces a high-density network or, if the aggregate exposures of all banks are greater than zero, even a complete network, just as the method Maxe. Dreh creates networks that are more concentrated at a group of banks, which is defined as the core. Hence, the estimated networks have a dense core and a sparse periphery. However, Anand et al. (2016) conclude that Anan and Mast2 create even more concentrated networks than Dreh (and Maxe and Batt). The authors show that Dreh is successful in assigning links between banks that are present in the original network: true positives. However, this also means that the method produces false positives (reconstructing a link that is not present in the original network) because it tends to create dense networks.
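The randomized priors behind Dreh can be illustrated as follows. This sketch only covers the uniform-draw step that gives the priors their unequal spread; the explicit core-periphery arrangement of the draws and the subsequent RAS step are omitted, and all figures are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

def dreh_priors(a, l, n_draws=50):
    """Draw random prior matrices whose elements are uniform on
    [0, 2*a_i*l_j], so that each prior equals the maximum-entropy
    outer product in expectation. Diagonals are zeroed out."""
    base = np.outer(a, l)
    priors = rng.uniform(0.0, 2.0 * base, size=(n_draws,) + base.shape)
    for p in priors:
        np.fill_diagonal(p, 0.0)
    return priors

a = np.array([30.0, 20.0, 10.0])
l = np.array([25.0, 15.0, 20.0])
priors = dreh_priors(a, l)   # 50 candidate priors for the RAS algorithm
```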

3.1.3 Batt

The method Batt of Musmeci, Battiston et al. (2013) is a combination of the entropy model and the fitness model. The two-step procedure starts with a fitness model to determine the probability of a link between two banks and subsequently uses this as input for the RAS algorithm. In the first step, each bank is assigned a fitness score. The fitness function can be any function of a non-topological characteristic of the network. In line with Anand et al. (2016), the fitness score of bank i at period t is here defined as the ratio of its total (interbank and non-interbank) assets to the sum of the total assets of all banks: fi,t = TAi,t / Σj TAj,t.

This fitness score translates into a probability that bank i and bank j are linked. If bank i has fitness fi and bank j has fitness fj, then the probability that the banks are connected at period t equals

pij,t = zt fi fj / (1 + zt fi fj),

where the parameter zt is needed to let pij,t be defined on the interval [0,1]. The parameter zt is obtained by bootstrapping, given that the expected number of links must equal the total number of links. From the probabilities pij,t, 50 binary matrices Ct (the adjacency matrices) are generated, where a one corresponds to a link and a zero to no link.

To quantify the size of each link, this binary matrix is multiplied elementwise by the matrix containing the products of interbank assets of bank i and interbank liabilities of bank j, the output matrix of Maxe. Then the usual RAS algorithm uses this matrix as the prior matrix.

The major improvement over Maxe found by Anand et al. (2016) is that the estimated network performs best in terms of estimating the size of the exposures. The authors measured the similarity between the exposure sizes of the original network and the reconstructed Batt network with the Jensen metric. In terms of link prediction, concentration and borrower dependency, the three methods Maxe, Dreh and Batt perform similarly.
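The first, fitness-based step of Batt can be sketched as follows. The figures are toy values; the functional form p = z f_i f_j / (1 + z f_i f_j) follows the fitness-model literature, and a bisection calibration of z (an assumption of this sketch) replaces the bootstrap described above.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness_link_probs(f, n_links, lo=1e-9, hi=1e12):
    """Calibrate z so that the expected number of (directed) links under
    p_ij = z*f_i*f_j / (1 + z*f_i*f_j) matches `n_links`, then return
    the probability matrix. Bisection on a log scale; a sketch only."""
    ff = np.outer(f, f)
    off = ~np.eye(len(f), dtype=bool)

    def expected(z):
        return (z * ff / (1.0 + z * ff))[off].sum()

    for _ in range(200):
        mid = np.sqrt(lo * hi)
        if expected(mid) < n_links:
            lo = mid
        else:
            hi = mid
    z = np.sqrt(lo * hi)
    p = z * ff / (1.0 + z * ff)
    np.fill_diagonal(p, 0.0)
    return p

total_assets = np.array([220.0, 180.0, 90.0, 60.0, 50.0])  # toy data
f = total_assets / total_assets.sum()                      # fitness scores
P = fitness_link_probs(f, n_links=12)
C = (rng.random(P.shape) < P).astype(int)  # one sampled adjacency matrix
```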


3.1.4 Anan

The Anan method of Anand et al. (2014) combines information theory with economic arguments to estimate bilateral exposures. The method is based on the argument that forming links is costly. Therefore the method minimizes the number of links necessary to satisfy the aggregate interbank exposures, by minimizing the following objective function at each period t:

Here the first term sums the unit cost per assigned link over the system. The second term is a penalty for deviations of the assigned exposures from the observed aggregate exposures, where each deviation is weighted by the inverse of the corresponding aggregate exposure: the deviation of the aggregate assets is weighted by the inverse of Ai,t and the deviation of the interbank liabilities by the inverse of Li,t. The deviations from the aggregate interbank assets and aggregate interbank liabilities are respectively given by

ADi,t = Ai,t - Σj xij,t   and   LDi,t = Li,t - Σj xji,t,

where xij,t is the assigned exposure of bank i to bank j. ADi,t measures the deviation of bank i's total asset exposure from the observed aggregate exposure, and LDi,t the deviation of bank i's liabilities from the aggregate exposure it needs to satisfy. ADi,t and LDi,t can be seen as the current surplus and the current deficit, respectively.

Assigning costs to forming interbank linkages leads to sparse reconstructed networks. In addition to sparseness, Anand et al. assume that interbank networks are disassortative: small banks tend to satisfy their lending and borrowing needs by connecting to larger banks that have enough capital to do so. Incorporating this disassortativity into the optimization problem leads to the introduction of the probability pij,t. This probability that bank i and j are connected is proportional to the relative size of the banks, where size is measured by the current surplus and deficit. These probabilities are used at each step s of the iterative procedure to choose a link and load the maximum possible exposure, given the current surplus and deficit, onto this link. The starting value of the objective, V(0), is equal to zero. Subsequently, if the objective value V(s) at iteration s is higher than V(s-1), the chosen link is assigned; otherwise, the chosen link is rejected.


Clearly, Anan's reconstructed networks are the opposite of Maxe's networks, since they are designed to contain the minimum number of links while Maxe estimates maximum-density networks. The horse race of Anand et al. (2016) shows that the minimum-density networks of Anan correctly identify the links that are absent in the original network. This result is mainly driven by the construction of the similarity measures, which reward correctly reproduced absent links (true negatives), and these are clearly maximized by Anan.
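The intuition behind the minimum-density idea can be shown with a deliberately simplified greedy loader. This deterministic sketch replaces the probabilistic link selection and the accept/reject step of Anand et al. (2014); it only illustrates why matching the aggregates with as few links as possible yields sparse networks. All figures are toy values.

```python
import numpy as np

def greedy_min_density(a, l):
    """Repeatedly pick the bank with the largest remaining asset surplus
    and the bank with the largest remaining liability deficit, and load
    the largest feasible exposure onto that link. A deterministic,
    simplified stand-in for the probabilistic procedure of Anand et al.
    (2014); assumes a feasible, non-degenerate instance."""
    ad, ld = a.astype(float).copy(), l.astype(float).copy()
    x = np.zeros((len(a), len(l)))
    while ad.sum() > 1e-9:
        i, j = int(np.argmax(ad)), int(np.argmax(ld))
        if i == j:                       # no self-lending: take runner-up
            j = int(np.argsort(ld)[-2])
        load = min(ad[i], ld[j])
        x[i, j] += load
        ad[i] -= load
        ld[j] -= load
    return x

a = np.array([30.0, 20.0, 10.0])
l = np.array([25.0, 15.0, 20.0])
X = greedy_min_density(a, l)   # matches both margins with few links
```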

3.1.5 Bara

The reconstruction method Bara is based on bivariate Gumbel copulas. Copulas are multivariate distribution functions whose marginal distributions are uniform. Hence, copulas can be used to express the dependence structure between variables when only the marginal distributions of those variables are known. In this case, the marginal distributions are those of the aggregate interbank exposures and the unknown dependence structure corresponds to the bilateral exposures.

First, the data on aggregate exposures are transformed into uniform distributions. The marginal cumulative distribution functions (cdf's) of the aggregate liabilities FL(Lt) and aggregate assets FA(At) are estimated using a normal kernel smoothing function. Subsequently, these cdf's are used to fit a copula to the data. The choice of the underlying copula depends on the prior information and assumptions on the structure of the actual network. The Gumbel copula assumes that the distribution of exposures is heavy-tailed, such that exposures are unequally distributed or concentrated, and asymmetric, such that the concentration of assets is lower than that of liabilities. The Gumbel copula function produces the probability that two banks are connected and is defined as

C(u, v) = exp( -[ (-ln u)^θ + (-ln v)^θ ]^(1/θ) ),

where the dependence parameter θ >= 1 is estimated using maximum likelihood. The copula probabilities are then used as the prior matrix Q for the RAS algorithm in order to rescale the weights such that the exposures satisfy the aggregate exposures.

The RAS algorithm results in dense reconstructed networks. The high density explains why Bara belongs to the best true-link performers, those that correctly predict the links that are present in the original network, in the horse race of Anand et al. (2016). The similarity measure Cosine, the cosine of the angle between the original network and the reconstructed network, is highest for Bara out of all methods in the horse race.
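The Gumbel copula itself is straightforward to evaluate. In the sketch below, rank-based pseudo-cdf values stand in for the kernel-smoothed marginals, and θ = 2 is an arbitrary illustrative value rather than a maximum-likelihood estimate; all figures are toy values.

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """Bivariate Gumbel copula
    C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta));
    theta = 1 gives independence (C = u*v), larger theta gives
    stronger upper-tail dependence."""
    return np.exp(-(((-np.log(u)) ** theta
                     + (-np.log(v)) ** theta) ** (1.0 / theta)))

a = np.array([30.0, 20.0, 10.0])
l = np.array([25.0, 15.0, 20.0])
# Mid-rank pseudo-observations in (0, 1) stand in for the
# kernel-smoothed marginal cdf values of the thesis.
u = a.argsort().argsort() / len(a) + 1.0 / (2 * len(a))
v = l.argsort().argsort() / len(l) + 1.0 / (2 * len(l))
theta = 2.0                    # illustrative; the thesis fits theta by ML
Q = gumbel_copula(u[:, None], v[None, :], theta)   # prior matrix for RAS
np.fill_diagonal(Q, 0.0)
```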


3.1.6 Cimi

The method of Cimini et al. (2015) is a configuration model (CM) that reconstructs networks based on the number of links in the original network. First, the fitness model is applied to determine the probabilities that any two nodes are connected; then the CM assigns the weights (exposures) to the links such that the total number of links and the aggregate exposures are satisfied. Similar to all other methods, Cimini et al. assume the economic properties of each node to be known: the total interbank liabilities (in-strength) and the total interbank assets (out-strength). In addition, the number of incoming links (the in-degree) and the number of outgoing links (the out-degree) are assumed to be known for all nodes. Cimi has not yet been considered in the horse race of Anand et al. (2016). The procedure is similar to that of Batt, but now the fitness of a bank is determined not by its size but by its degree. The in- and out-degree of bank i count its number of non-zero incoming and outgoing exposures, respectively (using the convention 0^0 = 0 when the degree is written as a sum of exposures raised to the power zero).

The first step is the estimation of the probability that two banks are linked. The probability that bank i and j are connected depends on the out-strength Ai,t and in-strength Lj,t of the banks:

pij,t = zt Ai,t Lj,t / (1 + zt Ai,t Lj,t),

where zt is an unknown parameter needed to let pij,t be defined on the interval [0,1]. The value of zt is chosen such that the expected number of links in the reconstructed network matches the observed number of links, i.e. such that the expected in- and out-degrees match the observed degrees; the optimal zt is the value for which this degree constraint is maximally satisfied. Substituting this value of zt into the expression for pij,t, 50 binary adjacency matrices Ct are drawn from the probability matrix Pt containing the probabilities pij,t.

The second step assigns weights to the binary matrix. The total exposure is divided over the system relative to the aggregate exposures (strengths). Rewriting the aggregate exposures constraints (1) and (2) leads to the following expression for the size of the exposure of bank i to bank j:

wij,t = cij,t Ai,t Lj,t / (Wt pij,t),

where Wt is the total interbank exposure in the system and the elements of the binary matrix cij,t equal 1 if bank i and j are connected and 0 otherwise. Dividing by pij,t ensures that the aggregate constraints hold in expectation.

The reconstructed exposures of the degree CM of Cimini et al. do not have to add up to the observed aggregate exposures because the existence of a link is random and there is no rescaling of the weights as in the RAS-methods. Anand et al. (2016) did not include Cimi in their horse race, but Cimini et al. (2015) tested their method themselves by comparing network statistics of the reconstructed network with the original network. Cimini et al. (2015) conclude that the degree distribution is similar to that of the original network. Also the average shortest path length, the minimum number of links needed to connect two banks, is close to the original network's shortest path length. According to Cimini et al., link reciprocity, the extent to which pairs of banks tend to form mutual connections, is more difficult to reconstruct using Cimi's method.
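The weight-assignment step can be sketched as follows. The density-corrected form w_ij = c_ij A_i L_j / (W p_ij), with W the total interbank exposure, follows Cimini et al. (2015); the uniform link probability of 0.8 and the three-bank aggregates are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def cimi_weights(a, l, p, rng):
    """Sample one adjacency matrix from the link probabilities `p` and
    assign w_ij = c_ij * a_i * l_j / (W * p_ij), the density-corrected
    gravity form: the aggregate constraints then hold in expectation,
    though not draw by draw."""
    w_total = a.sum()
    c = (rng.random(p.shape) < p).astype(float)
    with np.errstate(divide="ignore", invalid="ignore"):
        w = np.where(p > 0, c * np.outer(a, l) / (w_total * p), 0.0)
    return w

a = np.array([30.0, 20.0, 10.0])
l = np.array([25.0, 15.0, 20.0])
p = np.full((3, 3), 0.8)        # illustrative uniform link probability
np.fill_diagonal(p, 0.0)
W = cimi_weights(a, l, p, rng)
```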

3.1.7 Mast2

The method of Mastrandrea et al. (2014) is an enhanced configuration model that uses the strength of banks to determine the probability that two banks are linked, and reconstructs an ensemble of networks given these strengths. The in- and out-strength of bank i, Li,t and Ai,t, are respectively defined as the sum over its interbank liabilities and interbank assets.

The first step is to estimate the probability that two banks are connected. The configuration model maximizes the likelihood of the probability distribution given the aggregate interbank liabilities and interbank assets. The values of the fitnesses xt and yt for which the likelihood function is maximized are then used to calculate the probability that two banks are connected:

pij,t = xi,t yj,t / (1 + xi,t yj,t).

These probabilities are subsequently used to sample the size of the exposures. The weights wij,t are drawn from the geometric distribution with success probability (1 - pij,t), so the exposure of bank i to bank j equals the number of failures before the first success, given a success probability of 1 - pij,t.


This method minimizes the distance between the reconstructed aggregate exposures and the observed aggregate exposures, such that on average the aggregate exposures are met; for each individual network, however, the exposures do not necessarily add up to the observed totals. Anand et al. (2016) showed that Mast2 reconstructs sparser networks that are moderately concentrated. The horse race also shows that Mast2 correctly identifies the links that are absent in the original network. According to the horse race of Anand et al. (2016), Mast2 estimates the existence of a link more accurately than the size of this link.
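The geometric sampling step can be sketched as follows. This is a minimal illustration with a made-up probability matrix; note that NumPy's geometric distribution counts trials rather than failures, hence the correction in the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

def mast2_sample(p, rng):
    """Draw integer weights from a geometric distribution with success
    probability (1 - p_ij): the number of failures before the first
    success, so E[w_ij] = p_ij / (1 - p_ij). NumPy's `geometric` counts
    trials up to and including the success, hence the `- 1`."""
    q = np.clip(1.0 - p, 1e-12, 1.0)
    w = rng.geometric(q) - 1
    w[p <= 0] = 0                # unlinked pairs always get zero weight
    return w

p = np.array([[0.0, 0.9, 0.5],
              [0.6, 0.0, 0.3],
              [0.2, 0.7, 0.0]])   # made-up connection probabilities
W = mast2_sample(p, rng)
```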

3.2 Illustration 5 bank network

In order to illustrate the characteristics of the reconstructed networks, all methods are applied to a simple random network consisting of five banks. This network is created by assigning links between banks through random draws from the Bernoulli distribution with a 60% probability of a link, in order to mimic the sparseness of interbank networks. Furthermore, without imposing any structure on the network, the weight given the existence of a link is randomly drawn from the uniform distribution over the interval (0, 1000). For Batt the total assets of a bank are used to determine the probability that banks are linked, so these data are sampled from the uniform distribution over the interval (18,000, 22,000). The original and reconstructed networks are visualized in Figure 2. For the methods Batt, Cimi, Dreh and Mast2, one reconstruction is randomly chosen from the ensemble of 50 networks. Each bank is labeled with a number, and the weight of the exposure is represented by the width of the outgoing arrow (interbank asset) and the ingoing arrow (interbank liability).

Figure 2: Network graphs for the original network and the reconstructed networks. Densities: (a) Orig 70%, (b) Anan 45%, (c) Bara 100%, (d) Batt 100%, (e) Cimi 75%, (f) Dreh 100%, (g) Mast2 100%, (h) Maxe 100%.

(21)

21

The original network has a density of 70%, slightly higher than the 60% probability that two banks are connected. The Herfindahl-Hirschman concentration index (details on its calculation in Chapter 4) indicates a moderate concentration for both the asset and the liability exposures: the exposures are not evenly distributed but are concentrated at several banks. On average, each bank is linked to 2.8 other banks in the network (out of 4 potential counterparties). Almost 40% of the banks are classified as belonging to the core. Bank 1 holds the largest amount of interbank assets and bank 2 the largest amount of interbank liabilities.

The largest difference in density is clearly found for the network of Anan. The method minimizes the number of undirected linkages and hence underestimates the density by 25 percentage points. Because of Cimi's assumption that the number of links is known, the density of this reconstruction corresponds best to the original density. Cimi's network is not complete because the adjacency matrix is based on the observed number of links. Bara, Mast2, Dreh, Batt and Maxe all reconstruct complete networks. The main reason for the completeness of Bara, Dreh, Batt and Maxe is the RAS algorithm, which distributes the exposures over the network and then adjusts the exposures by minimizing the deviation from the prior matrix. Mast2 also reconstructs a complete network, which results from the dependence of the link probability on the aggregate exposures: the original network is not highly concentrated, so the probability that two banks are connected is never close to zero, and therefore all links are assigned a positive weight.

The average number of links per bank in the original network equals 2.8. Anan assigns on average 1.8 links to each bank and Cimi 3 links per bank. Clearly, the full-density networks all have 4 links per bank. The concentration index indicates that all methods except Anan reconstruct networks that are less concentrated than the original network. The methods that do not use the RAS algorithm (Anan, Cimi and Mast2) tend not to reconstruct complete networks and therefore estimate the original concentration most accurately.

The maximum bilateral exposures of the original network are those from bank 1 and bank 5 to bank 2. So the banks with the largest aggregate exposures also have the largest bilateral exposures. This means that the RAS methods should easily identify this relationship, especially in this case with only 5 banks in the network. Indeed, the methods Batt, Bara, Dreh and Maxe identify this exposure as the maximum exposure. Anan also reproduces the large exposure from bank 1 to bank 2, but because the iterative method has already filled up bank 2's liabilities, it does not assign the large exposure from bank 5 to bank 2.


3.3 Systemic risk measures

Risk in an interbank financial network can be categorized into idiosyncratic risk, the risk that affects only one specific bank, and systemic risk, that affects the system as a whole. The latter is of great importance as it comprises the risk that the default of one institution causes not only the interbank system to collapse, but the real economy as well. Despite the abundance of research on systemic risk, so far no clear definition of systemic risk exists. Bisias et al. (2012) give an extensive overview of systemic risk analytics in order to quantify the systemic importance of a bank. Summing up, systemically important banks can be characterized by the three following criteria:

 The default of the bank has a great economic impact on the other banks in the system;

 the bank's default affects a large proportion of the banks in the system;

 the domino effect resulting from the bank's default destabilizes the real economy.

The remainder of this section outlines two frequently used systemic risk measures that take these characteristics into account: the Default Cascade simulation and DebtRank.

3.3.1 Default Cascade

In order to estimate the impact of the default of a bank on the financial system, the resulting Default Cascade can be computed iteratively. The idea follows the sequential default algorithm of Furfine (2003). To initialize the Default Cascade, one bank defaults and can therefore not repay its obligations to the counterparty banks within the network. The remaining liquidity is then divided over the creditors. Hence, the counterparties of this defaulted bank have to absorb the loss, the asset exposure to that bank minus the returned liquidity, with their Tier-1 capital. The percentage of the interbank asset that counterparties lose as a result of a defaulting bank is defined as the loss rate. This contagion mechanism is continued until no further banks default. In order to determine the systemic importance of a bank, this algorithm can be applied to any bank. The resulting number of defaulted banks relative to the total number of banks in the network indicates the impact this bank has on the rest of the interbank network. The systemic risk in the whole system is then the average of this defaulted-bank ratio over all banks. The set of defaulted banks resulting from the default of bank l is defined recursively at each iterative step as

Dl,s+1 = Dl,s ∪ { j ∈ S : λ Σi∈Dl,s xji > cj },

where S = {1,...,N} is the set of all banks within the system, λ is the loss rate, xji is the exposure of bank j to bank i, and cj is the Tier-1 capital of bank j. Dl,0 equals {l}, the initially defaulted bank, and the cascade stops as soon as the remaining counterparties of the defaulted banks can absorb the loss. The Default Cascade of a bank is quantified as the proportion of active banks that fail as a result of the initial default, (|Dl,∞| - 1) / (N - 1).

The Default Cascade represents the severity of systemic risk, given that a bank defaults.
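The cascade can be sketched as follows: a minimal Python implementation of a Furfine-style recursion, where the three-bank exposures and capital figures are toy values.

```python
import numpy as np

def default_cascade(X, capital, start, loss_rate=0.5):
    """Furfine-style sequential default. X[i, j] is bank i's asset
    exposure to bank j; a bank defaults once `loss_rate` times its total
    exposure to already-defaulted banks exceeds its Tier-1 capital.
    Returns the fraction of the *other* banks that end up defaulting."""
    n = len(capital)
    defaulted = np.zeros(n, dtype=bool)
    defaulted[start] = True
    while True:
        losses = loss_rate * X[:, defaulted].sum(axis=1)
        new = (losses > capital) & ~defaulted
        if not new.any():
            break
        defaulted |= new
    return (defaulted.sum() - 1) / (n - 1)

# Toy 3-bank chain: bank 1's large exposure to bank 0 wipes it out,
# which in turn brings down bank 2 through its exposure to bank 1.
X = np.array([[0.0, 5.0, 0.0],
              [40.0, 0.0, 5.0],
              [0.0, 30.0, 0.0]])
capital = np.array([10.0, 10.0, 10.0])
ratio = default_cascade(X, capital, start=0)   # → 1.0: full cascade
```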

3.3.2 DebtRank

The Default Cascade is the most intuitive contagion process; however, the metric is clearly a simplification of reality. The impact of a shock in the simulation is binary: either the bank defaults and its exposure is fully diminished, or the bank survives and experiences no further downgrades. Usually, when a bank faces economic shocks this leads to funding liquidity problems and the bank's assets devalue. A commonly used measure that does account for such deterioration effects is DebtRank, developed by Battiston et al. (2012). DebtRank is based on feedback centrality: a measure that assigns higher importance to nodes that are surrounded by systemically important nodes. Feedback centrality assumes that the impact of a default has an infinite number of reverberations from the initially defaulted bank to its counterparties and vice versa. The DebtRank Ri of node i is a number measuring the fraction of the total economic value in the network that is potentially affected by the default of that node. Because DebtRank captures not only outright defaults but also the distress that makes counterparties more likely to default, it is referred to as a stress indicator.

Similar to the Default Cascade, DebtRank defines the Tier-1 capital of bank j as the buffer Ej that absorbs liquidity shocks. DebtRank measures the impact Wij of the defaulted bank i on its counterparty j by the ratio between the loss and j's buffer:

Wij = min(1, xji / Ej),

so that the impact is 1 when the loss xji exceeds the loss-absorbing capital. The impact is weighted by the counterparty's relative economic value vj, obtained by multiplying the impact by this economic value. Taken together, the economic value of the direct impact of bank i is equal to

Ii = Σj Wij vj.

The impact metric calculates the contagion effect for an infinite number of rounds, where the impact diminishes at an exponential pace determined by the dampening factor. However, if bank i defaults and Wij > 0 and Wji > 0, the contagion effect reverberates back to bank i, and this mechanism would continue for an infinite number of reverberations. The DebtRank metric solves this problem by allowing a contagion round to reverberate only once. Each bank is characterized by a state variable si and a continuous variable hi. The state variable indicates whether the bank is distressed (D), not distressed (U) or inactive (I). The continuous variable hi indicates the impact the bank has experienced on a scale from 0 to 1, where 0 indicates no impact and 1 corresponds to a default. An initial default of bank i corresponds to starting values hi(1) = 1 and si(1) = D. The contagion dynamics can then be characterized in terms of the state and continuous variables:

hj(t+1) = min{ 1, hj(t) + Σi: si(t)=D Wij hi(t) },
sj(t+1) = D if hj(t+1) > 0 and sj(t) = U, and sj(t+1) = I if sj(t) = D.

The state variable addresses the problem mentioned before: if a bank is distressed at round t, it becomes inactive afterwards and cannot propagate losses anymore. The DebtRank of bank i then equals

Ri = Σj hj(T) vj - Σj hj(1) vj,

so that it measures the proportion of the systemic impact of bank i on the rest of the network.
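The once-only propagation can be sketched as follows: a minimal implementation of the dynamics described above, where the three-bank figures are toy values and v_j is taken as each bank's share of total interbank assets.

```python
import numpy as np

def debt_rank(X, capital, i0):
    """DebtRank of bank i0, following the once-only propagation rule.
    X[j, i] is bank j's asset exposure to bank i, so the impact of i on
    j is W[i, j] = min(1, X[j, i] / capital[j]). Distressed banks
    propagate once and then turn inactive; v weighs banks by their
    share of the total interbank assets."""
    n = len(capital)
    W = np.minimum(1.0, X.T / capital[None, :])
    np.fill_diagonal(W, 0.0)
    v = X.sum(axis=1) / X.sum()
    h = np.zeros(n)
    h[i0] = 1.0
    state = np.full(n, "U", dtype=object)
    state[i0] = "D"
    while (state == "D").any():
        distressed = state == "D"
        h_new = np.minimum(1.0, h + h[distressed] @ W[distressed, :])
        state[distressed] = "I"
        state[(h_new > 0) & (state == "U")] = "D"
        h = h_new
    return h @ v - v[i0]       # impact net of the initial shock

X = np.array([[0.0, 5.0, 0.0],   # toy exposures, X[j, i] = j lends to i
              [40.0, 0.0, 5.0],
              [0.0, 30.0, 0.0]])
capital = np.array([10.0, 10.0, 10.0])
r = debt_rank(X, capital, i0=0)
```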

4 Data

The fundamental problem of interbank network research is that the availability of data on bilateral exposures is limited, as banks consider this confidential and private information. The Dutch banking supervisor, however, requires banks active in the Netherlands to report on their financial activities. Some of these reports are publicly available, but for reasons of confidentiality the majority of the data is reported only to De Nederlandsche Bank (DNB), the Dutch supervisor. Therefore, the dataset used for this thesis is confidential and cannot be viewed outside DNB. The reports include the large interbank exposures, consolidated financial accounts and solvency figures. Using this information, Van Lelyveld and Liedorp (2006) estimate the Dutch interbank network, yielding a dataset consisting of both bilateral interbank assets and interbank liabilities. Liedorp et al. (2010) used the same information and the same estimation procedure to extend the dataset until December 2008. The study of Van Lelyveld and Liedorp (2006) showed that the exposure estimates are accurate proxies for the actual exposures; the authors tested the dataset by comparing their estimations with survey data on the actual exposures.


The exposures are given at the end of each quarter from the first quarter of 1998 through the fourth quarter of 2008. Hence, the main advantage of this dataset is that it covers 44 snapshots of the same network, allowing the methods to be compared not only cross-sectionally but also over time. At the start of 1998, the interbank assets summed to EUR 219 billion and the interbank liabilities to EUR 339 billion. During these 11 years interbank exposures doubled, but relative to total assets the exposures declined. The banks differ substantially in terms of size, activities, origin and legal status. Liedorp et al. (2010) categorize the banks into five different groups. The first group contains the five largest banks, which together account for 85% of the total interbank assets. The other groups are the other Dutch banks (31-36 banks), investment firms that do not provide traditional banking activities (3-8 banks), foreign subsidiaries that are also supervised by DNB (23-33 banks) and foreign banks with Dutch branches that are not required to report solvency figures (20-32 banks).

Dutch banks remain net borrowers on the international interbank market from 1998 through 2008, as their interbank liabilities are larger than their interbank assets. This implies that the foreign counterparties, in contrast to the domestic banks, will most likely not be the source of systemic risk. The dataset aggregates the remaining foreign exposures as if they were one bank, so the probability that this 'bank' defaults as a whole is close to zero. In addition, this study focuses on the differences between the estimation methods, and not on the absolute level of systemic risk. Therefore the foreign counterparties without Dutch branches, the banks for which most data are not available, are excluded from the dataset. The net interbank exposure of these foreign counterparties equals EUR 120 billion.

The exposures data, together with data on loss-absorbing capital per bank, can be used to measure systemic risk. The DNB supervisory reports include the Tier-1 capital for all Dutch banks, which serves as input for the systemic risk measures. The Basel Committee on Banking Supervision requires banks to hold a minimum amount of capital relative to risk-weighted assets; this Tier-1 capital is therefore a common proxy for loss-absorbing capital. The foreign counterparties that are not under the supervision of the Basel Committee are not required to report their capital ratio, so for non-Dutch banks this ratio is estimated as follows. The average capital-to-asset ratio over the Dutch banks at time t is

γt = (1/Nt) Σi Ci,t / TAi,t,

where Ci,t is the Tier-1 capital of bank i, TAi,t is the total assets of bank i, and the sum is taken over the Nt active Dutch banks. The average capital-to-asset ratio over all Dutch banks in the dataset is equal to 13%.


The capital of a foreign bank that is not under supervision is then estimated as the average Dutch capital-to-asset ratio multiplied by that bank's total assets. At the start of 1998 the average amount of capital available at each bank is approximately EUR 500 million. The average capital increases steadily throughout the 11 years to about EUR 1.5 billion in the fourth quarter of 2008. At the same time, the average total assets per bank almost triple from EUR 12 billion to EUR 33 billion at the end of 2008. The average capital-to-total-assets ratio remains around 13% over the whole period.

The number of active banks varies between 103 and 91. At the start of 1998 the Dutch interbank market consisted of 102 active banks; over the subsequent 11 years the number of active banks falls to 91. Throughout the whole period a stable core of 49 banks remains active. The number of links in the network at the beginning of 1998 is about 10,000, around which it oscillates for the following four years. Corresponding to the decreasing number of banks in the network over time, the number of links decreases to around 8,000 in 2007. Subsequently, in the first quarter of 2008 the number of links falls sharply to just above 3,000, where it remains over the whole year (see Figure 3). The number of links relative to the total possible number of links, the density of the network, shows exactly the same pattern as the number of links. Therefore the decline in links is not the result of a lower potential for links. Most likely this structural break is the result of a change in the structure of the data rather than a change in the actual network. A possible explanation is given by the calculation method specified by Liedorp et al. (2010). The authors estimated the network assuming that all banks are interlinked, in order to prevent gridlock in the RAS algorithm: by replacing all elements equal to zero with a very small number, all banks seem to be linked. When this assumption is relaxed, the number of actual links becomes lower, which is the case from the beginning of 2008. Even without an economic explanation, however, this breakpoint serves as a useful point of analysis, as it can be used to test whether the methods adjust adequately after structural breaks in the network.

Next to the number of nodes and the links between them, the network is characterized by several network statistics that are expected to be more directly related to systemic risk. The Herfindahl-Hirschman concentration index (HHI) measures to what extent the interbank market exposures are concentrated. The HHI at period t is calculated as the sum of the squared market shares of the aggregate interbank liabilities or the aggregate interbank assets:


where at time t, Ai,t represents the aggregate interbank assets of bank i and Li,t represents the aggregate interbank liabilities of bank i.
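As a small worked example of the HHI (toy market shares; the standard sum-of-squared-shares form is used here):

```python
import numpy as np

def hhi(exposures):
    """Herfindahl-Hirschman index: the sum of squared market shares.
    Equals 1/N for a perfectly even market and 1 for a monopoly."""
    shares = exposures / exposures.sum()
    return float((shares ** 2).sum())

concentrated = np.array([85.0, 5.0, 4.0, 3.0, 3.0])   # toy exposures
even = np.full(5, 20.0)
hhi(concentrated)   # ≈ 0.728: most exposure sits at one bank
hhi(even)           # 0.2 = 1/5: perfectly even market
```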

Figure 3: Network statistics from the first quarter of 1998 through the fourth quarter of 2008. Panels: Active Banks, Links, HHI Assets and HHI Liabilities.


When the HHI of assets increases, the concentration increases: the total exposure is concentrated at a few banks. The HHI of the assets decreases over time from 0.4 to 0.05, with the largest falls in periods 20 and 30. In the 40th period the HHI falls again, but not as much as before. The HHI of liabilities mirrors the graph of the HHI of assets, because for each borrower there has to be a lender and vice versa. Thus the assets become less concentrated, while the liabilities become more concentrated. The degree to which the network depends on the largest borrower or lender is measured by the borrower and lender dependency, respectively, calculated as the average market share of the largest borrower (lender) relative to the total borrowing (lending). These metrics show the same pattern as the corresponding HHI metrics.

For completeness, I briefly discuss the limitations of the dataset used. The coverage of the dataset is limited since DNB does not obtain supervisory reports from the foreign counterparties, which can lead to over- or underestimation of systemic risk. Secondly, banks sometimes report their risk limits rather than their actual exposures, which could lead to overestimation of the systemic risk in the network. Finally, in each fourth quarter the interbank exposures are lower than in the first three quarters, which is the result of end-of-year rebalancing effects.

5 Results

The seven different reconstruction methods discussed in Chapter 3 are applied to the Dutch interbank network for each quarter from 1998 through 2008. Similar to the study of Anand et al. (2016), the reconstructed networks can be compared in terms of their network statistics. The similarity between the reconstructed network and the original network depends on both the number of links and the size of the exposures. The link-based Hamming distance, the distance between the adjacency matrices, is the same for Bara, Dreh, Maxe and Cimi and close to zero (see Appendix B). Anan and Batt both reconstruct links that do not correspond to the original network. The Jaccard link similarity measure indicates the same results. The exposure-based similarity measures, Cosine and Jensen, show that the same group of Bara, Dreh, Maxe and Cimi estimates the exposures more accurately than Anan and Batt. However, the similarity between the reconstructed network and the original network does not necessarily have a direct relationship with the systemic risk in the network. In order to compare the reconstruction methods in terms of systemic risk, the DebtRank and the number of defaulted banks in the Default Cascade are used to stress-test the network and to assess the systemic importance of individual banks.


5.1 System-wide risk

The DebtRank and the Default Cascade of a bank indicate the systemic importance of that bank, so the stress of the system as a whole is defined as the mean of the systemic risk measures over all banks. For the reconstruction methods that produce an ensemble of 50 networks, the average DebtRank over the 50 networks represents the systemic risk of that method. Figure 4 below shows the mean of both risk measures for each reconstruction method.
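As a reminder of how the first measure is computed, a minimal sketch of the DebtRank recursion of Battiston et al. (2012) is given below. The impact matrix and the equal node weights are illustrative assumptions, not the thesis's calibration.

```python
import numpy as np

def debtrank(W, h0):
    """DebtRank of an initial distress vector h0 in [0, 1]^n.

    W[j, i] is the impact of bank j on bank i (for instance
    min(1, exposure of i to j / capital of i)). Each bank propagates
    its distress exactly once; the result is the additional distress
    induced in the system, here with equal weights for all banks.
    """
    h = np.array(h0, dtype=float)
    s = np.where(h > 0, 1, 0)  # 0: undistressed, 1: distressed, 2: inactive
    while (s == 1).any():
        active = (s == 1)
        h_new = np.minimum(1.0, h + W[active].T @ h[active])
        s[active] = 2                  # distressed banks become inactive
        s[(s == 0) & (h_new > 0)] = 1  # newly hit banks become distressed
        h = h_new
    return float(h.mean() - np.mean(h0))

# Two-bank example: bank 0 defaults and impacts bank 1 with weight 0.5.
W = np.array([[0.0, 0.5],
              [0.0, 0.0]])
score = debtrank(W, h0=[1.0, 0.0])
```

Because each bank transmits distress at most once, the recursion terminates after at most n rounds and avoids double-counting losses along cycles.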

Figure 4: (Top) The top graph shows the mean of the DebtRank over the system. (Bottom) The bottom graph shows the mean of the Default Cascade with a loss rate of 50%.

At the beginning of 1998, the original network experiences high stress levels, with a DebtRank of 60%. Over the following years the systemic risk in the original network decreases, falling sharply in the first quarter of 2005. The most distinctive result for the mean of the reconstructed DebtRank estimates is that they almost never cross the original DebtRank. The DebtRanks of Dreh, Maxe and Cimi form one group and are always greater than that of the original network: the difference between the DebtRank of the original network and each of these reconstructed networks is significantly different from zero (t-statistics of 13.09, 14.41 and 13.58, respectively). Moreover, the standard deviation of the deviation from the original network is small (see Table 2), so they have a consistent upward bias. Mast2 and Batt rarely underestimate DebtRank and are, in addition, closest to the original network, with a sum of squared errors (SSE) of 0.05 and 0.02, respectively. In contrast, Anan is the only method that underestimates the systemic risk throughout the whole period, and it has the largest SSE of 1.88.

Anan's DebtRank peaks around the same moments as the original network, but moves to a lesser extent than the other methods. However, the significant correlation of 0.647 between the first differences of the original's DebtRank and those of Anan's, shown in Table 2, indicates that Anan moderately co-moves with the original network. The first differences of the DebtRank of the original network and those of Batt, Mast2, Cimi, Maxe, Bara and Dreh are almost perfectly correlated (0.98). The lower bound provided by Anan is therefore less informative than the upper bounds of the other six methods, which are moreover close to the DebtRank of the original network.
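The comparison statistics used here (the SSE, the paired t-statistic on the deviations, and the correlation of first differences) can be sketched as follows. The two DebtRank series are illustrative, not the values behind Table 2.

```python
import numpy as np

# Hypothetical quarterly DebtRank series for the original network and one
# reconstructed network (values are illustrative, not from the data).
orig = np.array([0.60, 0.55, 0.50, 0.52, 0.45, 0.30, 0.28])
reco = np.array([0.70, 0.66, 0.60, 0.63, 0.55, 0.42, 0.39])

# Sum of squared errors of the reconstruction against the original.
sse = float(np.sum((reco - orig) ** 2))

# Co-movement: correlation of the first differences of the two series.
d_orig, d_reco = np.diff(orig), np.diff(reco)
corr = float(np.corrcoef(d_orig, d_reco)[0, 1])

# Paired t-statistic testing whether the mean deviation is zero.
dev = reco - orig
t_stat = float(dev.mean() / (dev.std(ddof=1) / np.sqrt(len(dev))))
```

A reconstruction can thus be biased (large t-statistic on the deviations) yet still informative, as long as its first differences track those of the original network.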

Remarkably, the pattern of the mean DebtRank is similar to that of the HHI asset concentration index. Whenever the asset concentration declines, the systemic risk as measured by DebtRank declines: a linear regression of DebtRank on the asset concentration indicates a significant positive effect of 1.20 (with a standard error of 0.07). The opposite holds for the liabilities concentration: for each unit increase in the liabilities concentration, DebtRank decreases by 2.56 (with an R² of 0.84). The HHI asset index of each reconstructed network does not show the same break at the start of 2005 as the original network (see Appendix A), yet the DebtRank estimates remain accurate. So even though the HHI index is based only on aggregate exposures, there is a significant relationship between the HHI index and DebtRank. The DebtRank of the original network shows a small increase at the end of 2007, while the number of links in the original network falls sharply. The reconstructed networks of Cimi, Batt, Maxe, Dreh, Bara and Mast2 accurately mimic this fall of 4,400 links (see Appendix A). However, at this point the DebtRank of the reconstructed networks shows a relatively large jump (+0.10 for the reconstructions versus +0.03 for the original).
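The regression of DebtRank on the HHI asset concentration reported above can be reproduced in outline with ordinary least squares. The two series below are hypothetical stand-ins for the quarterly data.

```python
import numpy as np

# Hypothetical quarterly series: HHI of assets and mean DebtRank
# (illustrative values, not the thesis data).
hhi = np.array([0.40, 0.35, 0.30, 0.20, 0.10, 0.05])
debtrank = np.array([0.60, 0.55, 0.47, 0.35, 0.22, 0.15])

# OLS of DebtRank on a constant and the HHI asset concentration.
X = np.column_stack([np.ones_like(hhi), hhi])
beta, *_ = np.linalg.lstsq(X, debtrank, rcond=None)

# Goodness of fit.
resid = debtrank - X @ beta
r2 = 1 - resid @ resid / np.sum((debtrank - debtrank.mean()) ** 2)
```

A positive slope coefficient `beta[1]` corresponds to the finding that falling asset concentration goes hand in hand with falling systemic risk.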

The results for the second risk measure, the Default Cascade, are shown in the bottom graph of Figure 4. The graph shows the development of the average number of defaulted banks relative to the total number of banks in the Default Cascade. The loss rate, the percentage a bank loses of its asset exposure when a counterparty defaults, is assumed to equal 50%. The Default Cascade of the original network is significantly different from its DebtRank (2-sample t-statistic: -8.65). Also, the (normalized) Default Cascade is more volatile (44%) than DebtRank (30%). The same three patterns can be distinguished in the Default Cascade as in DebtRank. Anan underestimates the systemic risk
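A minimal sketch of a round-by-round default cascade consistent with the description above is given below: a bank defaults once its accumulated interbank losses exceed its capital, and the creditors of a defaulted bank lose the loss rate times their exposure to it. The matrix layout, the capital vector and the three-bank example are assumptions for illustration.

```python
import numpy as np

def default_cascade(exposures, capital, initial_default, loss_rate=0.5):
    """Return the set of defaulted banks after the cascade settles.

    exposures[i, j]: asset exposure of bank i to bank j (assumed layout).
    capital[i]: capital buffer of bank i.
    """
    n = len(capital)
    defaulted = {initial_default}
    new_defaults = {initial_default}
    losses = np.zeros(n)
    while new_defaults:
        # Creditors of the banks that defaulted in the last round take losses.
        for j in new_defaults:
            losses += loss_rate * exposures[:, j]
        new_defaults = {i for i in range(n)
                        if i not in defaulted and losses[i] > capital[i]}
        defaulted |= new_defaults
    return defaulted

# Hypothetical 3-bank example: bank 2 defaults first, bank 1 loses
# 0.5 * 6 = 3 > 2 and defaults, then bank 0 loses 0.5 * 4 = 2 > 1.5.
E = np.array([[0., 4., 0.],
              [0., 0., 6.],
              [1., 0., 0.]])
c = np.array([1.5, 2.0, 10.0])
result = default_cascade(E, c, initial_default=2)
```

The loop terminates because each round either adds at least one new default or stops, so at most n rounds are needed.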
