Simulating the term structure with a Self-Organizing Map

Master thesis Econometrics

Zongqi Zhang s1754858

April 16, 2012

Supervisors:

Prof. Theo K. Dijkstra

Vincent van Bergen (ABN AMRO)

Abstract

The term structure of interest rates is one of the essential concepts in finance. It describes the relationship between interest rates and time to maturity. In the field of asset and liability management, computing economic capital for interest rate risk requires a set of simulated data that fully represents the characteristics of the empirical data. This master thesis proposes a new method for term structure simulation which combines the block sampling method with a self-organizing smoother. It raises the amount of economic capital to a more reasonable level.

Acknowledgements

This master thesis is the final outcome of my graduation internship. As a graduate student in Econometrics at the University of Groningen, I would like to thank the following people.

Professor Dijkstra is the person who triggered my interest in Financial Econometrics. He was very patient with the progress of my master thesis and gave me much freedom during his supervision, so that I could design my own methodology. I also thank Doctor Plantinga for his effort on the assessment.

Mr. Vincent van Bergen and his teammates helped me a great deal during my internship. We had many discussions and much communication, not only academically but also personally. I really appreciate the time that I had in his team.

Pieter Marres introduced me to many banking professionals and offered me useful information when I was an undergraduate student. Meeting such a person is certainly my good fortune.

Contents

1 Introduction
2 Data
3 The block sampling method
  3.1 The block size
  3.2 The range of input data
  3.3 Simulation
  3.4 Results
4 The SOM-parametric method
  4.1 The SOM smoother
  4.2 Simulation
  4.3 Results
  4.4 Discussion of the model risk
5 Economic Capital modelling
  5.1 Definitions
  5.2 Application
  5.3 Results
6 Summary
7 Conclusion
8 Further research
Appendices
References

1 Introduction

The term structure of interest rates has been studied in many articles. It describes the relationship between rates of return and time to maturity. Simulating the dynamics of the term structure and computing economic capital for interest rate risk is an essential task in asset and liability management (ALM). The ALM modelling team of ABN AMRO bank posed this issue to me as the topic of my master thesis when I was doing an internship there. They wanted me to create an advanced yield curve simulation model which improves on their current one.

This article introduces a semi-parametric method which integrates the Historical Simulation (HS) method with a stochastic correction algorithm. The methodology is built on a novel optimization algorithm initially discussed in Kohonen (1982). It overcomes the drawbacks of traditional HS and hence produces a more reliable yield curve evolution. Consequently, our methodology brings the amount of economic capital up to a more reasonable level.

Traditional HS creates a time series with unrealistic continuity, because it breaks up the order of the original time series and places the observations in a completely disordered sequence. Theoretically speaking, a time series simulated in the traditional way lacks autocorrelation. This is problematic in ALM, since the objective of the simulation is to recover a set of long-horizon time series, normally one year long. In this case, replicating the autocorrelation really matters.

Randomly drawing overlapping blocks, i.e. block sampling, can indeed improve the performance of HS, but it still leaves several issues open, for example the block size and the range of the selected input data. Meanwhile, the advantages of this approach are significant. First, the cross-sectional correlations between different maturities are completely kept, since we simultaneously pick interest rate changes at all maturities. Second, the cross-sectional autocorrelation is partially captured, subject to the block size. Remember that there is still no parametric expression that models both the cross-sectional correlation and the cross-sectional autocorrelation of a term structure in a clear and suitable form. We study this thoroughly in the following sections.

Although the block sampling method can recover stronger autocorrelation than traditional HS, it is shown in several articles that the autocorrelation we can recover with this approach is restricted: autocorrelation exists only between observations within a single block. Obviously, there is a clear discontinuity of the autocorrelation between independently picked blocks. The simulated data hence can only exhibit autocorrelation up to a horizon no longer than the block size.

To compensate for the lack of autocorrelation between independently picked blocks, we first simulate a sequence of the term structure via the block sampling method. Then we suggest creating connections between successive blocks with a self-organizing map (SOM). The SOM is widely applied in neural, sensory and visualization science. It can modify multidimensional data with strong dissimilarity into a similar pattern. Naturally, if the dissimilarities between blocks are weakened, we will get stronger autocorrelation. Furthermore, the modified data converges topologically to the empirical data, which in turn means that the results of Principal Component Analysis (PCA) on the original and the modified data will be asymptotically equivalent. Moreover, the SOM requires only a few parameters and runs fast.

In order to assess the quality of a yield curve simulation model, Rebonato, Mahal, Joshi, Buchholz and Nyholm (2005), abbreviated RMJBN (2005), proposed a list of features that simulated data should recover. We set those features as our evaluation criteria when we compare the block sampling method with our novel semi-parametric method. After verification against the RMJBN (2005) criteria, the model risk arising from this new method is discussed qualitatively. We consider two kinds of risk that are decisive for measuring the model risk.

To validate the use of our SOM smoothing algorithm, we implement it on three balance sheets consisting only of zero-coupon bonds. For a balance sheet containing many assets and liabilities to be claimed at certain dates, the total value change of the balance sheet is highly dependent on the quantity changes of the yield curve. It is customary to measure interest rate risk in terms of profit and loss (P&L). We calculate the P&L by a Taylor approximation, which requires the duration and the convexity at the corresponding maturity. Although the P&L is calculated approximately, the difference between the Taylor approximation and the exact valuation is negligible, and the approximation saves considerable computation time.

In accordance with those daily P&Ls, we quantify the density function of the yearly P&L to calculate the yearly EC in terms of VaR_99(−P&L) − E(−P&L). This gives the amount of capital a bank needs to hold so that its balance sheet stays solvent over a certain period with 99% probability. We will illustrate the importance of autocorrelation by comparing the EC generated by the block sampling method with the EC generated by the SOM-parametric method. We shall see that strong autocorrelation and correct computation of the P&L can indeed increase the amount of EC considerably.

Besides HS, there are two other possibilities, PCA and Monte Carlo simulation (MC), which are also widely implemented approaches in ALM.

PCA is usually considered if a bank's business concerns several risk factors such as exchange rates, interest rates and stock returns. This method can incorporate all those kinds of risk factors into a purely parametric form and approximately simulate them altogether. Given the complexity of the cross-sectional correlation between different maturities, PCA can only perform a robust, but not sufficiently precise, simulation of the term structure. The usage of PCA is too general to handle such a specific type of risk factor: it fits the data well, but simulates poorly. However, the variance of the real data and of the simulated data can be almost fully represented by the first four principal components, so PCA is often used to evaluate the quality of the simulated data, as suggested by RMJBN (2005).

The MC mechanism can overcome the disadvantages of the block sampling approach very well: the time series of each single maturity evolves very plausibly along its own time horizon. To model the relationship between the time series of different maturities, a copula is selected to build their cross-sectional correlation. Unfortunately, this leads to a number of unrealistic shapes of the cross-sectional correlation, which is not surprising since a copula cannot reproduce the relationship between maturities well, given the complexity of the term structure. Because of the shortcomings of the MC and PCA methods, we only discuss HS and its generalized algorithm in the rest of this article.

2 Data

The data that we use for the simulation is the EURIBOR interest rate downloaded from Thomson Reuters Datastream. It is a multidimensional time series composed of 3050 yield curves in time sequence; it includes the daily movement of yield curves from 25/02/2000 to 03/11/2011. The yield curve is constructed from eight different tenors: 3 months, 6 months, 1 year, 2 years, 5 years, 10 years, 20 years and 30 years. Interest rates at two tenors, 20 years and 30 years, are not available before 02/04/2002. The whole dataset therefore appears as a 3050×8 matrix. The behaviour of each yield rate is presented in figure 1 and figure 2.

It is obvious in figure 2 that the shape of the term structure fluctuated quite a lot around 11/12/2007, but notice that the short-term rates vary less than the long-term rates in the short run, whereas they vary more in the long run. In other words, short-term rates have unstable movement with weak noise, while long-term rates have stable movement with strong noise. This is one of the most representative features of a term structure.

Because of the incompleteness of the data on the 20-year and 30-year rates at the beginning of the sample, we only present basic statistics of the quantity changes of the yield curves since 02/04/2002 in table 1. It includes the mean, standard deviation, skewness and so on. Notice that the volatility of each yield rate rises as the maturity increases, and the maximum and minimum of the quantity changes also become larger as the maturity goes longer. The mean of each tenor is approximately equal to zero, while the skewness of most tenors is around zero as well.

          3m        6m        1y       2y       5y       10y      20y      30y
min      -0.5249   -0.1309   -0.1606  -0.2309  -0.2190  -0.2051  -0.3268  -0.3278
max       0.4430    0.5020    0.1994   0.2354   0.2160   0.2215   0.3063   0.4051
average  -0.0008   -0.0008   -0.0010  -0.0012  -0.0013  -0.0012  -0.0012  -0.0012
SD        0.0214    0.0207    0.0280   0.0414   0.0450   0.0429   0.0446   0.0464
skewness -2.6120    5.1017    0.0982   0.2259   0.1742   0.0443  -0.0941  -0.0953
kurtosis 243.7502  149.7378   7.0415   5.5892   4.3638   4.3542   7.2784   9.8843

Table 1: Descriptive statistics of quantity changes of interest rates in each tenor, since 02/04/2002
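As an illustration, a minimal Python sketch of how statistics of this kind can be computed from the daily quantity changes; the variable names, column labels and the use of raw (non-excess) kurtosis are assumptions made for the example, not taken from the thesis.

```python
import pandas as pd
from scipy import stats

# Sketch: descriptive statistics of daily quantity changes, in the spirit of table 1.
# `yields` is assumed to be a (days x 8) DataFrame of EURIBOR levels restricted to
# dates from 02/04/2002 onward; the column labels below are illustrative.
TENORS = ["3m", "6m", "1y", "2y", "5y", "10y", "20y", "30y"]

def descriptive_stats(yields: pd.DataFrame) -> pd.DataFrame:
    changes = yields[TENORS].diff().dropna()              # daily quantity changes
    return pd.DataFrame({
        "min": changes.min(),
        "max": changes.max(),
        "average": changes.mean(),
        "SD": changes.std(),
        "skewness": changes.apply(stats.skew),
        "kurtosis": changes.apply(stats.kurtosis, fisher=False),  # raw kurtosis
    }).T
```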


Figure 1: 3D plot of the yield curve evolution, 25/02/2000 − 03/11/2011


The histograms in figure 3 correspond to the characteristics mentioned in table 1. We find that all histograms are symmetrically bell-shaped, consistent with the near-zero mean and skewness. The variation of quantity changes at long maturities is larger than at shorter maturities. Moreover, we clearly see that the width of each histogram increases as the maturity gets longer, which coincides with the rows of maximal and minimal values in table 1. This clearly illustrates the empirical distribution at each maturity.


To describe yield curve movements more precisely, we analyze the evolution of a yield curve in terms of two features: the autocorrelation with respect to each tenor and the cross-sectional correlation between different tenors. In figure 4, we present the lag-1 autocorrelation of non-overlapping n-day changes for all tenors. It is apparent that the autocorrelation approaches zero as the maturity becomes longer, since long-term interest rate movements behave like Brownian motion, with low sensitivity to current monetary policy; in other words, they are unpredictable.

Figure 4: Autocorrelation of n-day quantity changes in different maturities. n=0,...,30
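A short sketch of how the lag-1 autocorrelation of non-overlapping n-day changes in figure 4 can be computed for a single tenor; `rates` is an assumed one-dimensional array of daily yield levels, and the function name is illustrative.

```python
import numpy as np

def nday_autocorrelation(rates: np.ndarray, n: int) -> float:
    """Lag-1 autocorrelation of non-overlapping n-day quantity changes."""
    levels = rates[::n]                  # keep every n-th observation
    changes = np.diff(levels)            # non-overlapping n-day changes
    if changes.size < 3:
        return float("nan")
    return float(np.corrcoef(changes[:-1], changes[1:])[0, 1])

# Example: the profile plotted in figure 4 for one tenor, n = 1, ..., 30.
# profile = [nday_autocorrelation(rates_3m, n) for n in range(1, 31)]
```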

      3m     6m     1y     2y     5y     10y    20y    30y
3m  1.0000 0.6261 0.2289 0.1368 0.1047 0.0988 0.0873 0.0932
6m  0.6261 1.0000 0.6601 0.5117 0.4049 0.3159 0.2567 0.2481
1y  0.2289 0.6601 1.0000 0.9198 0.7952 0.6353 0.4952 0.4396
2y  0.1368 0.5117 0.9198 1.0000 0.9184 0.7624 0.6131 0.5486
5y  0.1047 0.4049 0.7952 0.9184 1.0000 0.9244 0.7928 0.7231
10y 0.0988 0.3159 0.6353 0.7624 0.9244 1.0000 0.9346 0.8742
20y 0.0873 0.2567 0.4952 0.6131 0.7928 0.9346 1.0000 0.9776
30y 0.0932 0.2481 0.4396 0.5486 0.7231 0.8742 0.9776 1.0000

Table 2: Cross-sectional correlation matrix

3 The block sampling method

This thesis is mainly about interest rates, so we first need some common knowledge of the term structure before we start developing the methodology. Although the block sampling method seems to be the most suitable approach for simulating the term structure, the questions of the block size and the range of input data remain open. Empirically studying the historical evolution of the term structure can help us get a rough idea and suggests a possible solution.

3.1 The block size

The main purpose of sampling overlapping blocks is to strengthen the autocorrelation compared with the vector sampling method. The inevitable drawback is that observations near the beginning and the end of the original time series have a lower probability of being drawn. The larger the block size, the more information from both sides we lose. Choosing the appropriate block size then becomes a tradeoff between the autocorrelation and the information set. A graphical interpretation is presented in figure 5: observations in the center have equal probability of being picked, but the probability of drawing observations located near the beginning and the end decreases linearly, which is exactly the problem caused by randomly drawing overlapping blocks. If we draw non-overlapping blocks, this problem does not exist, but then we cannot recover relatively strong autocorrelation.

Unfortunately, there is no theory indicating the optimal block size in the block sampling method. We can only handle the tradeoff ourselves: the block size should be of medium size and bring satisfactory autocorrelation, conditional on not losing too much information, as explained in figure 5. We therefore plot an autocorrelation graph and look for the block size that gives a significant improvement. In figure 6, we see a global decrease in the magnitude of the autocorrelation after the 20th lag. For simplicity, I decided to keep the block size fixed at 21. We only show the autocorrelation plot for the 3-month tenor; the autocorrelation of all other tenors is quite low and varies around zero, so we do not show it.

Figure 6: Overlapping n-lag autocorrelation for 3 month maturity interest rate

3.2 The range of input data

Events from the distant past are less likely to occur again compared with the history from one year ago. Some risk managers take the current year's data as input, some take all the available historical data. In this section, we will elaborate on the existence of a yield curve movement cycle and take the data from the current yield curve movement cycle as our input. We first have a look at the U.S. interest rate in figure 7, which has quite a long history in the bond market.

It is clear that the yield curve first went up and then decreased with fluctuations. However, it is remarkable that those fluctuations do not behave randomly but rather like a series of U-shaped movements. Likewise, the cycle of the yield curve movement also exists in the EURIBOR rate. In figure 2, we see two U-shaped movements, which possibly indicates two cycles. If these two U-shapes are real cycles, then we must observe a similar term structure evolution between them. We therefore compare the topological structure between them with PCA, which is widely used to analyze yield curve evolutions. Bliss (1997) implemented PCA to analyze yield curve movements and showed its great usefulness in representing the yield curve evolution.

In figure 8, we present the eigenvectors of PCA for each year of our ten-year dataset (since 02/04/2002). Comparing the shape in figure 8b with figure 8h, we clearly find a similarity: they are approximately equivalent. We may hence guess that the last yield curve movement cycle lasted about six years. However, we do see some differences between these two cycles. The topological pattern of figure 8b only persisted for two years, see subfigures b and c, whereas the pattern from the current cycle persisted for three years, see subfigures h, i and j. These differences are reasonable, since different cycles last for different periods and have different magnitudes. The differences in period and magnitude are caused by the current market situation, for example the efficiency of the bond market, the liquidity risk and the monetary policy.


(a) 02/04/2002 to 18/03/2003 (b) 19/03/2003 to 03/03/2004 (c) 04/03/2004 to 17/02/2005

(d) 18/02/2005 to 03/02/2006 (e) 06/02/2006 to 22/01/2007 (f) 23/01/2007 to 08/01/2008

(g) 09/01/2008 to 24/12/2008 (h) 25/12/2008 to 10/12/2009 (i) 11/12/2009 to 26/11/2010

(j) 29/11/2010 to 03/11/2011

          log σ²_3m  log σ²_6m  log σ²_1y  log σ²_2y  log σ²_5y  log σ²_10y  log σ²_20y  log σ²_30y
(t,t+1)    0.5398     0.7249     0.6404     0.6652     0.6738     0.6430      0.6079      0.5712
(t,t+2)    0.4054     0.6434     0.5683     0.6078     0.6129     0.5583      0.4841      0.4713
(t,t+3)    0.2247     0.5242     0.5158     0.5401     0.5382     0.5143      0.4310      0.4377
...
(t,t+64)   0.4589     0.5045     0.2624     0.3202     0.3761     0.2276      0.1173      0.0828
(t,t+65)   0.4394     0.6753     0.3902     0.3736     0.4209     0.3035      0.2517      0.2686
(t,t+66)   0.4367     0.7096     0.4129     0.3174     0.3713     0.3447      0.2299      0.2157
...

Table 3: Autocorrelation of log-variance in each tenor. Lag=1,...,80

We calculated the variance in each month and computed the autocorrelation of this variance for each tenor. Since 02/04/2002 we have 2504 observations and 8 tenors; this 2504×8 matrix generates a 119×8 matrix of monthly variances. The autocorrelation of the 8 tenors at different lags is shown in table 3, and the corresponding figure is shown in figure 9. Notice that the term we use here is not σ² but log(σ²). This transformation makes the linear regression fit better, therefore reflecting the autocorrelation more clearly. The scatter plots in figure 10 show the improvement given by the log-transformation.
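A sketch of the computation just described, assuming `changes` is the 2504×8 array of daily quantity changes and a month is a block of 21 trading days; array shapes and the lag range follow the text, the function name is illustrative.

```python
import numpy as np

def log_variance_autocorr(changes: np.ndarray, max_lag: int = 80) -> np.ndarray:
    n_months = changes.shape[0] // 21                 # 2504 // 21 = 119 months
    monthly = changes[: n_months * 21].reshape(n_months, 21, -1)
    log_var = np.log(monthly.var(axis=1))             # 119 x 8 matrix of log(sigma^2)
    out = np.empty((max_lag, log_var.shape[1]))
    for lag in range(1, max_lag + 1):
        for j in range(log_var.shape[1]):
            out[lag - 1, j] = np.corrcoef(log_var[:-lag, j], log_var[lag:, j])[0, 1]
    return out   # row k holds the lag-(k+1) autocorrelation per tenor, as in table 3
```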

Table 3 and figure 9 clearly point to the existence of a yield curve movement cycle. The autocorrelation first declines and turns negative, then finally increases to almost the same level as before. This indicates that the evolution of the log-variance from the first month has a very similar trend to the evolution of the log-variance from around the 70th month, which coincides with our visual impression from figure 8, where we saw a six-year cycle. This phenomenon reveals the issue of the input data: we should use the data from the latest cycle, because previous cycles are unlikely to recur. Relevant economic indicators which influence the bond market have probably changed a lot.

Figure 9: Autocorrelation of log-variance, Lag=1,...,80

(a) ρ(σ²_{5y,t}, σ²_{5y,t+1}) = 0.62   (b) ρ(log σ²_{5y,t}, log σ²_{5y,t+1}) = 0.67

3.3 Simulation

The block sampling method generates a sequence of quantity changes of the yield curve over a long horizon; each block of quantity changes is drawn uniformly from our six-year dataset, a 1565×8 matrix. We set the length of our block to 21 (the number of trading days in one month), after considering the tradeoff between the loss of data and the recovery of the autocorrelation.

Before we start drawing blocks, we create 1544 blocks of overlapping monthly quantity changes from the 1565-day history:

{Δr_1, ..., Δr_21}
{Δr_2, ..., Δr_22}
{Δr_3, ..., Δr_23}
⋮
{Δr_1543, ..., Δr_1564}
{Δr_1544, ..., Δr_1565}

Suppose we want to simulate a time series of the yield curve n = 21 × h days forward. We independently draw h blocks from those 1544 blocks and cumulatively add them to an initial term structure one after another. Naturally, the latest observed yield curve is selected as the initial term structure. This procedure is illustrated in equation 1:

r^τ_{t+n} = r^τ_t + Σ_{i=1}^{n} Δr^τ_i    (1)

where r^τ_t denotes the initial term structure at tenor τ, Δr^τ_i denotes the quantity change of the interest rate at tenor τ in our randomly picked blocks, and n denotes the length of the simulated data.
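A minimal sketch of the sampling scheme in equation 1; the block size of 21, the matrix of daily changes and the use of the latest observed curve as starting point follow the text, while the function name and random-number handling are illustrative assumptions.

```python
import numpy as np

def block_sample_path(changes: np.ndarray, r0: np.ndarray, h: int,
                      block: int = 21, rng=None) -> np.ndarray:
    """Simulate n = 21*h days forward by stacking h randomly drawn monthly blocks."""
    rng = np.random.default_rng() if rng is None else rng
    n_blocks = changes.shape[0] - block + 1            # number of overlapping blocks
    starts = rng.integers(0, n_blocks, size=h)         # draw h block start indices
    sampled = np.vstack([changes[s:s + block] for s in starts])
    return r0 + np.cumsum(sampled, axis=0)             # r_{t+n} = r_t + sum of dr_i

# Example: one simulated year (12 monthly blocks, roughly 252 trading days).
# path = block_sample_path(changes, r_latest, h=12)
```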

3.4 Results

In order to assess the quality of the simulated data from our sampling method, we propose using the criteria of RMJBN (2005). A sufficient simulation ought to have the following features:

• It asymptotically reproduces the eigenvalues and eigenvectors of the daily quantity changes.

• It replicates the autocorrelation of n-day non-overlapping quantity changes as faithfully as possible, n = 1, ..., 30.

It may seem that the block sampling method will result in a better simulation because it randomly draws blocks instead of vectors. But if we apply these evaluation criteria to the block sampled data, this expectation probably no longer holds. We simulate data of size 1565×8 and then assess the simulation result according to the RMJBN (2005) criteria. All figures below that are relevant to the SOM are discussed in the next section; you may wish to skip them for now.

The shapes of the first three eigenvectors from the historical data and from the block sampled data are shown in figures 11a and 11b. Intuitively, the eigenvectors from the block sampled data should have approximately the same shape as figure 11a, since we uniformly draw blocks from the empirical data that underlies figure 11a. However, figure 11b provides a counter-example. By the law of large numbers, the simulated data would approach figure 11a if we drew an infinitely long time series of quantity changes. Because we simulate a yield curve series of only 1565 days, the probability that the eigenvectors of the simulated data differ from those of the empirical data is quite high.

Figure 12a shows the values of the first three principal components, and figure 12b the percentage of variance explained by each principal component. For both the empirical data and the block sampled data, we find that the first component represents around 75% of the variance, the second one around 15%, and the third one does not explain much but still around 5%. Those three components together explain approximately 95% of the variation of the term structure. This commonality in explaining the variance points to the quality of the block sampling method. However, as discussed above, there is still a small probability of obtaining a dramatic difference from the empirical data because the simulated sample is finite.

(a) Real data, 03/11/2005 to 03/11/2011 (b) The block sampled data

(c) The block sampled data with the SOM smoothing

(a) Eigenvalues (b) Percentage of variance explained

Figure 12: Eigenvalues of the first three principal components and the variance they explain

In figure 13 we see that the autocorrelation of short-horizon quantity changes is replicated at a satisfactory level. This can be logically explained by our block size. Remember that we succeed in recovering the autocorrelation within a single block, but the correlation between blocks is not addressed. The inner autocorrelation of each block spreads out over the complete simulated time series, so that we can only reproduce the autocorrelation up to roughly 15-day changes, but not 21-day changes.

In figures 14a and 14b, we present the variance of non-overlapping quantity changes for each yield rate. The reason we consider the variance of non-overlapping n-day changes is our interest in long-horizon simulation. If we focus on daily changes, then the 1-day quantity change is of interest; if weekly changes, then the 5-day change; if monthly, then the 21-day change.

Remark:


(a) The block sampled data (b) The block sampled data with the SOM smoothing

Figure 13: Autocorrelation of n-day quantity changes, n = 0, ..., 30

We immediately see a global decrease of the variance in the block sampled data; it is very common that the variance of n-day changes from the block sampled data is lower than that of the original data. In theory, this reflects the weaker autocorrelation caused by reordering blocks in an illogical sequence. The relationship between the autocorrelation and the variance is shown in equation 2:

Var(r_{t+n} − r_t) = Var(Σ_{i=1}^n Δr_i)
                   = Σ_{i=1}^n Σ_{j=1}^n Cov(Δr_i, Δr_j)
                   = Σ_{i=1}^n Σ_{j=1}^n Corr(Δr_i, Δr_j) · √(Var(Δr_i) Var(Δr_j))    (2)

where the values of Var(Δr_i) and Var(Δr_j) remain approximately equal to those of the original data by the definition of HS. The PCA results in figures 11a-b and 12a-b also partially confirm that the variance structure of 1-day quantity changes is approximately maintained after sampling blocks. Intuitively, the variance over multiple days becomes smaller since the term Corr(Δr_i, Δr_j), which defines the autocovariance, becomes much smaller after breaking down the original sequence of the empirical data. Therefore, when n ≤ 21, the variances are well recovered; when n > 21, the variances are biased due to the loss of long-horizon autocorrelation.
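A toy numerical check of this mechanism, not based on the thesis data: an AR(1) series is cut into 21-day blocks and the blocks are reshuffled; the 1-day variance is unchanged, but the variance of 63-day sums shrinks because the cross-block correlation terms are destroyed. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.empty(50_000)
x[0] = 0.0
for t in range(1, x.size):                            # AR(1) daily changes, phi = 0.8
    x[t] = 0.8 * x[t - 1] + rng.normal(scale=0.04)

blocks = x[: (x.size // 21) * 21].reshape(-1, 21)     # cut into 21-day blocks
shuffled = blocks[rng.permutation(blocks.shape[0])].ravel()

def nday_sum_var(series: np.ndarray, n: int) -> float:
    m = series.size // n
    return float(series[: m * n].reshape(m, n).sum(axis=1).var())

print(x.var(), shuffled.var())                        # 1-day variances: nearly identical
print(nday_sum_var(x, 63), nday_sum_var(shuffled, 63))  # 63-day variance drops after shuffling
```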


(a) Real data, 03/11/2005 to 03/11/2011 (b) The block sampled data

(c) The block sampled data with the SOM smoothing

4 The SOM-parametric method

So far, the comparison in the last section has shown how important the autocorrelation is when simulating long-horizon term structure changes. In order to build autocorrelation between independently picked blocks, we recommend incorporating a smoothing algorithm which can associate them. We first introduce its definition and usual applications, then embed this smoother into the block sampling method. Following this, we elaborate on the superiority of this upgraded method according to the RMJBN (2005) criteria. Finally, the model risk arising from the SOM smoother is discussed.

4.1 The SOM smoother

The Self-Organizing Map (SOM) originates from artificial neural networks and vector quantization. It is an iterative procedure that modifies randomly picked vectors into a map, or a series of vectors, such that the modified vectors optimally describe the domain of the original vectors. During the organization, vectors are automatically adapted onto a logical metric, which finally produces similarity between neighbouring vectors. In practice it mainly has two functions:

1. Convergence to a sequence in logical order, namely smoothing.
2. Topology preservation, namely equivalence in PCA.

The first function is very straightforward, whereas the second one deserves an interpretation. Throughout this master thesis, we represent the topology of the data in the context of PCA, as is already customary in biological and geographical descriptions. In addition, Yin (2008) demonstrates that the final result of a SOM is actually equivalent to a nonlinear PCA, and even more convenient than PCA, which ensures the applicability of the SOM.


(a) Random sample (b) Run the SOM 200 times

(c) Run the SOM 1000 times (d) Run the SOM 2000 times

Despite its wide usage as a data-management technique, its mathematical theory is extremely difficult and still under development.

4.2 Simulation

Since the block sampling method creates, with a low but not negligible probability, an eigenvector pattern different from the empirical data, a topology preserving property of an HS method is desired. And since weaker autocorrelation and weaker multiple-day variance usually occur in the block sampled data, a smoothing technique is required. Incorporating a SOM into the block sampling method satisfies both requests: the two specific properties of the SOM smoother allow it to reproduce the empirical data with outperforming results.

As discussed before, the block sampling method generates a multidimensional time series with illogical dynamics. The poor performance on the autocorrelation is caused by observations which make a bad connection within the simulated time series. Those observations do connect their neighbouring observations, but with a connection which is unlikely to happen in the real world. If we can find those bad connections and modify them on the basis of the empirical data, then the dynamics of the simulated data will be much more reasonable.

The SOM smoother accomplishes the tasks of searching and modifying automatically. If observations that form a bad connection deviate strongly from the empirical data, they are selected and corrected many times; if they deviate less, they are found and modified fewer times. The benefit of the SOM smoother is that the topological pattern of the block sampled data converges to the topological pattern of the empirical data even though we smooth its observations.

The SOM smoothing technique is described as follows:

Step 1: Decompose daily yield curve changes into daily changes of three predefined terms, level factor changes, slope factor changes and curvature factor changes, via the Nelson-Siegel (NS) model with λ = 12 × 0.0609:

r_t(τ) = l_t + s_t · [1 − exp(−λτ)]/(λτ) + c_t · ([1 − exp(−λτ)]/(λτ) − exp(−λτ))    (3)

In principle, λ varies with the type of yield curve owing to different bond markets. However, to make the NS model applicable in most financial markets, the value of λ is often fixed at 12 × 0.0609.

Ordinary Least Squares (OLS) is applied for the decomposition. We could have used the Kalman filter; it would increase the accuracy of the estimation, but it has no positive impact on our methodology, so OLS is the better option.
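A sketch of the step-1 decomposition by OLS; the tenor grid and λ = 12 × 0.0609 follow the text, while the exact loadings used here (the standard ones from Nelson and Siegel, 1987) are an assumption about the precise form of equation 3, and the function names are illustrative.

```python
import numpy as np

TAUS = np.array([0.25, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 30.0])   # tenors in years
LAM = 12 * 0.0609                                                # lambda, as in the text

def ns_loadings(taus: np.ndarray, lam: float) -> np.ndarray:
    slope = (1 - np.exp(-lam * taus)) / (lam * taus)
    curvature = slope - np.exp(-lam * taus)
    return np.column_stack([np.ones_like(taus), slope, curvature])   # 8 x 3 matrix

def decompose_change(dr: np.ndarray) -> np.ndarray:
    """OLS fit of one 8-dimensional yield curve change onto (dl, ds, dc)."""
    X = ns_loadings(TAUS, LAM)
    beta, *_ = np.linalg.lstsq(X, dr, rcond=None)
    return beta                                                       # [dl, ds, dc]
```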

Let the sequence of vectors {(Δl_t, Δs_t, Δc_t)′}_{t=1,...,1565} denote the decomposed terms from the block sampled yield curves via equation 3, and let {(ΔL_t, ΔS_t, ΔC_t)′}_{t=1,...,1565} denote the decomposed terms from the empirical yield curves via equation 3. From now on, we run our algorithm with respect to only the level factor changes and the slope factor changes; the curvature factor changes are ignored.

Reducing the dimension with the NS model is compulsory: it transforms an eight-dimensional time series into a three-dimensional one. The SOM cannot avoid the curse of dimensionality and works well only in low dimensions. The mathematical proof for the SOM in low dimensions is complete, while the proof for the high-dimensional case is not; see Kohonen (1982), Kohonen (1984), Bouton and Pages (1993) and Fort (2006).

The curvature factor Δc_t is often ignored since it is very volatile and hard to explain macroeconomically. For simplicity, and owing to its poor macroeconomic interpretation, we decided to run the SOM only with Δl and Δs.

Step 2: Randomly pick one vector (ΔL_i, ΔS_i)′ from {(ΔL_t, ΔS_t, ΔC_t)′}_{t=1,...,1565}. Since each repetition picks only one single vector, we need to run this step many times to make sure that every vector is fairly selected as benchmark.

Step 3: Compare it with each vector in {(Δl_t, Δs_t)′}_{t=1,...,1565} and determine the best matching vector (BMV) (Δl*, Δs*)′, which minimizes the Euclidean distance √((ΔL_i − Δl*)² + (ΔS_i − Δs*)²).

After we have randomly picked a benchmark vector from the empirical data, we search for vectors from the block sampled data which are candidates for modification. To locate these candidates, we compare the benchmark vector with all simulated vectors and find the one with the minimal Euclidean distance, namely the BMV. It often occurs that multiple BMVs from the simulated data are found.

Step 4: Modify the BMV and its neighbour vectors:

(Δl_θ, Δs_θ)′ ← (Δl_θ, Δs_θ)′ + η [(ΔL_i, ΔS_i)′ − (Δl_θ, Δs_θ)′]

where θ runs over the indices of the BMV and its neighbour vectors. Remember to replace the original vectors with these θ-indexed vectors (BMV and neighbours) in every repetition.

The modification is achieved by the above equation: it discounts the differences between the θ-indexed vectors and the randomly selected benchmark vector by the adaptation parameter η, and then adds these discounted differences to the θ-indexed vectors. If this modification is applied iteratively, the topological pattern of the modified vectors converges to the topological pattern of the empirical vectors. The topology preserving property of the SOM smoother operates precisely in this step.

Neighbour vectors are defined as the BMV's ten preceding and ten following observations. The number of neighbour vectors we modify can be either constant or a decreasing function; it has been shown that a Gaussian neighbourhood size can accelerate the self-organization process. Considering the extensive computation time already spent on sampling blocks, we keep the neighbourhood size fixed.

Step 5: After running steps 2-4 N times, we obtain a new time series composed of smoothed factor changes {(Δl̂_t, Δŝ_t)′}_{t=1,...,1565}.

Remember that the SOM smoothing technique modifies the simulated yield curves only indirectly, because we apply the smoothing to the factor changes, not to the observations themselves.

Step 6: Construct smoothed yield curve changes on the basis of equation 1 but with additional terms from the SOM; we then obtain a new time series of the term structure with a more realistic evolution:

r^τ_{t+n} = r^τ_t + Σ_{i=1}^n Δr^τ_i + Σ_{i=1}^n α_iᵀ β_τ    (4)

α_i = (Δl̂_i − Δl_i, Δŝ_i − Δs_i)ᵀ,   β_τ = (1, [1 − exp(−λτ)]/(λτ))ᵀ    (5)
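A compact sketch of steps 2-5 under the assumptions stated in the text (only level and slope changes are smoothed, the neighbourhood is the BMV plus its ten former and ten latter observations). A constant adaptation parameter `eta` is used here for brevity, whereas the thesis uses a decreasing schedule η(N); `sim` and `emp` are assumed (T × 2) arrays of (Δl, Δs) from the block sampled and the empirical data, and the function name is illustrative.

```python
import numpy as np

def som_smooth(sim: np.ndarray, emp: np.ndarray, n_iter: int,
               eta: float = 0.1, width: int = 10, rng=None) -> np.ndarray:
    rng = np.random.default_rng() if rng is None else rng
    out = sim.copy()
    for _ in range(n_iter):
        bench = emp[rng.integers(emp.shape[0])]          # step 2: random benchmark vector
        dist = np.linalg.norm(out - bench, axis=1)       # step 3: Euclidean distances
        bmv = int(dist.argmin())                         # best matching vector
        lo, hi = max(0, bmv - width), min(out.shape[0], bmv + width + 1)
        out[lo:hi] += eta * (bench - out[lo:hi])         # step 4: move BMV and neighbours
    return out                                           # step 5: smoothed factor changes

# Step 6 (equation 4) then adds alpha_i' * beta_tau back onto the block sampled
# curves, with alpha_i the difference between smoothed and original factor changes.
```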

There exist many other smoothing algorithms. We select the SOM for its two specific properties: the smoothing and the topology preservation. Recall equation 2. In the block sampling method, when n > 21, the correlation terms at lags beyond twenty-one vanish, since we independently draw blocks of size twenty-one. The SOM smoother can build correlation beyond this limit: in the SOM-parametric method, when n > 21, those terms become nonzero.

If a time series is smoothed, the variance of single-day changes is normally decreased. However, the SOM smoother approximately retains the original topology, namely the PCA structure. Since the PCA result is kept, the variance of the daily quantity changes is unchanged. This is exactly the feature we appreciate compared with other smoothing algorithms: building the correlation between blocks while keeping the variance of daily quantity changes unchanged.

We could have used other forms than equation 3 for the dimension reduction, for instance the NS-Svensson model, Legendre polynomials or splines; see Svensson (1994), Belousov (1962) and Schoenberg (1946). The NS model is chosen for its applicability and accuracy in decomposing yield curves.

4.3 Results

Although the SOM runs iteratively, we have to be aware of the smoothing effect. As figure 15 indicates, repeating steps 2-4 too many times leads to a curve without noise, a totally smoothed curve. My personal experience so far is to set the number of repetitions no larger than 3/4 of the length of the time series. The results analyzed in this section are based on the self-organized data of size 1565 × 8, with p = 0.94 and N = 1, ..., 1000.

Before we assess the goodness of the simulation with SOM smoothing, we should first observe how the SOM smoother works in the NS functional form. The time series of the term α_i is plotted in figure 16. We immediately see a flat straight line. Although the block sampling method is stochastic and defines no clear rule about smoothing, the SOM still provides a solution which performs the correction step automatically. The only parametric form introduced in this correction step is the NS model.

(a) Δl̂_i − Δl_i   (b) Δŝ_i − Δs_i

Figure 16: The time series of the α_i term

Figure 17 shows how the SOM smoother affects the yield evolution. Notice that the magnitude of the increment at the 3m, 6m, 1y and 2y maturities in subfigures c and d becomes larger than in subfigures a and b: the SOM algorithm detected the increasing trend and made it stronger. Meanwhile, the decrement at the end of the time series for the 5y, 10y, 20y and 30y maturities becomes weaker through the correction of the SOM smoother. Obviously, the clustering pattern of the empirical data is better reproduced, which is decisive for studying tail dependence. Overall, the SOM makes the movement trend of the simulated yield curves as consistent as possible, which is precisely the character of the autocorrelation.


(a) The block sampled data (b) The block sampled data

(c) After the SOM correction (d) After the SOM correction

Applying a smoother compensates for the missing information between distinct blocks.

We verify the superiority of the SOM correction according to the RMJBN (2005) criteria. We saw before that the block sampling method may generate data whose eigenvectors do not match the real data, as shown in figures 11a and 11b. The self-organized data in figure 11c indicate that the SOM can fix this when the input data for the SOM is mis-specified. Moreover, the eigenvalues remain in the same position as those of the real data; see figure 12.

The reason we introduce the SOM-parametric method is to replicate the missing characteristic, the autocorrelation. Looking at figure 13, it turns out that applying the SOM smoother to the block sampled data can indeed strengthen the autocorrelation, especially for quantity changes at long maturities. It might not completely replicate the autocorrelation of the real data, but it at least converges towards it.

Recalling the relationship between the variance and the autocorrelation of n-day changes described in equation 2, we can already predict, before running the SOM, that the multiple-day variance will become larger. Indeed, figure 14 shows that the variance is better reproduced: the variances of longer non-overlapping changes are much larger than those of the block sampled data.

We have now fully analyzed the superiority of the SOM-parametric approach. Whether the block sampled data falls short in reproducing the topological pattern, the autocorrelation or the variance, running a SOM correction always brings these three characteristics closer to those of the empirical data. Theoretically speaking, the convergence and the topology preservation of the SOM smoother function well on HS. In the appendix, we show more examples of the effect of the SOM smoother.

4.4 Discussion of the model risk

A quantitative validation of the model is hardly feasible for long-horizon simulation. If the horizon that we simulate is one year, then we would have to wait many years (at least thirty) to obtain a sufficiently large binary dataset for a statistical test.

Even though we cannot determine the model risk quantitatively, a qualitative discussion is possible. The main source of model risk in the SOM-parametric method is its assumption: we simply assume that events that happened in the bond market will happen again in the future. Under this assumption, the selection of appropriate input data becomes a big issue. The data we use should represent exactly the randomness of the current bond market; including more or less will bias the output. That is the reason why we decided to take the latest yield curve movement cycle as the input data. If the historical data is strongly controlled by the local financial authority, or in other words, if the data we use comes from an inefficient market, our assumption no longer holds, because we do not expect that the monetary policy applied before will be used again in the future. We only assume that the bond market behaves randomly, not deterministically. In principle, if we properly take the input data from the last yield curve movement cycle, the model risk from the input data is greatly reduced.

5 Economic Capital modelling

In this section, we first explain the methodology of interest rate risk EC modelling, then implement the block sampling method and the SOM-parametric method to compute EC. Finally, we compare the resulting ECs and analyze the impact of the SOM smoother.

5.1 Definitions

In ALM, to measure the interest rate risk of a balance sheet, we have to understand how the risk is defined. The value changes of fixed-income financial products caused by quantity changes of the interest rate are the main risk we are taking. The transformation from changes in the interest rate to book value changes is accomplished by a Taylor approximation:

P&L / V = −D · Δr + ½ · C · Δr²    (6)

where V denotes the total value of a financial product and the value change relative to the total value is defined as profit and loss (P&L). Duration (D) and convexity (C), features specific to fixed-income products, are defined as the first-order and second-order sensitivities to Δr. This second-order Taylor approximation computes the P&L very close to the real value if the yield curve shifts in parallel. However, if we consider non-parallel shifts, or equivalently take into account the value changes derived from the steepness and the curvature of the yield curve, equation 6 does not perform very well, since no additional terms representing the sensitivity to steepness and curvature are included. A more accurate approximation is proposed in Nawalkha, Soto and Beliaeva (2005).

To keep a balance sheet solvent, we are supposed to hold an amount of capital to cover the uncertainty in the future. VaR is defined as the capital to cover all possible losses, both expected and unexpected. In banking, however, the expected losses are compensated by clients, so we only hold capital against the unexpected losses caused by interest rate changes. This amount of capital is called Economic Capital (EC):

EC = VaR_{99%}(−P&L) − E(−P&L)    (7)

where VaR_{99%}(−P&L) denotes the 99th percentile of the loss distribution and E(−P&L) the expected loss.
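A small sketch of equations 6 and 7 combined: the Taylor-approximated P&L of a unit position and the EC as the 99th loss percentile minus the expected loss. The duration and convexity values in the usage example are placeholders, not figures from the thesis.

```python
import numpy as np

def taylor_pnl(dr: np.ndarray, duration: float, convexity: float) -> np.ndarray:
    return -duration * dr + 0.5 * convexity * dr ** 2      # equation 6, per unit value

def economic_capital(pnl: np.ndarray, q: float = 0.99) -> float:
    losses = -pnl
    return float(np.quantile(losses, q) - losses.mean())   # EC = VaR_99(-P&L) - E(-P&L)

# Example with hypothetical simulated rate changes for one position:
# pnl = taylor_pnl(simulated_dr, duration=4.8, convexity=27.0)
# ec = economic_capital(pnl)
```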

5.2 Application

In our case, we verify the usefulness of the SOM smoother by comparing the EC computed with the block sampling method to the EC computed with the SOM-parametric method. Suppose we compute ECs for the different balance sheets shown in table 4, each of which holds an asset at the shorter maturity and a liability at the longer maturity for adjacent tenors. Because the assets and liabilities in each balance sheet have unit value, equation 6 becomes:

P&L = −D · Δr + ½ · C · Δr²    (8)

3m 6m 1y 2y 5y 10y 20y 30y

balance sheet 1 1 -1 1 -1 1 -1

balance sheet 2 1 -1 1 -1 1 -1

balance sheet 3 1 -1 1 -1 1 -1

Table 4: Three balance sheets in our simulation

By definition, D and C decrease as time elapses. At the end of every month, the duration and convexity of our assumed balance sheet would be considerably lower than before. To maintain the stability of a bank's asset and liability business, the structure of the balance sheet is normally kept consistent over the years, which means that D and C of the balance sheet have to be adjusted back on track every month. Monthly rebalancing of assets and liabilities is normally done by trading products which make the current D and C larger or smaller. Incorporating this fact into equation 8 would lead to a very complicated expression; we hence assume those two terms constant during each month.

Since the yearly EC is our interest, and to incorporate the monthly rebalancing effect in our computation, we compute the yearly EC in the following form:

monthly P&L = −D · ΔR + ½ · C · ΔR²,   ΔR = Σ_{i=1}^{21} Δr_i

yearly P&L = monthly P&L_1 + · · · + monthly P&L_12    (9)

After each simulation we multiply by each tenor's corresponding weight and take the sum in the context of equation 9. By simulating the yearly P&L thousands of times, we finally obtain a simulated density of the yearly P&L, from which the yearly EC is computed empirically.
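A sketch of the yearly EC simulation in equation 9: twelve monthly blocks are drawn, each month uses the cumulative 21-day change ΔR with D and C held constant within the month, and the balance-sheet weights (±1 per tenor) are applied. The per-tenor durations and convexities are assumed inputs, not thesis values, and the function names are illustrative.

```python
import numpy as np

def yearly_pnl(changes, weights, durations, convexities, rng, block=21, months=12):
    n_blocks = changes.shape[0] - block + 1
    total = 0.0
    for _ in range(months):
        s = rng.integers(0, n_blocks)
        dR = changes[s:s + block].sum(axis=0)              # Delta R per tenor (equation 9)
        monthly = -durations * dR + 0.5 * convexities * dR ** 2
        total += float(np.dot(weights, monthly))           # weight and sum over tenors
    return total

def simulate_yearly_ec(changes, weights, durations, convexities, n_sim=5000, seed=0):
    rng = np.random.default_rng(seed)
    pnl = np.array([yearly_pnl(changes, weights, durations, convexities, rng)
                    for _ in range(n_sim)])
    losses = -pnl
    return float(np.quantile(losses, 0.99) - losses.mean())   # yearly EC
```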

5.3 Results

The long-horizon yield curve evolution is simulated over a one-year period. We randomly draw 12 blocks from the empirical data, which provides 1544 overlapping blocks. This block sampling is repeated 5000 times for each balance sheet. Likewise, in the SOM-parametric method, we also draw the yearly evolution 5000 times, but then apply the SOM smoothing algorithm to every yearly evolution. Remember that we suggested smoothing the block sampled data no more than 3/4 times its length of 252. Following this principle, we smooth every simulated evolution 100 times in each run. To shorten the computation time and illustrate the SOM effect more clearly, we accelerate the smoothing procedure by enlarging the adaptation parameter fivefold, η(N) = 5 · p^N (1 − p)/(1 − p^N), with p = 0.94 and N = 1, ..., 100.

Descriptive statistics and the EC values from the different simulation methods are shown in table 5. In order to observe the stability of each method, we run each method five times for each balance sheet; table 6 summarizes the descriptive statistics of those five repetitions. It is clear from table 5 that the range of observations, the variance and the 99th percentile are all increased by applying the SOM-parametric method, regardless of the balance sheet or the run. This results in a significant change in EC. In table 6, we find that the mean EC over the five repetitions approximately doubles. The variance of EC in the SOM-parametric method is also much higher than for the block sampled data. This is intuitive, because we correct the HS simulation, which is a stochastic method, with another stochastic method. To lower the variance, we could consider running the SOM-parametric method more times, but that would take much more time: computing it with a Core2 2.2 GHz CPU costs on average 2.75 hours for every 5000 runs. Running more repetitions to lower the variance is therefore not practical; a more realistic approach is proposed below.

The block sampling method

                  #   min       max        mean       SD        VaR_99%    EC
balance sheet 1   1   -2.2057    74.1257    16.4281    9.3862    45.2163    28.7882
                  2   -1.2315    86.0042    16.3196    9.4834    48.8344    32.5148
                  3   -1.2315    74.0386    16.2617    9.4143    49.1450    32.8833
                  4   -1.7509    77.6884    16.3602    9.4482    48.2318    31.8716
                  5   -0.6607    77.0913    16.2706    9.2899    46.5636    30.2930
balance sheet 2   1    2.9214   350.3588    75.9020   45.3278   230.2372   154.3352
                  2    3.2413   381.9414    75.5420   45.8853   235.1550   159.6130
                  3    6.0016   395.8253    75.2732   44.9278   229.316    154.0429
                  4    4.4343   365.5902    74.4908   43.9504   218.3757   143.8849
                  5    6.0504   447.8936    76.7804   45.5056   230.2809   153.5005
balance sheet 3   1   10.9318   845.1750   162.7331  101.2276   500.4157   337.6826
                  2    5.6456   918.7434   162.4063  100.5202   493.2206   392.7004
                  3   12.7537   962.5709   164.2197  102.8575   497.5592   333.3395
                  4    7.5092   908.5664   160.7859   99.3838   480.2196   319.4337
                  5   10.2708   935.2323   163.0739  106.2679   526.1328   363.0589

The SOM-parametric method

                  #   min       max        mean       SD        VaR_99%    EC
balance sheet 1   1   -1.2536   342.9389    31.2200   20.0954   100.4403    69.2203
                  2   -0.7738   219.1752    31.5288   21.0328   114.1753    82.6465
                  3   -1.2536   342.9389    31.1784   20.3532    99.4862    68.3078
                  4   -0.7738   219.1057    31.5288   21.0328   114.1753    82.6465
                  5   -0.7738   219.1752    31.5288   21.0328   114.1753    82.6465
balance sheet 2   1    6.2731   1217.6     143.2273   89.6012   447.6657   304.4381
                  2    6.5205   929.6771   144.8968   94.8999   494.6961   349.7993
                  3    6.2731   1217.6     143.2276   89.6012   447.6657   304.4381
                  4    5.8414   892.8519   144.6300   92.3919   493.2919   348.6639
                  5    8.6233   1133.2     144.1983   91.2732   468.9047   324.7064
balance sheet 3   1   18.7554   3422.0     290.8511  187.1797   866.8798   576.0287
                  2   20.4824   1827.7     293.2291  183.5146   1000.8     707.5216
                  3   20.0886   2990.6     287.2584  177.8545   906.4966   619.2382
                  4   17.7769   2120.7     289.1480  182.2112   945.0445   655.8965
                  5   13.4980   1701.0     288.8898  178.7608   952.8438   663.9540

Table 5: Comparison of yearly EC between the block sampling method and the SOM-parametric method

                   Block                 SOM
                 mean       SD        mean       SD
balance sheet 1   31.2702    1.7050    77.0935    7.6106
balance sheet 2  153.0753    5.6989   326.4092   22.4206
balance sheet 3  349.2430   28.9594   644.5278   49.5140

Table 6: Summary of table 5

The SOM correction enlarges the variance of the simulated density function. Because of the thicker tail, VaR_{99%}(−P&L) is dramatically increased, and so is the EC. Zooming in on figures 19b, d and f, we see that the tail is very heavy: some observations appear discontinuously and lie very far from the main part of the distribution. Those outliers cause the lower stability of the yearly EC computed with the SOM-parametric method. To deal with them, we can set a threshold: outliers larger than the threshold are deleted, and the variance of the yearly EC then naturally becomes much lower. The value of the threshold should be set by senior managers, based on their experience: they can provide the maximal value of EC they are willing to accept, and we take that as the threshold. I personally suggest this solution rather than simply running more repetitions; it may lose some information, but it saves much more time.

(a) Block, balance sheet 1 (b) SOM, balance sheet 1

(c) Block, balance sheet 2 (d) SOM, balance sheet 2

(e) Block, balance sheet 3 (f) SOM, balance sheet 3


(a) Block, 6 months (b) SOM, 6 months, smoothed 100 times

(c) Block, 2 years (d) SOM, 2 years, smoothed 100 times

Figure 20: Simulated path of 252-day changes in different maturities

(a) Block, 10 years (b) SOM, 10 years, smoothed 100 times

(c) Block, 30 years (d) SOM, 30 years, smoothed 100 times

6 Summary

At the beginning, we discussed the characteristics of the quantity changes of the empirical data, including the variance, the autocorrelation and the cross-sectional autocorrelation. We then understood roughly how the quantity changes are distributed, based on the last ten years of yield curve evolution. Clearly, the autocorrelation and the cross-sectional autocorrelation between tenors are the most representative characteristics of a term structure.

While determining the block size and the range of input data, we noticed that the pattern of the eigenvectors from nine years ago has a very similar shape to the pattern from three years ago. This partially indicates the existence of a yield curve movement cycle. The autocorrelation of the variance also indicates that the yield curve movement cycle occurs roughly every six years. Moreover, during the empirical investigation of the yield curve movement cycle, we accidentally found that the evolution of term structure changes within each cycle has a certain form, which gives us a possibility to model it. This is not discussed here because it is not directly relevant to the topic, but it is certainly worth investigating.

To demonstrate the superiority of the SOM-parametric method, which creates a more realistic long-horizon term structure evolution, we used the ideas of RMJBN (2005) to assess the goodness of our methodology. After comparing the PCA results, the autocorrelation and the long-horizon variances, we clearly saw better performance from the SOM-parametric method. Its topology preserving property ensures that the domain of the empirical data remains unchanged, so the pattern of the eigenvectors converges to the empirical one. Its smoothing property strengthened the autocorrelation of the block sampled data, which in turn replicated the variance of n-day quantity changes very well. In addition, a clear mathematical expression explained the relationship between the autocorrelation and the variance of n-day quantity changes.

7 Conclusion

The importance of autocorrelation in long-horizon simulation is already known to many risk managers. The ALM modelling team suggested using the block sampling method. I thoroughly analyzed this method and explained its drawbacks: the autocorrelation of the simulated data is still not fully recovered, which leads to too little variance of the P&L and insufficient EC. Building autocorrelation into historically simulated data is still an unsolved problem: no parametric function exists that identifies what the autocorrelation should be or how to modify it. We therefore chose a stochastic algorithm, the SOM-parametric method, to cancel out the discontinuity introduced by the block sampling method. We believe that the randomness created by the two stochastic algorithms can be neutralized if we aggregate them in a formal method. Indeed, this reaches the objective of my original design and yields a larger requirement of yearly EC.

Notice that we strengthened the autocorrelation on the basis of explicit criteria, not blindly. This demonstrates the applicability and the superiority of our advanced method.

Final remark:

8 Further research

During this study, we encountered some problems which may attract interest from other researchers. We list them here and leave them for further development.

• We only smoothed the block sampled data in terms of the first two factors of the NS model, because our attention is focused on the level factor and the slope factor. Further aspects of the yield curve can also be smoothed, depending on the model used for the decomposition.

• The yield curve evolution during every cycle seems to share a similar feature, especially when the yield curve is nearly flat. This makes the movement of the term structure predictable, and hence arbitrage opportunities exist. Investigating this effect could help build a cyclical strategy for risk management.


Appendices

More results from the SOM-parametric method

In the main part of this article, we proposed using the RMJBN (2005) criteria to assess the quality of the simulated data from the SOM-parametric method, and we compared the simulation result of the block sampling method with that of the SOM-parametric method. However, in HS we need to run the smoothing procedure thousands of times, and a single example cannot represent the behaviour of the SOM smoothing algorithm entirely. Therefore, to demonstrate the stability of the SOM smoother, we place more examples in this appendix.

By observing figures 22, 23, 24 and 25, we see that the block sampling method generally replicates the pattern of PCA, the autocorrelation and the multiple-day variance poorly according to the RMJBN (2005) criteria. Notice that:

In the block sampling method:

• The pattern of the eigenvectors usually deviates from the empirical one, mostly for the second and third factors.

• The autocorrelation of short-term tenors is normally weaker than the empirical one, and the autocorrelation of long-term tenors normally lies below zero, whereas it should lie around zero.

• The variance of multiple-day changes is globally lower than in the empirical data.

In the SOM-parametric method:

• The topology preserving property functions, but not perfectly. The corrected data has an eigenvector pattern that converges to the empirical one. Although the convergence is weak, sometimes visible on the second and third factor, sometimes only on the third factor and sometimes not visible at all, it is still better than nothing.

• The autocorrelation of short-term tenors is clearly increased and approaches the real one. The autocorrelation of long-term tenors varies around zero.

Figures 22 to 25: in each figure, panels (a), (c) and (e) show the block sampled data (Block) and panels (b), (d) and (f) show the SOM-corrected data (SOM).

References

[1] Baillie, R.T., Bollerslev, T. Prediction in dynamic models with time-dependent conditional variances. Journal of Econometrics 52 (1992), 91-113.

[2] Belousov, S.L. Tables of Normalized Associated Legendre Polynomials. Mathematical Tables 18, Pergamon Press, 1962.

[3] Bliss, Robert R. Movements in the term structure of interest rates. Federal Reserve Bank of Atlanta, Economic Review, fourth quarter 1997.

[4] Bollerslev, T. Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics 31 (1986), 307-327.

[5] Bouton, C., Pages, G. Self-organization of the one-dimensional Kohonen algorithm with non-uniformly distributed stimuli. Stochastic Processes and their Applications 47 (1993), 249-274.

[6] Brockwell, P.J., Davis, R.A. Time Series: Theory and Methods, 2nd edition. Springer, 2009.

[7] Christoffersen, P.F. Evaluating interval forecasts. International Economic Review, Volume 39, No. 4, November 1998.

[8] Cottrell, M., de Bodt, E., Gregoire, Ph., Henrion, E.F. Analyzing shocks on the interest rate structure with Kohonen map. Conference on Computational Intelligence for Financial Engineering, New York, IEEE 1996, 162-167.

[9] Cottrell, M., Fort, J.C., Pages, G. Two or three things that we know about the Kohonen algorithm. Proc. of ESANN, M. Verleysen (ed.), D Facto, Bruxelles.

[10] Embrechts, P., Kluppelberg, C., Mikosch, T. Modelling Extremal Events for Insurance and Finance. Springer, New York, 1997.

[11] Fort, J.C. SOM's mathematics. Neural Networks 19 (2006), 812-816.

[12] Hamilton, J. Time Series Analysis. Princeton University Press, 1994.

[14] Jondeau, E., Rockinger, M. The Copula-GARCH model of conditional dependencies: An international stock market application. Journal of International Money and Finance 25 (2006), 827-853.

[15] Kohonen, T. Analysis of a simple self-organizing process. Biological Cybernetics 44 (1982), 135-140.

[16] Kohonen, T. Self-Organization and Associative Memory. Springer, New York, 1984.

[17] Kohonen, T. Self-organized formation of topologically correct feature maps. Biological Cybernetics 43 (1982), 59-69.

[18] McNeil, A.J., Frey, R., Embrechts, P. Quantitative Risk Management: Concepts, Techniques, and Tools. Princeton University Press, 2005.

[19] Nawalkha, S.K., Soto, G.M., Beliaeva, N.A. Interest Rate Risk Modeling: The Fixed Income Valuation Course. John Wiley & Sons, Inc., 2005.

[20] Nelson, C.R., Siegel, A.F. Parsimonious modeling of yield curves. Journal of Business 60 (1987), 473-489.

[21] Rebonato, R., Mahal, S., Joshi, M., Buchholz, L., Nyholm, K. Evolving yield curves in the real-world measures: a semi-parametric approach. Journal of Risk, Volume 7, Number 3, 2005.

[22] Schoenberg, I.J. Contributions to the problem of approximation of equidistant data by analytic functions. Quart. Appl. Math. 4 (1946), 45-99 and 112-141.

[23] Silverman, B.W. Density Estimation for Statistics and Data Analysis. Chapman & Hall, London.

[24] Svensson, E.O. Estimating and interpreting forward interest rates: Sweden 1992-1994. IMF Working Paper 94/114, 1994.
