
Master Thesis EORAS

Principal component analysis VaR

for balance sheet interest rate risk

Author:

Paul Wonderman

Supervisor: Prof. Dr. Laura Spierdijk


Abstract

In this thesis we develop a Principal Component Value-at-Risk (VaR) methodology for interest rate risk exposure. The interest rate curve is decomposed into underlying risk factors using Principal Component Analysis (PCA). Simulation of these risk factors is used to compute the VaR of three synthetic portfolios under an appropriate choice of the principal component distribution function. Finally, a test analysis is performed to compare the results under the historical VaR and the PCA VaR.


Contents

1 Introduction and Problem definition
  1.1 Introduction
  1.2 Problem definition

2 Literature review

3 Yield curves, Value-at-Risk and fitting
  3.1 Interest rate curves
    3.1.1 Different theories of interest rate curve structure
  3.2 Value-at-Risk
    3.2.1 Simulation techniques to estimate the VaR
    3.2.2 Advantages and disadvantages of the different techniques
  3.3 Modeling the risk factors

4 Principal Component Analysis
  4.1 Basics of PCA
  4.2 Geometrical representation
  4.3 Mathematical formulation

5 Empirical results

6 Conclusion
  6.1 Summary
  6.2 Subquestion 1
  6.3 Subquestion 2
  6.4 Subquestion 3
  6.5 Main research question
  6.6 Topics for further research


Chapter 1

Introduction and Problem definition

1.1 Introduction

In the 1990s pension funds benefitted from high stock market returns and high interest rates. This prosperity on the financial markets collapsed in the early years of this millennium in response to the dot-com bubble, the terrorist attacks of September 11 and the credit crunch of 2008. After these events stock markets declined and interest rates dropped, resulting in the opposite of the situation of the 1990s: the combination of both negative equity returns and low interest rates. This resulted in a dramatic decline of the average funding level of both corporate and public pension schemes.

One of the reasons this combination of events has such a dramatic impact on the funding level is the exposure to interest rate risk on both sides of the balance sheet. Assets such as bonds increase in value when interest rates decrease, and on the liability side the value of future pension payments is affected as well. Because liabilities outweigh the fixed income assets in both size and duration, pension funds are very sensitive to a decrease in interest rates.


Inflation risk is another risk a pension fund is typically exposed to. At the start of a pension it is settled to what extent the pension is compensated for changes in the price level, making the pension dependent on changes in the price level.

Besides market risk a pension fund faces credit and longevity risk. Credit risk is defined by Resti and Sironi (2007) as the possibility that an unexpected change in a counterparty's creditworthiness may generate a corresponding unexpected change in the market value of the associated credit exposure. Pension funds usually invest a large part of their available capital in bonds (ABP, 2012), making them exposed to credit risk. Credit risk is present because of the uncertainty whether or not the counterparty of the bond is able to repay its obligations.

According to Milevsky (2006) longevity risk has its origin in the uncertainty of the future lifetime of the participants. Pension funds are exposed to longevity risk as a result of plan members living longer on average than expected, causing liabilities to be larger than what the fund initially accounted for.

Being aware of all the risks a pension fund is exposed to, we focus in this thesis on interest rate risk. Fiori and Iannotti (2006) point out that interest rate risk assessment refers to balance sheet positions, where a distinction is made between two types: balance sheets with a positive duration gap and balance sheets with a negative duration gap. Balance sheets with a positive duration gap are also called asset sensitive; they tend to finance medium and long-term assets with short term liabilities and are therefore exposed to rising interest rates. Conversely, balance sheets with a negative duration gap are called liability sensitive; short term positions tend to be financed with long term maturities, so this type is exposed to decreasing interest rates.


The literature provides an overview of VaR models used in the field of finance. A distinction can be made between parametric and non-parametric models, where in parametric models the underlying distribution of the risk factors is specified. This specification is often done by fitting a distribution function to the risk factor data. As said before, risk factors are the market risks that influence the value of a financial position. The non-parametric method does not assume any underlying distribution of risk factors; it uses the distribution of the raw data.

According to Fiori and Iannotti (2006) simulation can be a very time consuming method, since the main obstacle in estimating the VaR is the computational burden of portfolio revaluation. This burden is due to the high number of risk factors and the large number of positions which need to be fully revalued under many different scenarios. A method to speed up calculations is using a dimensionality reducing technique such as Principal Component Analysis (PCA). Press et al. (1996) define PCA as a statistical technique that is used to determine whether the observed correlation between a given set of variables can be explained by a smaller number of unobserved and unrelated common factors. PCA is especially useful when considering a large number of datapoints and one believes that there is some redundancy in these points. In this case, redundancy means that some of the variables are correlated with one another, possibly because they are measuring the same construct. Because of this redundancy, it should be possible to reduce the observed variables into a smaller number of artificial principal components that account for most of the variance in the observed variables (Hatcher, 1994).


In the paper of Fiori and Iannotti (2006) two approaches to estimating the principal component (PC) distribution function are compared: the parametric and the non-parametric approach. Under the parametric approach the risk factors are assumed to be normally distributed, so the fat tails of the risk factors are not taken into account. The novelty of their work is the non-parametric estimation, where the fat tails are taken into account. After estimating the PC distribution function a Monte Carlo simulation is applied to generate a large number of possible shocks to the yield curve. From the simulated risk factor distribution the P&L distribution of multiple Italian banks' balance sheets is derived by using a delta-gamma approximation. The interest rate risk exposure is found by taking the first percentile of the profit and loss distribution.

In this thesis we discuss and evaluate the different approaches of using PCA for interest rate risk exposure in risk management. In doing so we follow the approach of Fiori and Iannotti, but our focus is on the modelling of the risk factors using a parametric distribution. In the remainder of this chapter we state the main research question more precisely. A literature review is set out in Chapter 2, describing the available research that has been done in the field of VaR and PCA. The risk measure VaR is discussed in Chapter 3, together with the concepts and distributions used in this research. This chapter can be seen as the theoretical part of the thesis. Chapter 4 is devoted to Principal Component Analysis. First an intuition for PCA is given, followed by a mathematical description. In Chapter 5 the empirical part is presented, where the theoretical concepts described in earlier chapters are applied to European zero coupon bond data. Chapter 6 summarizes the main findings and concludes.

1.2 Problem definition

In this part we describe the research question in more detail and we present the scope of our research.


We estimate different distributions that do account for the fat tails of the data in order to compare the results with the normal distribution.

The estimation of the different parametric distributions is done on the principal component series. Fiori and Iannotti investigated a yield curve decomposed into three principal components. In our research more principal components are added and the results are compared to this base case of three components.

Fiori and Iannotti considered eighteen balance sheets where the exposure to interest rate risk is varied enough to reflect all possible impacts. We do not focus on the balance sheets of a bank but give attention to the balance sheets of pension funds. For a pension fund the interest rate risk exposure is more explicit, since the duration of the liabilities is usually very high. We can mimic the pension fund's balance sheet by building a portfolio consisting of zero coupon bonds to produce different cash flow patterns. Different types of balance sheets are produced, on which the extreme results are evaluated. We look at balance sheets with a positive duration gap and balance sheets with a negative duration gap. The VaR of these bond portfolios, consisting of multiple bonds of different maturities, is then measured. The portfolios represent a simplified pension fund balance sheet. We ask ourselves whether the use of this data reducing method is justified when applied to interest rate curves.

The main research question of this thesis is:

Is PCA an appropriate method for estimating interest rate risk exposure?

In order to answer the main research question, the focus will be on the following subquestions:

• How many principal components are needed to represent the yield curve, and are the principal components stable over different time horizons?

• In comparison with the historical VaR, how does the VaR change when selecting different numbers of principal components for the yield curve representation?

• What is the impact on the VaR when different parametric distributions are assumed for the principal component risk factors?


Chapter 2

Literature review

Representing the yield curve by a number of principal components is widely known and extensively used in the literature. In this chapter we discuss the main findings and conclusions derived from the literature in which PCA is the subject of interest. Not only do we discuss the use of PCA to arrive at a number of risk factors for determining the underlying drivers of interest rate curves, but also its shortcomings and other fields of research where PCA is used. In order to arrive at an answer on the appropriateness of PCA for interest rate risk exposure we analyze the available literature. Especially the representation of the yield curve using principal components and the stability of the principal component representation over time are of importance for our first subquestion.

In our research the prime subject is the usage of PCA for the dimensionality reduction of the risk driving factors of the yield curve. But PCA is not only used to extract risk factors from financial markets. The method finds its origin in applied linear algebra and according to Shlens (2007) it is one of the most valuable results in this field. PCA is used abundantly in all forms of analysis because it is a relatively simple, non-parametric method of extracting the most relevant information from large confusing datasets. Draper et al. (2003) investigated the use of PCA in the context of a baseline face recognition system. Astrophysics has also made use of PCA: Brosche (1973) applied it to describe the statistical properties of galaxies.


Loretan (1997) applied PCA to various types of financial market data such as spot exchange rates, stock markets and interest rate products. In extracting risk drivers, PCA is applied to reduce the dimensionality of the data and reveal underlying patterns. If the original data is highly correlated, few principal components are needed to explain a large fraction of the total data variation. The fraction of variance explained by successive principal components is used to obtain an estimate of the effectiveness of the dimensionality reduction.

From the various data types Loretan used, we focus on interest rate data. The yield curve is given by the level of the interest rate as a function of the time to maturity. Loretan investigated two separate parts of the yield curve, namely the short term and the long term. For the short term interest rates, data of nine countries is taken together and PCA is performed on this set. None of the principal components of this short term analysis explained a large fraction of the total variance. The same procedure is followed for long maturity interest rates, but in comparison with the short term rates these long term rates are highly correlated with each other. This high correlation leads to the main result that three factors can be used to explain most variation in the yield curve. The first three risk factors are known as the shift, tilt and curvature factors.


The stability of the model varies over time, in the sense that the model deteriorates significantly when it is estimated using only the most recent data in comparison with the long sample.

Besides the data reduction capability of PCA, some shortcomings are mentioned in the literature. According to Loretan (1997), PCA is strongly affected by the choice of units of the series. As a consequence, PCA may not detect risk factors that do not contribute significantly to the total variability of the data. One can solve this shortcoming by multiplying the series with the appropriate portfolio weights, but this is cumbersome since one has to have knowledge of the actual portfolio.

Kreinin (1998) shows a portfolio that is constructed such that it has a large position sensitive to risk factors that appear to be unimportant in the PCA. This means that one should keep in mind that selecting principal components on the basis of total variation in the yield curve could underestimate the VaR of a portfolio. Therefore the author selects principal components not based on their explanatory power for the total variability of the data, but on how much of the variability of the particular portfolio they explain. This method of selecting principal components is also known as portfolio PCA. More research on this topic is done by Hull (2005), who presents a portfolio that has little exposure to the first component but significant exposure to the second component (calculated for U.S. Treasury data). So selecting principal components on the basis of their contribution to the total risk factor variance could drastically underestimate risk. A clear disadvantage of selecting components on the basis of the portfolio is that it requires knowledge of the precise allocation of the portfolio.


Chapter 3

Yield curves, Value-at-Risk and fitting

Our main research question considers interest rate risk, the risk measure VaR and PCA. In this chapter we touch on the subjects of interest rate risk and the risk measure VaR. The next chapter considers PCA.

In this chapter the basic concepts and theories used in our thesis are described, where we rely largely on Hull (2008). Interest rate risk is the market risk under consideration, therefore an evaluation of the different interest rate curves is given. Multiple theories about the shape and direction of the curves can be distinguished; we briefly discuss the differences.

As a risk measure the VaR is used, which summarizes the risk of a portfolio in a single number. In this chapter an extended description of this risk measure is given.

Moving on to the reason why we want to use PCA in the first place: the simplification of the data in order to speed up the calculations in simulation. PCA can also be used to investigate whether any underlying patterns are present in our main risk drivers. Different simulation methods are therefore evaluated and compared, since we want to consider the pros and cons of historical versus Monte Carlo simulation.


3.1 Interest rate curves

In the financial world several interest rate markets can be distinguished. The money market is the market for financial institutions which require cash and want to borrow or lend money overnight or in the short term. The time window of this market for borrowing and lending money is within one year. Transactions between banks in the money market are often overnight. The rate at which banks in London offer each other money is known as the London Interbank Offered Rate (LIBOR). Another common interbank offered rate is the Euro Interbank Offered Rate (EURIBOR). It is the benchmark rate at which AAA or AA rated banks offer each other money in the European Monetary Union.

For longer term funding, institutions turn to the capital market, where funding is raised with equity or bonds. Besides banks, governments and other companies can raise longer term funding on the capital market. In contrast to the money market, the time period of funding is typically longer than a year. For governments this means that treasury notes can be issued which mature after a period varying between one and twenty years; treasury bonds mature between twenty and thirty years. Corporations can issue bonds as well. For the bonds issued by governments a curve exists where the market interest rate is given for each maturity. In general the curves for government bonds have lower yields at each maturity point in comparison with the yield curves of corporations. Corporate bonds are riskier and therefore investors require extra compensation for the risk they are exposed to. Usually bonds pay out, each year or half year, an amount of money to the holder. This payment is known as the coupon payment. These payments are taken into account when valuing a bond. Bonds with no coupon payments also exist, and one can construct a curve of their market rates, which is the zero curve. The interest rates we focus on in this thesis are the zero coupon bond rates for the Euro area.

3.1.1 Different theories of interest rate curve structure


Three theories can be distinguished in particular.

The Expectations theory suggests that the long-term interest rates should reflect expected future short term rates. So, it argues that a forward interest rate corresponding to a certain future period is equal to the expected future zero interest rate for that period.

Market segmentation theory conjectures that there need be no relationship between short, medium and long term interest rates. The supply and demand in the short term bond market completely determines the short term interest rate, and the same goes for the medium and long term.

The last theory, which is the most appealing, is the Liquidity preference theory. It argues that forward rates should always be higher than expected future zero rates. The main idea behind this theory is that investors prefer to keep their money liquid and therefore invest for short periods of time. On the other hand, borrowers want to borrow at a fixed rate for longer periods of time. This leads to an upward-sloping yield curve.

3.2 Value-at-Risk

In our research we investigate whether PCA is an appropriate method for estimating interest rate risk exposure. In this section we elaborate on the interest rate risk exposure measure. The risk measure under consideration is the Value-at-Risk (VaR). We describe why the VaR is the risk measure of choice and elaborate on the specifics and details of VaR as a risk measure.

Basak and Shapiro (2001) describe the VaR as the loss that can occur over a period, at a given confidence level, due to exposure to market risk. In recent years VaR-based market risk management emerged as the industry standard, by choice or by regulators (Jorion, 1997). According to Mitra (2009) the introduction of VaR represented a significant change in the direction of risk measurement. This change in direction can best be described by three reasons. First, VaR initiated a shift in the focus of using risk measures for the management of risk in an industry context. In comparison with earlier risk measures, the VaR was created to measure risk across the whole institution under one holistic risk measure (Dowd, 2002). This holistic approach was a significant change in the focus of risk measurement which deviated from previous risk measures.


Second, regulators began to base market risk capital requirements upon VaR in 1995. This regulation has subsequently fuelled interest in VaR and was a reason for its becoming a popular risk measure (Danielsson, Shin and Zigrand, 2004). Finally, previous risk management measures focused on explaining the return on an asset based on some theoretical model of the relation between risk and return. The best known example of such a model is the Capital Asset Pricing Model. With the introduction of VaR the focus shifted to measuring and quantifying risk itself. This quantification was done in terms of losses rather than expected returns. These losses are derived from shocks, and therefore the VaR is calculated to set a minimum capital requirement to protect against these shocks.

According to Hull (2008) the $100x\%$ one-day VaR is defined by
$$\Pr{}_t\!\left(\Delta P_{t+1} \geq -\mathrm{VaR}_x\right) = x,$$
where $P_t$ is the value of the instrument or portfolio at time $t$ (measured in days) and $\Delta P_{t+1} = P_{t+1} - P_t$ is the profit or loss of the next day. The focus is on the VaR at the 95% and 99% confidence levels, since most regulators require these confidence levels. The VaR is thus simply a quantile of the distribution function. If the returns of the portfolio under consideration are normally distributed, we obtain a VaR as in Figure 3.1.

Figure 3.1: $(100-X)\%$ VaR of a normally distributed P&L


3.2.1 Simulation techniques to estimate the VaR

The VaR can be computed using different techniques. In our research we focus on the comparison between parametric and non-parametric simulation. We want to consider the impact and the differences, therefore we describe in this section the different simulation techniques of the parametric and non-parametric methods. First, the non-parametric historical simulation is discussed, followed by the parametric Variance-Covariance and Monte Carlo simulation.

Historical simulation

In historical simulation the future market factor changes are assumed to be properly represented by their historical values. The assumption is made that all values that occurred in the past are a reliable source of possible future movements. As before, the value of the portfolio at time $t$ is given by $P_t$; this is the value of today as we can observe it. If we go one step ahead, the value of the portfolio $P_{t+1}$ at time $t+1$ is a random variable depending on market risk factors. These risk factors are for instance exchange rates or interest rates. One selects a certain period of time in the past and collects all the market returns, so the past values are used only once in the simulation. The portfolio is revalued for each historical market return change through full valuation. After the value changes are calculated, they are ranked from lowest to highest. In this way one can find the VaR by taking a quantile of the empirical distribution that has been constructed.

A modification which can be made is called bootstrapping. Whereas the historical simulation uses the past returns only once, the bootstrapping approach is more flexible (Efron, 1979). It can use the past observations more than once, creating some sort of random data generation.
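As a minimal sketch of these two non-parametric variants, the snippet below computes the VaR as a quantile of a vector of historical profit-and-loss observations. The array name pnl, the confidence levels and the number of bootstrap draws are illustrative assumptions, not values from the thesis.

```python
# Sketch of historical-simulation VaR; `pnl` is assumed to hold daily
# changes in portfolio value obtained from full revaluation.
import numpy as np

def historical_var(pnl: np.ndarray, confidence: float = 0.99) -> float:
    # The VaR is the (1 - confidence) quantile of the P&L distribution,
    # reported here as a positive loss figure.
    return -np.quantile(pnl, 1.0 - confidence)

def bootstrap_var(pnl: np.ndarray, confidence: float = 0.99,
                  n_draws: int = 10_000, seed: int = 0) -> float:
    # Bootstrap variant: resample the past observations with replacement,
    # so that each observation can be used more than once.
    rng = np.random.default_rng(seed)
    resampled = rng.choice(pnl, size=n_draws, replace=True)
    return -np.quantile(resampled, 1.0 - confidence)

# Example usage with a hypothetical P&L vector:
# pnl = np.loadtxt("own_funds_changes.csv")
# print(historical_var(pnl, 0.95), bootstrap_var(pnl, 0.99))
```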

Variance-Covariance

The parametric Variance-Covariance method, which is also known as the Delta-Normal method, is the most widespread approach used among risk managers. Under this method the portfolio’s distribution of profits and losses is modelled by making two assumptions.

The first assumption is the linearity of the portfolio under scope. A portfolio is linear if the change in the portfolio price $V(t)$ is linearly dependent on its constituent asset prices $S_i(t)$. That is,
$$\Delta V(t) = \sum_{i=1}^{N} \omega_i\, \Delta S_i(t),$$
where the weights $\omega_i$ denote the positions held in the individual assets. So the price changes of a particular asset in the portfolio are fully reflected in the price of the total portfolio. This assumption holds especially when the portfolio does not contain derivatives.

The second assumption is the joint normal return distribution of the assets, which implies that the returns of the portfolio are normally distributed as well.

When assuming linearity and normality we can describe the portfolio's profit and loss using a normal distribution. With this distribution the VaR follows from an explicit formula, which is clearly simple to evaluate: at confidence level $x$ the VaR is just the corresponding normal quantile, $\mathrm{VaR}_x = z_x\,\sigma_P - \mu_P$, where $\mu_P$ and $\sigma_P$ are the mean and standard deviation of the portfolio P&L and $z_x$ is the standard normal quantile (for example $z_{0.99} \approx 2.33$).
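A minimal sketch of this Variance-Covariance (Delta-Normal) VaR is given below, assuming the portfolio P&L has zero mean. The names positions and cov are hypothetical inputs (cash positions per asset and the covariance matrix of asset price changes), not data from the thesis.

```python
# Sketch of the Delta-Normal VaR under the two assumptions above:
# a linear portfolio and jointly normally distributed asset price changes.
import numpy as np
from scipy.stats import norm

def delta_normal_var(positions: np.ndarray, cov: np.ndarray,
                     confidence: float = 0.99) -> float:
    # Portfolio P&L variance is w' Sigma w for position vector w.
    sigma_p = np.sqrt(positions @ cov @ positions)
    # VaR is the corresponding quantile of the normal P&L distribution,
    # roughly 2.33 * sigma_p at the 99% level.
    return norm.ppf(confidence) * sigma_p

# positions = np.array([100.0, -250.0])            # hypothetical long/short positions
# cov = np.array([[1e-4, 5e-5], [5e-5, 2e-4]])     # hypothetical covariance matrix
# print(delta_normal_var(positions, cov, 0.99))
```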

Monte Carlo simulation

Monte Carlo simulation is also based on random number generation, but in a more complex way than the bootstrapping method. On the historical sample the parameters of a particular probability distribution are estimated. The $N$ simulated risk factors, $N \in \mathbb{N}$, are then drawn from this probability distribution. The assumption that the risk factors follow a particular distribution makes this approach a parametric one.

3.2.2 Advantages and disadvantages of the different techniques

All the different approaches have their advantages and disadvantages. The following findings are largely based on Hull (2008).

The Monte Carlo (MC) approach has the advantage of generating more values than the number of observations in the original sample. The original sample usually consists of a certain number of years, where each year has 252 trading days; for the MC simulation the number of data points generally used is 10,000. A clear disadvantage is that these observations are generated under certain assumptions about the distribution of the original sample. The length of the sample also influences the estimated parameters, since for a different number of data points the fitted distribution can be different again.

The historical simulation approach has the advantage that it does not assume any underlying distribution of the original data. No assumptions have to be made about the distribution, therefore it is a non-parametric approach; only the observed risk factor changes have to be analyzed. A drawback of this method is the lack of data and the decision of the timespan to collect data from. If the dataset is too short there is an insufficient number of points, but if the sample is too long the data could be outdated. Since there is no fitted distribution, the data points are only used once.


For the Variance-Covariance method a drawback is the assumption of normally distributed returns, which is generally rejected for financial time series.

3.3 Modeling the risk factors

In the third subquestion we want to consider the impact on the risk measure VaR of assuming different distributions for the underlying risk factors. In this section we describe the distributions used to estimate the possible underlying distribution of the risk factor returns.

In the paper of Fiori and Iannotti (2006) the principal components are modeled using the normal distribution, but this distribution often does not capture the fat tails of financial time series. Therefore we fit other distributions in the empirical part of this thesis to take the non-normality into account. The use of distributions different from the Gaussian gives us the opportunity to compare the results. Distributions with the fat tail property are considered, namely the Student t, Laplace and logistic distributions. In this section we briefly state the distributions under consideration and their basic characteristics, where the densities are taken from Rice (1999).

Normal distribution

The most widely known distribution in statistics, and the one used to model the interest rate returns by Fiori and Iannotti (2006), is the normal distribution. It is a symmetric distribution with the classic bell-shaped curve. The density function depends on two parameters, $\mu$ and $\sigma$ (where $-\infty < \mu < \infty$, $\sigma > 0$):
$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right), \qquad -\infty < x < \infty.$$

Student t-distribution

Another widely used distribution is the Student t. Like the normal distribution it is symmetric and bell shaped, but the main difference is that the t-distribution has fatter tails than the normal distribution. The density function of the t-distribution with $n$ degrees of freedom is
$$f(t) = \frac{\Gamma\!\left(\frac{n+1}{2}\right)}{\sqrt{n\pi}\,\Gamma\!\left(\frac{n}{2}\right)}\left(1 + \frac{t^2}{n}\right)^{-\frac{n+1}{2}}, \qquad -\infty < t < \infty,$$
where $\Gamma$ is the Gamma function.

Laplace distribution


Whereas the normal distribution has a squared difference from the mean in the exponent, the Laplace distribution expresses this in terms of the absolute difference from the mean. Due to this fact, the Laplace distribution has fatter tails than the normal distribution. The density function is given by
$$f(x \mid \mu, b) = \frac{1}{2b}\exp\!\left(-\frac{|x-\mu|}{b}\right), \qquad -\infty < x < \infty,$$
where $\mu \in \mathbb{R}$ is the location parameter and $b > 0$ is the scale parameter.

Logistic distribution

The logistic distribution also resembles the normal distribution in shape but has heavier tails, which again makes it useful for modeling interest rate changes whose tails are fatter than the normal distribution accounts for. The density function is given by
$$f(x \mid \mu, s) = \frac{e^{-(x-\mu)/s}}{s\left(1 + e^{-(x-\mu)/s}\right)^{2}}, \qquad -\infty < x < \infty,$$
where $\mu$ is the location parameter and $s > 0$ the scale parameter.
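As a minimal sketch of how the four candidate densities above can be fitted by maximum likelihood, the snippet below uses SciPy's built-in fit routines; the variable name returns and the file name are hypothetical placeholders for one maturity's return series.

```python
# Maximum-likelihood fits of the four candidate distributions to a
# vector of daily log returns for one maturity point.
import numpy as np
from scipy import stats

def fit_candidates(returns: np.ndarray) -> dict:
    return {
        "normal":    stats.norm.fit(returns),      # (mu, sigma)
        "student_t": stats.t.fit(returns),         # (df, loc, scale)
        "laplace":   stats.laplace.fit(returns),   # (mu, b)
        "logistic":  stats.logistic.fit(returns),  # (mu, s)
    }

# returns = np.loadtxt("zero_rate_log_returns_5y.csv")   # hypothetical input
# for name, params in fit_candidates(returns).items():
#     print(name, params)
```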


Chapter 4

Principal Component Analysis

Our main research question concerns the appropriateness of PCA in estimating interest rate risk exposure. In the previous chapter we described the risk measure used for interest rate risk exposure, the VaR. In this chapter we analyze the data reducing technique principal component analysis (PCA). First we focus on the technique itself by describing the basics and interpreting the concept using a graphical representation. Then a mathematical formulation of the method is given, in which all the steps used in performing PCA are described. The description of how PCA works is necessary since we want to consider the impact of including and excluding principal components on the VaR.

4.1 Basics of PCA


Interest rates of different maturities never exhibit perfect correlation, but they can come close to it. The data of interest rates are highly correlated with each other, and therefore one can believe that it should be possible to reduce the observed variables into a smaller set of uncorrelated principal components that explain most of the variability. According to McNeil, Frey and Embrechts (2005) the aim of Principal Component Analysis is to reduce the dimensionality of highly correlated data by finding a small number of uncorrelated linear combinations that account for most of the variability of the original data. Therefore PCA is not a model itself; it is rather a data reducing statistical technique. From McNeil, Frey and Embrechts (2005) we find that the principal components are linear combinations of optimally weighted observed variables. The key idea is to perform a linear transformation of the original dataset into principal components that are orthogonal to each other. This can be seen as a change of the axes such that the Euclidean distance between the axes and the data points is minimized. If $x = (x_1, x_2, \ldots, x_n)$ and $y = (y_1, y_2, \ldots, y_n)$ are points in $\mathbb{R}^n$, the Euclidean distance is given by
$$d(x, y) = \sqrt{(x_1 - y_1)^2 + (x_2 - y_2)^2 + \cdots + (x_n - y_n)^2}.$$

The first axis is chosen in such a way that the sum of squared Euclidean distances to the data points is minimized. It therefore resembles the best-fit line found in ordinary regression, except that the distances are measured orthogonally to the axis. Then the second axis is chosen orthogonal to the first axis, such that it minimizes the remaining sum of squared Euclidean distances. This process continues until the number of components extracted is equal to the number of variables in the data.

The first component accounts for the largest part of the total variance present in the data. It is correlated with some of the observed variables. The second component accounts for a maximum amount of the variance in the data that was not accounted for by the first principal component. This component is again correlated with some of the data, but it is uncorrelated with the first principal component. One can proceed and find that each component has zero correlation with the other components. Together the components explain the total variance of the data, where the first explains the largest part and each subsequent component a smaller part, until all the variation is explained. What is meant by the total variance of the data is simply the sum of the variances of the observed variables. For standardized data each variable has mean 0 and variance 1; therefore, the total variance of such a dataset is equal to the number of variables.


The minimum adequate sample size when performing PCA should be at least one hundred observations, or five times the number of variables analyzed. The sample that we use consists of 3238 datapoints per maturity, so this requirement is easily met.

4.2 Geometrical representation

In order to clarify the idea of PCA we use a geometrical representation of how the principal components are selected. Since we are dealing most of the time with datasets that have a dimension larger than three it is impossible to make a graphical representation. Therefore a simplified dataset of two dimensions is used in order to show the idea of how PCA works.

As said before, the first component is selected such that it explains most of the variance of the original data. Then the second component is selected, which explains most of the variability that is left and is orthogonal to component 1. In Figure 4.2 the process of Euclidean distance minimization is shown: the projection onto the components is chosen in such a way that the squared distances from the data points are minimized.


Figure 4.2: Principal Component Analysis for a two dimensional space

4.3 Mathematical formulation

After explaining the basics of the technique we move on to the mathematical formulation. Here the general steps are explained, where we rely on Mellin (2004).

Consider a set of $n$ variables $X = (x_1, \ldots, x_n)$ with zero empirical mean and nonsingular covariance matrix $\Sigma$, that is,
$$E(X) = 0, \qquad \mathrm{Cov}(X) = \Sigma.$$
The objective is to find a linear combination of the random variables $x_1, \ldots, x_n$ such that this combination contains as much of the variability of the original data as possible. So the first principal component is found by taking the linear combination
$$\beta^T x = \sum_{i=1}^{n} \beta_i x_i$$
and maximizing its variance $D^2(\beta^T x)$ (here $D^2(\cdot)$ denotes the variance), subject to
$$\|\beta\|^2 = \beta^T \beta = 1.$$
The normalization constraint is needed so that only the relative weight of each variable $x_j$ affects the variance of the linear combination $\beta^T x$. It can be shown that
$$\max_{\beta} D^2(\beta^T x) = \beta_1^T \Sigma \beta_1 = \lambda_1,$$
where $\lambda_1$ is the largest eigenvalue of the covariance matrix $\Sigma$ and $\beta_1$ the eigenvector corresponding to this largest eigenvalue. It now follows that the first principal component is given by
$$y_1 = \beta_1^T x.$$
The next step is finding the linear combination of the random variables $x_1, \ldots, x_n$ that is uncorrelated with the first principal component and contains as much of the remaining variability as possible. Then $\lambda_2$ is the second largest eigenvalue of the covariance matrix $\Sigma$ and $\beta_2$ the eigenvector corresponding to this second largest eigenvalue, and the second principal component is given by
$$y_2 = \beta_2^T x.$$
When continuing this process one finds all the eigenvalues and eigenvectors of the covariance matrix of the original data. Then the following relationship holds:
$$\Sigma = B D B^T,$$
where $D = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$ contains the eigenvalues of the covariance matrix $\Sigma$ and $B$ is an orthogonal matrix, so
$$B^T B = B B^T = I.$$
The corresponding eigenvectors are the columns of this matrix, therefore $B = [\beta_1 \mid \beta_2 \mid \cdots \mid \beta_n]$. The principal components can be expressed as
$$y_i = \beta_i^T x = \beta_{1i} x_1 + \cdots + \beta_{ni} x_n,$$
representing the $i$-th principal component, where $i = 1, 2, \ldots, n$. After finding the principal components, the original variables can be expressed as a linear combination of these components:
$$x_i = \beta_i y = \beta_{i1} y_1 + \cdots + \beta_{in} y_n.$$
If all the components are included the original variable $x_i$ is recovered exactly.
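A minimal sketch of these steps in code is given below: it performs the eigendecomposition of the covariance matrix, computes the component series and the cumulative explained variance, and reconstructs the original variables from the first k components. The matrix name X (observations in rows, maturities in columns) is an assumption for illustration.

```python
# PCA via the eigendecomposition Sigma = B D B' described above.
import numpy as np

def pca(X: np.ndarray):
    Xc = X - X.mean(axis=0)                      # zero empirical mean
    sigma = np.cov(Xc, rowvar=False)             # covariance matrix Sigma
    eigvals, eigvecs = np.linalg.eigh(sigma)     # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]            # sort descending
    D, B = eigvals[order], eigvecs[:, order]
    scores = Xc @ B                              # principal component series y = B' x
    explained = np.cumsum(D) / D.sum()           # cumulative proportion of variance
    return B, D, scores, explained

def reconstruct(scores: np.ndarray, B: np.ndarray, k: int) -> np.ndarray:
    # Represent the original (centred) variables with the first k components.
    return scores[:, :k] @ B[:, :k].T

# B, D, scores, explained = pca(X)
# X_hat = reconstruct(scores, B, k=3) + X.mean(axis=0)   # three-factor representation
```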


Chapter 5

Empirical results

In this chapter we present the results of estimating the VaR for synthetic bond portfolios using both exact and approximate techniques. In the approximating technique we use different numbers of principal components to describe the yield curve. Distributions are estimated on these principal component series, and the estimated distributions are used to derive the VaR of the portfolios. We make a comparison between a normal distribution and distributions which account for fat tails.

First we provide a description of the dataset used. Yield curves are an important source of risk for the value of the portfolios under investigation; the portfolios consist of bonds whose value is almost completely determined by the position of the yield curve. Principal component analysis is then applied to the yield curve. First the pattern of the principal components is investigated. Next, the impact of adding components on the explained variance is described. The stability of the results is assessed on different subsets of the whole period.


5.1 Data description

We collected swap rates from Bloomberg for the European swap curve, ranging from January 1, 1999 to May 31, 2011. Data is collected for fifteen different maturity points: 3 months, 6 months, 1 year up to 10 years, and the longer maturities of 15, 20 and 30 years. For all these maturities daily closing values are collected, in total a set of 3238 datapoints for each maturity, making a grand total of 48,570 datapoints. In investigating the yield curve we want to do research on the zero rates, since these rates are used in discounting cash flows. Therefore we transform the swap curves into zero curves using the bootstrapping method. In Figure 5.1 we plot the zero rate for the 1 year maturity.

Figure 5.1: The zero rate of the 1 year maturity; the vertical axis shows the yield in percentages and the horizontal axis the time in years.

The pattern of this curve indicates that the financial markets were fairly unstable over this period. At the start we see an increase, followed by a downward-sloping trend after the burst of the dot-com bubble in 2000. Afterwards, an increase is visible followed by a dramatic decrease, which can be explained by the credit crunch of 2008 hitting the markets.

When combining different maturities we can compare their movements to each other. In Figure 5.2 the zero rates for the 1, 5, 10 and 30 year maturities are shown. It is clearly visible that the different yield curves move more or less in the same direction.

In order to investigate the data for stationarity we use the Dickey-Fuller test. For all fifteen maturities the null hypothesis of a unit root cannot be rejected given the outcomes of the test, indicating that the yield levels are non-stationary.
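A minimal sketch of this stationarity check with the augmented Dickey-Fuller test from statsmodels is shown below; the array name zero_rates (one column per maturity) is an assumed placeholder, and the null hypothesis of the test is a unit root.

```python
# Augmented Dickey-Fuller test per maturity column; the null hypothesis
# is a unit root (non-stationarity).
import numpy as np
from statsmodels.tsa.stattools import adfuller

def adf_pvalues(zero_rates: np.ndarray) -> list:
    return [adfuller(zero_rates[:, j])[1] for j in range(zero_rates.shape[1])]

# p-values well above 0.05 mean the unit root cannot be rejected,
# which motivates switching to log returns of the zero rates below.
# print(adf_pvalues(zero_rates))
```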


Figure 5.2: Four zero curves with varying maturities on which the vertical axis is the yield in percentages and on the horizontal axis the time in years.

Because of this non-stationarity we cannot perform the principal component analysis on the yield levels directly. Instead, we are going to use log returns of the yield curve data. Log returns are found by taking the natural logarithm of a maturity point's daily closing value divided by the closing value of the previous day. Therefore the log returns are given by
$$r_t = \ln\!\left(\frac{p_{t,\mathrm{close}}}{p_{t-1,\mathrm{close}}}\right),$$
where $p_{t,\mathrm{close}}$ is the closing value for day $t$ and $p_{t-1,\mathrm{close}}$ is the closing value for day $t-1$. Since we have daily observations, the time $t$ is measured in days.

For the four maturities indicated earlier the daily logreturns are plotted over time. The results are shown in Figure 5.3.


Figure 5.3: Daily returns in percentages plotted over time in years for the 5 and 30 year maturities

                     1 year   5 years   10 years   30 years
Mean                  0.000     0.000      0.000      0.000
Standard deviation    0.009     0.013      0.011      0.016
Minimum              -0.050    -0.062     -0.078     -0.162
Maximum               0.078     0.075      0.073      0.263
Skewness              0.750     0.278      0.009      1.774
Kurtosis              7.833     2.187      4.046     46.106

Table 5.1: Descriptive statistics of four maturities from the dataset


Of the four maturities, the 30 year maturity is the most extreme. The kurtosis is a measure that indicates the peakedness of the data; the normal distribution, for instance, has a kurtosis of 3. A kurtosis larger than 3 indicates a distribution with values concentrated around the mean and thicker tails, therefore leading to a higher probability of extreme values. The skewness is a measure of asymmetry around the mean. For the given maturities the skewness is positive, indicating a distribution with a longer right tail.

5.2 Principal Component Analysis

Since the work of Litterman and Scheinkman (1991) for the US market, various empirical studies have shown that around 99% of the variation in yield changes is explained by three principal components, and that the first factor alone explains around 90% of the variation.

In this section the outcomes of Principal Component Analysis on the zero curve are discussed. We start with an analysis of the whole curve including all maturity points and see which components explain most of the variability in the given data. We show the effects on the yield curve of including and excluding different principal components, and see if this explanatory power is stable over time.

We start the principal component analysis for the fifteen maturity points over the period of the whole dataset. In the work of Fiori and Iannotti (2006) the analysis is done on level changes of the yield values. We use percentage changes in the curve, since we should not allow the magnitude of the data to have a decisive impact. In Table 5.2 the cumulative proportion of the zero curve variance explained by the different factors is given.

            Proportion   Cumulative
Factor 1         0.675        0.675
Factor 2         0.155        0.830
Factor 3         0.047        0.877
Factor 4         0.039        0.916
Factor 5         0.017        0.933
Factor 6         0.015        0.948

Table 5.2: Proportion and cumulative proportion of the zero curve variance explained by the factors


To explain at least 95% of the variance we have to include at least six principal components. In order to verify that this number of factors and this percentage of variance explained are consistent and stable over time, we take different subsamples within the total sample. We investigate five subsets of the total dataset in addition to the whole sample. Set 1 is the whole dataset, ranging from 4-1-1999 until 31-5-2011. Set 2 ranges from 31-5-2010 until 31-5-2011, that is, the last year in the sample. Set 3 runs from 30-5-2008 until 31-5-2011 and Set 4 from 31-5-2006 until 31-5-2011. Also a sample from the beginning of the dataset is taken, Set 5, ranging from 4-1-1999 until 4-1-2000. Set 6 consists of a middle sample, from 4-1-2002 until 3-1-2005. The cumulative proportion of total variance explained by each factor is shown for each set in Table 5.3.

            Set 1   Set 2   Set 3   Set 4   Set 5   Set 6
Factor 1    0.675   0.781   0.712   0.719   0.544   0.721
Factor 2    0.830   0.894   0.886   0.885   0.775   0.817
Factor 3    0.877   0.930   0.926   0.923   0.843   0.869
Factor 4    0.916   0.954   0.953   0.951   0.885   0.900
Factor 5    0.933   0.968   0.964   0.962   0.924   0.931
Factor 6    0.948   0.977   0.973   0.971   0.946   0.950

Table 5.3: Cumulative proportion of variance explained by the factors for each set

From these results we can see that the best results are achieved by taking a time period that is most recent and short. Then only three factors are needed to explain 93% of the total variation in the zero curve. Factor 1 explains less variation when selecting a sample from the beginning of our dataset. After six factors the explained proportion is around 95% for all different samples. Therefore, following the results for the different subsamples, we conclude that the principal component analysis is stable over time.

We consider the whole dataset again. In the previous part, six factors and their explanatory proportion were given. To see how these factors influence the different maturity points we plot the first three principal component factor loadings in Figure 5.4, which shows how these factors affect the movement in the zero curve.


Figure 5.4: Factor loadings for the first three factors of the whole dataset

The factor loadings for the shortest maturities behave in a different way than those for the other maturities. Therefore we exclude the very short term maturities and do the analysis again. The results are presented in Figure 5.5.

Figure 5.5: Factor loadings for the first three factors after removing the short maturities

The principal components now behave in a way that is consistent with the literature: PC1 is positive for all maturities, PC2 is first positive and afterwards negative, and PC3 switches between positive, negative and positive. No hump is observed in the factor loadings at the individual maturity points.


            Cum. expl. whole set   Cum. expl. short maturities removed
Factor 1                   0.675                                 0.724
Factor 2                   0.830                                 0.886
Factor 3                   0.877                                 0.933
Factor 4                   0.916                                 0.952
Factor 5                   0.933                                 0.966
Factor 6                   0.948                                 0.976

Table 5.4: Cumulative proportion of variance explained by the factors for the two datasets

Representation of the yield curve with PCA

In the previous section we have seen the explanatory power of the principal components in terms of the total variance of the yield curve. Now we investigate the shape of the yield curve estimated using principal components in comparison with the original yield curve. The whole dataset is under consideration, so a time period from 1999 until 2011, where the short term maturities are excluded. The excluded short term maturities are the 3 month, 6 month and 12 month points.

In Section 4.3 it is shown how to represent the yield curve using principal components. We investigate the 5 year curve. In Figure 5.6 the original yield curve is plotted over time together with the estimated yield curve using only one principal component.


Figure 5.6: The 5 year empirical zero curve with the estimated 1 principal component curve together with the absolute differences in percentages

5.3 Comparison of historical VaR and empirical PCA VaR


Figure 5.7: The differences between empirical and the estimated 2 and 3 principal component curves

First we present the synthetic portfolios, which consist of simple zero coupon bonds. The reason for the simplicity is to show the direct impact of the use of PCA. On these portfolios we calculate the profit and loss distribution of own funds. The historical VaR of the portfolios is calculated using the sample data. This is the benchmark case, in which the portfolio reflects the market. This situation is compared with the principal component yield curve analysis. In comparison with the paper of Fiori and Iannotti we consider parametric distributions which account for fat tails, where Fiori and Iannotti only investigated the normal distribution.

Description of the synthetic portfolio


Asset sensitive balance sheets tend to finance medium and long-term assets with short-term liabilities, being exposed to interest rate risk when interest rates go up. Conversely, the liability sensitive balance sheets tend to go short up to five years, with relatively small long positions in the highest maturities, being exposed to decreasing interest rates. Also an intermediate situation is considered in which intermediate assets are financed with very short and very long term liabilities.

The fixed income investments of the pension fund consist in our situation only of government bonds. A bond is a security sold by governments and corporations (the issuer) to raise money from investors (the holder) today, in exchange for promised future payments (Berk and DeMarzo, 2007). Suppose that a bond provides a cash flow $c_i$ at time $t_i$, where $1 \leq i \leq n$. The price of the bond $B$ and the bond yield $y$ (continuously compounded) are related by
$$B = \sum_{i=1}^{n} c_i e^{-y t_i}.$$
The price of a fixed rate bond is equal to the sum of all the discounted cash flows. A zero coupon bond is a bond which does not have any coupon payments between the moment of issuance and expiration. Therefore zero coupon bonds have a duration equal to the time to maturity.

In the following stylized portfolios a typical pattern is shown in which the pension fund is funded with the help of long term liabilities in the liability sensitive portfolio, while in the asset sensitive portfolio the pension fund is funded with short term liquidity. We also describe an intermediate cash flow portfolio in which middle term cash flows are financed with short and long liabilities. The different structures of funding are given in Table 5.5.

In our analysis we revalue the present value of the portfolio's own funds for each day in the sample to get a profit and loss over the whole sample period. The present value of these portfolios is a reflection of the own funds of a balance sheet. This is true since the liabilities in the portfolios are indicated as short positions. We have
$$PV(\mathrm{Assets}) - PV(\mathrm{Liabilities}) = \mathrm{Own\ Funds}.$$
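A minimal sketch of this revaluation is given below: a zero coupon bond price under continuous compounding and the present value of own funds for a cash flow pattern such as those in Table 5.5 directly below. The dictionary zero_curve (zero rate per maturity in years) is a hypothetical input.

```python
# Zero coupon bond pricing and own funds of a stylized balance sheet.
import numpy as np

def zero_coupon_price(cash_flow: float, rate: float, maturity: float) -> float:
    # B = c * exp(-y * t) for a single cash flow c at time t.
    return cash_flow * np.exp(-rate * maturity)

def own_funds(cash_flows: dict, zero_curve: dict) -> float:
    # PV(Assets) - PV(Liabilities); liabilities enter as negative cash flows.
    return sum(zero_coupon_price(c, zero_curve[t], t) for t, c in cash_flows.items())

# Liability sensitive portfolio from Table 5.5 (values in euro):
# liability_sensitive = {2: 100, 3: 100, 4: 100, 5: 100, 6: 100,
#                        10: -250, 15: -250, 20: -250, 25: -250, 30: -250}
# print(own_funds(liability_sensitive, zero_curve))   # zero_curve = {maturity: rate}
```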


            Liability sensitive   Asset sensitive   Intermediate sensitive
2 years                     100              -100                     -100
3 years                     100              -100                     -100
4 years                     100              -100                     -100
5 years                     100              -100                      300
6 years                     100              -100                      300
7 years                       0                 0                      300
8 years                       0                 0                      300
9 years                       0                 0                      300
10 years                   -250               250                     -100
15 years                   -250               250                     -100
20 years                   -250               250                     -100
25 years                   -250               250                     -100
30 years                   -250               250                     -100

Table 5.5: Description of the three portfolios under consideration, where for each maturity the cash flow value is given in euro

VaR of the portfolios

The change in own funds is obtained by revaluing each portfolio for every day in the sample against the yield of our last datapoint. In Figure 5.8 the resulting P&L distribution of the historical returns for the asset sensitive portfolio is shown.

We observe both positive and negative changes in the present value of own funds for the asset sensitive portfolio. Notice that the maximum and minimum changes amount to more than 100% of the present value of the own funds. In Figure 5.9 the change in the present value of own funds is shown for the liability sensitive portfolio.

Notice here that the change in own funds over time is the same as for the asset sensitive portfolio, but with the opposite sign. This is true since the long and short positions are the exact opposites of each other. A difference in VaR is still possible since we are looking at the losses of the profit and loss distribution: a profit in the asset sensitive portfolio is a loss in the liability sensitive portfolio.

The change in own funds for the intermediate term sensitive portfolio is given in Figure 5.10. Here the change in own funds is more concentrated and does not show such large changes. One of the reasons could be the distribution of cash over the maturities, which is more spread out in comparison with the other two portfolios.


Figure 5.8: Change in the value of own funds for the asset sensitive portfolio

To challenge this VaR we compared it with the VaR calculated using the representation of the own fund changes by principal components. We calculated the VaR found by including and excluding the risk factors found in the principal component analysis. In the literature three principal components are selected to represent the yield curve, but we analyze the effects on the VaR of including and excluding different principal component risk factors. The results of the VaR comparison are given in Figure 5.11.
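A minimal sketch of this comparison, building on the pca() and reconstruct() helpers from the Chapter 4 sketch, is shown below. Here own_funds_change is a hypothetical helper that maps (reconstructed) yield curve returns to daily changes in own funds, so the function names and inputs are assumptions rather than the thesis' actual implementation.

```python
# Empirical PCA VaR: represent the yield curve returns with the first k
# principal components, revalue the portfolio and take the loss quantile.
import numpy as np

def pca_var_by_components(X: np.ndarray, own_funds_change, max_k: int = 6,
                          confidence: float = 0.99) -> dict:
    B, D, scores, _ = pca(X)
    var_by_k = {}
    for k in range(1, max_k + 1):
        X_hat = reconstruct(scores, B, k) + X.mean(axis=0)   # k-component curve
        pnl = own_funds_change(X_hat)                        # revalue own funds
        var_by_k[k] = -np.quantile(pnl, 1.0 - confidence)
    return var_by_k

# Compare with the historical VaR computed on the original returns X:
# print(pca_var_by_components(X, own_funds_change))
# print(-np.quantile(own_funds_change(X), 0.01))
```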

For the asset sensitive portfolio an empirical VaR of -113% is observed. Looking at the principal component representation, we see an overestimation of the VaR when using fewer principal components. After including three components to replicate the yield curve, the VaR is close to the empirical VaR and stays more or less the same. For the liability sensitive portfolio it is the other way around. For the intermediate portfolio a different pattern can be observed: for the first component the VaR is more negative, and for more than three components the VaR is smaller than the empirical VaR.

From the results given above we can say that, once enough components are included, the inclusion or exclusion of further components has little influence on the VaR of the portfolios.

5.4 Comparison of historical VaR and parametric PCA VaR


Figure 5.9: Change in the value of own funds for the liability sensitive portfolio

Figure 5.10: Profit and loss distribution of the intermediate sensitive portfolio

In the paper of Fiori and Iannotti (2006) the normal distribution is used to simulate the risk factors. We investigate whether this distribution is a good representation of the risk factor distributions and whether other distributions give a better fit. The simulated risk factors are used to derive the VaR from the profit and loss distribution. The VaR for the different scenarios is given to compare the impact of making different assumptions.
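A minimal sketch of this parametric variant is given below: fit a chosen distribution to each principal component series, simulate shocks from it, map them back to yield curve returns and take the loss quantile. It builds on the pca()/reconstruct() sketch from Chapter 4; dist, n_sims and own_funds_change are illustrative assumptions, not the thesis' actual settings.

```python
# Parametric PCA Monte Carlo VaR with a user-chosen risk factor distribution.
import numpy as np
from scipy import stats

def parametric_pca_var(X, own_funds_change, k=3, dist=stats.t,
                       n_sims=10_000, confidence=0.99, seed=0):
    rng = np.random.default_rng(seed)
    B, D, scores, _ = pca(X)
    # Fit the chosen distribution to each of the first k component series.
    params = [dist.fit(scores[:, j]) for j in range(k)]
    sim_scores = np.column_stack([
        dist.rvs(*p, size=n_sims, random_state=rng) for p in params
    ])
    # Map the simulated components back to yield curve shocks and revalue.
    sim_returns = sim_scores @ B[:, :k].T + X.mean(axis=0)
    pnl = own_funds_change(sim_returns)
    return -np.quantile(pnl, 1.0 - confidence)

# print(parametric_pca_var(X, own_funds_change, dist=stats.norm))   # normal benchmark
```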

5.4.1 Estimation of the distribution of the data


Figure 5.11: Evaluation of the VaR for the asset sensitive portfolio when including principal components, against the empirical VaR

Figure 5.12: Evaluation of VaR for the liability sensitive portfolio when including principal components against the empirical VaR

The fitted normal distribution cannot capture the tails of the returns because it is too centered. The blue line is the normal density with the maximum likelihood estimators of the data. As one can see, the peak in the center of the data is much higher than the normal distribution estimates. Also in the tails, especially for the 30 and 10 year maturities, the mass is underestimated. The maximum likelihood (ML) estimators are given in Table 5.6.


Figure 5.13: Evaluation of VaR for the intermediate sensitive portfolio when including principal components against the empirical VaR

Normal distribution   Maturity 2   Maturity 5   Maturity 10   Maturity 30
µ                        -.00006      -.00003       -.00001       -.00005
σ                         .00711       .00703        .00684        .00651

Table 5.6: Normal distribution ML estimators

To formally assess the fit we use two goodness-of-fit tests, namely the Kolmogorov-Smirnov (KS) test and the Anderson-Darling (AD) test. In these tests the goodness of fit of the sample is tested against a reference distribution, in our case the normal distribution. The null hypothesis is that the sample is drawn from the reference distribution.

The Kolmogorov-Smirnov distance is defined by
$$D_{KS} = \max_i \left| F_R(x_i) - F_E(x_i) \right|,$$
where $x_1, \ldots, x_n$ represents the sample, $F_R$ the cumulative reference distribution and $F_E$ the empirical distribution function. The reference distribution is defined by the parameters estimated from the dataset of returns.

The Anderson-Darling statistic is defined by
$$D_{AD} = \max_i \frac{\left| F_R(x_i) - F_E(x_i) \right|}{\sqrt{F_R(x_i)\left(1 - F_R(x_i)\right)}},$$
which weights the deviations more heavily in the tails of the reference distribution.
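A minimal sketch of both statistics is given below, assuming sample is a return series and ref a frozen SciPy distribution (for example stats.norm(mu, sigma)) with the estimated parameters; the clipping of the reference CDF is an implementation detail added here to avoid division by zero and is not part of the thesis.

```python
# Kolmogorov-Smirnov and (max-type) Anderson-Darling distances as defined above.
import numpy as np
from scipy import stats

def ks_ad_distances(sample: np.ndarray, ref) -> tuple:
    x = np.sort(sample)
    n = len(x)
    F_E = np.arange(1, n + 1) / n                 # empirical CDF at order statistics
    F_R = np.clip(ref.cdf(x), 1e-12, 1 - 1e-12)   # reference CDF, clipped
    d_ks = np.max(np.abs(F_R - F_E))
    d_ad = np.max(np.abs(F_R - F_E) / np.sqrt(F_R * (1.0 - F_R)))
    return d_ks, d_ad

# SciPy's built-in KS test also returns a p-value directly:
# stats.kstest(sample, ref.cdf)
```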


Figure 5.14: Histograms of returns of the 5 years and 30 years maturity with a fitted normal distribution

Figure 5.15: QQ Plots of maturities 2, 5, 10 and 30 years

Summarizing, the normal distribution gives a poor fit for all four maturities, especially in the tails. Therefore we focus on representing the data with distributions which give a better fit. By means of the principal component risk factors we want to capture tail events and represent the data well.

5.4.2 Estimating the distribution of the PCs


p-values               Maturity 2   Maturity 5   Maturity 10   Maturity 30
Anderson-Darling            0.000        0.000         0.000         0.000
Kolmogorov-Smirnov          0.000        0.000         0.000         0.000

Table 5.7: p-values for the Anderson-Darling and Kolmogorov-Smirnov tests

We fit the normal distribution, the t-distribution, the Laplace distribution and the logistic distribution on the principal component series. For each principal component series the fit of the estimated distribution is assessed.

The normal distribution was estimated on the first three principal component series. The QQ plots are given in Figure 5.16. We can see that the fit of the normal distribution is poor, especially in the tails.

Figure 5.16: QQ plots of the normal distribution fit on PC1 and PC2

When comparing the estimates of the normal distribution to the t-distribution, a difference is clearly visible in the fit of the t-distribution, especially in the tails. The fit of the t-distribution is shown in Figure 5.17.
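A minimal sketch of such a QQ plot for the Student-t fit on a principal component series is given below; the variable pc1 (for example scores[:, 0] from the PCA sketch) is an assumed placeholder.

```python
# QQ plot of a maximum-likelihood Student-t fit on one principal component series.
import matplotlib.pyplot as plt
from scipy import stats

df_hat, loc_hat, scale_hat = stats.t.fit(pc1)             # ML estimates
stats.probplot(pc1, dist=stats.t, sparams=(df_hat,), plot=plt)
plt.title("QQ plot of the Student-t fit on PC1")
plt.show()
```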

5.4.3 Comparison of VaR


Figure 5.17: QQ Plots of t distribution fit on PC1 and PC2

The normal distribution gave a poor fit on the principal component series; especially in the tails the fit was poor. The t-distribution gave a better fit, since its tails have more mass. Therefore the fit over the whole sample period gave a significant improvement in comparison with the normal distribution. For the Laplace and logistic distributions we obtained a very poor fit, and therefore we did not evaluate these distributions further.


Chapter 6

Conclusion

In this chapter the conclusions and findings are discussed. We give an overview of the work performed and answer the subquestions, followed by the main research question. Also the relevance and shortcomings of our research are presented. We start with a brief summary.

6.1 Summary

In this thesis we evaluated the performance of the interest rate VaR through a principal component based analysis. Using twelve years of daily data on zero interest rate curves, the interest rate risk is evaluated through a VaR using historical and Monte Carlo simulation of interest rate percentage changes. The PCA VaR of the profit and loss distribution of three types of balance sheets is derived using two different approaches: a parametric and a non-parametric approach. The balance sheets on which the PCA VaR is evaluated are synthetic balance sheets which produce extreme cases of cash flow patterns. The synthetic balance sheets mimic an asset sensitive, a liability sensitive and an intermediate sensitive balance sheet.


For the parametric VaR we investigated the t-distribution, the normal distribution, the logistic distribution and the Laplace distribution.

6.2 Subquestion 1

In the replication of the yield curve using principal component analysis, the fit with two principal components was significantly better than with one component. Including three components gave an even better fit, which was expected. We could verify the result from the literature that at least 95% of the total variance is explained using three components. When investigating the shape of the factor loadings over the maturity points we found that the pattern was different for the shortest maturities, that is the 3 month, 6 month and 12 month maturities. After excluding these short-term maturities the share of the variance explained improved, and therefore we left the shorter maturities out of the sample in our research.
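A short sketch of the variance criterion referred to above, assuming the matrix of daily rate changes is called X; the helper name is our own.

    import numpy as np

    def explained_variance(X):
        """Cumulative share of total variance explained by the ordered components."""
        eigval = np.linalg.eigvalsh(np.cov(X - X.mean(axis=0), rowvar=False))
        eigval = np.sort(eigval)[::-1]
        return np.cumsum(eigval) / eigval.sum()

    # e.g. explained_variance(X)[2] >= 0.95 checks the three-component criterion.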

The stability of the principal component analysis over time was investigated. It is shown that the explanatory power of the principal components varies across subsamples. When taking a short sample of the most recent datapoints the explanatory power is largest. This is in line with our expectations: datapoints that lie close together in time are more strongly correlated with each other, so shorter samples yield a higher explained variance. When using the whole sample period the explanatory power decreases, which is also according to our expectations. Overall, the explanatory power of the principal components in terms of total variance is fairly stable.
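The subsample comparison can be sketched as a rolling-window variant of the same calculation; the window length and step size below are arbitrary illustrative choices, and explained_variance is the hypothetical helper from the previous sketch.

    def rolling_explained(X, n_components=3, window=250, step=20):
        """Share of variance captured by the first n_components over rolling subsamples."""
        # relies on the explained_variance helper sketched above
        return [
            explained_variance(X[start:start + window])[n_components - 1]
            for start in range(0, len(X) - window + 1, step)
        ]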

6.3 Subquestion 2

The VaR is calculated on the profit and loss distribution of three portfolios with different characteristics: an asset sensitive portfolio, a liability sensitive portfolio and an intermediate term sensitive portfolio. We compared the VaR of the empirical distribution based on historical data to the PCA VaR, which is obtained by including principal components to represent the yield curve.


For the asset sensitive portfolio the VaR is overestimated when only a few components are included, in comparison with the empirical data. For the liability sensitive portfolio the VaR is underestimated for the first components, but after including three components the results are comparable. For the intermediate portfolio the VaR is overestimated when using fewer than three components. When including three, four or five components the results are the same as for the empirical data. With more components the VaR is underestimated.

For the asset sensitive portfolio we see large negative cash flows for the longer maturities. The overestimation when including one principal component can be explained by the fact that the first principal component accounts for level shifts. Including only one component in the VaR therefore heavily overweights these large negative cash flows and overestimates the shock. Including more components dampens this effect. For the liability sensitive portfolio the opposite holds.

For the intermediate portfolio the overestimation when using fewer than three components could have the same cause. The underestimation when including more components is a result we could not reconcile with the theory: when all components are included the results should coincide, so we would expect the same VaR as for the empirical data.
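The comparison across component counts discussed above can be sketched by benchmarking the PCA VaR against a plain historical VaR; both helpers below reuse the hypothetical pca_var function from the summary sketch and are illustrative only.

    import numpy as np

    def historical_var(rate_changes, pnl, alpha=0.99):
        """Empirical VaR: apply each observed curve change directly to the portfolio."""
        losses = np.array([-pnl(change) for change in rate_changes])
        return np.quantile(losses, alpha)

    def var_ratio_by_components(rate_changes, pnl, max_k=6):
        """Ratio of PCA VaR to historical VaR for an increasing number of components."""
        base = historical_var(rate_changes, pnl)
        return {k: pca_var(rate_changes, pnl, k=k) / base for k in range(1, max_k + 1)}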

6.4 Subquestion 3

The VaR based on historical data is compared to the VaR obtained when parametric distributions are fitted to the principal component series. We estimated the normal, t, Laplace and logistic distributions on the principal component series. The normal distribution gave a fit that underestimated the principal component series, especially in the tails. The t-distribution gave a better fit, since its tails have more mass. For the Laplace and the logistic distribution we obtained a poor fit and therefore did not evaluate these distributions further.

When comparing the empirical VaR to the parametric VaR for the normal and the t-distribution, we see an underestimation of the VaR when using the normal distribution. The VaR is underestimated for all principal components in comparison with the empirical distribution. For the t-distribution we see a larger VaR, that is, an overestimation in comparison with the original data. This is the result we expected and can be explained by the fact that the tails of the t-distribution have more mass. There is therefore a higher probability of drawing more extreme points, which leads to a VaR that is higher than under the normal distribution.
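The effect described above can be made concrete by comparing the fitted tail quantiles directly. The sketch below fits both distributions to one component series and returns the lower-tail quantile of each; the t quantile lies further out whenever the estimated degrees of freedom are small. Function and variable names are illustrative.

    import numpy as np
    from scipy import stats

    def tail_quantiles(pc_series, alpha=0.99):
        """Lower-tail quantile of a fitted normal and a fitted Student-t distribution."""
        mu, sigma = stats.norm.fit(pc_series)
        df, loc, scale = stats.t.fit(pc_series)
        q_norm = stats.norm.ppf(1 - alpha, loc=mu, scale=sigma)
        q_t = stats.t.ppf(1 - alpha, df, loc=loc, scale=scale)
        return q_norm, q_t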


Including more than three components still has some impact on the representation of the interest rate curve, but this impact is very small. When using three principal components the VaR is very close to the original empirical VaR.

6.5 Main research question

We can answer the main question as follows: PCA is an appropriate method for estimating interest rate risk under several conditions. Regarding the replication of the yield curve we concur with the literature that, for recent data, three principal components are needed to represent the yield curve with 95% of the variance explained. Moreover, when using three principal components in the calculation of the VaR of the portfolios we find values that are very close to the VaR based on the historical data.

Fiori and Iannotti showed that the PCA VaR using a normal distribution underestimates the interest rate risk. Where they concluded that risk is underestimated when using a normal distribution via a non-parametric method, we conclude that there is an underestimation of risk in comparison with the parametric heavy-tailed distributions. When using distributions for the risk factors, the normal distribution underestimates the risk for all three portfolios.

We therefore conclude that PCA is a method which can be used to derive interest rate risk, but one which should be used with caution.

6.6 Topics for further research

The VaR itself is a risk measure which could underestimate risk, since it only considers a point estimate of the loss distribution. Further research could apply other risk measures and investigate the performance of principal component analysis for those measures.
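As one illustration of such an alternative measure, which is not studied in this thesis, Expected Shortfall averages the losses beyond the VaR level instead of reporting the quantile alone; the sketch below assumes a vector of simulated or historical losses and is purely indicative.

    import numpy as np

    def expected_shortfall(losses, alpha=0.99):
        """Average of the losses that are at least as large as the alpha-quantile (VaR)."""
        losses = np.asarray(losses, dtype=float)
        var = np.quantile(losses, alpha)
        return losses[losses >= var].mean()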


Chapter 7

Bibliography

1. Basak, S. and A. Shapiro (2001): "Value-at-Risk Based Risk Management: Optimal Policies and Asset Prices", London.

2. Bohdalova, M. (2007): "A comparison of Value-at-Risk methods for measurement of the financial risk", E-leader, Prague.

3. Brosche, P. (1973): "Systematics of the central velocity gradients of spiral galaxies", Astron. Astrophys., 23, 259.

4. Danielsson, J., Shin, H.S. and Zigrand, J.-P. (2004): "The impact of risk regulation on price dynamics", Journal of Banking and Finance, 28(5), pp. 1069-87.

5. Dowd, K. (1998): Beyond Value at Risk: The New Science of Risk Management. John Wiley & Sons, London.

6. Dowd, K. (2002): An Introduction to Market Risk Measurement, J. Wiley, Hoboken, NJ.

7. Draper, I. et al. (2003): "Recognizing faces with PCA and ICA", The Pennsylvania State University.

8. Efron, B. (1979): "Bootstrap Methods: Another Look at the Jackknife", The Annals of Statistics, 7(1), 1-26.


10. Frye, J. (1997): "Principals of Risk: Finding Value-at-Risk Through Factor-Based Interest Rate Scenarios", NationsBanc-CRT, Chicago.

11. Galichon, A. (2009): "The VaR at risk", Chaire Finance and developpement durable.

12. Hatcher, L. (1994): A Step-By-Step Approach to Using the SAS System for Factor Analysis and Structural Equation Modeling, SAS Institute Inc., Cary, NC, USA.

13. Hull, J. (2008): Options, Futures, and Other Derivatives. Pearson Prentice Hall, New Jersey.

14. Jamshidian, F. and Y. Zhu (1997): "Scenario Simulation: Theory and methodology", Finance and Stochastics, 1, 43-67.

15. Jorion, P. (1997): Value-at-Risk: The New Benchmark for Controlling Market Risk, Irwin, Chicago, Ill.

16. Kreinin, A., Merkoulovitch, L., Rosen, D. and Zerbs, M. (1998): "Principal Component Analysis in Quasi Monte Carlo Simulation", Algo Research Quarterly, 1(2), pp. 21-30.

17. Litterman, R. and J.A. Scheinkman (1991): "Volatility and the Yield Curve", Journal of Fixed Income, June 1991.

18. Loretan, M. (1997): "Generating market risk scenarios using principal components analysis: methodological and practical considerations", Internal Report, Federal Reserve Board.

19. Malava, A. (2006): "Principal Component Analysis on Term Structure of Interest Rates", Independent Research Project in Applied Mathematics, Helsinki University.

20. Manganelli, S. and R.F. Engle (2001): "Value at Risk Models in Finance", ECB Working Paper No. 75.

21. McNeil, A.J., R. Frey and P. Embrechts (2005): Quantitative Risk Management. Princeton University Press, New Jersey.


23. Milevsky, M. (2006): The Calculus of Retirement Income: Financial Models for Pension Annuities and Life Insurance. Cambridge University Press, New York.

24. Mitra, S. (2009): "Risk Measures in Quantitative Finance", UK University Risk Conference, UK.

25. Phoa, W.:”Yield Curve Risk Factors: Domestic And Global Contexts”, The Practitioners Handbook of Financial Risk Management.

26. Press et al. (1996): Numerical Recipes in C, Cambridge University Press, London, Chapter 13.

27. Resti, A. and S. Sironi, (2007): Risk Management and Shareholders’ Value in Banking, Wiley Finance.

28. Saunders, A. (1999): Financial Institutions Management: A Modern Perspective (3rd ed.), Irwin Series in Finance, McGraw-Hill, New York.

29. Shlens, J. (2005): "A Tutorial on Principal Component Analysis".

30. Soto, G.M. (2004): "Using principal component analysis to explain term structure movements: Performance and stability", Progress in Economics Research, volume 8, Nova Science Publishers, New York.
