• No results found

The use of jump processes in the estimation of the funding spread.

N/A
N/A
Protected

Academic year: 2021

Share "The use of jump processes in the estimation of the funding spread."

Copied!
74
0
0

Bezig met laden.... (Bekijk nu de volledige tekst)

Hele tekst

(1)

The use of jump processes in the estimation of the

funding spread

C. Comanne (Student No. 5886694)

S.A. Broda (Supervisor)

Master’s thesis in Financial Econometrics,

Faculty of Economics and Business, University of Amsterdam

August 27, 2014

Abstract

This research evaluates the estimation of the funding spread for risk management pur-poses. For the estimation of the funding spread, the addition of a jump process to the Vasicek process is evaluated, in which jumps take into account the rare larger movements. The jump process is based on a compound Poisson process and the jump size is either fixed or variable. The variable jump size is based on a normal distribution. The estima-tions are based on Ordinary Least Squares and Maximum Likelihood Estimation and the single name CDS spread is used as a proxy for the funding spread.

The results show that the addition of a jump process leads to an improvement of the fit of historical data. This result is stronger for the addition of a variable jump size process compared to a fixed jump size process. By the inclusion of jumps the estimated volatility and long-term mean of the underlying Vasicek process become smaller, which shows that it focuses on the smaller movements. The larger movements in the data are taken into account by the jump process. However, the estimation also shows that the jump process focuses on the slightly larger movements, instead of the rare largest movements. Therefore, this research also highlights the importance of future research on the estimation of the funding spread and the use of jump processes.

(2)

Contents

1 Introduction 1 2 Literature review 4 3 Data 8 3.1 Data selection . . . 8 3.2 Data characteristics . . . 9 4 Process 13 4.1 Process description . . . 13 4.1.1 Vasicek process . . . 13

4.1.2 Vasicek process with fixed jumps . . . 14

4.1.3 Vasicek process with variable jumps . . . 16

4.2 Process estimation . . . 17

4.2.1 General aspects . . . 17

4.2.2 Vasicek process . . . 19

4.2.3 Vasicek process with fixed jumps . . . 21

4.2.4 Vasicek process with variable jumps . . . 22

5 Results 25 5.1 Results estimation processes . . . 25

5.1.1 Vasicek process . . . 25

5.1.2 Vasicek process with one fixed jump process . . . 28

5.1.3 Vasicek process with two fixed jump processes . . . 31

5.1.4 Vasicek process with one variable jump process . . . 34

5.1.5 Vasicek process with two variable jump processes . . . 37

5.2 Comparison of processes . . . 40

6 Possible improvements 43

7 Conclusion 47

References 49

A Notation 51

B Derivations for Vasicek process 54

C Derivations for Vasicek process with fixed jumps 57

D Derivations for Vasicek process with variable jumps 61

(3)

List of Figures

1 Time series spread of UBS CDS EUR SUB 5Y . . . 10 2 Histogram first differences CDS spread . . . 12 3 Simulations for Vasicek process . . . 27 4 Comparison distribution of first differences CDS spread and simulations for

Vasicek process . . . 28 5 Simulations for VJF1 process . . . 30 6 Comparison distribution of first differences CDS spread and simulations for

Vasicek and VJF1 process . . . 31 7 Simulations for VJF2 process . . . 33 8 Comparison distribution of first differences CDS spread and simulations for

VJF1 and VJF2 process . . . 33 9 Simulations for VJV1 process . . . 36 10 Comparison distribution of first differences CDS spread and simulation for

VJF1 and VJV1 process . . . 36 11 Simulations for the VJV2 process . . . 39 12 Comparison distribution of first differences CDS spread and simulations for

VJV1 and VJV2 process . . . 39 13 Time series first differences CDS spreads . . . 44

(4)

List of Tables

1 Overview characteristics data . . . 10

2 Results for Vasicek process . . . 26

3 Results for VJF1 process . . . 29

4 Results for VJF2 process . . . 32

5 Results for VJV1 process . . . 34

6 Results for VJV2 process . . . 37

7 Information criteria for estimated processes . . . 41

(5)

1

Introduction

Financial institutions need to fund their banking business, as without funding no invest-ment can be made. For a bank to make a positive return, the cost of funding needs to be lower than the return on the investment. Generally, the maturity of funding is shorter than the maturity of the business activity, as products with a higher maturity require a higher return. Therefore, not only the current but also the future funding costs need to be taken into account when engaging in a business activity. The latest crisis has shown that even highly rated banks can get difficulties with funding as discussed in Van Rixtel and Gasperini (2013). Banks such as UBS were used to funding rates below LIBOR. During the crisis they experienced a sharp increase in the cost of funding. As a result they had to change their business model and reduce assets on their balance sheet rapidly. This shows the importance of monitoring funding costs, as funding difficulties can have a large impact on the net income and business strategy of a bank. This research will evaluate different processes to estimate the bank specific part of the funding costs, for which the focus is on risk management.

The bank’s funding costs are based on the interest rates paid on the different funding sources of the bank. These funding sources include for example retail deposits but also wholesale debt. An extensive amount of research has been performed on the estimation of interest rates. In this research some of the results for interest rates are used in the estimation of the funding spread. A collection of the processes used to evaluate interest rates are discussed in Gibson et al. (2001). Each of the processes used for interest rates captures different advantages and disadvantages in terms of the characteristics of interest rates they imply and the estimation of the parameters of the process.

As discussed in Babihuga and Spaltro (2014), some of the components that can in-fluence a banks funding costs are the level and quality of a bank’s capital, the credit worthiness of a bank, and shocks to the financial markets. In this research the focus is on the influence of the bank’s specific characteristics on the funding costs. The overall funding costs for a bank represent both the market and the bank’s specific influences. Therefore, not the funding costs but the funding spread is the focus in this research. The definition of the funding spread is given below.

Definition 1.1. Funding spread

The funding spread is the costs of funding based on the bank’s specific characteristics. Therefore the funding spread is the perceived risks by investors on an investment in a bank, relative to the risks perceived by the investor on the financial market, which is measured by the Credit Default Swap spread.

(6)

Several processes for the funding spread will be evaluated which are based on previous research on interest rates, as it is assumed that the funding spread follows a similar process. This assumption is motivated by the use of these processes for the credit spread, for example in Prigent et al. (2000), Cont and Kan (2011) and O’donoghue et al. (2014). The credit spread is the difference between the yield on different bonds, which reflects the difference in the perceived risks between the bonds. The credit spread and funding spread are similar as they both capture the difference in interest rates, however they are based on a different perspective. The perspective of the funding spread is the issuer and compares its own funding costs to the financial market, whereas the perspective of the credit spread is the investor which generally evaluates the return between bonds in comparison with the perceived risks.

The processes evaluated for the funding spread aim to capture a long-term mean and the possibility for large movements. The long-term mean for the funding spread is based on the assumption that the main characteristics of a bank have a long-term mean. The possibility for the large movements is based on the observations during the recent crises, as for example discussed by Babihuga and Spaltro (2014). This research will evaluate five different processes which are based on three types of processes. The first process evaluated is the Vasicek process. The second process type evaluates the addition of a jump process with a fixed jump size. For this type of process both the addition of one and two fixed jump size processes is evaluated. The third process type evaluates the addition of a variable jump size process to the Vasicek process. Also for this type the addition of one and two variable jump size processes is evaluated. The jump size in the variable jump size process is based on a normal distribution. The parameters of the processes are estimated with Ordinary Least Squares (OLS) and Maximum Likelihood Estimation (MLE).

This research evaluates how different processes can be used for the estimation of the funding spread for risk management purposes. To answer this main question a literature review is performed to evaluate the current information on the estimation of interest rate models and the use of these processes with respect to the credit and funding spread. Secondly, the choice of the CDS spread as a proxy for the funding spread is assessed with respect to this main question. Lastly, this research performs the estimation and simulation for the processes to evaluate how the processes can be compared to answer the main question.

The results show that the inclusion of the jump process provides an improvement of the estimation of the funding spread. The inclusion of the jump process provides a better fit to the historical CDS spread data. This effect is larger if a variable jump size is used. Based on the results the process with the addition of one variable jump size

(7)

process is seen as the best process for the funding spread. By the inclusion of the jumps the estimated volatility and long-term mean of the underlying Vasicek process become smaller, which shows that it focuses on the smaller movements. The larger movements in the data are taken into account by the jump process. However, the estimation also shows that the jump process focuses on the slightly larger movements, instead of the rare largest movements. Therefore, this research also highlights the importance of future research on the estimation of the funding spread and the use of jump processes. Some possibilities for further research are also discussed.

In the remainder of this paper first the previous literature is discussed in Section 2, after which the data are described in Section 3. Third, the estimation method for the different processes is discussed in Section 4, for which the results are given in Section 5. The possibility of further improvements and some additional estimations are evaluated in Section 6. Finally the conclusion is given in Section 7.

(8)

2

Literature review

In the last couple of decades a large number of papers has been published on different interest rate models. This section will evaluate multiple approaches to determine the main aspects for the estimation of the funding spread.

An extensive overview of interest rates processes is given in Gibson et al. (2001). How-ever they also indicate that there is no clear starting point for interest rate models, as different interest rate processes capture different characteristics. Such characteristics are mean reversion, volatility surface, term structure, and positive interest rates. Including one or multiple of these characteristics has both advantages and disadvantages. As dis-cussed in Gibson et al. (2001), different papers come to conflicting conclusions regarding the importance of capturing several of these characteristics. The general formula of the different types of processes discussed is presented in Equation (1), which aims to reflect the interest rate process.

drt = A(rt, t)dt + B(rt)dWt (1)

In Equation (1) rt is the instantaneous rate, Wt is a Brownian motion, the drift of the

process is given by A(rt, t) and the volatility by B(rt). For different types of processes,

different assumptions are made regarding the drift and volatility term. In the comparison of the different processes the focus is on the mean reversion property, and the possibility of including a jump process.

An introduction will be given to the Vasicek process, the Cox, Ingersoll, and Ross (CIR) process, and the Hull and White process as they all capture the mean reversion characteristic. The Vasicek process is described in Vasicek (1977) and is the first process to include the mean reversion characteristics for interest rates. The Vasicek process is based on the Ornstein-Uhlenbeck process, which was earlier used in physics. The Vasicek process captures mean reversion through the inclusion of rt in the drift term A(rt, t),

which becomes α(θ − rt) such that the size of the drift is based on the current value of

the interest rate. In the case that the current value of rt is below (above) the long-term

mean, the drift will be positive (negative).

After the introduction of the Vasicek process, several processes were created which aimed to resolve some of the issues of the Vasicek process. One of the issues of the Vasicek process is the possibility of negative interest rates. In Cox et al. (1985) the CIR process was introduced, which provided a solution to the possibility of negative interest rates. Additionally to rtin A(rt, t), the CIR process includes the square root of rtin B(rt),

(9)

which becomes √rtσ. This square root will lower the impact of the Brownian motion in

the case of low interest rates, and therefore increase the impact of the drift.

A second disadvantage of the Vasicek process is that the process cannot provide an exact fit for the initial term structure of the interest rates. This issue was addressed in Hull and White (1990), in which the Hull and White process was introduced. In comparison to the Vasicek process, the Hull and White process allowed the parameters of the process to be time dependent, which enables the process to meet the term structure of the interest rates. The consistency of the term structure of the interst rate with the market is important in the pricing of interest rate derivatives.

From these three processes the Vasicek process is used as the basis process in this research, based on which different jump processes will be added. The comparison of the different processes takes into account the different advantages and disadvantages of the processes and the main focus of this research. The main advantage of the Vasicek process is the exact solution to the Stochastic Differential Equation (SDE). Additionally, the advantage of the non-negativity and the fit to the term structure is not the focus in this research. Therefore, the advantage of introducing these characteristics do not outweigh the added complexity in the estimation of the process in combination with a jump process. Note that by the choice of the Vasicek process no restriction is imposed relating the non-negativity, as theoretically it would be possible that investors have a lower perceived risk for a specific bank compared to the general market. In the evaluation of previous research the results regarding both the funding and credit spread are taken into account, based on their discussed similarity. These papers either focus on evaluation of the spread during the crisis, the process estimation of the spread, or the estimation of the spread based on their drivers.

After the recent crisis several analyses were performed on the changes in the funding spread and the drivers for these changes. As discussed in Van Rixtel and Gasperini (2013), even banks with a high credit rating face rising funding costs and can have difficulties with funding in times of stress. In Babihuga and Spaltro (2014) different drivers of the funding costs are evaluated, which indicated that both market aspects and bank specific aspects determine the funding cost. Some of the bank specific aspects are the bank’s credit rating and quality of capital. For the market specific aspects the short term interest rate is identified as one of the drivers. In this research the funding spread reflects the bank specific drivers of the funding costs. Besides the different drivers of the funding costs, it is also indicated that the funding spread had times with large movements. The larger movements in the funding costs can be caused by a market event or an idiosyncratic event. In Babihuga and Spaltro (2014) the sharp rise in the funding spread, based on

(10)

analysing the Credit Default Swap (CDS) spread, is discussed. Their study indicates that this aspect needs to be taken into account when estimating the funding spread. Therefore, this research will focus on the addition of different jump processes to the Vasicek process. Several studies have been performed in which either the funding or the credit spread estimation is coupled with the credit rating, as this is one of the main bank specific drivers. The credit rating could be used directly in the estimation of the funding costs, which could indicate the bigger movements in the funding costs. In Aikman et al. (2009) it is discussed that the funding costs will be higher for a bank with a lower credit rating. They also indicate the opposite relation, in which higher funding costs could result in a worsening of the balance sheet state, which could result in a downgrade. Additionally, Prigent et al. (2000) evaluated the estimation of the credit spread for which a distinction was made in the credit rating of the evaluated product. He indeed found that the parameter estimates were different for different credit ratings. In this research I focus on the estimation of the process of the funding spread based on the historical data of a specific product. This product is assumed to be a good proxy for the evaluation of the funding spread over time, and therefore indirectly captures the indicated bank specific drivers.

In Prigent et al. (2000) also the inclusion of a jump process is evaluated to capture the larger movements in the credit spread. Additional studies which evaluate the inclusion of a jump process are Dominedo et al. (2010) and Brigo et al. (2007), from which it can be seen that the jump process can be added to the underlying process in different ways. The jump process added to the Vasicek process in this research is similar to that described in Brigo et al. (2007), which discusses the inclusion of the jump process based on a compound Poisson process in combination with normally distributed jump sizes. Additionally, this research will evaluate a process with fixed jump sizes, to evaluate the effect of different jump sizes. It is expected that the process including the flexible jump process will outperform the jump process with a fixed jump size or the exclusion of a jump process. A more detailed description of the added jump processes is given in Section 4.

The data used as a proxy for the funding spread is the five year single name CDS spread, which is based on the funding costs discussed in Babihuga and Spaltro (2014) and Button et al. (2010). In these papers the marginal funding costs, the cost of additional funding, is based on a combination of the three month LIBOR and the five year CDS spread. As discussed in Hull (2012), the LIBOR rate provides a reference rate at which banks deposit money to each other. This research makes a difference between the market aspects and the bank specific aspects in the funding costs, therefore this is also done for this definition of the marginal funding costs. The LIBOR rate is regarded as the market component of the funding costs, which leaves the five year single name CDS spread to

(11)

represent the bank specific part of the funding costs. Therefore, I will use the five year single name CDS spread as a proxy for the funding spread. The data usage is discussed in more detail in Section 3.1.

Numerous studies have been executed on the CDS spreads for the evaluation of the credit spread. The focus of these studies was the use of the CDS spread for the evaluation of the probability of default (PD) of the underlying product. As discussed in Brigo and Mercurio (2006), one of the types of models used is a reduced form model in which the default probability is evaluated with the inclusion of an exogenous jump process. The probability of default in dt is the hazard rate, which can be used for the evaluation of the instantaneous credit spreads. For this hazard rate any positive interest short-rate model can be used for the estimation of the credit spread. A different type of model is discussed in Martin (2009), which uses a structural model in the evaluation of the CDS spread. Based on the results, it was concluded that the markets capture the pricing of the jump risk in the CDS spread.

Additionally to default probability based models for the CDS spread, the process of the CDS spread itself is also evaluated. This approach got more focus after the crises. Cont and Kan (2011) indicates that this focus was triggered by the losses in the CDS portfolios without a default in the underlying instrument, caused by large movements in the CDS spreads. In Cont and Kan (2011) and O’donoghue et al. (2014), different aspects of the CDS spread are evaluated, such as mean reversion, the two-sided heavy tails, and non-negativity of the CDS spread. Both studies evaluated these aspects in the estimation of the log return of the CDS spread and included a jump process. It should be noted that the changes in the CDS spread in Cont and Kan (2011) are evaluated with a jump diffusion model for the hazard rate. In this research I will evaluate a simplified version of these models, in which the aspects captured are similar but now based on the characteristics of the funding spread.

(12)

3

Data

In this research the 5 year CDS spread is used as a proxy for the funding spread of a bank. The choice for this proxy is discussed in Section 3.1, and Section 3.2 discusses the characteristics of the dataset.

3.1

Data selection

As given in Definition 1.1, the funding spread reflects the risks investors assign to a specific financial institution compared to the perceived risks of the financial market. If investors assign a higher risk to a specific financial institution, the funding cost and therefore the funding spread will be higher. To reflect the risks captured in the funding spread it is important to select the right proxy.

The funding of a financial institution can consist of multiple sources, which differ in product characteristics, maturity, and counterparty. The combination of the different sources of funding result in a complex overall funding cost which is hard to determine. Similar as described in Babihuga and Spaltro (2014) I will use the marginal cost of funding, which reflects the additional costs for new funding. The advantages are that the marginal funding cost is not based on a complex combination of funding sources and reflects the current perception investors have of the financial institution.

However, this raises the question how to determine the marginal funding costs. First, the marginal funding costs should be based on unsecured funding. The secured fund-ing sources take into account the value of collateral, and therefore do not fully reflect the funding costs. Second, the funding sources from retail clients are excluded, as it is unrealistic to assume that a large amount of additional retail funding can be raised in a short period of time. Therefore, wholesale unsecured funding is used as the marginal source of funding, as also suggested in Babihuga and Spaltro (2014). For the proxy of the marginal funding costs they use a combination of the 3 month LIBOR rate and the spread on the 5 year Credit Default Swap (CDS). As discussed in Hull (2012), the LIBOR rate provides a reference rate at which banks deposit money to each other. This research makes a difference between the market aspects and the bank specific aspects in the fund-ing costs, therefore this is also done for this definition of the marginal fundfund-ing costs. The LIBOR rate is interpreted as the market component of the funding costs, which leaves the 5 year single name CDS spread to represent the bank specific part of the funding costs. Therefore, I will use the 5 year single name CDS spread as a proxy for the funding spread. To describe the CDSs I use the information provided by Button et al. (2010). A CDS is a derivative product traded as an Over The Counter (OTC) product, and provides

(13)

an insurance for the buyer against defined credit event(s) in the underlying instrument. For this insurance the buyer pays the seller a periodic or upfront amount. The periodic amount paid by the buyer is referred to as the CDS spread and is given in basis points (bp). As the CDS protects against certain credit event(s), the price reflects the market view on the risks of the underlying instrument.

It should be noted that this research does not aim to evaluate the best model for the CDS spread, as evaluated in Cont and Kan (2011) and O’donoghue et al. (2014), but the use of the estimated process for the funding spread. The desired characteristics of mean reversion and the larger movements in the funding spread are reflected in the CDS spread. However, a disadvantage of this proxy is the non-negativity of the CDS spread. Theoretically, the funding spread could be negative and therefore the Vasicek process is still justified as a good basis process. The occurrence of negative estimated funding spreads is evaluated in Section 5. Second, the single name CDS spread of financial institutions seem to be correlated with the CDS spread of other financial institutions. This could indicate that the CDS spread also capture some market aspects. However, it is argued that the changes in the underlying market affect different banks in a similar fashion, which can lead to similar movements in the CDS spread. Therefore the CDS spread is still believed to be a good proxy.

3.2

Data characteristics

The CDS chosen for the funding spread should be liquid and the underlying must reflect unsecured debt. This section will discuss the different characteristics of the data and evaluate the characteristics discussed as in the previous sections.

In this research I will use UBS CDS EUR SUB 5Y, in which sub means that the underlying is subordinated debt. Bloomberg was used for the collection of the data, which consists of the observations between 14/05/2002 and 30/06/2014. However only the data starting at 01/05/2003 are used as for the first part of the data the price stays equal for several days, which indicates that the CDS contract is not traded frequently. The CDS spread used in the estimation is based on percentage points. Figure 1 shows the development over time of the discussed CDS spread.

Note that the time series for the UBS CDS EUR SUB 5Y has a spread under the 1% up to the end of 2007, while afterwards the CDS spread rises and seems to have larger movements. Cont and Kan (2011) used data for April 2005 to July 2009 and splits the dataset in two samples, and therefore capture data of both before and during the latest crisis in the evaluation of the CDS spread seperatly. Based on the shown graphs for the

(14)

Figure 1: Time series spread of UBS CDS EUR SUB 5Y

5Y CDS spread of Pfizer Inc. it can be seen that O’donoghue et al. (2014) used data for February 2008 to June 2010, which therefore only evaluates the CDS spread during the latest crisis. I choose to use the data between May 2002 and June 2014, to capture both the time before the crisis and during the crisis. By including both the CDS spread before and during the crisis, I aim to evaluate the inclusion of the jump process to capture the larger movements that occurred during the crisis. Therefore, also the characteristics for the CDS spread are evaluated for this time interval.

Table 1: Overview characteristics data

CDS CDS FD Length 2899 2898 Mean 1.17 0.00 Minimum 0.07 -0.83 Maximum 5.11 1.51 Standard derivation 1.03 0.08 DF stat -2.10 -DF p-value 0.25 -Skewness - 1.30 Kurtosis - 70.24 JB stat - 5.47e+5 JB p-value - ≤0.001

The general characteristics of the data are shown in Table 1, in which CDS refers to the level amount of the CDS spread and CDS FD refers to the first difference of the level

(15)

amount of the CDS spread. From the table it can be seen that the minimum amount of the CDS spread is 0.07%, which confirms that the CDS spread was always positive. The maximum amount of the CDS spread was 5.11%, which was on 18/09/2008. For the CDS data a Dickey Fuller (DF) test is performed, which tests for a unit root. Equation (2) shows the equation used for this test.

∆yt = a0+ δyt−1+ ut (2)

The null hypothesis of the DF test is that δ is zero, which indicates a unit root. The alternative hypothesis is that δ is below zero, which indicates a stationary and mean reversion series. Based on the result of the test it can be seen that the null hypothesis is not rejected, therefore the presence of a unit root cannot be rejected. However, Cavaliere and Georgiev (2009) indicates that the DF test is influenced by outliers and that the outliers could lead to an inconsistent OLS estimation for the DF test. One possible solution discussed by this paper is the introduction of a dummy for each of the outliers. However, I will evaluate the mean reversion based on the estimation of α for each of the processes, as they take into account the outliers through the inclusion of the jump process. Additionally, I will use the mean reversion in the processes as it is assumed that the funding spread is mean reverting. This assumption is based on the estimation of the credit spread in Prigent et al. (2000), Cont and Kan (2011) and O’donoghue et al. (2014). In the last two studies it is shown that the process is only mean reverting in the log return of the CDS spread. However, the effect of the outliers is not captured in the evaluation of the tests. The outliers have a different effect on the level amount of the CDS spread and the log return of the CDS spread. The log return of the CDS spread has an outlier at the moment the jump occurs. For the CDS spread however, the jump will affect the consecutive values of the CDS spread as well. Therefore I will evaluate the level of the CDS spread and evaluate the mean reversion based on the estimation of α in each of the processes, as this takes into account the effects of the jumps.

Additionally I will discuss some characteristics of the CDS FD as they reflect the movements in the level amount of the CDS spreads. Figure 2 shows the histogram of the observations of CDS FD, from which it can be seen that there is a large peak around the mean. The large movements are indicated as those movements larger than two times the standard deviation, which results in a boundary of 0.16. This occurs both for the negative and positive movements, which indicates the two sided heavy tails as discussed in Cont and Kan (2011). This aspect is also reflected in the high kurtosis for the distribution. Additionally, the table shows that there is a positive skewness, which indicates that there

(16)

are more positive movements in the CDS data compared to the negative movements. Both the skewness and kurtosis are used in the Jarque Bera (JB) test, which tests the null hypothesis if the data comes from the normal distribution. As shown in the table, the p-value of the JB test is below 0.01, which means that the null hypothesis is rejected with a significance level of 1%. Based on these results it is expected that a jump process will improve the estimation of the funding spread.

[Figure 2: Histogram of the first differences of the CDS spread (x-axis: first difference CDS spread; y-axis: frequency).]


4 Process

The three types of processes evaluated are the Vasicek process, the Vasicek process with a fixed jump size process, and the Vasicek process with a variable jump size process. The first part of this section discusses the different processes. The second part describes the estimation of the processes, starting with some general aspects of the estimation.

4.1 Process description

Each of the processes is described separately. First, however, the Brownian motion, the mean reversion characteristic and the large movements are discussed briefly.

Each of the processes relies on a Brownian motion, which is a continuous process often used in finance. As discussed in Etheridge (2008), a stochastic process Wt is a Brownian motion if it meets four properties. First, the starting value W0 is zero. Second, the increments need to be stationary and independent. Third, the increment Wt − Ws is normally distributed with mean zero and variance σ^2(t − s), in which σ is a constant, for all values of s between zero and t. Lastly, the stochastic process needs to be continuous.

Additionally, each of the processes incorporates the mean reversion characteristic, for which it is assumed that there is a long-term mean of the funding spread, θ, around which the spread fluctuates. The long-term mean is incorporated in the drift of the process. If the current value of the funding spread is above (below) the long-term mean, the drift will push the spread down (up). How fast the funding spread moves towards its long-term mean depends on the speed of mean reversion, which is captured by α: with a higher α the process moves faster towards its long-term mean. Besides the long-term mean, the processes will evaluate the inclusion of one or two jump processes, for either the positive larger movements only or both the positive and negative larger movements.

4.1.1 Vasicek process

The first process evaluated is the Vasicek process, which will be used as a starting point in this research. The Vasicek process was introduced in Vasicek (1977) and is based on an Ornstein-Uhlenbeck process. Equation (3) shows the formula for the Vasicek process, in which Wt is a Brownian motion.

dc_t = \alpha(\theta - c_t)\,dt + \sigma\,dW_t \quad (3)


In this formula, α represents the speed of mean reversion, θ represents the long-term mean, and σ represents the volatility of the funding spread process. The advantages of the Vasicek process are its relatively simple estimation method and the fact that it captures the mean reversion characteristic. However, based on the data the probability of larger movements in the funding spread seems to be higher than assigned by the normal distribution. As the Vasicek process is based on a normal distribution through the Brownian motion, it is expected that the Vasicek process cannot capture these larger movements.

4.1.2 Vasicek process with fixed jumps

This section will discuss the second type of process evaluated: the addition of a fixed jump size process to the Vasicek process to capture the larger movements in the funding spread. The abbreviation VJF is used for this process, which indicates the Vasicek process with a Jump process with a Fixed jump size. Different ways to add a jump process to the Vasicek process are discussed in Dominedo et al. (2010), Prigent et al. (2000) and Brigo et al. (2007). This section will discuss the addition of a jump process based on a compound Poisson process with a fixed jump size. Equation (4) shows the formula for the VJF process. This section will discuss the newly introduced parameters and the changes in interpretation of the original parameters.

dc_t = \alpha(\theta_v - c_t)\,dt + \sigma\,dW_t + dJ_t \quad (4)

For the jump process the frequency and size of the jumps need to be determined. For the occurrence of the jumps a compound Poisson process is used, which creates the possibility to incorporate the jump size. The definition of the compound Poisson process is based on Dominedo et al. (2010). The Poisson process is shown in Equation (5) and follows the Poisson distribution shown in Equation (6). The Poisson distribution is based on the intensity parameter λ, which indicates the average number of jumps per unit of t, which represents a year in this research. The Poisson process Pt counts the number of events occurring up to time t and is based on Xi and Sn. The variable Xi represents the time until an event, such that Sn captures the time until n events have occurred. Then Pt counts the number of events which occurred up to time t by comparing the value of Sn to t. In Equation (7) Jt is a compound Poisson process, in which Yi denotes the size of the i-th jump.

S_n = \sum_{i=1}^{n} X_i, \qquad P_t = \sum_{n \geq 1} \mathbf{1}_{\{t \geq S_n\}} \quad (5)

P(P_t = \eta_t) = e^{-\lambda t}\,\frac{(\lambda t)^{\eta_t}}{\eta_t!} \quad (6)

J_t = \sum_{i=1}^{P_t} Y_i \quad (7)
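The construction in Equations (5) to (7) can be sketched in a few lines (an illustrative Python translation with arbitrary parameter values; the thesis implementation is in Matlab):

```python
import numpy as np

def compound_poisson(lam, mu_y, T, rng):
    """Simulate P_T and J_T for a fixed jump size mu_y.

    Exponential inter-arrival times X_i are cumulated into arrival
    times S_n; P_T counts the arrivals with S_n <= T (Equation (5)),
    and J_T sums the jump sizes (Equation (7)).
    """
    t, n_jumps = 0.0, 0
    while True:
        t += rng.exponential(1.0 / lam)  # X_i ~ Exp(lambda)
        if t > T:
            break
        n_jumps += 1
    return n_jumps, n_jumps * mu_y

rng = np.random.default_rng(0)
counts = [compound_poisson(lam=4.0, mu_y=0.3, T=1.0, rng=rng)[0]
          for _ in range(20000)]
print(np.mean(counts))  # close to lambda * T = 4 jumps per year
```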

By the addition of the jump process two new parameters are added compared to the Vasicek process, and the interpretation of the long-term mean has changed. The added parameters are:

• λ, representing the intensity parameter of the compound Poisson process, which indicates the frequency of the jumps.

• µy, representing the jump size of a single jump Yi.

The interpretation of the long-term mean has changed, as generally the average jump size is non-zero. To reflect the long-term mean of the process, the non-zero mean of the jump size therefore needs to be incorporated. In Equation (8) the process is rewritten such that the jump process is adjusted to have a zero mean. It can be seen that θvy now reflects the long-term mean of the overall process, while θv reflects the long-term mean of the underlying Vasicek process. Equation (9) shows the calculation of θvy based on the transformation of the process. The interpretation of α and σ is the same as in the Vasicek process.

dc_t = \alpha(\theta_{vy} - c_t)\,dt + \sigma\,dW_t + dJ_t^{*} \quad (8)

dJ_t^{*} = dJ_t - \mu_y \lambda\,dt, \qquad \theta_{vy} = \theta_v + \frac{\lambda}{\alpha}\,\mu_y \quad (9)

As discussed in Section 3.2 the data contains both large negative and positive movements. Additionally, it was shown that there is a positive skewness, which indicates that there are more positive movements. Therefore, the first jump process added will be a jump process with a positive fixed jump size. However, to also account for the larger negative movements, a specification with both a positive and a negative fixed jump size process is evaluated as well. Due to the positive skewness it is expected that the positive jumps will either occur more often, or have a higher absolute jump size. By accounting for the larger movements it is expected that the fitted normal distribution has a better fit to the peak around zero as shown in Figure 2.

The assumption of fixed jump sizes comes with both advantages and disadvantages. The main advantage of assuming fixed jumps is that a jump process is added without introducing much additional complexity to the estimation of the process. The disadvantage is that a variable jump size seems more realistic. This is also indicated by the histogram of CDS FD, as there is no fixed jump size which can be observed. If there were, a small bump would be expected in the tail of the distribution, showing more observations around the value of the fixed jump size. The possibility of a variable jump size is discussed in the next section.

4.1.3 Vasicek process with variable jumps

As suggested in the previous section, the process evaluated in this section incorporates a jump process with a more flexible jump size. The abbreviation used for this process is VJV, which indicates the Vasicek process with a Jump process with a Variable jump size. The process is shown in Equation (10). The jump process is captured in the same term dJt as in the previous section and is still based on a compound Poisson process; however, the jump size Yi now follows a normal distribution. The process as discussed in this section is based on the research presented in Prigent et al. (2000) and Brigo et al. (2007).

dc_t = \alpha(\theta_v - c_t)\,dt + \sigma\,dW_t + dJ_t \quad (10)

The variable jump size follows a normal distribution. Therefore, the addition of the jump process does not only introduce a parameter for the average jump size, but also one for the volatility of the jump size. Similar to the previous process, the addition of the jump process also adds the intensity parameter of the compound Poisson process. The interpretation of the long-term mean of the process has changed in a similar way as in the previously discussed process: with the addition of the jump process θv captures the long-term mean of the underlying Vasicek process and θvy represents the long-term mean of the funding spread process. The three additional parameters are:

• λ, representing the intensity parameter of the compound Poisson process, which indicates the frequency of the jumps.


• µy, representing the average jump size of a single jump.

• σy, representing the volatility in the jump size of a single jump.

As for the previously discussed process, I will evaluate the addition of one and two variable jump size processes. The average jump size of the first variable jump size process is positive; the average jump size of the second variable jump size process is negative. As the jump size is variable, it is expected that this will have a larger effect on the estimation of the underlying Vasicek process.

Even though this process has the advantage of capturing more variability in the jump process, it has a disadvantage as well: by introducing the variable jump size, more parameters must be estimated based on the same dataset. However, it is expected that the benefits of the variable jump size processes will outweigh this disadvantage.

4.2 Process estimation

The processes will be estimated based on historical data. Note that a calibration, which is based on current market data, is often used for interest rate processes. In the calibration of a process the parameters are estimated under the risk neutral measure and based on current market data of different types of products. This method ensures consistency with current market data, which is important in calculating the price of a product. However, the parameters in this research are estimated under the real world measure, as the purpose is risk management. Additionally, for a calibration alternative products, which capture information on a single company's funding costs, would need to be found. For these reasons all parameters estimated in this research are based on historical data. This section will discuss the general aspects and the process specific aspects of the estimation.

4.2.1 General aspects

In this research the parameters are estimated with both Ordinary Least Squares (OLS) and Maximum Likelihood Estimation (MLE), which are both regularly used methods for estimating interest rate models on historical data. This section will discuss some general aspects applicable to each process.

The Stochastic Differential Equation (SDE) specified for each of the models is based on a Brownian motion, which is a continuous time process. However, as the dataset is discrete, the SDE needs to be integrated to evaluate the process at discrete data points. This approximation improves when smaller time steps are used in the integration. The time steps used in this research are based on the time steps of the data and represented by ∆t, which is the difference between ti and ti−1, in which i indicates an observation in year t. The dataset consists of daily data with an average of 260 observations per year, such that ∆t is set to one over 260 in the estimation of the process. As the time steps of t are yearly, this needs to be taken into account in the interpretation of the parameters. For example, the volatility estimated for the process represents the yearly volatility.

The second aspect is the probability measure under which the different processes are estimated, which in this research is the real world measure. The information relating the real world and risk neutral measures is based on Hull et al. (2014) and Giordano and Siciliano (2013). The risk neutral measure is based on the calibration to current market prices and is used for pricing. Contrary to the real world measure, it does not include the risk aversion of investors. By the use of historical data the risk aversion is generally included, so that the process is then estimated under the real world measure. As the estimations in this research use the historical data of the CDS spread, they are performed under the real world measure. Additionally, as discussed in Hull et al. (2014), Girsanov's theorem indicates that the volatility should be the same under either the risk neutral or the real world measure. Therefore the volatility could be calculated directly from the CDS spread data. However, Hull et al. (2014) also indicate that this does not seem to hold in practice. The risk neutral measure is based on current market prices and reflects a forward looking perspective on the volatility. Under the real world measure the volatility is based on historical data and is therefore backward looking. By the use of current market prices the implied volatility reflects the overall volatility of the process under the risk neutral measure. However, as discussed in Section 4.2.4, the volatility of the process with a variable jump size is based on both the volatility of the underlying Vasicek process and the volatility of the jump size. Based on the current definition and approach for the split in volatility, this split cannot be derived from the implied volatility alone. To create a consistent approach over the estimation of the processes, the estimation of the volatility is therefore based on MLE as well.

The parameters of the processes are first estimated with OLS, and these estimates are then used as starting values for the MLE. In the OLS approach the estimated parameters minimize the sum of the squared errors. For the starting values relating to the jump process additional calculations are performed, as the OLS estimation does not take the jumps into account. Additionally, the exclusion of the jumps can lead to an omitted variables bias in the estimated parameters if jumps are present in the data.

As discussed in, among others, Brigo et al. (2007), the MLE returns the parameters for which the likelihood of the observed data is maximized, based on the assumed probability density function. The likelihood function is the product of the densities of each single observation. The likelihood function as shown in Equation (11) is based on the set of parameters of the process and the historical values of the process. In this equation Γ consists of the parameters of the process, which therefore differs among the different processes. In Equation (12) the likelihood of observation ti is only based on ti−1 and the set of parameters, which is referred to as the transition likelihood. The transition likelihood can be used here because the evaluated processes are Markov processes. As discussed in Brigo et al. (2007), a process is said to be a Markov process if the distribution of the process only depends on the current value and not on any other past values. As both the Brownian motion and the compound Poisson process are Markov processes, the evaluated processes in this research are Markov processes. Equation (13) shows the transition log likelihood, which I will use for the MLE.

L(\Gamma) = \prod_{i=1}^{n} f_{X_{t_i} \mid X_{t_{i-1}}, X_{t_{i-2}}, \ldots, X_{t_1}, X_{t_0};\, \Gamma} \quad (11)

L(\Gamma) = \prod_{i=1}^{n} f_{X_{t_i} \mid X_{t_{i-1}};\, \Gamma} = \prod_{i=1}^{n} f_\Gamma(x_{t_i}) \quad (12)

LL(\Gamma) = \sum_{i=1}^{n} \log\big(f_\Gamma(x_{t_i})\big) \quad (13)

Instead of maximizing the likelihood function, a minimization is performed on the negative value of the log likelihood function. For the estimation of the process I will make use of Matlab, in which several predefined functions are used. The main predefined function used is fmincon, which performs the constrained minimisation for the MLE and is discussed in MathWorks (2014b). The algorithm used in this function is the interior-point approach, which is discussed in MathWorks (2014a). Additionally, it should be noted that a mathematical verification of the consistency of the parameter estimations is out of scope of this research. Instead, a simulation of the processes based on the parameter estimates is compared to the historical data, which is used to evaluate the estimated parameters.
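As an illustration of this estimation step, the sketch below minimises the negative transition log likelihood of the Vasicek process on simulated data (Python, with scipy's `minimize` playing the role of Matlab's fmincon; the parameter values, starting values and bounds are assumptions):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

dt = 1.0 / 260  # daily observations, ~260 per year

def neg_log_lik(params, c):
    """Negative transition log likelihood of the Vasicek process,
    Equations (13) and (16)."""
    alpha, theta, sigma = params
    mean = c[:-1] * np.exp(-alpha * dt) + theta * (1 - np.exp(-alpha * dt))
    var = sigma**2 / (2 * alpha) * (1 - np.exp(-2 * alpha * dt))
    return -np.sum(norm.logpdf(c[1:], loc=mean, scale=np.sqrt(var)))

# Simulate a path from known parameters, then try to recover them.
rng = np.random.default_rng(1)
alpha0, theta0, sigma0, n = 0.8, 1.2, 1.25, 2000
b = np.exp(-alpha0 * dt)
sd = np.sqrt(sigma0**2 / (2 * alpha0) * (1 - b**2))
c = np.empty(n)
c[0] = 1.0
for i in range(1, n):
    c[i] = theta0 * (1 - b) + b * c[i - 1] + sd * rng.standard_normal()

res = minimize(neg_log_lik, x0=[1.0, 1.0, 1.0], args=(c,),
               bounds=[(1e-4, None), (None, None), (1e-4, None)])
print(res.x)  # estimates of (alpha, theta, sigma)
```

The volatility is recovered accurately even on a short sample; the speed of mean reversion is much harder to pin down, which foreshadows the wide confidence interval for α in Section 5.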

4.2.2 Vasicek process

The first process which is estimated is the Vasicek process, shown in Equation (14). This section will discuss the result of the integration and discretization of the Vasicek process and the estimation of the parameters based on OLS and MLE, making use of the results of the derivation described in Appendix B.

dc_t = \alpha(\theta - c_t)\,dt + \sigma\,dW_t \quad (14)

For the integration of the Vasicek process, first a transformation of ct to Zt is performed, which ensures that the Itô integral can be calculated for Zt. After the integration of Zt the inverse of the transformation is used to get the solution for ct, which is presented in Equation (15). The integral relating to the Brownian motion is an Itô integral. As ct is a linear combination of the Brownian motion, ct follows a normal distribution.

Z_t = c_t e^{\alpha t}

c_t = c_s e^{-\alpha(t-s)} + \theta\big(1 - e^{-\alpha(t-s)}\big) + \sigma e^{-\alpha t}\int_s^t e^{\alpha u}\,dW_u \quad (15)

c_t \sim N\!\left(c_s e^{-\alpha(t-s)} + \theta\big(1 - e^{-\alpha(t-s)}\big),\; \frac{\sigma^2}{2\alpha}\big(1 - e^{-2\alpha(t-s)}\big)\right) \quad (16)

c_t = c_s e^{-\alpha(t-s)} + \theta\big(1 - e^{-\alpha(t-s)}\big) + \sigma\sqrt{\tfrac{1}{2\alpha}\big(1 - e^{-2\alpha(t-s)}\big)}\,\epsilon_t \quad (17)

in which \epsilon_t denotes a standard normal random variable.
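The one-step transition distribution in Equation (16) can be verified by Monte Carlo (an illustrative Python sketch with arbitrary parameter values):

```python
import numpy as np

alpha, theta, sigma = 0.8, 1.2, 1.25   # illustrative parameter values
h = 1.0 / 260                          # one daily step, t - s
c_s = 2.0                              # current value of the spread

rng = np.random.default_rng(2)
eps = rng.standard_normal(200000)
# Exact discretization, Equation (17):
c_t = (c_s * np.exp(-alpha * h)
       + theta * (1 - np.exp(-alpha * h))
       + sigma * np.sqrt((1 - np.exp(-2 * alpha * h)) / (2 * alpha)) * eps)

mean_theory = c_s * np.exp(-alpha * h) + theta * (1 - np.exp(-alpha * h))
var_theory = sigma**2 / (2 * alpha) * (1 - np.exp(-2 * alpha * h))
print(c_t.mean(), mean_theory)  # sample mean vs Equation (16) mean
print(c_t.var(), var_theory)    # sample variance vs Equation (16) variance
```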

Based on the solution for the Vasicek process the discretization of the process is used to estimate the parameters with OLS. As discussed in Section 3.3 in Glasserman (2004) this is an exact discretization, for which the result is shown in Equation (17). Based on the estimated parameters of the AR(1) equation the parameters of the Vasicek process can be calculated.
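The OLS step and the mapping from the AR(1) coefficients back to (α, θ, σ) can be sketched as follows (illustrative Python on a simulated path; the thesis performs this step in Matlab):

```python
import numpy as np

dt = 1.0 / 260

def vasicek_ols(c):
    """AR(1) regression c_i = a + b*c_{i-1} + e_i, mapped back via the
    exact discretization of Equation (17):
    b = exp(-alpha*dt), a = theta*(1-b), Var(e) = sigma^2 (1-b^2)/(2 alpha)."""
    x, y = c[:-1], c[1:]
    b, a = np.polyfit(x, y, 1)           # slope, intercept
    resid = y - (a + b * x)
    alpha = -np.log(b) / dt
    theta = a / (1 - b)
    sigma = np.sqrt(resid.var() * 2 * alpha / (1 - b**2))
    return alpha, theta, sigma

# Check on a simulated path with known parameters.
rng = np.random.default_rng(3)
alpha0, theta0, sigma0, n = 0.8, 1.2, 1.25, 50000
b0 = np.exp(-alpha0 * dt)
sd0 = np.sqrt(sigma0**2 * (1 - b0**2) / (2 * alpha0))
c = np.empty(n)
c[0] = theta0
for i in range(1, n):
    c[i] = theta0 * (1 - b0) + b0 * c[i - 1] + sd0 * rng.standard_normal()

print(vasicek_ols(c))  # estimates of (alpha, theta, sigma)
```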

As ct is based on a normal distribution, the probability density function used in the MLE is the normal distribution with the mean and variance as given in Equation (16). The likelihood function as programmed in Matlab is presented in Appendix E, which shows that the predefined probability density function (pdf) for the normal distribution is used. The calculation of the log likelihood captures the probability density calculated for each data point, which is based on the previous data point and the evaluated parameters. Additionally, the Matlab code shows that the density is multiplied by minus one, such that the log likelihood can be minimised.

As described in Brigo et al. (2007) the MLE conditional on the first observation has a closed form solution, which equals the estimators based on OLS. However, both estimations are performed to create a consistent approach with the estimation of the other processes.


4.2.3 Vasicek process with fixed jumps

This section will describe the estimation of the VJF process, which is shown in Equation (18). Similar to the previously evaluated process, the estimation is based on the discretization of the solution of the process for ct. This section will discuss the estimation based on the derivations described in Appendix C, which cover the inclusion of one additional jump process with a fixed positive jump size. The addition of a second fixed jump size process is similar to the addition of the first jump process.

dc_t = \alpha(\theta_v - c_t)\,dt + \sigma\,dW_t + dJ_t \quad (18)

Even though this process includes a jump process, the integration of the process is similar to the Vasicek process. In the derivation of the solution for cti it is important to assume that the Brownian motion and the jump process are not correlated. With this assumption the added jump process will only add one additional term to the solution of the process. An approximation is used for the integral relating to the jump process, which is based on the approach described in Brigo et al. (2007). With this approximation the total amount of jumps in ∆t is finite. Due to this the solution and distribution of ct are conditional on the amount of jumps. Based on Equation (20) it can be seen that the additional term due to the jump process is \mu_y \eta_t e^{-\alpha(t-s)}, which represents the addition to the mean given the amount of jumps.

\int_s^t e^{\alpha u}\,dJ_u \approx e^{\alpha s}\,\Delta J_t

c_t \approx c_s e^{-\alpha(t-s)} + \theta_v\big(1 - e^{-\alpha(t-s)}\big) + \sigma e^{-\alpha t}\int_s^t e^{\alpha u}\,dW_u + e^{-\alpha(t-s)}\,\Delta J_t \quad (19)

c_t \mid \eta_t \sim N\!\left(c_s e^{-\alpha(t-s)} + \theta_v\big(1 - e^{-\alpha(t-s)}\big) + \mu_y \eta_t e^{-\alpha(t-s)},\; \frac{\sigma^2}{2\alpha}\big(1 - e^{-2\alpha(t-s)}\big)\right) \quad (20)

c_t \mid \eta_t = c_s e^{-\alpha(t-s)} + \theta_v\big(1 - e^{-\alpha(t-s)}\big) + \sigma\sqrt{\tfrac{1}{2\alpha}\big(1 - e^{-2\alpha(t-s)}\big)}\,\epsilon_t + e^{-\alpha(t-s)}\,\mu_y \eta_t \quad (21)

The estimations of the parameters based on OLS are similar to those for the Vasicek process, in which the calculation of the parameters is based on the discretization shown in Equation (21). However, with the addition of the jump process not all parameters can be calculated based on the result of the OLS estimation of the AR(1) equation. The starting values for the intensity parameter and the jump size are based on the occurrences of large movements in the dataset. For the addition of a jump process with a positive fixed jump size the steps are given below. The calculation of the additional parameters of the negative fixed jump size process is similar.

1. Calculate the first differences of the CDS spread.

2. Indicate the jumps by evaluating which first differences of the CDS spread are larger than 0.16.

3. Calculate the jump size by subtracting 0.16 from the first difference of the CDS spread for the indicated jumps.

4. Calculate the average jump size for the positive fixed jump size process, µyp, as the average of the jump sizes over the jumps.

5. Calculate the expected value of ∆Jt by averaging the jump sizes over all observations, where an observation without a jump at time t is captured with a zero.

6. Calculate the intensity parameter based on Equation (22).

E(\Delta J_t) = E(\eta_t \mu_y) = \lambda \Delta t\, \mu_y, \qquad \lambda = \frac{E(\Delta J_t)}{\Delta t\, \mu_y} \quad (22)
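The six steps can be sketched as follows (illustrative Python; the 0.16 boundary is the two-standard-deviation threshold from Section 3.2, and the toy spread series below is an assumption):

```python
import numpy as np

dt, threshold = 1.0 / 260, 0.16

def jump_starting_values(cds):
    """OLS starting values for lambda and mu_y of the positive
    fixed jump size process (steps 1 to 6, Equation (22))."""
    d = np.diff(cds)                      # 1. first differences of the CDS spread
    sizes = d[d > threshold] - threshold  # 2./3. indicated jumps and their sizes
    mu_y = sizes.mean()                   # 4. average jump size
    e_dJ = sizes.sum() / len(d)           # 5. E(dJ_t): zeros for the no-jump days
    lam = e_dJ / (dt * mu_y)              # 6. intensity, Equation (22)
    return lam, mu_y

# Toy series: one year of flat spread with two upward jumps of 0.16 + 0.10.
cds = np.full(261, 1.0)
cds[100:] += 0.26
cds[200:] += 0.26
lam, mu_y = jump_starting_values(cds)
print(lam, mu_y)  # about 2 jumps per year with mean excess size 0.10
```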

The log likelihood function used in the MLE is based on the research performed in Brigo et al. (2007) and the Matlab code used is shown in Appendix E. It can be seen that the log likelihood function is based on the normal distribution and the Poisson distribution. The mean and variance for the normal distribution are as given in Equation (20).

In comparison with the log likelihood function of the previous process it can be seen that some additional steps are introduced. As the normal distribution for ct is conditional on the amount of jumps, different amounts of jumps are evaluated for each data point. In the log likelihood function the maximum amount of jumps evaluated is set to 20, as the chance of more jumps is assumed to be negligible. Similar to the estimation of the Vasicek process, the logarithm of the probability density is multiplied by minus one, such that a minimization can be used.
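The conditional structure of this likelihood, a Poisson-weighted mixture of normal densities truncated at 20 jumps per step, can be sketched as follows (illustrative Python; the Matlab version used in the thesis is in Appendix E):

```python
import numpy as np
from scipy.stats import norm, poisson

dt = 1.0 / 260

def vjf_loglik(params, c, max_jumps=20):
    """Transition log likelihood of the VJF process: for each step, a
    Poisson-weighted mixture of the normal densities of Equation (20),
    truncated at max_jumps jumps per time step."""
    alpha, theta_v, sigma, lam, mu_y = params
    decay = np.exp(-alpha * dt)
    scale = np.sqrt(sigma**2 / (2 * alpha) * (1 - decay**2))
    ll = 0.0
    for c_prev, c_next in zip(c[:-1], c[1:]):
        dens = 0.0
        for n_jumps in range(max_jumps + 1):
            mean = (c_prev * decay + theta_v * (1 - decay)
                    + mu_y * n_jumps * decay)
            dens += poisson.pmf(n_jumps, lam * dt) * norm.pdf(c_next, mean, scale)
        ll += np.log(dens)
    return ll

# Sanity check: with mu_y = 0 the mixture collapses and the VJF
# likelihood reduces to the plain Vasicek likelihood of Equation (16).
rng = np.random.default_rng(4)
c = 1.2 + 0.1 * rng.standard_normal(50)
decay = np.exp(-0.8 * dt)
plain = norm.logpdf(c[1:], c[:-1] * decay + 1.2 * (1 - decay),
                    np.sqrt(1.25**2 / (2 * 0.8) * (1 - decay**2))).sum()
print(np.isclose(vjf_loglik([0.8, 1.2, 1.25, 4.0, 0.0], c), plain))
```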

4.2.4 Vasicek process with variable jumps

The third process type evaluated is the VJV process, which is shown in Equation (23). In this process a normal distribution is assumed for the jump size. This section will discuss the estimation of the process based on the solution for ct. The derivations for the necessary results in the estimation are given in Appendix D, which is based on the addition of one variable jump size process. The addition of a second jump process is similar to the addition of the first jump process.

dc_t = \alpha(\theta_v - c_t)\,dt + \sigma\,dW_t + dJ_t \quad (23)

The solution for this process is based on the assumption of no correlation among the compound Poisson process, the jump size and the Brownian motion, and has a similar derivation to the previous process. The derivations for the VJV process are discussed in Appendix D. Additionally, the integration of the jump process is based on a similar approximation as in the previous section, based on which the solution and distribution of ct are conditional on the amount of jumps. However, as the jump size is now flexible, the volatility of the process is influenced by the volatility of the jump size. In Equation (25) it can be seen that the additional term for the volatility of the jump size is \eta_t \sigma_y^2 e^{-2\alpha(t-s)}.

c_t \approx c_s e^{-\alpha(t-s)} + \theta_v\big(1 - e^{-\alpha(t-s)}\big) + \sigma e^{-\alpha t}\int_s^t e^{\alpha u}\,dW_u + e^{-\alpha(t-s)}\,\Delta J_t \quad (24)

c_t \mid \eta_t \sim N\!\left(c_s e^{-\alpha(t-s)} + \theta_v\big(1 - e^{-\alpha(t-s)}\big) + \mu_y \eta_t e^{-\alpha(t-s)},\; \frac{\sigma^2}{2\alpha}\big(1 - e^{-2\alpha(t-s)}\big) + \eta_t \sigma_y^2 e^{-2\alpha(t-s)}\right) \quad (25)

c_t \mid \eta_t = c_s e^{-\alpha(t-s)} + \theta_v\big(1 - e^{-\alpha(t-s)}\big) + \sigma\sqrt{\tfrac{1}{2\alpha}\big(1 - e^{-2\alpha(t-s)}\big)}\,\epsilon_t + e^{-\alpha(t-s)}\sum_{i=1}^{\eta_t} Y_i \quad (26)
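Relative to the fixed-jump case, the only change in the transition density is the conditional variance of Equation (25), which grows linearly with the number of jumps (a small sketch with illustrative parameter values):

```python
import numpy as np

alpha, sigma, sigma_y = 0.8, 1.0, 0.3  # illustrative values
h = 1.0 / 260                          # t - s, one daily step

def cond_var(n_jumps):
    """Conditional variance of c_t given n_jumps jumps, Equation (25)."""
    base = sigma**2 / (2 * alpha) * (1 - np.exp(-2 * alpha * h))
    return base + n_jumps * sigma_y**2 * np.exp(-2 * alpha * h)

print(cond_var(0), cond_var(1), cond_var(2))  # each jump adds sigma_y^2 e^{-2 alpha h}
```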

The estimation of the parameters based on OLS is similar to the Vasicek process, in which the calculation of the parameters is based on the discretization shown in Equation (26). Note that the estimation of σ based on the AR(1) equation now includes the volatility of the jumps. However, it is assumed that this is still a good starting value for the volatility of the underlying Vasicek process. Similar to the previous process, the parameters relating to the jump process cannot be calculated based on the result of the OLS estimation of the AR(1) equation. The calculations for λ and µy are as described in the previous section. Additionally, σy is calculated by taking the standard deviation of the jump sizes for the jumps in the dataset. For the identification of the jumps the same boundaries of 0.16 and −0.16 are used.

After the OLS parameters are estimated, the log likelihood for the Vasicek process with the addition of the variable jump size process is maximized. The MLE approach is based on the research performed in Brigo et al. (2007). The likelihood function is based on the normal distribution and the Poisson distribution. The mean and variance for the normal distribution are given in Equation (25).

Appendix E shows the code used for the log likelihood function in Matlab for this process. It can be seen that, similar to the fixed jump size process, additional intermediate steps are used for the jump process. As the normal distribution for ct is conditional on the amount of jumps, different amounts of jumps and their probabilities are evaluated for each data point. The maximum amount of jumps evaluated is 20, as the chance of more jumps is assumed to be negligible. Note that the jump process now influences both the mean and the variance of the normal distribution. Similar to the estimation of the previous two processes, the logarithm of the probability density is multiplied by minus one, such that a minimization can be used.


5 Results

This section will discuss the results for the estimation of the different processes. First the estimated parameters with both OLS and MLE are discussed for each of the processes, after which the different processes are compared.

5.1 Results estimation processes

This section will evaluate the parameter estimation with OLS and MLE. For the MLE different combinations of starting values and boundaries are used to evaluate all possible outcomes. However, for each of the processes the OLS estimator with only theoretical boundaries resulted in the highest log likelihood value. The theoretical boundaries are non-negativity for the estimations of σ, σy and λ, as these parameters are by definition bounded by zero. A theoretical boundary of reasonable values is used for the long-term mean, which is driven by its economic interpretation. Additionally, the sign of the average jump sizes is bounded as well, which is introduced by the construction of the parameters. For the jump process with a positive average jump size, represented by µyp, this boundary results in non-negativity for µyp. For the jump process with a negative average jump size, represented by µyn, this boundary results in negativity for µyn.

5.1.1 Vasicek process

This section will discuss the results for the Vasicek process based on both OLS and MLE. The results of the estimations are shown in Table 2. As indicated in Section 4.2.2 the results based on OLS and MLE are similar.

In addition to the estimated parameters, Table 2 shows the standard deviations of the estimators. The standard deviations are based on the Hessian matrix, which is calculated within the Matlab function fmincon. Based on the variance a Wald test is performed for several parameter estimates; the test statistic is shown in Equation (27). In the formula r represents the function r(β) for which r(β) = 0 is tested, R represents the Jacobian matrix of the restriction function r(β) and A represents the covariance matrix of the estimation. The critical value for the Wald test is based on a chi squared distribution, for which the degrees of freedom depend on the number of restrictions tested. Several Wald tests are performed based on the null hypothesis that the estimated parameter is equal to zero; the results are shown in Table 2. The Wald test will not be performed for the speed of mean reversion, as in that case the test statistic does not follow a chi squared distribution. This is caused by the unit root under the null hypothesis, and therefore, similar to the Dickey Fuller test, the chi squared distribution cannot be used for the critical values.

\text{Wald} = r' \left(R\, A^{-1} R'\right)^{-1} r \quad (27)
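For a single restriction of the form β_j = 0, Equation (27) reduces to the squared t-ratio. A sketch in Python (here V is taken to be the covariance matrix of the estimates, i.e. the inverse of the Hessian mentioned above; the diagonal covariance below is an illustrative assumption, since Table 2 only reports standard deviations):

```python
import numpy as np
from scipy.stats import chi2

def wald_test(beta_hat, V, R):
    """Wald statistic W = r' (R V R')^{-1} r for the restriction
    R beta = 0, with V the covariance matrix of the estimates."""
    R = np.atleast_2d(np.asarray(R, dtype=float))
    r = R @ beta_hat
    W = float(r @ np.linalg.inv(R @ V @ R.T) @ r)
    p_value = chi2.sf(W, df=R.shape[0])
    return W, p_value

# Testing theta = 0 for the Vasicek fit: with an estimate of 1.22 and a
# standard deviation of 0.453 (Table 2), W = (1.22 / 0.453)^2, about 7.25.
beta_hat = np.array([0.77, 1.22, 1.26])    # (alpha, theta, sigma)
V = np.diag([0.195, 0.453, 0.035]) ** 2    # diagonal, for illustration
W, p_value = wald_test(beta_hat, V, R=[0.0, 1.0, 0.0])
print(round(W, 2), round(p_value, 3))
```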

Table 2: Results for Vasicek process

                 OLS     MLE       Wald statistic   p-value Wald test
α̂               0.77    0.77
                         (0.195)
θ̂               1.22    1.22      7.26             0.01
                         (0.453)
σ̂               1.26    1.26
                         (0.035)
Log likelihood   3284    3284

Standard deviations of the ML estimates in parentheses.

Based on the table it can be seen that α̂ is between zero and one, as expected for mean reversion. It is expected that α is between zero and one, as this indicates that, based on the drift, the funding spread moves towards its long-term mean without alternating around it. In the case that α is below zero the funding spread moves further away from its long-term mean, and in the case that α is above one the process overcompensates the difference between the current funding spread and its long-term mean. Disregarding the Brownian motion, the value of 0.738 for α̂ indicates that the difference between the long-term mean and the current CDS spread will be reduced by 73.8% over one year. In the case that α̂ is zero, the process has a unit root. A 95% confidence interval is created for α̂ to evaluate its value. The confidence interval is (0.39; 1.15), which captures values below and above one; therefore no clear conclusion can be drawn for α. The estimate of the long-term mean of the CDS spread, θ̂, is 1.22%, which is relatively close to the mean of the CDS spread of 1.17% as indicated in Table 1.

The estimate of σ reflects the volatility of the process, which in this case is 1.26. Note that compared to the estimate of the long-term mean the estimated volatility is quite high. This high volatility could be an indication of jumps, as the larger movements now need to be captured by the volatility of the process.

Based on the estimated parameters a simulation is performed, for which the Matlab code is shown in Appendix E. The simulation is based on the discretization of the solution for the Vasicek process, as shown in Equation (17) in Section 4.2.2. In this simulation the starting value is equal to the starting value of the process, and the simulation consists of 5000 paths. A 98% interval is created based on the 1% and 99% percentiles for each time step in the simulations. The historical path of the CDS spread is compared to this 98% interval. The result of this comparison is shown in Figure 3, in which it can be seen that the largest values of the CDS spread cannot be captured by the Vasicek process. Additionally, it can be seen that a substantial amount of the simulations results in a negative CDS spread, which is most likely caused by the combination of the relatively low long-term mean with a relatively high volatility. Even though the definition allows for rare negative values, the amount of negative simulated spreads could indicate that another process is preferable.
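The construction of the 98% simulation band can be sketched as follows (illustrative Python with 5000 paths as in the thesis; the parameter and starting values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
alpha, theta, sigma = 0.8, 1.2, 1.25   # illustrative parameter values
dt, n_paths, n_steps, c0 = 1.0 / 260, 5000, 260, 1.0

b = np.exp(-alpha * dt)
sd = np.sqrt(sigma**2 / (2 * alpha) * (1 - b**2))
paths = np.empty((n_paths, n_steps + 1))
paths[:, 0] = c0
for i in range(1, n_steps + 1):  # exact discretization, Equation (17)
    paths[:, i] = (theta * (1 - b) + b * paths[:, i - 1]
                   + sd * rng.standard_normal(n_paths))

lower = np.percentile(paths, 1, axis=0)   # 1% percentile per time step
upper = np.percentile(paths, 99, axis=0)  # 99% percentile per time step
print(lower[-1], upper[-1])               # band after one year
print((lower < 0).any())                  # negative simulated spreads occur
```

With a long-term mean near 1.2 and a volatility of 1.25, the lower edge of the band turns negative well within the year, matching the observation above.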

Second, the first differences at the last time step of the simulated funding spread are compared with the first differences of the CDS spread, which is shown in Figure 4. Based on the figure it can be seen that the fitted distribution aims to capture the larger movements through a large variance, which was also indicated by the estimate of σ.

Figure 3: Simulations for Vasicek process

Additionally, it can be concluded that the peak around zero cannot be fully captured. Together with the results discussed above, this suggests that the addition of a jump process will be beneficial for the estimation of the funding spread.


Figure 4: Comparison distribution of first differences CDS spread and simulations for Vasicek process

5.1.2 Vasicek process with one fixed jump process

The second process estimated is the VJF1 process, in which one jump process with a fixed jump size is added to the Vasicek process. This first added jump process has a positive fixed jump size. As in the previous section, Table 3 shows the results of the estimations. The Wald tests are based on the null hypothesis that the estimated parameter equals zero.

The OLS estimates for α and σ are equal to their estimates for the Vasicek process, as they follow from the same calculations discussed in Section 4.2.3. However, in this process the average jump size needs to be taken into account: the OLS estimate of θ reflects the long-term mean of the overall process. The parameter ˆθv, which reflects the long-term mean of the underlying Vasicek process, is calculated from ˆθvy, the long-term mean of the overall process, and the average jump size. Based on the results in Table 3 it can be seen that the OLS and ML estimates differ from each other. These differences reflect the effect of including a jump process, with the main effects visible in ˆα and ˆσ. The value of ˆσ decreases when jumps are incorporated, as part of the larger movements is now captured by the jump process. The range of movements that needs to be captured by the underlying Vasicek process becomes smaller, and so does the estimated volatility.

Table 3: Results for VJF1 process

                  OLS      MLE              Wald statistic   p-value
ˆα                0.77     2.95  (1.50)
ˆθv               0.47     0.75  (0.19)         16.45          0.00
ˆσ                1.26     1.03  (0.09)
ˆλ                4.93     4.26  (1.46)          8.55          0.00
ˆµy               0.12     0.30  (0.15)          4.04          0.04
ˆθvy              1.22     1.19
Log likelihood    3446     3659

Standard errors of the ML estimates in parentheses; the Wald tests are based on H0: parameter = 0.
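The link between the two long-term means can be checked numerically. Assuming the stationary-mean relation for a mean-reverting process with compound Poisson jumps, θvy = θv + λ·µy/α (this exact formula is an assumption here, not quoted from the text), the ML estimates in Table 3 are internally consistent:

```python
alpha, theta_v = 2.95, 0.75   # ML estimates of speed of mean reversion and Vasicek mean (Table 3)
lam, mu_y = 4.26, 0.30        # ML estimates of jump intensity and fixed jump size (Table 3)

# Stationary mean of the overall process: Vasicek long-term mean plus the
# expected jump inflow per year, spread out by the speed of mean reversion.
theta_vy = theta_v + lam * mu_y / alpha
print(round(theta_vy, 2))  # → 1.18, close to the reported ML estimate of 1.19
```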

In the ML estimation a large change can be seen in the estimate of the speed of mean reversion, which is now above one. A value above one can be interpreted as the process overcompensating the difference between the current funding spread and the long-term mean. As in the previous section, a 95% confidence interval is constructed for ˆα. The interval is (0.01; 5.89), which contains values both below and above one, so no clear conclusion can be drawn regarding the speed of mean reversion in this process.
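This interval follows from the usual Wald construction, estimate ± 1.96 × standard error, using the ML standard error for ˆα reported in Table 3:

```python
alpha_hat, se = 2.95, 1.50   # ML estimate and standard error from Table 3
z = 1.96                     # 95% standard-normal quantile

lo, hi = alpha_hat - z * se, alpha_hat + z * se
print(round(lo, 2), round(hi, 2))  # → 0.01 5.89, the interval quoted in the text
```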

As indicated earlier, the average jump size needs to be taken into account in the estimation of the long-term mean. The ML estimate of θvy is close to the OLS estimate of the overall long-term mean and close to the mean of the CDS spread observations. As the average jump size is positive, the ML estimate of θv becomes smaller. Based on the Wald test, ˆθv still differs significantly from zero.

The jump process added to the Vasicek process is characterized by λ and µy. The intensity parameter ˆλ gives the expected number of jumps per year; the ML estimate of 4.26 thus indicates approximately 4.26 jumps per year on average. The estimate of the average jump size is 0.303, which is used as the fixed size of a single jump. Note that both ˆλ and ˆµy differ significantly from zero according to the Wald tests.
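The jump component itself is a compound Poisson process: over a step Δt the number of jumps is Poisson(λΔt), and with a fixed jump size each jump adds µy. A small Python/NumPy sketch using the ML values from Table 3 (the daily grid is an assumption for illustration):

```python
import numpy as np

lam, mu_y = 4.26, 0.30      # ML jump intensity (per year) and fixed jump size
dt = 1.0 / 250.0            # assumed daily step
n_steps, n_paths = 250, 5000

rng = np.random.default_rng(1)
# Number of jumps in each daily interval, per path: Poisson(lam * dt)
n_jumps = rng.poisson(lam * dt, size=(n_paths, n_steps))
# Fixed jump size: the jump contribution per interval is simply N * mu_y
jump_increments = n_jumps * mu_y

# Average realized number of jumps per simulated year; should be close to lam
jumps_per_year = n_jumps.sum(axis=1).mean()
```

Because the jump size is fixed and positive, every jump increment is a multiple of µy, which is what produces the small bump in the right tail of the simulated first differences discussed below.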

As in the previous section, a simulation is performed based on the ML estimates; the Matlab code is shown in Appendix E. The simulation is based on the discretization of the solution of the VJF1 process as described in Section 4.2.3. The starting value of the simulation equals the starting value of the process, and 5000 paths are simulated. Based on these simulation paths a 98% interval is constructed, which is shown in Figure 5. The figure shows that the large movements in the CDS spread are not captured within the 98% interval, which indicates that these observations are more extreme than 98% of the simulations. Additionally, a relatively small number of simulations returns a negative spread, which is most likely due to the lower volatility of the underlying Vasicek process combined with the positive jumps.

Figure 5: Simulations for VJF1 process

Additionally, the simulation is used to compare the first differences of the CDS spread observations with the first differences at the last time step of the simulation. This comparison is shown in Figure 6, which also includes the simulation of the Vasicek process. The figure shows that the VJF1 process has a higher peak than the Vasicek process, but still cannot match the peak in the data. Note that the addition of the jump process causes a small bump in the right tail, reflecting the fixed-size jumps captured in the simulation.


Figure 6: Comparison distribution of first differences CDS spread and simulations for Vasicek and VJF1 process

5.1.3 Vasicek process with two fixed jump processes

The third process estimated is the VJF2 process, in which two jump processes with fixed jump sizes are added to the Vasicek process; the second added jump process has a negative fixed jump size. As in the previous sections, Table 4 shows the results of the estimations. The Wald tests are based on the null hypothesis that the estimated parameter equals zero.

The main difference compared to the estimations for the other processes is the high standard deviations of ˆα, ˆθv and ˆσ, as a result of which the Wald test for ˆθv yields a high p-value and the 95% confidence interval for ˆα is quite wide. These higher standard deviations therefore need to be taken into account when interpreting the estimates. For ˆα the 95% confidence interval is constructed to evaluate the estimated speed of mean reversion. The interval is (−11.3; 12.5), which indeed is quite wide, so no clear conclusion can be drawn for the speed of mean reversion in this process. The higher standard deviations are most likely caused by the larger number of parameters that needs to be estimated from the same number of observations.
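The reported Wald result for ˆθv can be reproduced directly from the Table 4 figures: with such a large standard error the squared t-ratio is nearly zero, and the χ²(1) p-value is correspondingly high. The stdlib `erfc` function gives the χ²(1) tail probability without extra dependencies:

```python
import math

theta_v, se = 0.45, 5.79          # ML estimate and standard error from Table 4
wald = (theta_v / se) ** 2        # Wald statistic for H0: theta_v = 0
# chi-square(1) survival function via the normal tail:
#   P(chi2_1 > w) = 2 * P(Z > sqrt(w)) = erfc(sqrt(w / 2))
p_value = math.erfc(math.sqrt(wald / 2.0))
print(round(wald, 2), round(p_value, 2))  # → 0.01 0.94, matching Table 4
```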

Additionally, the estimated characteristics of the jump process change due to the addition of a second jump process. In the VJF1 process the values of ˆλp and ˆµyp were 4.26 and 0.30 respectively, while for this process the estimated intensity parameters are considerably larger: 30.5 for the positive jump process and 22.4 for the negative jump process. Combined, the two jump processes imply approximately 53 jumps per year on average, far more than found for the single jump process. However, the average jump sizes are quite small, which indicates that the jump processes now account for the slightly larger movements in the CDS spread observations. Consequently, the volatility of the underlying process decreases further, as the jump processes capture even more of the movements. As both ˆθv and ˆσ are small, the underlying Vasicek process mainly reflects the observations close to zero.

Table 4: Results for VJF2 process

                  OLS      MLE               Wald statistic   p-value
ˆα                0.77     0.64  (6.07)
ˆθv               1.37     0.45  (5.79)          0.01           0.94
ˆσ                1.26     0.44  (0.89)
ˆλn               4.40     22.4  (7.55)          8.79           0.00
ˆλp               4.93     30.5  (13.98)         4.76           0.03
ˆµyn             -0.16    -0.13  (0.004)      1138              0.00
ˆµyp              0.12     0.11  (0.001)      7265              0.00
ˆθvy              1.22     1.23
Log likelihood    3598     4604

Standard errors of the ML estimates in parentheses; the Wald tests are based on H0: parameter = 0.

As in the previous sections, a simulation is performed based on the ML estimates and the discretization of the process discussed in Section 4.2.4. Note that the Matlab code for the simulation of the VJF2 process is similar to that for the VJF1 process, as shown in Appendix E: the second jump process is added in the same way as the first. The starting value of the simulation equals the starting value of the process, and 5000 paths are simulated.
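The VJF2 dynamics can be sketched in a few lines (Python/NumPy here; the thesis uses the exact discretization of Section 4.2.4 in Matlab). This is a simple Euler approximation, not the thesis's exact scheme, with the ML estimates from Table 4 and an assumed daily grid and illustrative starting value:

```python
import numpy as np

# Euler sketch of one year of the VJF2 process: Vasicek diffusion plus
# a positive and a negative compound Poisson jump process.
alpha, theta_v, sigma = 0.64, 0.45, 0.44          # ML estimates, Table 4
lam_p, mu_p = 30.5, 0.11                          # positive jump process
lam_n, mu_n = 22.4, -0.13                         # negative jump process
dt, n_steps, n_paths = 1.0 / 250.0, 250, 5000

rng = np.random.default_rng(2)
s = np.full(n_paths, 1.17)                        # start at the sample mean (illustrative)
for _ in range(n_steps):
    diffusion = alpha * (theta_v - s) * dt \
                + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    jumps = (rng.poisson(lam_p * dt, n_paths) * mu_p
             + rng.poisson(lam_n * dt, n_paths) * mu_n)
    s = s + diffusion + jumps

# Expected net jump contribution per year: lam_p*mu_p + lam_n*mu_n
net_jump_drift = lam_p * mu_p + lam_n * mu_n      # about 0.44
```

The positive net jump drift of roughly 0.44 per year is what lifts the overall long-term mean ˆθvy above the small Vasicek mean ˆθv.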


Figure 7: Simulations for VJF2 process

Figure 8: Comparison distribution of first differences CDS spread and simulations for VJF1 and VJF2 process
