
University of Amsterdam

MSc Thesis

Risk Management: A Least Squares Monte Carlo Approach

Author: Rutger Frohn (contact: rutger.frohn@gmail.com)

Supervisor (UvA): Prof. Peter Boswijk
Supervisors (Towers Watson): Dhr. Gijsbert de Lange, Dhr. Arthur Stroij

September 22, 2014

Final version

Abstract

Risk managers of insurance companies traditionally determine the Solvency Capital Requirement (SCR) using nested simulation methods, which are very time consuming. Faster methods are desired, since SCRs have to be calculated more than once a year. In this thesis, I propose to use Least Squares Monte Carlo simulations instead of ordinary nested Monte Carlo simulations. I compare the two methods and show that Least Squares Monte Carlo (LSMC) is faster and consistent in determining the SCR when the liabilities consist of a single swap rate dependent embedded option with payoffs 50 years ahead. I also show that results obtained by the LSMC approach are consistent with results obtained by an analytical approximation. Future interest rates are simulated using a two-factor Gaussian interest rate model.

Keywords: Least Squares Monte Carlo, nested simulations, embedded options, two-factor Gaussian interest rate model, SCR, profit sharing, forward measure.


Contents

1 Introduction
2 Solvency II Requirement
3 Swap Rate Dependent Embedded Option
  3.1 Profit sharing payoff function
4 Mathematical Framework
  4.1 Discount factors and zero-coupon bonds
  4.2 Forward swap rate
  4.3 No-arbitrage, changing numeraire and forward measure
  4.4 Pricing the embedded option
5 Two-factor Gaussian interest rate model
  5.1 Analytical price of a zero-coupon bond
  5.2 Calibration to real-market data
  5.3 Generating correlated stochastic processes
  5.4 Recap and prospect
6 SCR Calculation
  6.1 Nested Monte Carlo approach
  6.2 Least Squares Monte Carlo approach
    6.2.1 Theoretical overview
    6.2.2 Differences and similarities
    6.2.3 LSMC algorithm
7 Application
  7.1 10-year average of 7-year swap rate
  7.2 Results
    7.2.1 Nested Monte Carlo results
    7.2.2 Least Squares Monte Carlo results
    7.2.3 Analytical results
  7.3 Remarks
8 Conclusion
Bibliography

1 Introduction

In order to satisfy the requirements of the EU Solvency II Directive, insurance companies must hold an amount of capital to reduce the risk of insolvency. This amount is called the Solvency Capital Requirement (SCR). More formally, it is the capital requirement that ensures the insurance company can meet its obligations over the next 12 months with a probability of at least 99.5%.

The SCR differs across insurance companies and over time, and is usually calculated using simulation techniques. This procedure is also called an Asset Liability Management study (ALM study). In order to perform an adequate and realistic ALM study, the insurance company has to generate future scenarios of the economy over the next 12 months. These scenarios are simulated by computer, based on a model for the risk drivers. Insurance liabilities involve embedded options, which need to be priced at time 0 and also one year ahead. Since many embedded options are path-dependent, they have no analytical pricing formula, and simulations are needed. Embedded options are usually priced at time 1 by running a Monte Carlo simulation from every time-one scenario. This results in a nested Monte Carlo simulation method and is very time consuming. To illustrate: if a thousand possible scenarios over the first year are considered (also called 'outer scenarios') and in every scenario another thousand simulations (or 'inner scenarios') are used to calculate the Monte Carlo price at time 1, then the total number of simulations is one million (i.e. the number of outer scenarios times the number of inner scenarios). In addition, insurance companies need to calculate their SCR more than once for sensitivity analyses, and possibly more than once a year. All together, although nested simulation is a correct method, its computational effort is very large. Thus, a faster method is desired.

One solution might be to use the Least Squares Monte Carlo approach, which has the advantage of being efficient and accurate, even for a low number of Monte Carlo simulations. The gain in efficiency is mostly in terms of computational time. This method was first proposed by Longstaff and Schwartz (2001) in their seminal paper. They used it to price path-dependent equity options (American, Bermudan and other exotic options) consistently. In general, at any exercise time, the option holder optimally compares the payoff from immediate exercise with the expected payoff from continuation, and chooses to exercise if the immediate exercise payoff is higher. This optimal strategy is fundamentally determined by the conditional expectation of the payoff from continuing to hold the option. Basically, the least squares Monte Carlo (LSMC) approach estimates this conditional expectation from the cross-sectional information in the simulation by using least squares; hence the name Least Squares Monte Carlo.


In this thesis, I investigate the performance of the LSMC approach relative to the nested Monte Carlo approach, in terms of simulation time and quality, for determining the SCR when the portfolio of liabilities of an insurance company consists of only one embedded option.¹ More specifically, I consider a swap rate dependent embedded option. Asset holdings are assumed to be constant during the first year. In the context of risk management, the SCR will be the amount of capital required to be able to pay, with 99.5% probability, the future payoffs determined by a specified (stochastic) payoff function. For this, I use the swap rate dependent embedded option introduced in Pelsser and Plat (2009) as a benchmark, generating payoffs up to 50 years ahead. As said before, there exists no exact analytical pricing formula for most embedded options, but for the one considered in this thesis, Pelsser and Plat have derived an analytical approximation. Basically, the approximation is based on a Black-Scholes-like option-pricing formula, since the individual underlying rates are approximately normally distributed. This analytical approach will also be used, besides the nested Monte Carlo approach, to assess the performance of the LSMC approach. For a detailed derivation and explanation of the fundamentals of the analytical approximation, I refer to Pelsser and Plat (2009). The underlying interest rate model is the two-factor Gaussian model (G2++), used to simulate zero-coupon-bond prices and hence forward swap rates.

The remainder of this thesis is set up as follows. The next section discusses the background regarding Solvency II requirements. Section 3 covers the characteristics of the swap rate dependent embedded option. In Section 4, a mathematical background on pricing multiple payoffs using Monte Carlo simulations is given. After that, the specification of the underlying two-factor Gaussian interest rate model is discussed in Section 5. Section 6.2 discusses the Least Squares Monte Carlo algorithm for interest rate dependent options. In Section 7, numerical results are discussed; more specifically, the LSMC results are compared with (ordinary) nested Monte Carlo results as well as with analytical approximations. Finally, Section 8 summarizes my findings.

¹ In case of more than one liability, the LSMC approach can be extended to calculate several SCRs, possibly simultaneously (depending on the risk drivers of the individual liabilities), to find the aggregate SCR.

2 Solvency II Requirement

Applying LSMC to determine the SCR of a life insurer was first done by Bauer, Kiesel and Reuss (2010). They applied the technique in numerical experiments using the participating contract model introduced in Bauer, Kiesel and Ruß (2006). Their approach can be seen as taking the asset side of an insurer's balance sheet, while in this thesis the liability side is considered. In this section, a summary is given of Section 2.1 in Bauer et al. (2010), adapted to the liability side of the balance sheet. It contains formal (mathematical) definitions of the SCR, from which it will be clear why we need Monte Carlo simulations to determine the SCR.

The amount of capital that EU insurance companies must hold to reduce the risk of insolvency is prescribed by the Solvency II Directive. Seen from the liability side of the balance sheet, the SCR is a buffer that covers the risk of increasing liabilities over the next 12 months. Define the one-year loss function, evaluated at t = 0, as

$$ L(L_1; r, L_0) := e^{-r} L_1 - L_0, \tag{2.1} $$

where L_t denotes the liabilities at time t and r is the one-year risk-free spot rate at t = 0. Since r and L_0 are known at t = 0, the one-year loss function is a function of L_1 only. The SCR is then defined as the α-quantile of L, where α is set to 99.5%:

$$ \mathrm{SCR} := \operatorname*{argmin}_x \big\{ P(L < x) > \alpha \big\} \tag{2.2} $$
$$ \qquad\;\, = \operatorname*{argmin}_x \big\{ P(L > x) \le 1 - \alpha \big\}. \tag{2.3} $$

In words, the probability that the loss over one year exceeds the SCR is less than or equal to 1 − α.

The distribution of L depends on the model assumed for the risk factors (equity risk, interest rate risk) and on the dependence of the liabilities on those risk factors. Because this dependence is typically non-linear, a closed-form expression for the distribution function of L is usually not available. Therefore, Monte Carlo simulation is used to approximate this distribution. In this thesis, I use the definition in (2.2) to compare nested Monte Carlo simulation and least squares Monte Carlo (LSMC) in determining the SCR. As mentioned in the Introduction, nested Monte Carlo simulation is very time consuming, while LSMC is supposed to be a faster approach. This hypothesis will be tested in Section 7 for the embedded option described in the next section.


Both methods aim to approximate the complete probability distribution of L_1. Once we know all the possible outcomes of L_1, the SCR is (consistently) estimated by taking the α-quantile of these outcomes, discounting it to time 0 and subtracting the liabilities at time 0:

$$ \widehat{\mathrm{SCR}} = e^{-r} \hat{L}_1^{(m)} - L_0, \tag{2.4} $$

where m = ⌊N_1 · α + 0.5⌋, with N_1 the number of outer simulations and L̂_1^{(m)} the m-th order statistic of the simulated outcomes. In the next sections, I will explain how to determine the complete probability distribution of L_1 in the case of a swap rate dependent embedded option.
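In practice the estimator (2.4) reduces to sorting the simulated time-1 liability values and picking one order statistic. A minimal Matlab sketch, where the vector L1 of simulated liability values, the spot rate r and the time-0 value L0 are hypothetical inputs assumed to be available:

% --- Matlab sketch: empirical SCR estimator of (2.4) ---
% Assumed inputs (hypothetical): L1 (N1 x 1 vector of simulated time-1
% liability values), r (one-year risk-free spot rate), L0 (liabilities at 0).
alpha  = 0.995;
N1     = numel(L1);
Lsort  = sort(L1);                  % order statistics of L1
m      = floor(N1 * alpha + 0.5);   % index m in (2.4)
SCRhat = exp(-r) * Lsort(m) - L0;   % estimator (2.4)
% --- end sketch ---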

3 Swap Rate Dependent Embedded Option

Embedded options are among the more complex features in insurance products. Moreover, they are an important part of the market valuation of liabilities. The aim of this thesis is to determine the SCR for insurance companies in the case where the liabilities consist of only one embedded option. This section explains the characteristics of an embedded option which is a very common insurance product in Europe: a profit sharing rule based on a moving-average fixed income rate, combined with a minimum guarantee. This specific embedded option is also discussed and priced in Pelsser and Plat (2009). They derived an analytical approximation for its price at time 0 and compared it with its Monte Carlo price.

3.1 Profit sharing payoff function

In words, the swap rate dependent embedded option can be described as follows. At time 0, the policyholder pays an amount of cash, P_0. In turn, he receives a payoff PS(t) at time t, where t ∈ {T_1, ..., T_n}. The payoff PS(t) can be seen as an interest rate percentage times a notional S(t). The contract can be seen as a strip of call options, in which case the policyholder gains from upside potential (if the interest rate increases), but is protected against downside risk (if the interest rate decreases).

In cases where the profit sharing rate depends on a certain fixed income rate, the exact rate is either very complex or not completely known, or implied volatilities are not available in the market. In practice, these kinds of options are therefore often valued using an (average) forward swap rate as an approximation for the profit sharing rate. In that case, the profit sharing payoff PS(t) in year t is

$$ PS(t) = S(t)\,\max\{c\,(R(t) - K(t)),\, 0\}, \tag{3.1} $$

where S(t) is the profit sharing basis, c is the percentage that is distributed to the policyholder and K(t) is the strike of the option. The strike equals the sum of the technical interest rate TR(t) and a margin (i.e. K(t) = TR(t) + margin). R(t) is the profit sharing rate and is a (weighted) average of historic and forward swap rates.

The payoff function given in (3.1) can be seen as a call option on a rate R(t) that matures at time t, and has to be valued using option valuation techniques. In this thesis, I assume the profit sharing is paid directly.² In the Netherlands, it is common to base the profit sharing rate R(t) on a moving average of the u-rate.

² Pelsser and Plat (2009) also investigate the case in which all payoffs are compounded and paid at the end of the contract.


The u-rate is the 3-month average of u-rate parts, where the subsequent parts are weighted averages of an effective return on a basket of government bonds. This leads to a complicated expression, and no implied volatilities are available for government bonds. Therefore, it is common practice to approximate the u-rate by a swap rate. That gives the following expression for R(t):

$$ R(t) = \frac{1}{n} \sum_{i=t-n+1}^{t} y_{i,i+m}(i), \tag{3.2} $$

where y_{t,t+m}(t) is the m-year swap rate at time t. Thus, in order to price the embedded option, we need to simulate forward m-year swap rates. The next section discusses how to calculate these rates from zero-coupon-bond prices.
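To make (3.1) and (3.2) concrete, here is a minimal Matlab sketch computing one payoff. The inputs (a vector y of m-year swap rates per year, a basis vector S, strike vector K and participation c) are hypothetical placeholders, not the thesis's actual data:

% --- Matlab sketch: profit sharing rate (3.2) and payoff (3.1) ---
% Assumed inputs (hypothetical): y (vector, y(i) = m-year swap rate at
% time i, historical or simulated), S and K (vectors of bases/strikes), c.
n  = 10;                              % moving-average window
t  = 15;                              % example payoff date, t >= n
R  = mean(y(t-n+1:t));                % R(t) in (3.2)
PS = S(t) * max(c * (R - K(t)), 0);   % payoff PS(t) in (3.1)
% --- end sketch ---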

4 Mathematical Framework

Before I move to the mathematical characteristics and applications of the two-factor Gaussian interest rate model (G2++), I will first introduce some terminology and definitions regarding interest rates and valuation principles. Most of it can be found in Brigo and Mercurio (2007) in more detail, but for completeness, I will repeat specific results here which will be used in later sections. After that, I will derive the no-arbitrage price of the embedded option in Section 4.4 and conclude that the key step in pricing the option is to simulate zero-coupon-bond prices. Section 5 gives a detailed discussion of how to use the G2++ model in Monte Carlo simulations.

4.1 Discount factors and zero-coupon bonds

Since interest rates are assumed to be stochastic (meaning they can be modeled by a stochastic differential equation, SDE), discount factors D(t, T) are closely related, but not equal, to the value of a zero-coupon bond P(t, T). A stochastic discount factor between two time instants t and T is the amount at time t that is 'equivalent' to one unit of currency paid at time T:

$$ D(t, T) = \frac{B(t)}{B(T)} = \exp\Big( -\int_t^T r_u\,du \Big), \tag{4.1} $$

where

$$ B(t) = \exp\Big( \int_0^t r_u\,du \Big) \tag{4.2} $$

is the value of a bank account at time t ≥ 0, with B(0) = 1 and r_t the instantaneous interest rate at time t. Equation (4.2) implies that B(t) evolves over time according to the differential equation

$$ dB(t) = r_t B(t)\,dt. \tag{4.3} $$

A zero-coupon bond is a contract that guarantees its holder one unit of currency at time T. The contract value at time t ≤ T is denoted by P(t, T). The difference between D(t, T) and P(t, T) is that the former is an 'equivalent amount of currency' and the latter is the 'value of a contract'. The relationship between the two is given by

$$ P(t, T) = E_t\Big[ \exp\Big( -\int_t^T r_u\,du \Big) \Big] = E_t[D(t, T)], \tag{4.4} $$

where E_t[·] is the expectation conditional on the σ-field F_t under the risk-neutral measure. In the following subsections, it will be clear that zero-coupon bonds are the most important elements in pricing the embedded option. Section 5.1 discusses in detail how to calculate these bond prices using the G2++ model.

4.2 Forward swap rate

Since the profit sharing payoff at time t depends on the swap rate at time t (and n − 1 historical u-rates, see equation (3.2)), we have to calculate these forward swap rates. The swap rate y_{n,n+m} is the par swap rate at which a person would like to enter into a swap contract with a value of 0, starting at time T_n (first payment at time T_{n+1}) and lasting until T_{n+m}. We can express these rates in terms of zero-coupon bonds. In general, the formula for the forward swap rate at time t is given by

$$ y_{n,n+m}(t) = \frac{P(t, T_n) - P(t, T_{n+m})}{\sum_{i=n+1}^{n+m} \tau_i\, P(t, T_i)}. \tag{4.5} $$

In this thesis, the day-count convention is constant, with τ_i equal to 1 in each period.

To summarize, once we know the bond prices P(t, T) for different maturities T, we know the forward swap rate at t, hence the underlying rate R(t) and the profit sharing payoff PS(t).
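A small Matlab sketch of (4.5); fwdSwapRate is a hypothetical helper, assuming unit day-count fractions as in the text and a vector of bond prices indexed by maturity:

% --- Matlab sketch: forward swap rate (4.5) with tau_i = 1 ---
function y = fwdSwapRate(P, n, m)
% P: vector, P(i) = price at time t of a zero-coupon bond maturing
%    at T_i (hypothetical input); n: swap start index; m: tenor in years.
annuity = sum(P(n+1:n+m));        % sum of tau_i * P(t, T_i), tau_i = 1
y = (P(n) - P(n+m)) / annuity;    % equation (4.5)
end
% --- end sketch ---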

4.3 No-arbitrage, changing numeraire and forward measure

The fundamental assumption in option valuation (Black and Scholes, 1973) is the absence of arbitrage: it is impossible to invest nothing today while earning money on that investment in the future. Equivalently, two portfolios having the same payoff at a given time in the future must have the same price today. Using this assumption in combination with the valuation formulas below, we should be able to price the embedded option.

The valuation formula (or risk-neutral valuation formula) is given by

$$ \pi_t = E^{\mathbb{Q}}[D(t, T)\, H_T \mid \mathcal{F}_t], \tag{4.6} $$

where E^Q[·] denotes expectation under the risk-neutral measure Q, equivalent to P, and π_t is the price at time t of the attainable contingent claim³ H_T. Notice that equation (4.6) implies (4.4) if H_T = 1. F_t is the natural filtration (it contains all information concerning the stochastic process on the interval [0, t]). However, since we have stochastic interest rates, the presence of the stochastic discount factor D(t, T) complicates the calculation of the expectation in (4.6). The solution is to change to another probability measure, equivalent to P.

³ Formal definitions of equivalent martingale measures, self-financing trading strategies and an attainable contingent claim are given in Chapter 2 of Brigo and Mercurio (2007).

Geman et al. (1995) provide a proposition that generalizes (4.6) and can be used to price a derivative with respect to any numeraire (other than the bank account B(t)). I repeat it here, because it is needed to price our embedded option later.

Proposition 4.1 Let U be an arbitrary numeraire. Then the absence of arbitrage implies that there exists a probability measure Q^U, equivalent to the initial measure P, such that the price of any attainable claim Y normalized by U is a martingale under Q^U, i.e.,

$$ \frac{Y_t}{U_t} = E^{U}\Big[ \frac{Y_T}{U_T} \,\Big|\, \mathcal{F}_t \Big], \qquad 0 \le t \le T. \tag{4.7} $$

In our situation, a useful numeraire is the zero-coupon bond whose maturity T equals the maturity of the longest insurance liability (i.e. the embedded option). The proposed numeraire U_t is P(t, T), and U_T = P(T, T) = 1, so the embedded option can be priced by calculating the expectation of H_T divided by one. The corresponding measure is called the T-forward measure and is denoted by Q^T; the related expectation is denoted by E^T.

Applying proposition 4.1 using the above settings, the price of the embedded option πt at time t is given by

πt P (t, T ) = E T  HT P (T, T )|Ft  πt= P (t, T )ET[HT|Ft], (4.8)

for 0 ≤ t ≤ T , and where HT is the claim payoff at time T .

4.4 Pricing the embedded option

So far, we have repeated basic interest rate relationships and pricing formulas that we need in order to calculate the payoffs PS(t). Now we use the theory discussed in the previous subsections and move to the no-arbitrage price of the n-year average of m-year swap rate embedded option characterized in Section 3.1. To start the derivation of the theoretical price of the embedded option, we notice that the value of a (known) payoff at time T > t is equal to the value of the payoff at time S > T divided by P(T, S). The following proposition formalizes this.

(12)

Proposition 4.2 If H is an F_T-measurable random variable, we have the identity

$$ E^{\mathbb{Q}}[D(t, T)\, H \mid \mathcal{F}_t] = E^{\mathbb{Q}}\Big[ \frac{D(t, S)\, H}{P(T, S)} \,\Big|\, \mathcal{F}_t \Big], \tag{4.9} $$

for all t < T < S. The proof uses the tower property of conditional expectations and the relationship D(t, S) = D(t, T) D(T, S).

Assume that the embedded option has n payoffs PS(T_i), determined and known at T_i, for i ∈ {1, ..., n}, each directly paid to the policyholder (see Section 3.1). Then the value of the liability (or simply the price) at time t < T_1, L_t, is given by

$$ L_t = P_t = \sum_{i=1}^{n} E^{\mathbb{Q}}[D(t, T_i)\, PS(T_i) \mid \mathcal{F}_t] \tag{4.10} $$
$$ \qquad\quad\;\, = \sum_{i=1}^{n} P(t, T_i)\, E^{T_i}[PS(T_i) \mid \mathcal{F}_t]. \tag{4.11} $$

However, the last expression calculates the price using n different forward measures, which is not convenient. Therefore, we use Proposition 4.2 and consider just one forward measure, namely the T_n-forward measure (the date of the longest maturity). By equation (4.9) we have

$$ E^{\mathbb{Q}}[D(t, T_i)\, PS(T_i) \mid \mathcal{F}_t] = E^{\mathbb{Q}}\Big[ \frac{D(t, T_n)\, PS(T_i)}{P(T_i, T_n)} \,\Big|\, \mathcal{F}_t \Big], \tag{4.12} $$

so we obtain the value of the liability at time t < T_1 as

$$ L_t = P_t = \sum_{i=1}^{n} E^{\mathbb{Q}}\Big[ \frac{D(t, T_n)\, PS(T_i)}{P(T_i, T_n)} \,\Big|\, \mathcal{F}_t \Big] \tag{4.13} $$
$$ \qquad\quad\;\, = P(t, T_n)\, E^{T_n}\Big[ \sum_{i=1}^{n} \frac{PS(T_i)}{P(T_i, T_n)} \,\Big|\, \mathcal{F}_t \Big]. \tag{4.14} $$

Thus, in Monte Carlo pricing, the last expression can be used to simulate the evolution of the underlying variables under a unique measure, the T_n-forward measure. As mentioned earlier in this section, determining the bond prices is the most important part of pricing our embedded option. Recall that the profit sharing rate only depends on the bond prices; all other factors of the payoff at time t are deterministic and known. The next section describes how to calculate the value of the embedded option L_t using (4.14).
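As a sketch of how (4.14) translates into a Monte Carlo estimator: simulate paths under the T_n-forward measure and, in each path, sum the payoffs deflated by the bond prices P(T_i, T_n). The two matrices below are hypothetical inputs produced by such a simulation:

% --- Matlab sketch: Monte Carlo estimator of (4.14) ---
% Assumed inputs (hypothetical), simulated under the Tn-forward measure:
%   PS   : N x n matrix, PS(j,i) = payoff PS(T_i) in path j
%   Pmat : N x n matrix, Pmat(j,i) = bond price P(T_i, T_n) in path j
%   PtTn : scalar, today's bond price P(t, T_n)
deflated = sum(PS ./ Pmat, 2);    % sum_i PS(T_i)/P(T_i,T_n), per path
Lt = PtTn * mean(deflated);       % Monte Carlo estimate of L_t in (4.14)
% --- end sketch ---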

5 Two-factor Gaussian interest rate model

In the previous section, the derivation of the theoretical price of the option was outlined and resulted in the expression in (4.14). However, the expectation in (4.14) depends on future interest rates: the payoffs PS(T_i) at T_i depend on the underlying rate R(T_i), which in turn depends on forward swap rates, and the zero-coupon-bond prices P(T_i, T_n) in the denominator depend on future spot rates. Therefore, we need a stochastic interest rate model to simulate these future spot rates.

There is a wide variety of stochastic interest rate models in the academic literature, each with upsides and downsides depending on the application. Starting with the endogenous term-structure one-factor models, for example the Vasicek (1977) model, the Dothan (1978) model and the Cox, Ingersoll and Ross (1985) model: these models treat the current term structure of rates as output rather than as input of the model. Moreover, the Vasicek model allows rates to take negative values with positive probability (in contrast to the Cox, Ingersoll and Ross (CIR) model, which guarantees positive rates by construction). In addition, due to their one-factor dependence, these models are less attractive when long-term rates are also of interest. In practice, for example, the thirty-year interest rate at a given instant is not perfectly correlated with the three-month rate at the same instant, while one-factor models do not account for this non-perfect correlation. Therefore, some of these models (and more) were extended to capture this non-perfect correlation of rates of different maturities. The solution was to add another (non-perfectly correlated) stochastic factor, i.e. the two-factor interest rate models.⁴

Although there are other two-factor models for the short rate, I will use the Gaussian two-factor interest rate model (G2++). The instantaneous-short-rate process r(t) is given by the sum of two correlated Gaussian factors x(t) and y(t), plus a deterministic function ϕ(t), which is chosen such that the model exactly fits the current term structure of discount factors. Besides being preferred to one-factor models by the correlation argument, this model is analytically tractable in that explicit formulas for discount factors can readily be derived from its factors and model parameters. Unfortunately, the drawback regarding negative rates with positive probability remains in the two-factor variant. Finally, this model is chosen for practical reasons: Pelsser and Plat (2009) used it too, to derive analytical approximations of the embedded option in Section 3.1, so that in Section 7 I can replicate their results to some extent and use them as a benchmark to test the LSMC technique.

⁴ In general, the more factors one adds to a model, the better the model fits market data, so some authors refer to 'multi-factor' interest rate models. However, historical analysis of the whole yield curve (JPY, USD and DEM data) has shown that one factor explains 68% to 76% of the total variation, two factors 85% to 90%, and three factors 93% to 94% of variations in the yield curve (Jamshidian & Zhu, 1997).

Mathematically, the G2++ model can be written as

$$ r(t) = x(t) + y(t) + \varphi(t), \qquad r(0) = r_0, \tag{5.1} $$

where the processes (or factors) {x(t) : t ≥ 0} and {y(t) : t ≥ 0} satisfy

$$ dx(t) = -a\,x(t)\,dt + \sigma\,dW_1(t), \qquad x(0) = 0, $$
$$ dy(t) = -b\,y(t)\,dt + \eta\,dW_2(t), \qquad y(0) = 0, \tag{5.2} $$

with dW_1(t) dW_2(t) = ρ dt, −1 ≤ ρ ≤ 1, where r_0, a, b, σ, η are positive constants and the function ϕ(t) is deterministic and well defined on the time interval [0, T*], with T* a given time horizon (Brigo and Mercurio, 2007).

As mentioned before, one of the advantages of the G2++ model is its analytical tractability: zero-coupon-bond prices P(t, T) can be expressed in terms of the two factors x(t) and y(t). P(t, T) is said to be an 'affine function' of the two factors x and y and is given in equation (5.7). The following subsection outlines the steps leading to this analytical expression.

5.1 Analytical price of a zero-coupon bond

In order to find an expression for the zero-coupon-bond price in (4.4), first substitute the model (5.1) into the integral in (4.4) and note that (5.1) consists of a stochastic part x(t) + y(t) (of which the integral ∫_t^T [x(u) + y(u)] du is normally distributed with mean M(t, T) and variance V(t, T)) and a deterministic part ϕ(t). Then use the fact that if Z ∼ N(µ, σ²), then E{exp(Z)} = exp(µ + ½σ²), to see that

$$ P(t, T) = E\Big\{ \exp\Big( -\int_t^T r(u)\,du \Big) \Big\} = E\Big\{ \exp\Big( -\int_t^T [x(u) + y(u) + \varphi(u)]\,du \Big) \Big\} $$
$$ \qquad\;\; = \exp\Big( -\int_t^T \varphi(u)\,du - M(t, T) + \tfrac{1}{2} V(t, T) \Big), \tag{5.3} $$

where

$$ M(t, T) = \frac{1 - e^{-a(T-t)}}{a}\, x(t) + \frac{1 - e^{-b(T-t)}}{b}\, y(t) \tag{5.4} $$

and

$$ V(t, T) = \frac{\sigma^2}{a^2}\Big( T - t + \frac{2}{a} e^{-a(T-t)} - \frac{1}{2a} e^{-2a(T-t)} - \frac{3}{2a} \Big) + \frac{\eta^2}{b^2}\Big( T - t + \frac{2}{b} e^{-b(T-t)} - \frac{1}{2b} e^{-2b(T-t)} - \frac{3}{2b} \Big) $$
$$ \qquad\;\; + \frac{2\rho\sigma\eta}{ab}\Big( T - t + \frac{e^{-a(T-t)} - 1}{a} + \frac{e^{-b(T-t)} - 1}{b} - \frac{e^{-(a+b)(T-t)} - 1}{a + b} \Big). \tag{5.5} $$

Now, model (5.1) fits the currently observed term structure of discount factors if and only if, for each T,

$$ \exp\Big( -\int_t^T \varphi(u)\,du \Big) = \frac{P^M(0, T)}{P^M(0, t)} \exp\Big\{ -\tfrac{1}{2}\big[ V(0, T) - V(0, t) \big] \Big\}. \tag{5.6} $$

Substituting (5.6) into (5.3), the zero-coupon-bond prices at time t are given by

$$ P(t, T) = \frac{P^M(0, T)}{P^M(0, t)} \exp\{ A(t, T) \}, \qquad A(t, T) := \tfrac{1}{2}\big[ V(t, T) - V(0, T) + V(0, t) \big] - M(t, T). \tag{5.7} $$

Formal proofs of the intermediate steps can be found in Brigo and Mercurio (2007). Notice that we do not need to know the exact function ϕ(t) to fit the current term structure of discount factors exactly, since the specification in (5.7) already accounts for it. Hence, these equations can easily be programmed in Matlab, although they look lengthy at first sight. It also shows that P(t, T) depends on the model parameters a, b, σ, η and ρ, as well as the stochastic processes x and y, the time variables t and T, and the term structure of discount factors observed in the market (denoted by P^M(0, ·)). The next subsections discuss how to obtain these model parameters and how to generate such stochastic processes under the correct probability measure.
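As an illustration, a Matlab sketch of (5.4)-(5.7); PM is a hypothetical function handle for the observed discount curve P^M(0,·), for instance an interpolation of market discount factors:

% --- Matlab sketch: G2++ zero-coupon-bond price, equations (5.4)-(5.7) ---
function P = zcbG2pp(t, T, x, y, a, b, sigma, eta, rho, PM)
% PM: function handle returning the market discount factor PM(0,s)
%     (hypothetical input). x, y: factor values at time t.
M = (1 - exp(-a*(T-t)))/a * x + (1 - exp(-b*(T-t)))/b * y;       % (5.4)
V = @(s,u) sigma^2/a^2 * (u-s + 2/a*exp(-a*(u-s)) ...
        - 1/(2*a)*exp(-2*a*(u-s)) - 3/(2*a)) ...
    + eta^2/b^2 * (u-s + 2/b*exp(-b*(u-s)) ...
        - 1/(2*b)*exp(-2*b*(u-s)) - 3/(2*b)) ...
    + 2*rho*sigma*eta/(a*b) * (u-s + (exp(-a*(u-s))-1)/a ...
        + (exp(-b*(u-s))-1)/b - (exp(-(a+b)*(u-s))-1)/(a+b));    % (5.5)
A = 0.5*(V(t,T) - V(0,T) + V(0,t)) - M;                          % (5.7)
P = PM(T)/PM(t) * exp(A);                                        % (5.7)
end
% --- end sketch ---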

5.2 Calibration to real-market data

In order to fit the G2++ model, real-market volatility data is required. I will briefly consider a calibration procedure based on swaption volatilities (another possibility is to use cap volatilities, for instance). For a more detailed discussion on calibration to real-market data, see for example Brigo and Mercurio (2007).

Assume the availability of swaption-volatility quotes for different swaption maturities and tenors of the underlying swaps. The five G2++ model parameters are then obtained by minimizing (possibly with a numerical minimization algorithm) the sum of squares of the percentage differences between model and market swaption prices. I will not go any further into this topic here, since I will not use it in the remainder of my thesis.⁵

5.3 Generating correlated stochastic processes

Once the five parameters are obtained from the calibration procedure, the final step in calculating zero-coupon-bond prices using (5.7) is to generate correlated stochastic paths for x(t) and y(t). These processes partially determine M(t, T) in (5.4). This subsection describes how to generate these correlated stochastic paths under the correct probability measure (the T_n-forward measure).

Recall from Section 3.1 that we assumed that n profit sharing payoffs are paid at the discrete times t ∈ {T_1, T_2, ..., T_n}, where for simplicity the time steps in the simulation procedure will be set to 1 (i.e. T_i − T_{i−1} = 1 for 2 ≤ i ≤ n). These are also the times at which we need the term structure. Note that the model in (5.1) treats time as a continuous variable, so for our application we have to discretize the processes and use dt = ∆t = 1 instead. Simple integration of equations (5.2) leads to the explicit solutions, for each s < t,

$$ x(t) = x(s)\, e^{-a(t-s)} + \sigma \int_s^t e^{-a(t-u)}\,dW_1(u), $$
$$ y(t) = y(s)\, e^{-b(t-s)} + \eta \int_s^t e^{-b(t-u)}\,dW_2(u). \tag{5.8} $$

These equations can be discretized for use in Monte Carlo simulations by the change of variables t := t + ∆t and s := t, which gives the processes

$$ x(t + \Delta t) = x(t)\, e^{-a\Delta t} + \sigma \int_t^{t+\Delta t} e^{-a(t+\Delta t-u)}\,dW_1(u), $$
$$ y(t + \Delta t) = y(t)\, e^{-b\Delta t} + \eta \int_t^{t+\Delta t} e^{-b(t+\Delta t-u)}\,dW_2(u). \tag{5.9} $$

Notice that both stochastic integrals in (5.9) are of the form J_{s,t} = ∫_s^t f(u) dW(u). Now use the property that if f(u) is non-stochastic, then J_{s,t} ∼ N(0, ∫_s^t f(u)² du). Applying this property with f(u) = e^{−a(t+∆t−u)} to the stochastic process for x(t + ∆t), we see that

$$ \int_t^{t+\Delta t} e^{-a(t+\Delta t-u)}\,dW_1(u) \;\sim\; N\Big( 0,\; \int_t^{t+\Delta t} e^{-2a(t+\Delta t-u)}\,du \Big). \tag{5.10} $$

Evaluating the variance results in the following expression:

$$ \mathrm{Var} = \frac{1}{2a}\Big[ e^{-2a(t+\Delta t-u)} \Big]_{u=t}^{u=t+\Delta t} = \frac{1}{2a}\big( 1 - e^{-2a\Delta t} \big). \tag{5.11} $$

Of course, the same reasoning holds for the process for y(t + ∆t) in (5.9). In summary, taking ∆t = 1, generate the correlated discretized processes for x(t) and y(t) by

$$ x(t+1) = x(t)\, e^{-a} + \sqrt{ \frac{\sigma^2}{2a}\big( 1 - e^{-2a} \big) }\; Z_x, $$
$$ y(t+1) = y(t)\, e^{-b} + \sqrt{ \frac{\eta^2}{2b}\big( 1 - e^{-2b} \big) }\; Z_y, \tag{5.12} $$

where Z_x and Z_y are generated according to the following procedure, using a Cholesky decomposition.

First generate random normal variables V = (V_1, V_2)′ according to

$$ \begin{pmatrix} V_1 \\ V_2 \end{pmatrix} \sim N\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \right). \tag{5.13} $$

Second, take ∆t = 1 in (5.9):

$$ x(t+1) = x(t)\, e^{-a} + \sigma \int_t^{t+1} e^{-a(t+1-u)}\,dW_1(u), $$
$$ y(t+1) = y(t)\, e^{-b} + \eta \int_t^{t+1} e^{-b(t+1-u)}\,dW_2(u), \tag{5.14} $$

with dW_1(t)² = dW_2(t)² = dt and dW_1(t) dW_2(t) = ρ dt. Define

$$ U_x = \sigma \int_t^{t+1} e^{-a(t+1-u)}\,dW_1(u), \qquad U_y = \eta \int_t^{t+1} e^{-b(t+1-u)}\,dW_2(u), $$

so that x(t+1) = x(t) e^{−a} + U_x and y(t+1) = y(t) e^{−b} + U_y. From the Itô isometry, it follows that

$$ \sigma_x^2 = \mathrm{var}(U_x) = \sigma^2 \int_t^{t+1} e^{-2a(t+1-u)}\,du = \frac{\sigma^2 (1 - e^{-2a})}{2a}, $$
$$ \sigma_y^2 = \mathrm{var}(U_y) = \eta^2 \int_t^{t+1} e^{-2b(t+1-u)}\,du = \frac{\eta^2 (1 - e^{-2b})}{2b}, $$

and

$$ \sigma_{xy} = \mathrm{cov}(U_x, U_y) = \sigma\eta\rho \int_t^{t+1} e^{-(a+b)(t+1-u)}\,du = \frac{\sigma\eta\rho\,(1 - e^{-(a+b)})}{a + b}. $$

The correlation between U_x and U_y is given by

$$ \rho_{xy} = \frac{\sigma_{xy}}{\sigma_x \sigma_y} = \rho\, \frac{(1 - e^{-(a+b)})/(a+b)}{\sqrt{ (1 - e^{-2a})(1 - e^{-2b})/(4ab) }}. $$

In case a = b, the correlation simplifies to ρ_{xy} = ρ.

If we define Z_x = U_x/σ_x and Z_y = U_y/σ_y, then x and y can be simulated according to (5.9). Next, perform a Cholesky decomposition C (a 2 × 2 matrix) of the correlation matrix R of (Z_x, Z_y), where

$$ R = \begin{pmatrix} 1 & \rho_{xy} \\ \rho_{xy} & 1 \end{pmatrix}. \tag{5.15} $$

Finally, Z_x and Z_y in (5.12) are obtained by calculating C′V. Matlab performs the Cholesky decomposition with a built-in function; for a more detailed discussion of the mathematics behind the Cholesky decomposition, I refer to other sources.

However, as concluded in Section 4.4, we price the option by Monte Carlo simulation under the unique T_n-forward measure, while the above expressions hold under the risk-neutral measure Q. As a consequence, we must change the probability measure from Q to Q^{T_n}. The following lemma yields the dynamics of x and y under Q^{T_n}, or simply Q^T.

Lemma 5.1 The processes x and y evolve under the forward measure Q^T according to

$$ dx(t) = \Big( -a\,x(t) - \frac{\sigma^2}{a}\big( 1 - e^{-a(T-t)} \big) - \frac{\rho\sigma\eta}{b}\big( 1 - e^{-b(T-t)} \big) \Big)\,dt + \sigma\,dW_1^T(t), $$
$$ dy(t) = \Big( -b\,y(t) - \frac{\eta^2}{b}\big( 1 - e^{-b(T-t)} \big) - \frac{\rho\sigma\eta}{a}\big( 1 - e^{-a(T-t)} \big) \Big)\,dt + \eta\,dW_2^T(t), \tag{5.16} $$

where W_1^T and W_2^T are two correlated Brownian motions under Q^T with dW_1^T(t) dW_2^T(t) = ρ dt.

Moreover, the explicit solutions of equations (5.16) are, for s ≤ t ≤ T,

$$ x(t) = x(s)\, e^{-a(t-s)} - M_x^T(s, t) + \sigma \int_s^t e^{-a(t-u)}\,dW_1^T(u), $$
$$ y(t) = y(s)\, e^{-b(t-s)} - M_y^T(s, t) + \eta \int_s^t e^{-b(t-u)}\,dW_2^T(u), \tag{5.17} $$

where

$$ M_x^T(s, t) = \Big( \frac{\sigma^2}{a^2} + \rho\frac{\sigma\eta}{ab} \Big)\big[ 1 - e^{-a(t-s)} \big] - \frac{\sigma^2}{2a^2}\big[ e^{-a(T-t)} - e^{-a(T+t-2s)} \big] - \frac{\rho\sigma\eta}{b(a+b)}\big[ e^{-b(T-t)} - e^{-bT-at+(a+b)s} \big], $$
$$ M_y^T(s, t) = \Big( \frac{\eta^2}{b^2} + \rho\frac{\sigma\eta}{ab} \Big)\big[ 1 - e^{-b(t-s)} \big] - \frac{\eta^2}{2b^2}\big[ e^{-b(T-t)} - e^{-b(T+t-2s)} \big] - \frac{\rho\sigma\eta}{a(a+b)}\big[ e^{-a(T-t)} - e^{-aT-bt+(a+b)s} \big], $$

so that, under Q^T, the distribution of r(t) conditional on F_s is normal with mean and variance given respectively by

$$ E^T[r(t) \mid \mathcal{F}_s] = x(s)\, e^{-a(t-s)} - M_x^T(s, t) + y(s)\, e^{-b(t-s)} - M_y^T(s, t) + \varphi(t), $$
$$ \mathrm{Var}^T[r(t) \mid \mathcal{F}_s] = \frac{\sigma^2}{2a}\big[ 1 - e^{-2a(t-s)} \big] + \frac{\eta^2}{2b}\big[ 1 - e^{-2b(t-s)} \big] + \frac{2\rho\sigma\eta}{a+b}\big[ 1 - e^{-(a+b)(t-s)} \big]. \tag{5.18} $$

The proof can be found in Appendix B of Chapter 4 in Brigo and Mercurio (2007). Using the explicit solutions given in (5.17) with the same reasoning as before (the change of variables t := t + ∆t = t + 1 and s := t, and the property of a stochastic integral with non-stochastic integrand f(u)) results in the exact discrete processes

$$ x(t+1) = x(t)\, e^{-a} - M_x^T(t, t+1) + \sqrt{ \frac{\sigma^2}{2a}\big( 1 - e^{-2a} \big) }\; Z_x, $$
$$ y(t+1) = y(t)\, e^{-b} - M_y^T(t, t+1) + \sqrt{ \frac{\eta^2}{2b}\big( 1 - e^{-2b} \big) }\; Z_y, \tag{5.19} $$

with Z_x and Z_y as in (5.12).
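The discretization (5.19) can be coded directly. A sketch of one simulation step under the T-forward measure, assuming the parameters, horizon T, current time t, current states x and y, and correlated draws Zx and Zy (generated as in the previous sketch) are available; all variable names are placeholders:

% --- Matlab sketch: one step of (5.19) under the T-forward measure ---
s = t;  u = t + 1;   % step from time t to t+1
MxT = (sigma^2/a^2 + rho*sigma*eta/(a*b)) * (1 - exp(-a*(u-s))) ...
    - sigma^2/(2*a^2) * (exp(-a*(T-u)) - exp(-a*(T+u-2*s))) ...
    - rho*sigma*eta/(b*(a+b)) * (exp(-b*(T-u)) - exp(-b*T - a*u + (a+b)*s));
MyT = (eta^2/b^2 + rho*sigma*eta/(a*b)) * (1 - exp(-b*(u-s))) ...
    - eta^2/(2*b^2) * (exp(-b*(T-u)) - exp(-b*(T+u-2*s))) ...
    - rho*sigma*eta/(a*(a+b)) * (exp(-a*(T-u)) - exp(-a*T - b*u + (a+b)*s));
x = x*exp(-a) - MxT + sqrt(sigma^2/(2*a) * (1 - exp(-2*a))) .* Zx;  % (5.19)
y = y*exp(-b) - MyT + sqrt(eta^2/(2*b) * (1 - exp(-2*b))) .* Zy;    % (5.19)
% --- end sketch ---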


5.4 Recap and prospect

The previous sections served as a mathematical framework to derive the theoretical price/value of the swap rate dependent embedded option introduced in Section 3. We derived the value under Q^{T_n}, which resulted in the expression in (4.14), and concluded that the most important part is to calculate (future) zero-coupon-bond prices under the same probability measure. Those prices were derived analytically at the beginning of Section 5.1 using the G2++ model. The underlying stochastic processes were discretized, resulting in (5.19).

The next sections cover two approaches to calculate the Solvency Capital Requirement using the theory and formulas from the past sections. First, the nested Monte Carlo approach is discussed. Second, the alternative, hypothetically faster, Least Squares Monte Carlo approach will be discussed. After that, an application to a specific swap rate embedded option will be studied and results will be discussed.

6 SCR Calculation

In order to calculate the Solvency Capital Requirement (SCR), we need to calculate the value of the liabilities today (at t = 0, L_0), as well as the 99.5th quantile of the liabilities' value next year (at t = 1, L_1); see equation (2.4). Due to the complexity of the liabilities, in general there exists no analytical expression for their value at time 0 (or at any other time). Therefore, risk managers rely on simulation methods to calculate the SCR. For L_0, one simply uses N Monte Carlo simulations from t = 0 until t = T, the longest maturity of the liabilities (under Q^T), and calculates the Monte Carlo average. However, for the estimation of the 99.5th quantile of L_1, one needs to estimate the probability distribution of L_1. This section discusses two methods: the more traditional 'nested Monte Carlo approach' and the more sophisticated 'Least Squares Monte Carlo approach'. As mentioned in previous sections, these methods will be applied in a situation in which the liabilities of an insurance company consist of only one liability and assets are constant. In a more complex (or realistic) world, where N liabilities appear on a balance sheet, the value of the liabilities at time t can be calculated as the sum of its individual parts at time t:

$$ L_t = L_t^1 + \ldots + L_t^N = \sum_{i=1}^{N} L_t^i, $$

where each L_t^i might be valued using an analytical expression, a simulation study as considered in this thesis (nested simulation or LSMC), or some other method. The aggregate SCR for the insurance company can then be calculated as the sum of the individual SCRs.

6.1 Nested Monte Carlo approach

A graphical illustration of the nested Monte Carlo approach is given in Figure 1. This method is traditionally used by risk managers, but can be quite burdensome due to its long simulation time; therefore, faster methods are desired. To provide a baseline for comparison, I first discuss the nested Monte Carlo approach.

[Figure 1: Graphical illustration of the nested Monte Carlo approach. Source: Moody's Analytics]

Looking at Figure 1, it becomes clear that the total number of simulations can be significant, increasing the calculation time for the probability distribution of L_1. First, generate N_1 'outer scenarios': real-world scenarios for the risk drivers during the first year (illustrated with red arrows). In this case, the (economic) risk drivers are the stochastic processes x(t) and y(t) in (5.19), since these factors determine the zero-coupon-bond prices, hence the underlying profit sharing rate R(t) and the profit sharing payoff PS(t). In practice, there are several ways of generating first-year scenarios. One option is to generate scenarios under the one-year forward measure. Another is to use fitting scenarios, choosing a multi-dimensional range over which one would like to approximate the liability function (Koursaris, 2011). Or one can use an external Economic Scenario Generator (ESG). In this thesis, I generate the first-year scenarios under the one-year forward measure and treat them as real-world scenarios for the risk drivers, assuming the two measures are equal. This assumption has no effect on the methodology of either simulation approach.

Next, from each real-world scenario at time 1, the Monte Carlo price needs to be calculated using another N_2 market-consistent 'inner scenarios' from t = 1 until t = T (illustrated with blue arrows). Usually these are simulated under the (T − 1)-forward measure; it is T − 1 rather than T because at time 1 the liabilities are one year closer to maturity (T). This results in a total number of simulations of N_1 · N_2. For a discussion about optimal choices of the numbers of outer and inner scenarios, see Bauer et al. (2010).

Mathematically, use equation (4.14) in combination with Monte Carlo simulation to calculate N_1 values for L_1^{(i)} according to

$$ L_1^{(i)} = \frac{1}{N_2} \sum_{j=1}^{N_2} L_1^{(j)}, \qquad i \in \{1, \ldots, N_1\}, \tag{6.1} $$

where L_1^{(j)} is the value of the liabilities at time 1 in inner scenario j and is calculated according to (4.14). Explicit formulas for L_1^{(j)} are given in the next section (equation (7.1), for example).
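The structure of the nested estimator (6.1) is just two loops. In the sketch below, simulateOuter and innerValue are hypothetical helper functions standing in for the first-year simulation of (x(1), y(1)) and the inner valuation via (4.14):

% --- Matlab sketch: structure of the nested estimator (6.1) ---
% simulateOuter and innerValue are hypothetical helpers: the first draws
% a first-year state of the risk drivers, the second computes one
% market-consistent liability value at t = 1 from that state via (4.14).
L1hat = zeros(N1, 1);
for i = 1:N1
    state = simulateOuter();           % one outer (real-world) scenario
    inner = zeros(N2, 1);
    for j = 1:N2
        inner(j) = innerValue(state);  % one inner scenario
    end
    L1hat(i) = mean(inner);            % equation (6.1)
end
% --- end sketch ---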

6.2 Least Squares Monte Carlo approach

The previous subsection discussed the nested Monte Carlo approach; in this subsection, I introduce the Least Squares Monte Carlo (LSMC) approach for determining the SCR. This method was first proposed by Longstaff and Schwartz (2001) in their seminal paper, where they price American-style stock options. Instead of simulating many risk-neutral market-consistent scenarios in each time period, only one risk-neutral scenario is simulated for the valuation of the option. This results in highly imprecise valuations at each time period, but a regression function is fitted through all these imprecise points. The decision to exercise an American option at a certain moment or to continue is a nested stochastic problem (due to the conditional expectation). Longstaff and Schwartz discovered a method that uses least squares regression through Monte Carlo scenarios to approximate the continuation value, hence the name Least Squares Monte Carlo simulation.

Graphically, the difference between nested simulations and LSMC is illustrated in Figure 2. This subsection describes the procedure for risk management purposes; more specifically, I will describe how to use this method to determine the distribution of L_1.

[Figure 2: Illustration of the different pricing methods. (a) Nested simulations approach. (b) Least squares Monte Carlo approach; in the second stage two scenarios are simulated per real world scenario. Source: Moody's Analytics]

6.2.1 Theoretical overview

As mentioned before, Longstaff and Schwartz (2001) introduced a method that ‘solved’ the nested simulations problem for pricing American-style options.


From ordinary American options to more complex options containing Asian or Bermudan features, or American options on an asset following a jump-diffusion process, each of them can be valued using the Least Squares Monte Carlo method. The key point is to replace the conditional expectation function by a linear regression. In this case, the conditional expectation determines the optimal exercise strategy: it is the conditional expectation of the payoff from continuing to keep the option alive.

More precisely, as was pointed out by Clément, Lamberton and Protter (2002), the LSMC approach consists of two different types of approximations. The first is to replace the conditional expectation by a finite linear combination of 'basis' functions. The second is to use (few) Monte Carlo simulations and ordinary least squares to approximate that linear combination. They also prove that, under certain completeness assumptions on the basis functions, the estimated conditional expectation approaches (with probability one) the true conditional expectation as the number of basis functions goes to infinity. Furthermore, they determine the convergence rate and show that the normalized estimation error is asymptotically Gaussian. In other words, LSMC is a valid approach to the pricing problem and, compared to the nested Monte Carlo approach, a considerably more efficient one.

Moreno and Navas (2003) tested the LSMC approach with respect to robustness of the basis functions for American put options. They tested many different regression functions (linear combinations of basis functions) and concluded that the approach is quite robust to the choice of basis functions, but that for more complex derivatives the choice can slightly affect option prices. Longstaff and Schwartz (2001) also concluded that the choice of basis functions does not affect the option prices they investigate.

Let us illustrate the LSMC approach with a simple example in which an American put option can be exercised one period prior to maturity or at maturity itself (similar to the example with which Longstaff and Schwartz open their paper). The optimal exercise strategy prior to maturity is to compare the immediate exercise value with the discounted expected value of holding the option one more period (i.e. to the maturity date), and exercise if immediate exercise is more valuable. The key step is to identify the conditional expected value of continuation. We start by simulating N = 5 different stock price paths, beginning at S_0 = 1, resulting in random (fictive) values for S_1^{(i)}, the stock price at time 1 in path i. From each path at t = 1, only one risk-neutral scenario is simulated to generate N = 5 values for S_2^{(i)}, the stock price at maturity. At maturity (t = T = 2), the option holder has no decision to make, and simply exercises the option if it is in the money,


or otherwise lets the option expire worthless. However, one period prior to maturity (t = 1), the option holder has to decide whether to exercise immediately or to keep the option for at least one more time period. Note that if the option is out of the money prior to maturity, the decision is simple: do not exercise. On the other hand, if the option is in the money, he has to make a decision. His optimal strategy is to compare the immediate exercise value with the discounted expected value from continuation (i.e. the value of the payoffs received if he does not exercise early). Here the regression method is applied. Let X denote the stock prices at time 1 (for in-the-money paths only) and Y the corresponding discounted payoffs received at time 2 if the option is not exercised at time 1. Thus, X will be a vector of the S_1^{(i)} with S_1^{(i)} < K, the strike of the put option, and Y = e^{−r} max{K − S_2^{(i)}, 0}, where r is the riskless interest rate applicable to the period from time 1 until time 2. Next, to estimate the expected payoff from continuing the option's life conditional on the stock price at time 1, perform OLS estimation using the following (simple) regression function:

$$ E[Y \mid X] \approx \beta_0 + \beta_1 X + \beta_2 X^2. \tag{6.2} $$

More general specifications for the conditional expectation are possible, for example using Laguerre, Hermite or Legendre polynomials, or Fourier or trigonometric series. But Longstaff and Schwartz find almost no difference in value when using different regression functions.

The estimated coefficients in equation (6.2) specify the conditional expectation and result in the optimal decision rule. More specifically, fit the regression by

$$ E[Y \mid X] \approx \hat{Y} = \hat{\beta}_0 + \hat{\beta}_1 X + \hat{\beta}_2 X^2 \tag{6.3} $$

and compare the immediate exercise value at time 1 with the fitted regression value Ŷ (the continuation value). Choose to exercise if the former is greater than the latter.

This procedure can be generalized to larger N, more exercise dates prior to expiration, and stock prices simulated according to stock price models (such as stochastic differential equations, SDEs). For a numerical example and the pricing of exotic options, I refer to Longstaff and Schwartz (2001).
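A self-contained Matlab sketch of this one-exercise-date example; the market parameters below are hypothetical, and the stock is simulated under Black-Scholes dynamics purely for illustration:

% --- Matlab sketch: one-step LSMC exercise decision, (6.2)-(6.3) ---
N = 10000;  K = 1.0;  r = 0.04;  sigma = 0.2;  S0 = 1.0;  % toy inputs
S1 = S0 * exp((r - sigma^2/2) + sigma*randn(N,1));   % stock at t = 1
S2 = S1 .* exp((r - sigma^2/2) + sigma*randn(N,1));  % stock at t = 2
itm = S1 < K;                        % in-the-money paths at t = 1
X = S1(itm);
Y = exp(-r) * max(K - S2(itm), 0);   % discounted time-2 payoffs
B = [ones(size(X)) X X.^2];          % basis of regression (6.2)
beta = B \ Y;                        % OLS estimates
Yhat = B * beta;                     % continuation value, (6.3)
exerciseNow = (K - X) > Yhat;        % optimal rule at t = 1
% --- end sketch ---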

Having introduced the basic principles of option pricing using LSMC, I will now focus on pricing the embedded option (at t = 1). First, I will point out the differences and similarities between our embedded option and the above American put option example. Second, I will generalize the method and propose an appropriate regression function and state variables such that we can price the option using LSMC.

6.2.2 Differences and similarities

Section 3 contains a detailed explanation of the embedded option's characteristics. It generates a series of nonnegative profit sharing payoffs PS(t), determined and paid at time t according to equation (3.1), possibly 50 years ahead, rather than over the two exercise periods considered in the example above; thus the maturity differs considerably. Moreover, there is no early exercise feature, in contrast to the put option example given in the previous subsection, so there is no decision to make during the life of the option. It is simply a valuation problem, at a future moment in time, of multiple payoffs. Secondly, the underlying profit sharing rate, which determines the profit sharing at t, is an n-year moving average of m-year swap rates rather than a simple stock price at time t. Therefore, instead of the stock price used in the put option example, suitable variables for the regression function are the underlying rate at time 1, R(1), or the two risk factors driving this rate (the G2++ factors x(1) and y(1)), including their squares, cubes and/or cross terms. Finally, the insurance product can be seen as a strip of call options rather than a single put option.

The next subsection outlines the steps to implement the LSMC approach in determining the SCR.

6.2.3 LSMC algorithm

In order to determine the Solvency Capital Requirement, we need the distribution of the liabilities in 12 months' time, L_1 (see equation (2.4)). However, from (4.14), we see that this quantity is an expectation (under Q^T) conditional on F_1. Analogously to Longstaff and Schwartz (2001), we approximate this conditional expectation by a finite linear combination of basis functions; then one risk-neutral inner scenario (instead of many N_2 inner scenarios) and ordinary least squares are used to approximate this linear combination.

Practically, the first approximation is to replace the conditional expectation, L_1, by a finite linear combination of M basis functions b_i(D_1) (the liability function), which are functions of the economic (and possibly non-economic) risk drivers at time 1 (D_1):

$$ L_1 \approx L_1^M(D_1) = \sum_{i=1}^{M} \beta_i \cdot b_i(D_1), \tag{6.4} $$

assuming the sequence (b_i(D_1))_{i≥1} is linearly independent and complete in the L²-space.


Next, generate N independent first-year scenarios for the risk drivers at time 1 (D_1). As in the nested Monte Carlo approach, these first-year paths can be generated in multiple ways. Then, for t ∈ (1, T], generate one path for the risk drivers from each first-year scenario, and calculate the discounted cumulative payoffs at time 1 in each scenario, L_1^{(i)}, using (4.14).

Use the N first-year realizations of L_1 to approximate the coefficients in (6.4) by OLS regression, and use the estimated coefficients to fit the liability function, obtaining the second approximation

$$ L_1 \approx L_1^M(D_1) \approx \hat{L}_1^M(D_1) = \sum_{i=1}^{M} \hat{\beta}_i \cdot b_i(D_1). \tag{6.5} $$

Based on (6.5), the empirical distribution function can be determined, hence the SCR. A short guideline is given in the following.

Step 1: Replace the liability function L_1 in (4.14) by a finite linear combination of basis functions b_i(D_1), where D_1 is the set of risk drivers determining the value of the liabilities at time 1:

$$ L_1 \approx L_1^M(D_1) = \sum_{i=1}^{M} \beta_i \cdot b_i(D_1). $$

Step 2: Generate N paths (D_t^{(1)}, ..., D_t^{(N)}) for t ∈ (0, T], where the first year is simulated under the one-year forward measure (assumed equal to the real-world measure for simplicity) and the remaining periods under the (T − 1)-forward measure.

Step 3: Calculate the realized values of the liabilities at time 1 in each scenario to obtain N realizations L_1^{(i)} according to (4.14).

Step 4: Use ordinary least squares in the regression specified in Step 1 to estimate the coefficients β_i.

Step 5: Use the estimated coefficients, denoted by β̂_i, to obtain the second approximation, given in (6.5).

Step 6: The fitted regression function yields the empirical distribution function; hence, the SCR can be estimated from this distribution.
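Steps 4-6 amount to one regression and one quantile. A Matlab sketch, where L1 (the N realizations from Step 3) and D1 (an N × 2 matrix with the factor realizations x(1) and y(1)) are hypothetical inputs, and the basis corresponds to a second-order polynomial in the two factors:

% --- Matlab sketch: steps 4-6 of the LSMC algorithm ---
% Assumed inputs (hypothetical): L1 (N x 1 realizations from step 3),
% D1 (N x 2 matrix of risk drivers x(1), y(1)), r, L0.
B = [ones(N,1), D1, D1.^2, D1(:,1).*D1(:,2)];  % basis functions b_i(D1)
beta  = B \ L1;                    % step 4: OLS estimates of beta_i
L1fit = B * beta;                  % step 5: fitted liability function (6.5)
Lsort = sort(L1fit);               % step 6: empirical distribution of L1
m     = floor(N * 0.995 + 0.5);
SCR   = exp(-r) * Lsort(m) - L0;   % SCR via (2.4)
% --- end sketch ---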

7 Application

As an example framework for my considerations, I use the swap rate dependent embedded option introduced and priced in Pelsser and Plat (2009). This specific insurance product is applied in pricing the u-rate profit sharing in the Netherlands. Technical specifications of this option were discussed in Section 3. In this section, I compare nested Monte Carlo results with LSMC results for determining the SCR in the case where the liabilities consist of only this specific embedded option. For simplicity, asset holdings are assumed to be constant over the first year. Simulations are carried out using Matlab on a Windows machine with an Intel(R) Core(TM) i5-2400 CPU, 3.10 GHz and 2.85 GB RAM.

7.1 10-year average of 7-year swap rate

Recall that the profit sharing payoff function is given by

$$ PS(t) = S(t)\,\max\{c\,(R(t) - K(t)),\, 0\} $$

(see equation (3.1)). It is based on the profit sharing basis S(t) and the technical interest rates TR(t) given in Appendix A, Table 4, which come from an example portfolio of a long-term pension insurance portfolio, with cash flows up to 50 years ahead. The underlying rate R(t), specified in equation (3.2), is based on a 10-year moving average (n = 10) of 7-year swap rates (m = 7). The swap curve used as well as the five model parameters of the Gaussian two-factor interest rate model can be found in Table 4 (a = b = 2.75%, σ = 0.51%, η = 0.28% and ρ = 0.497). A margin of 0.5% is added to the technical interest rate to define the strike of the option, and c = 1. Furthermore, the first payoff of the embedded option is paid directly at time 0 and is deterministic, since the underlying rate at time 0, R(0), is calculated using the average of nine historical u-rates and the currently observed 7-year swap rate (which are all deterministic at time 0). From t = 1, all payoffs are stochastic and need to be simulated. Thus, in line with the terminology of Sections 4 and 5, payoffs are paid at times {T_1 = 1, T_2 = 2, ..., T_49 = 49}, with an additional deterministic direct payment at T_0 = 0. The nine historical u-rates can be found in Appendix A, Table 5.

7.2 Results

In Sections 6.1 and 6.2, two different methods for estimating the SCR of insurance companies were introduced. In what follows, we test and compare these methods in the setup described in Section 7.1, and discuss their shortcomings, advantages and caveats. In addition, the analytical approximation derived by Pelsser and Plat (2009) will be implemented in our context to approximate the SCR analytically. This third method will be used to test the quality of the LSMC approach for those seeking a faster calculation technique than the nested Monte Carlo approach in determining the SCR.

7.2.1 Nested Monte Carlo results

As indicated in Section 6.1, the nested simulations method starts by generating N_1 outer (real-world) scenarios and then N_2 inner scenarios to calculate the Monte Carlo value of the liability at time 1. I choose two different combinations of N_1 and N_2, namely (N_1; N_2) = (1,000; 1,000) and (10,000; 200). In order to calculate the liability at time 1, we use equation (6.1) and specify L_1^{(j)} by

$$ L_1^{(j)} = \underbrace{PS^{(i)}(1)}_{:=V_1^{(i)}} + \underbrace{P^{(j)}(1, 48)\, E^{48}\Big[ \sum_{i=2}^{49} \frac{PS^{(j)}(T_i)}{P^{(j)}(T_i, T_{49})} \,\Big|\, \mathcal{F}_1^{(j)} \Big]}_{:=V_2^{(j)}}, \tag{7.1} $$

where V_1^{(i)} is the profit sharing payoff at time 1 in outer scenario i (so V_1^{(i)} is the same for every inner scenario starting from scenario i) and V_2^{(j)} is the value of the profit sharing payoffs at times {2, 3, ..., 49} in scenario j (interest rates are simulated under the 48-year forward measure, since there are only 48 periods from time 1 until maturity). Figure 3 shows the two empirical density functions for the two combinations of N_1 and N_2. As one can see, the right plot, with more outer scenarios and fewer inner scenarios, is smoother than the left plot, which uses a higher number of market-consistent Monte Carlo valuations at time 1. Therefore, I will treat the right plot as the 'true' density function and compare the LSMC outcomes with it. The first-year scenarios for the two factors of the interest rate model are generated under the one-year forward measure. Realizations of (x(1), y(1)) can be found in Figure 4 (left plot). The 10,000 realizations of the pair (x(1), y(1)) are symmetrically centered around the starting point (0, 0) of the stochastic processes, because of the antithetically generated random normal variables in (5.19).

Valuing the option at time 0 was done using N = 400,000 Monte Carlo simulations and resulted in a value of L̂_0 = 105.1, compared to Pelsser and Plat's Monte Carlo price of 103.2. This difference could, for example, be explained by rounding differences in S(t) and K(t), or by the use of different interpolation methods for the zero rates at time 0, since the option is very sensitive to interest rates.

[Figure 3: Empirical density functions of L_1. The left plot shows the case (N_1; N_2) = (1,000; 1,000); the right plot the case (N_1; N_2) = (10,000; 200). It takes 8 hours to obtain the left plot and 16 hours to obtain the right plot.]

Thus, using L̂_0 and the complete probability distribution of L_1, we are able to calculate the SCR in the base case:

$$ \widehat{\mathrm{SCR}} = e^{-r} \hat{L}_1^{(m)} - \hat{L}_0 = e^{-0.04} \cdot 184.0 - 105.1 = 71.7. $$

Of course, increasing the number of inner and outer simulations results in an even smoother empirical density function than shown in the right plot of Figure 3, and consequently in a more accurate 'true' SCR. However, for the remainder of this thesis, the right plot is treated as the 'true' density; hence 71.7 is treated as the 'true' SCR. Its 95% confidence interval⁷ is given by [69.8, 73.4].

7.2.2 Least Squares Monte Carlo results

As concluded in the previous subsection, many simulations are needed in the nested Monte Carlo approach. Consequently, this may not be feasible for more complex options. The least squares Monte Carlo approach, on the other hand, needs fewer simulations and should therefore be a faster method for calculating the SCR. As the specifications of the embedded option (insurance liability) imply, obvious choices for regressors are $R(1) = R$, $x(1) = x$ or $y(1) = y$ (the risk drivers of the option at time 1). The five regression functions investigated in this thesis are given in Table 1. The regressors in the first four regression functions are 'normal' polynomial functions of $R$ and $(x, y)$, while the fifth regression function consists of the first three Laguerre polynomials, evaluated at $R(1)$.

⁷To obtain the 95% confidence intervals in this thesis, a nonparametric bootstrap method has been used.

Figure 4: Scatterplots of real world scenarios (left) and quasi-random fitting scenarios (right); both panels show realizations of $(x(1), y(1))$.

#  Regression function                                                                                                   $\widehat{SCR}_1$  $\widehat{SCR}_2$  $\widehat{SCR}_3$  $\widehat{SCR}_4$
1  $\beta_0 + \beta_1 R + \beta_2 R^2$                                                                                    73.1               64.5               68.7               70.4
2  $\beta_0 + \beta_1 R + \beta_2 R^2 + \beta_3 R^3$                                                                      76.3               66.1               60.4               67.5
3  $\beta_0 + \beta_1 x + \beta_2 x^2 + \beta_3 y + \beta_4 y^2 + \beta_5 xy$                                             68.4               65.3               69.5               71.3
4  $\beta_0 + \beta_1 x + \beta_2 x^2 + \beta_3 x^3 + \beta_4 y + \beta_5 y^2 + \beta_6 y^3 + \beta_7 xy + \beta_8 x^2 y + \beta_9 xy^2$  73.5               67.0               61.4               69.6
5  $\beta_0 + \beta_1 e^{-R/2} + \beta_2 e^{-R/2}(1 - R) + \beta_3 e^{-R/2}(1 - 2R + R^2/2)$                              76.3               66.2               60.4               67.5

Table 1: Regression functions for the LSMC approach and corresponding estimated SCRs. The results $\widehat{SCR}_1, \ldots, \widehat{SCR}_4$ are based on 1,000, 10,000, 20,000 and 40,000 first-year scenarios respectively, simulated under the one-year forward measure, with one inner scenario per outer scenario under the 48-year forward measure.
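A minimal sketch of the LSMC regression step for regression #1 follows, assuming `R` holds the simulated time-1 risk driver and `Y` the corresponding single-inner-scenario discounted payoffs; the variable and function names are illustrative, not the thesis code.

```python
import numpy as np

def lsmc_scr(R, Y, L0, r, q=0.995):
    """LSMC with regression #1: regress single-path payoffs Y on a basis in
    the time-1 risk driver R, then use the fitted values as proxies for L_1."""
    basis = np.column_stack([np.ones_like(R), R, R**2])  # 1, R, R^2
    beta, *_ = np.linalg.lstsq(basis, Y, rcond=None)     # OLS coefficients
    L1_hat = basis @ beta                                # fitted conditional values
    return np.exp(-r) * np.quantile(L1_hat, q) - L0
```

The other regression functions in Table 1 only change the `basis` matrix; the regression and quantile steps are identical.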

Table 1 also shows the estimated SCRs using 1,000, 10,000, 20,000 and 40,000 first-year scenarios, generated in the same way as those in the left plot of Figure 4. A more extensive table with 95% confidence intervals for the 99.5th quantile of the loss function in (2.1) is given in Table 2. Furthermore, the five empirical density functions of $L_1$ are plotted in Figure 5 for the four different numbers of first-year scenarios, together with the 'true' density obtained from the nested Monte Carlo approach (green lines).

Figure 5: Empirical density functions of $L_1$ for four different numbers of first-year scenarios, using the five regression functions in Table 1 and including the 'true' density from the nested Monte Carlo approach obtained in Figure 3.

Looking at these densities, a remarkable result is that regressions #2 and #5 are almost identical in each case. Also, 1,000 and 10,000 first-year scenarios seem too few to obtain good results. However, in the 20,000 case, regressions #1 and #3 perform quite well in contrast to the other three functions. Including third-power terms in the regression seems redundant (which is in line with the results in Longstaff and Schwartz (2001)). Moreover, 40,000 first-year scenarios gives the best results, also considering the 95% confidence intervals (which are narrower than those for 20,000 first-year scenarios). Altogether, the first and third choices of basis functions in Table 1 approximate the value obtained via nested simulations quite well ($\widehat{SCR}_{nested} = 71.7$). The major advantage of the LSMC approach is that the 40,000 scenarios only took 24 minutes on the same computer, in contrast to the 16 hours of the nested Monte Carlo approach. A disadvantage of LSMC is that, if one is interested in other (lower) quantiles of $L_1$, the LSMC does not approximate these as accurately. Zooming in on the right tail of the densities, Figure 6 is obtained. It can be seen that the tail is approximated quite well using 40,000 scenarios with either regression function.

Figure 6: Right tail of the empirical density functions of $L_1$ for four different numbers of first-year scenarios, using the five regression functions in Table 1 and including the 'true' density from the nested Monte Carlo approach obtained in Figure 3.


Regression #   N        $\widehat{SCR}$   LB     UB
1              1,000    73.1              57.7   78.1
               10,000   64.5              62.6   67.3
               20,000   68.7              67.1   71.6
               40,000   70.4              68.7   72.5
2              1,000    76.3              58.9   82.2
               10,000   66.1              64.1   69.2
               20,000   60.4              59.4   61.9
               40,000   67.5              66.5   69.2
3              1,000    68.4              55.7   75.8
               10,000   65.3              63.1   69.9
               20,000   69.5              67.5   71.8
               40,000   71.3              69.4   72.7
4              1,000    73.5              57.8   97.2
               10,000   67.0              64.0   70.6
               20,000   61.4              60.2   62.9
               40,000   69.6              68.1   71.3
5              1,000    76.3              57.5   82.1
               10,000   66.2              64.1   69.0
               20,000   60.4              59.4   61.9
               40,000   67.5              66.4   69.2

Table 2: 95% confidence intervals (LB, UB) for the estimated SCRs under the five different regression functions.

7.2.3 Analytical results

Like the LSMC approach, the analytical approach needs fewer simulations and hence is a good alternative to the nested approach. However, most embedded options do not have closed form expressions for their value at time $t$. Fortunately, for the embedded option considered in this thesis, Pelsser and Plat (2009) derived an analytical approximation for the price of this insurance product at $t = 0$. Essentially, the approximation is based on a Black-Scholes-like option-pricing formula, since the individual underlying rates are approximately normally distributed. For the exact formulas and their derivations, I refer to Pelsser and Plat (2009). Applying their findings to our context, the analytical approximation can be used to price the option at $t = 1$, and hence to approximate the SCR analytically. The only inputs for the approximation at time 1 are the five Gaussian model parameters and the zero rates at time 1. So, as in the LSMC approach, a number of first-year scenarios (zero rates) needs to be generated. For each zero curve at $t = 1$, the value of the embedded option is approximated. The estimated SCR is then given by the difference between the present value of the 99.5th percentile of the values at $t = 1$ and the value of the embedded option at $t = 0$.
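This workflow can be sketched as follows; `analytic_value_at_1` is a hypothetical stand-in for the Pelsser and Plat (2009) approximation formula evaluated on a simulated time-1 zero curve, and `simulate_zero_curve` for the first-year scenario generator.

```python
import numpy as np

def analytic_scr(simulate_zero_curve, analytic_value_at_1, L0, r,
                 n_scenarios=40_000, q=0.995, seed=2):
    """Estimate the SCR by valuing the option analytically in every
    first-year scenario instead of running inner simulations."""
    rng = np.random.default_rng(seed)
    L1 = np.array([analytic_value_at_1(simulate_zero_curve(rng))
                   for _ in range(n_scenarios)])
    return np.exp(-r) * np.quantile(L1, q) - L0
```

Compared to the nested approach, the inner simulation loop is replaced by a single closed-form evaluation per scenario, which is where the speed-up comes from.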

Results are given in Table 3 for different numbers of first-year scenarios.

N        $\widehat{SCR}$   LB     UB
1,000    75.5              60.8   80.3
10,000   70.1              68.6   72.4
20,000   70.1              68.8   72.6
40,000   70.1              69.1   71.9

Table 3: 95% confidence intervals for the estimated SCRs using the analytical approximation.

Comparing these estimates with those obtained by, for example, regression #1 in the LSMC approach (see Table 2), the LSMC estimates are close to the analytical approximations. This implies that the LSMC approach, specifically with regression #1, is consistent, assuming that the results obtained from the analytical approximation are the 'true' values. The basis for this assumption is tested in Pelsser and Plat (2009) by comparing the Monte Carlo price of the option with the analytical approximation of the price at time 0.


7.3 Remarks

In this thesis, only OLS is used to estimate the regression coefficients in Table 1. Of course, other techniques could be used to estimate these coefficients, for example weighted least squares (WLS), putting more weight on the tail observations to obtain better estimates in the tail of the distribution. However, this method also has disadvantages: it assumes the weights are known exactly, which is not the case here.
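For illustration, WLS can be implemented by rescaling the design matrix and response with the square roots of the weights; the rescaling trick is standard, but any concrete weighting scheme would be an assumption here, which is why the sketch takes the weights as an input.

```python
import numpy as np

def wls_fitted_values(basis, Y, weights):
    """Weighted least squares via rescaling: minimise sum_i w_i (Y_i - x_i'b)^2
    by solving the OLS problem on (sqrt(w) * X, sqrt(w) * Y)."""
    w = np.sqrt(weights)
    beta, *_ = np.linalg.lstsq(basis * w[:, None], Y * w, rcond=None)
    return basis @ beta  # fitted values under the chosen weights
```

Note that upweighting scenarios whose fitted value lies in the tail would require an iterative scheme, since tail membership itself depends on the fit.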

Next, the choice of basis functions could have implications for the significance of the individual basis functions in the regression. For example, regression #3 has correlated basis functions (by definition of the correlated stochastic processes $x(t)$ and $y(t)$). This results in estimation difficulties for the individual coefficients due to the multicollinearity problem. However, this does not affect the LSMC algorithm, because it focuses on the fitted value of the regression rather than on individual coefficients. Hence, the fitted regression is unaffected by the degree of correlation among the basis functions, as the numerical check below illustrates.
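The invariance of the fitted values can be verified directly: orthogonalising a correlated basis (here via QR decomposition) changes the coefficients but not the fit, because the fitted values are the projection of the response onto the column span of the basis. A small self-contained check, with simulated data standing in for $(x, y)$:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=1_000)
y = 0.9 * x + 0.1 * rng.normal(size=1_000)      # strongly correlated with x
Y = 1.0 + x - 0.5 * y + rng.normal(size=1_000)  # toy response

X = np.column_stack([np.ones_like(x), x, y, x * y])
Q, _ = np.linalg.qr(X)                           # orthogonal basis, same span

fit_raw = X @ np.linalg.lstsq(X, Y, rcond=None)[0]
fit_qr = Q @ np.linalg.lstsq(Q, Y, rcond=None)[0]
print(np.max(np.abs(fit_raw - fit_qr)))          # ~1e-12: identical fitted values
```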

Finally, for practical implementations, the LSMC approach is convenient if one is interested in the sensitivity to several model parameters or other input variables, such as the zero curve. Using LSMC, different results are obtained in minutes rather than hours. Also, when SCRs need to be calculated more than once a year, risk managers will save a significant amount of time by using the LSMC approach.


8 Conclusion

EU insurance companies must satisfy the requirements described in the Solvency II Directive. One of these requirements is that they must hold an amount of capital such that they can meet their obligations over the next 12 months with a probability of 99.5%. This amount is called the Solvency Capital Requirement (SCR). Since no analytical expressions exist for most insurance products (liabilities), risk managers rely on simulation methods. Traditionally, they determine the value of the liabilities in one year's time, and hence the SCR, using nested Monte Carlo simulations, which are very time consuming. Therefore, the Least Squares Monte Carlo (LSMC) approach is used to determine the value of the liabilities in one year.

In this thesis, the performance of both approaches has been investigated, with the nested simulation approach serving as a benchmark for the LSMC approach. In addition, analytical approximation formulas have been used to test the quality of the LSMC approach. A numerical example has been used to explore the performance of these techniques: the swap rate dependent embedded option introduced by Pelsser and Plat (2009), with direct payments up to 50 years ahead (described in Section 3). In Section 4, I derived the formula for the value of the embedded option under the $T_n$-forward measure. Section 5 discussed the underlying two-factor Gaussian interest rate model and how the underlying correlated stochastic processes can be used to simulate zero-coupon-bond prices under the $T_n$-forward measure (equation (5.19)). The formulas and theory from Sections 4 and 5 were then used in Section 6, which described the two simulation methods. Implementing both approaches in Section 7 resulted in an SCR of 71.7 for the nested Monte Carlo approach. This was obtained using 10,000 outer scenarios, simulated under the one-year forward measure, and 200 inner scenarios, simulated under the 48-year forward measure. Running this simulation procedure in Matlab takes approximately 16 hours. Using the LSMC approach, five regression functions were tested. Satisfying results were already obtained with 20,000 first-year scenarios for the first and third regression functions (SCRs of 68.7 and 69.5 respectively, see Table 2). Running this simulation procedure takes approximately 12 minutes on the same computer. Increasing the number of first-year scenarios to 40,000 increases the accuracy of the SCRs for all regression functions. Again, regressions #1 and #3 perform best, with SCRs of 70.4 and 71.3 respectively. The corresponding simulation time is approximately 25 minutes. Implementing the analytical approximation formulas from Pelsser and Plat provided another, faster method to estimate the SCR, which was used to test the quality of the LSMC approach. While in general no closed form expressions exist for pricing embedded options, in this case the analytical approximations showed that the LSMC approach is a good alternative when no approximation formulas are available.

To conclude, Least Squares Monte Carlo simulation is a good alternative approach for determining the SCR of an insurance company. The distribution of the value of the embedded option (liability) at time 1 approximates the distribution obtained by the nested Monte Carlo approach quite well, especially considering the large amount of time saved by this method.


References

[1] Bauer, D., Kiesel, R., & Ruß, J. (2006). Risk-neutral valuation of participating life insurance contracts. Insurance: Mathematics and Economics, 39, 171-183.

[2] Bauer, D., Kiesel, R., & Reuss, A. (2010). Solvency II and Nested Simulations: a Least-Squares Monte Carlo Approach. In Proceedings of the 2010 ICA congress.

[3] Black, F. & Scholes, M. (1973). The Pricing of Options and Corporate Liabilities. Journal of Political Economy, 81(3), 637-654.

[4] Bolder, D. (2001). Affine Term-Structure Models: Theory and Implementation. Working Paper.

[5] Brigo, D. & Mercurio, F. (2007). Interest Rate Models - Theory and Practice. 2nd ed. New York: Springer Verlag.

[6] Clément, E., Lamberton, D., & Protter, P. (2002). An analysis of a least squares regression method in American option pricing. Finance and Stochastics, 6(4), 449-471.

[7] Duffie, D. & Kan, R. (1996). A Yield-Factor Model of Interest Rates. Mathematical Finance, 6, 379-406.

[8] Etheridge, A. (2002). A Course in Financial Calculus. New York: Cambridge University Press.

[9] Geman, H., El Karoui, N., & Rochet, J.-C. (1995). Changes of Numeraire, Changes of Probability Measure and Option Pricing. Journal of Applied Probability, 32, 443-458.

[10] Haastrecht, A. van, Pelsser, A. & Plat, R. (2010). Valuation of guaranteed annuity options using a stochastic volatility model for equity prices. Insurance: Mathematics and Economics, 47, 266-277.

[11] Hull, J. (2006). Options, futures and other derivatives. 8th ed. Boston: Prentice Hall.

[12] Jamshidian, F. & Zhu, Y. (1997). Scenario Simulation: Theory and methodology. Finance and Stochastics, 1, 43-67.

[13] Kousaris, A. (2011). The Advantages of Least Squares Monte Carlo. Insights, Barrie & Hibbert Ltd.
