
Least-Squares Monte Carlo Within Solvency II

—An Advanced Numerical Method for the Determination of Solvency Capital Ratios

Daniël Johan Maria Lintvelt

Master's Thesis to obtain the degree in Actuarial Science and Mathematical Finance
University of Amsterdam
Faculty of Economics and Business
Amsterdam School of Economics

Author: Daniël Johan Maria Lintvelt

Student nr: 5880882

Email: Daniel.Lintvelt@gmail.com

Date: August 15, 2016

Supervisor: Dr. D. Linders


This document is written by student Daniël Lintvelt, who declares to take full responsibility for the contents of this document. I declare that the text and the work presented in this document are original and that no sources other than those mentioned in the text and its references have been used in creating it. The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.


Abstract

The option pricing technique of Least-Squares Monte-Carlo has recently been marked as a potential candidate for the efficient determination of Solvency II's new capital requirements. However, these settings are not necessarily equivalent and not much is known yet about this new application of the method. Of primary importance in the new regulatory setting is the Solvency Capital Requirement. If one does not wish to rely on standard methods to determine this, one quickly arrives at the computationally infeasible nested simulations approach. Least-Squares Monte-Carlo is able to handle the resulting computational overload through two approximations. Firstly, the unknown functional form of a conditional expectation is replaced by a linear combination of relevant basis functions. Secondly, this approximation is evaluated through a simple least-squares regression of pay-offs on corresponding basis function values. Relevant considerations, extensions and theoretical results are identified and investigated. Overall, the method seems to be very promising but not devoid of drawbacks, such as a mild dependence on problem dimensionality. There are indications that its implementation speed may enable more advanced risk management in the future, potentially greatly impacting insurance undertakings and other businesses.

Keywords: Solvency II, Option Pricing, Monte-Carlo Simulation, Numerical Methods, Least-Squares Monte-Carlo, Regression, Nested Simulation, Solvency Capital Requirement, Quantitative Risk Management


Contents

Preface

1 Introduction
1.1 Introduction and motivation
1.2 Thesis structure

2 Solvency II and the SCR
2.1 Solvency II Background
2.1.1 History of European Solvency Legislation
2.1.2 Solvency II Development
2.2 The Solvency II Framework
2.2.1 Pillar I: Quantitative Requirements
2.2.2 Pillar II: Governance and Risk Management
2.2.3 Pillar III: Reporting and Transparency
2.3 SCR Formulation

3 Use of Risk Measures for (Solvency) Capital Ratios
3.1 Defining Risk Measures
3.2 Theoretical Properties for Coherent Measures of Risk
3.3 Value-at-Risk as a Measure of Risk
3.3.1 Possible Shortcomings of VaR
3.3.2 Possible Benefits of VaR

4 Determining the SCR
4.1 The Standard Method for SCR Determination
4.2 Standard SCR Aggregation Formula
4.3 Internal Models and Relevant Considerations
4.4 The SCR for Internal Models
4.5 Methodology for Implementing Internal Models

5 Using Simulation to Establish Required Capital
5.1 Monte Carlo Simulation
5.1.1 Mathematical Foundation of MC Simulation
5.1.2 Efficient Simulation
5.1.3 Issue of Bias
5.2 Nested Simulations Approach
5.2.1 Determining Future Portfolio Values
5.2.2 Embedded Options
5.2.3 Nested Simulation
5.3 Formal Setting for Nested Simulation
5.3.1 Formalization of Nested Simulation
5.3.2 Risk-Neutral and Objective Probabilities
5.4 Possible Solutions to Computational Overload

6 Least-Squares Monte Carlo
6.1 The Algorithm
6.1.1 Basic Example
6.1.2 Underlying Mechanics of LSMC
6.2 Translation to SCR Setting
6.3 Convergence Results from Option Pricing
6.4 Relevant Differences Between Option Pricing and Risk Capital
6.5 Regress-now and Regress-later

7 LSMC within the Solvency II Framework
7.1 Review of Applied Publications
7.2 Theoretical Contributions
7.2.1 Bauer and Ha (2015)
7.2.2 Broadie et al. (2015)
7.3 Synthesis and Discussion

8 Conclusion
8.1 Research Questions
8.2 Further Research Suggestions

Appendices
Appendix A Table from Doff (2008)
Appendix B More on Alternative Risk Measures
Appendix C Derivation of Standard Aggregation Formula
Appendix D Global Modeling Methodology: Integrated versus Modular
Appendix E Table from Pfeifer and Strassburger (2008)
Appendix F More on Alternative Dependency Measures
Appendix G Capital Allocation Within a Simulation Set-Up
Appendix H Simulation History and Intuition
Appendix I Derivative Pricing Background of LSMC
Appendix J Previous Actuarial Applications of LSMC
Appendix K Review of Applied LSMC Literature
Appendix L Model Framework from Bauer and Ha (2015)
Appendix M Additional Results from Bauer and Ha (2015)
Appendix N Proofs and Derivations from Bauer and Ha (2015)


Preface

This thesis has been submitted in partial fulfillment of the requirements for the Master's degree in Actuarial Sciences and Mathematical Finance at the University of Amsterdam. Officially, it represents 420 hours, or 15 ECTS, of coursework.

Firstly, I would like to thank Dr. Daniël Linders, who was kind enough to supervise the process. He advised me to focus the research on Solvency II rather than on option pricing, which I feel has greatly added to its relevance and usefulness. Additionally, his comments on my initial structuring of the document markedly improved the end product in comparison to earlier versions. Although I am not yet aware of the appointment of the second reader, I would also like to thank him or her for going through the trouble of examining yet another Master's thesis.

Furthermore, I wish to thank head of department Bart Bos at De Nederlandsche Bank for taking some time out of his busy schedule in order to give me a private lecture on implied volatility surfaces last summer. This led me to read up on (American) options, which inevitably made me stumble into Least-Squares Monte-Carlo. To my surprise, the technique also turned out to be highly relevant to Solvency II and potentially to actuarial modeling for years to come.

My thanks further extend to emeritus Professor Rob Kaas, who with his collaborators Angela van Heerwaarden, Michel Vellekoop, Katrien Antonio and Frank van Berkum at the UvA supplied the highly useful thesis template of which this foreword is the result. Although working with LaTeX and especially BibTeX was still a struggle from time to time, it greatly eased my suffering.

Lastly, I would like to thank my family, and in particular my parents and younger sister. My parents for keeping me from personal insolvency through a somewhat extended period of study and periodic support. My sister for much-needed diversion through her adventures in the world of Pokémon Go, which I luckily have been able to avoid so far.

The image on the front page is taken from Bauer et al. (2010) and depicts the central problem of this thesis. The photograph portrays the Monte Carlo casino in Monaco during the early twentieth century. Not only does its black-and-white version better depict the inherent beauty of the place, it will also hopefully be easier to print if necessary. The image was taken from Wikipedia, where I have spent many hours without ever wasting a single one.


Chapter 1

Introduction

1.1 Introduction and motivation

In January of this year, the new Europe-wide insurance regulatory framework "Solvency II" has gone into full effect. Through its definitive installation, participating member states of the European Union now possess one of the most modern insurance regulation frameworks in the world. The scale of this operation becomes apparent when one views the lengthy preparation, the numerous postponements and the immense cost to the industry. Over almost a decade, hundreds of millions or even billions of euros have had to be invested by insurers in order to comply with the new regulation.

In return, the rules and guidelines concerning the prudent undertaking of insurance are now largely in line with contemporary risk management practices. Risk management nowadays incorporates far more complex and elaborate systems than the outdated rules and methods of the past. Through these, insurers hope to be better adapted to an increasingly volatile and complicated economic reality. As such, it is expected that not only consumers will benefit, but that insurers themselves will gain as well through an enhanced understanding of the risks they are exposed to. The construction of these systems has been an ongoing development for some years now, but regulation such as Solvency II currently provides even more compelling arguments to further improve and expand them.

The process has not been smooth, however. As insurers attempted to implement some of the more advanced aspects of the framework, it soon became clear that previously used methods were inadequate in light of the new challenges. Establishing the new capital requirements proved to be particularly testing, most notably for the complex products issued by many life insurers. This resulted in some heated debate on which techniques were most suitable for resolving the new difficulties and the issue of capital determination in particular. Now that the dust has settled to some degree, one method has come forward as being especially promising. This is the technique of Least-Squares Monte-Carlo.

Practitioners have not felt restrained in expressing their praise for the method. Descriptions include "state-of-the-art" (Rowland, 2013) and "enough to make even the most hardened actuarial modelers giddy with excitement" (Kousaris, 2011). In the Netherlands too, the method's potential has been noticed, as evidenced by publications such as Haastrecht and Plat (2013) and Plat (2012). Its primary listed appeal is the ability to (sufficiently) accurately estimate the aforementioned capital requirements in an acceptable time-frame, even when there are a large number of complicating factors from a modeling perspective.

Moreover, the method is considered to be highly flexible, capable of incorporating a wide variety of modeling features. This has previously been a point of concern for many of its competitors (Leitschkis and Hörig, 2012). In addition, its implementation speed is highly regarded among practitioners. Not only is this relevant in a fast-paced business setting, it also allows for more powerful follow-up applications. These may include more advanced risk hedging, efficient capital allocation and the establishment of management rules. Lastly, it is possible to determine confidence intervals for the method's outcomes.

Clearly, expectations concerning the (further) development of Least-Squares Monte-Carlo are very high indeed. Whether or not this enthusiasm is warranted is more difficult to assess, however. Apart from the obvious commercial interests of its developers, industry practitioners may not always have the means or motivation for more fundamental research. Often, authors refer to its option pricing background for mathematical and theoretical validation of the method. This may be problematic because pricing (exotic) options does not necessarily translate directly to the determination of risk capital in an insurance context. As both Bauer and Ha (2015) and Broadie et al. (2015) assert, the method's properties have not yet been extensively studied in this new setting. Academic publications on the topic are especially scarce. Overall, the method is still relatively new and unknown within the actuarial community. Nevertheless, if any such technique becomes popular for businesses' risk assessments, this may potentially have a large impact, socially, economically and from an actuarial modeling viewpoint.

The objective of this thesis, therefore, is to provide a more thorough overview of Least-Squares Monte-Carlo's application to Solvency II from a predominantly academic perspective. In this way, the thesis aims to offer greater insight into its application within this new, somewhat unknown, regulatory setting. Although the method's usability extends beyond regulation and insurance purposes, the discussion will thus mostly be held in terms of Solvency II. There are several benefits to framing the analysis in such a way. An important reason is increased focus, as the research means of a Master's thesis are limited. By embedding the method in the larger Solvency II discussion, its (broader) relevance will also become more readily appreciable.

As a first step, it will be of interest to establish exactly how and where the method contributes to the new regulation. For this, it will be necessary to perform a closer inspection of the framework’s construction. Because it is not self-evident that insurers should choose to invest in such advanced techniques, some attention will also be paid to the underlying methodological considerations.

Within the context of Solvency II, deriving capital requirements will play an important role. Therefore, a secondary goal will be to establish an exact formulation for this problem and investigate how it relates to Least-Squares Monte-Carlo. From this point on, the main focus will be the method itself and its application to determining (regulatory) capital requirements. In particular, its general workings will be explained and relevant assumptions and considerations examined. Furthermore, differences between the original option setting and Solvency II capital determination will be presented. Any potential difficulties arising from this will subsequently be highlighted.

Following this, the discussion focuses on the available academic literature which directly considers Least-Squares Monte-Carlo in light of Solvency II. Here, it will be of primary interest to identify any complications previous authors have come across. Also, general modeling practices will be reviewed in order to determine the used methodologies, as well as possible additions and extensions for the basic method. Lastly, some of the more sophisticated and recent theoretical publications are examined to establish how these might contribute to a greater understanding of Least-Squares Monte-Carlo within Solvency II.

Summarizing, the research will be conducted along the lines of the following questions:

• How is Solvency II constructed? In particular, which components may benefit from advanced numerical methods such as Least-Squares Monte-Carlo?
  – What necessitates the development of more advanced techniques such as Least-Squares Monte-Carlo?
  – What problem(s) does it solve exactly?
  – Why is this technique especially suited for dealing with these issues?
• How does Least-Squares Monte-Carlo function? What is the basic intuition behind the method?
  – What assumptions need to be satisfied in order for it to be valid?
  – How is it implemented? Which choices need to be made?
  – How is it connected to option pricing? What are the relevant differences between Solvency II and option pricing? How might these affect the technique?
• What may be derived from academic publications on the subject?
  – What general modeling issues have been encountered in practical studies?
  – Are there any extensions and adaptations available?
  – What theoretical results have been derived?

To answer these questions, literature will be studied concerning all the relevant topics: Solvency II, risk capital determination, option pricing and Least-Squares Monte-Carlo. While the thesis itself does not set out to implement Least-Squares Monte-Carlo, its research will be directed towards identifying possible modeling issues and improvements. Because the method’s workings are still relatively unfamiliar to actuarial practitioners, conducting a literature study was considered to be the most efficient methodology to gain more insight into this subject. Also, alternatives will not be studied extensively here, so comparative analysis will be limited. Doing so in a meaningful manner would most likely warrant a study of its own.

We believe such research to be of interest to several groups of people. Firstly, to anyone active in a professional setting which demands some basic knowledge about the relevance of the method to Solvency II. Secondly, more seasoned modelers may be interested in what drives Least-Squares Monte-Carlo, which considerations need to be kept in mind and which improvements or extensions are available. Thirdly, it may be of use to academics looking to conduct new research on the topic and who thus wish to know which avenues of approach may be most promising.

1.2 Thesis structure

In the first part of the thesis, Chapter 2 first establishes the historical and socioeconomic relevance of solvency regulation. Then, its structure is laid out and lastly its new capital requirements are examined more closely. In Chapter 3 the topic of risk measures is considered briefly because of its connection to risk capital. Chapter 4 explores the various options presented by Solvency II to determine these capital requirements.

For the second part, Chapter 5 provides the necessary framework for the adopted simulation methodology. Also, the problem of capital determination is made exact here. This will lead directly to the relevance of Least-Squares Monte-Carlo. Subsequently, Chapter 6 fully introduces the method in its original option setting and Chapter 7 considers it specifically in the context of Solvency II.

The thesis is structured such that anyone (overly) familiar with Solvency II, full stochastic modeling and simulation may choose to skim through these parts to Section 5.2, where all relevant information needed to pick up the discussion is recalled. Concerning the appendices: in part, these contain information directly relevant to the main text but inappropriate to include there in full. Some also provide more extensive background information on selected topics. This may not interest all readers and the appendices are thus fully optional.


Chapter 2

Solvency II and the SCR

This chapter will provide a general overview of how the Solvency II framework has come to be and how it has been constructed. At the end of the chapter, the SCR is mathematically defined.

2.1 Solvency II Background

2.1.1 History of European Solvency Legislation

Europe-wide solvency regulation for insurance companies dates back to the early seventies with the introduction of the First Non-life Directive in 1973. The First Life Directive followed shortly thereafter in 1979. The focus of these first directives lay mainly on the prescription of volume- and rule-based minimal capital requirements, a focus which has only recently shifted with the introduction of Solvency II.

Regulation in any form or kind has always been subject to criticism and solvency standards have been no exception. For a critical look at solvency regulation during this early period, see for instance Munch and Smallwood (1980). This paper argues on empirical grounds that American non-life solvency standards only affected insolvency rates by barring smaller companies from the market and did not actually affect the solvency positions of companies which did enter the market. This raises the question whether this outcome justifies the added administrative costs to which existing companies were subjected.

Additionally, Cummins et al. (1994) strive to provide a conceptual framework that may be used when assessing the adequacy of regulatory frameworks. Interestingly, they do so for the analysis of risk-based capital requirements, akin to current European regulation. Specifically, their article concerns the United States' property and liability insurance market. Risk-based regulation had already been in place in North America's banking sector since the 1950s but, after several increasingly major insolvencies, was judged to be beneficial to the insurance sector as well. Consequently, this led to the National Association of Insurance Commissioners (NAIC) introducing Risk-Based Capital (RBC) as an additional regulatory tool. For more information on RBC and the NAIC, visit the NAIC's website: NAIC (2016).

In their article, Cummins et al. (1994) develop a total of seven criteria which they believe risk-based supervision should adhere to. Summarily, these consist of: 1. Appropriate incentives, 2. Risk-sensitive formulae, 3. Appropriate formula calibration, 4. For the economy in its entirety, a focus on the highest insolvency costs, 5. A focus on economic values, 6. Discouragement of misreporting and 7. Formulae that are as simple as possible. Dickinson (1997) provides further analysis of risk-based capital, identifying its two main purposes. Firstly, it obviously serves to monitor capital adequacy from a regulatory perspective; secondly, the author underscores its importance for management as an aid in financial control and planning. The analysis goes on to extract lessons from the

United States' implementation. Possibly most importantly, it notes the lack of inclusion of management risk in the American system. Dickinson (1997) also concludes that the European system at that time was already outdated; the author may have been surprised to learn that restructuring would take another twenty years.

Sources such as Klein (1995), Eling et al. (2007), Doff (2008) and the previously cited articles provide more information on insurance regulation in general, including the benefits and trade-offs concerned. A full exposition would not be possible within the scope of this thesis, but we do note that the economic perspective identifies both agency problems and costly information as providing the main rationale behind regulation (Doff, 2008). Both can be seen as a form of asymmetry resulting in an imperfect market. The agency problem refers to the possible exploitation of policyholders by insurance firms and their management through the asymmetry of information between the two groups. Acquiring all of the relevant information would naturally be prohibitively expensive for any individual prospective policyholder, necessitating some form of external body, such as a regulator.

Unhampered by the more critical analyses and fueled by various political motivations and the desire to protect consumer interests, insurance regulation has continued to develop and expand. This has been the case both inside the European Union (EU) and worldwide. Within the EU, further directives were issued during the 1990s in order to foster European market integration. However, this did not yet create a "level playing field", where everyone complies with the same set of rules, as each EU member was allowed to retain its own supervisory framework.

The early 2000s saw two more directives being introduced in 2002, eventually leading to Solvency I going into effect at the start of 2004. These developments, including the introduction of Solvency I, did not differ significantly from the original regulation laid out in the early seventies. The focus remained on providing rules for minimal capital requirements and as such they can be seen as updated versions of the original legislation from some thirty years earlier.

2.1.2 Solvency II Development

Solvency II changes this. Initially requiring only partial implementation from the issuance of the Solvency II Directive in 2009 onward, it has been in full effect since the beginning of 2016. It departs from the previous framework by adopting a risk-centered approach on an enterprise, or holistic, level, rather than simply laying out minimal capital standards for individual insurers. Section 2.2 will provide more comprehensive information on Solvency II's construction and how it seeks to reach these goals.

Eling et al. (2007) present a detailed report of the developmental stages of Solvency II. Furthermore, they compare the planned European proposals to risk-based capital regulation already in place in various other parts of the world. Primarily, they find that model complexity is not necessarily beneficial in identifying weak insurers. Flexibility such as that present in the Swiss Solvency Test (SST) is mentioned as being desirable. They go on to point out that capital ratios alone are not sufficient to prevent insolvency, necessitating additional measures to identify aspects such as poor management. Lastly, they advocate greater transparency. Not only would this increase market discipline, it would also enable more research to be done.

Doff (2008) and Holzmüller (2009) both provide further analysis by applying the Cummins et al. (1994) criteria to the proposed Solvency II plans. Doff (2008) summarizes his findings concisely in a table, which is included in Appendix A. Generally, he finds that Solvency II addresses all relevant issues adequately. He does warn against putting too much emphasis on quantitative aspects, stating that Solvency II's efficiency could be increased by a more balanced approach. Operational and business risks should also be given full attention, as well as the decision-making process. He combines this with a recommendation to focus on the economic consequences of insolvency, rather than insolvency by itself. Finally, he notes the adverse effects of setting hard quantitative limits.

Holzmüller (2009) agrees with Doff (2008) that Solvency II checks the criteria set out by Cummins et al. (1994) and goes on to add four more. These consist of: 8. Adequacy in economic crises and systemic risk, 9. Assessment of management, 10. Flexibility of the framework over time and 11. Strengthening of risk management and market transparency. These are then checked against the RBC, the SST and Solvency II. Both Solvency II and the SST score well on the additional criteria. Holzmüller (2009) does note that Solvency II particularly lacks an adequate consideration of management risk. Furthermore, she points out the shortcomings of using Value-at-Risk as a measure of risk. The last point is most notable from an actuarial perspective and receives a more in-depth treatment in Chapter 3.

Having provided a historical and conceptual context wherein current Solvency II regulation may be viewed, we now move on to its implementation in the following section.

2.2 The Solvency II Framework

Structurally, Solvency II is rather similar to the international regulatory framework for banking, Basel II. This is not coincidental. The first stage of development took place during the period 2001 to 2003, roughly coinciding with the construction of the new Basel Capital Accords (Eling et al., 2007). During this period the European Commission (EuC) ordered several studies to be conducted in order to lay down fundamentals for the future policy framework. The first study resulted in KPMG (2002), which advised the EuC to adopt a three-pillar framework similar to the one being constructed for the international banking sector. This recommendation was accepted by the Commission.

KPMG (2002) was followed by another major study known as the Sharma Report (ISS, 2002), named after the chairman of the executive work group. Part of this study was a detailed three-part survey of all member state regulatory bodies. We will provide the resulting recommendations here as they are presented in Eling et al. (2007) and ISS (2002). Their importance cannot be overstated, as they provide the foundation of Solvency II, but the specifics are outside the scope of our research:

1. It needs to ensure that insurers are able to cope financially with the effects of the risks that they are exposed to; [capital adequacy and solvency]

2. It needs a range of early-warning indicators and other diagnostic and preventative tools that help us to detect and correct potential threats to the solvency of insurers before their full effects materialise; [availability of a broad range of tools to cover the full causal chain]

3. Finally, the regime needs to pay more attention to internal factors such as the quality and suitability of management, adequate corporate governance practices and codes, and an insurer's risk management systems. [assessing management quality and adequacy of internal systems]

Eling et al. (2007) note that a disproportionately large amount of attention has been paid to the first point, focusing on capital models. Meanwhile, the report itself tried to draw attention to the importance of internal factors.

A more detailed outline was subsequently developed during the second phase of Solvency II. This further development followed the procedures of what is known as the Lamfalussy committee, also named after its chairman. The procedure consisted of gathering opinion reports, consulting market participants and conducting public forums. These results were bundled into a new directive, approved by the EU parliament. Finally, the Committee of European Insurance and Occupational Pension Supervisors (CEIOPS) was tasked with the concrete implementation. CEIOPS has since been renamed the European Insurance and Occupational Pensions Authority (EIOPA).

Further information on the process, the implementation and the numerous postponements may be found on EIOPA's website: EIOPA (2016). The remainder of this section is dedicated to an overview of the pillar structure of Solvency II, leading to the core of the thesis in the following sections.

2.2.1 Pillar I: Quantitative Requirements

The first pillar of Solvency II concerns itself with the quantitative aspect of the framework. This makes it the natural focus of attention from an actuarial perspective. However, the actuarial role is not necessarily limited to the first pillar, as a major part of Solvency II consists of transferring (quantitative) knowledge to management level. Pillar I contains two major innovations of Solvency II: market-consistent valuation and risk-based capital requirements.

The first innovation implies that all participants must value both assets and liabilities on the basis of economic value. That is to say, they are valued such that an independent party would be willing to acquire them at this price. On the asset side this is usually accomplished by determining their current market value. Insurer liabilities consist most importantly of technical provisions, which are held on the balance sheet in order to fulfill obligations to policyholders. In contrast to the asset side, there is seldom an appropriate market available where these provisions are traded or where their price may be derived from other instruments. Therefore, their valuation generally depends more extensively on stochastic modeling, which provides expectations, termed best estimates in an insurance setting. Aside from this expected value, a risk margin is usually required as well, as a measure of the risk involved in obtaining the best estimate. It accounts for the fact that independent parties will not be willing to take over liabilities at the bare expected value. For more information on the balance sheet under Solvency II, see for instance DNB (2014c).
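To make the notion of a best estimate concrete, a standard formulation (my own summary, not a quotation from the Directive, and ignoring the risk margin) expresses it as the expected present value of future liability cash flows under a market-consistent, risk-neutral measure Q:

$$BE_0 = \mathbb{E}^{Q}\Big[\sum_{t \geq 1} D(0,t)\, CF_t\Big],$$

where $CF_t$ denotes the liability cash flow at time $t$ and $D(0,t)$ the risk-free discount factor from the prescribed term structure.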

"Marking-to-model" is also used for some of the more complex or less liquid investments, where information from related products may be used for pricing. The breakdown of some of these methods during periods of market stress, most notably the subprime credit crisis, has also earned them the popular name "mark-to-myth". Turnbull et al. (2008) discusses the dangers of such methods in the context of the subprime crisis.

Another important aspect of the new regulatory framework is the modernized capital requirement which insurers must meet. On the most basic level, this includes an absolute Minimum Capital Requirement (MCR) as set out in Article 129 of the Solvency II Directive (EuC, 2014). The MCR is designed to be calculated through a simple formula with auditable data. Should an insurance company fall below the MCR, regulators will receive full control over the organization and may prepare for a run-off. However, this situation is ideally prevented and therefore Solvency II has implemented an intervention ladder which starts at an earlier level.

This warning level is formed by the Solvency Capital Requirement (SCR). As noted earlier, the SCR is risk-based and seeks to do justice to the variety of risks, and their interplay, inherent to an insurance undertaking. This may not even affect the total capital requirement, but rather lead to a redistribution of capital over different areas. As such, it provides a more detailed exposition of the risks an insurer may be exposed to. Once an insurer breaches the SCR, their regulator will be expected to step in with increasingly intrusive measures. However, supervisors such as the Dutch regulator De Nederlandsche Bank (DNB) have already signaled that they would also strongly discourage any organization from moving too close to the SCR level, even without breaching it (DeBrauw, 2015).


2.2.2 Pillar II: Governance and Risk Management

In contrast to the first pillar, Pillar II focuses on the more qualitative aspects of proper risk management. It seeks to do so both on the level of an individual insurer and on a group, or holistic, level. In the Netherlands, Pillar II resembles the Financial Supervision Act (Wft) legislation on sound and ethical operational management (DNB, 2012).

These qualitative aspects mainly consist of an insurer's organizational structure and operational management. Furthermore, Pillar II provides the basis for the supervision of the expertise and trustworthiness of directors and key officers, risk management, internal control, key functions and outsourcing. For larger and more complex organizations, this supervision will generally be stricter. However, insurers do remain responsible for how they choose to structure their organization. Additionally, Pillar II sets new requirements for the actuarial function (DNB, 2014b).

An integral part of the second pillar is the Own Risk and Solvency Assessment (ORSA). In the Netherlands, the ORSA was preceded by the temporary and less comprehensive Own Risk Assessment (ORA). The ORSA is to be submitted once a year and is designed to provide the insurer and its stakeholders with information on the risks they are exposed to and how these may be handled. Organizations are free to choose their own form for doing so, but must include some required elements. These may be found in DNB (2014e). As long as these choices are properly motivated, insurers may deviate from the standard assumptions in determining their own risk position. The ORSA does not replace the SCR or the MCR, but it does provide insurers with an opportunity to communicate how their organization differs from regulatory assumptions. It is important to note that both the management board and the internal audit department are expected to be aware of and be able to understand the risks they are exposed to.

2.2.3 Pillar III: Reporting and Transparency

The final pillar of Solvency II is dedicated to the reporting aspects of the framework. It seeks to safeguard insurer solvency by setting out requirements for the disclosure of public information and for the reporting to supervisors. As is the case with the first two pillars, this applies both on an individual and on a group level.

In principle, insurers must report publicly on an annual basis. This frequency may be increased when, for instance, the insurer breaches the SCR. For a full list of items expected to be disclosed, both publicly and in confidence to supervisors, see DNB (2016b). From an actuarial viewpoint, Pillar III holds less of interest than the first and second pillars. Therefore, it will not be discussed any further. This is not to diminish its importance. Previously cited authors, such as Eling et al. (2007), state that Pillar III's contribution to market discipline through enhanced transparency may very well be more effectual in creating efficient and effective insurance markets than more direct solvency regulation.

This concludes the structural form of Solvency II and a first look at the SCR. The full Solvency II Directive may be found here: EuC (2014). It is, however, difficult to form a clear picture of Solvency II from this primary source. For this purpose, one might do better to browse the collection of fact sheets published by DNB: DNB (2016a). Alternatively, industry publications such as DeBrauw (2015), which provides an overview from a Dutch legal perspective, may be consulted. The latter two sources both provide links to the relevant articles in EuC (2014). DeBrauw (2015) is only available in Dutch.

2.3 SCR Formulation

The level at which the SCR is set equals the 99.5 percent Value-at-Risk (VaR) level of the aggregate loss over a one-year period. Mathematically, the SCR of the aggregate loss, for instance over $d$ individual losses, $L = \sum_{i=1}^{d} L_i$, therefore constitutes a quantile and may be expressed as:

$$\mathrm{SCR} = \mathrm{VaR}_p[L] = F_L^{-1}(p) = \inf\{x \in \mathbb{R} \mid F_L(x) \geq p\}, \tag{2.1}$$

where $p$ is the 99.5 percent probability level, $F_L$ is the cumulative distribution function of the insurer's aggregate loss $L$ over one year and $x$ is the SCR. The SCR thus equals the level of available capital at the current time $t = 0$, $AC_{t=0}$, expected to be sufficient to absorb any capital losses over a one-year horizon with a probability $p = 99.5\%$. As the name implies, this capital should be free for use as a buffer. The MCR is set at the $p = 85\%$ VaR level, with the restriction that it has to fall between 25 and 45 percent of the SCR.
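As a point of reference, the following minimal sketch (my own illustration, not the thesis's model; the lognormal asset dynamics, the deterministic liabilities and all numbers are assumptions) shows how the quantile in equation (2.1) is typically estimated empirically from simulated one-year losses:

```python
import numpy as np

# Illustrative SCR estimation: simulate one-year losses in available
# capital and take the empirical 99.5% quantile, as in equation (2.1).
rng = np.random.default_rng(seed=1)

n_scenarios = 100_000
V0, B0 = 110.0, 100.0                  # assumed initial assets and liabilities
V1 = V0 * rng.lognormal(mean=0.02, sigma=0.15, size=n_scenarios)
B1 = B0 * 1.01                         # liabilities grown deterministically

loss = (V0 - B0) - (V1 - B1)           # one-year loss in available capital

scr = np.quantile(loss, 0.995)         # SCR = VaR_{99.5%}[L]
print(f"estimated SCR: {scr:.2f}")
print(f"MCR must lie between {0.25 * scr:.2f} and {0.45 * scr:.2f}")
```

In realistic applications each scenario's assets and liabilities would themselves require a market-consistent (inner) valuation, which is precisely the computational problem addressed by nested simulation and Least-Squares Monte-Carlo later in the thesis.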

Within this thesis, we will simply assume that the capital available to absorb losses at a general time $t$ is given by the difference between the value of the assets and the liabilities:

$$AC_t = V_t - B_t.$$

Here $V_t$ and $B_t$ represent the values at time $t$ of the assets and the liabilities, respectively.

Following the Solvency Directive, these need to be derived through market-consistent valuation. Also, for $B_t$, we will assume that it refers only to the best estimate of the liabilities. In reality, a number of complicating factors arise, such as a market-consistent valuation of risk margins or deferred taxes, but we strive here to provide a simple, general framework only intended to illustrate the main issues in Solvency II SCR calculations. Lastly, by definition, a business is no longer solvent at a time $t$ if $AC_t < 0$. Intuitively, it follows that the SCR may also be defined in terms of available capital $AC_t$. This notion will be used in Section 4.4 to continue the discussion in terms of available capital, rather than losses, to better connect with the literature considered in this thesis.


Chapter 3

Use of Risk Measures for (Solvency) Capital Ratios

In the preceding chapter, it has been established that within Solvency II's first pillar, capital requirements are derived through the VaR risk measure. However, many more measures of risk are available and the study and applications of risk measures extend far beyond Solvency II. Still, only VaR has been chosen by the EuC and therefore this chapter primarily sets out to investigate some of its properties. This is needed to better assess the methods for SCR determination in the following chapters. An attempt will be made to embed the study in the larger discussion regarding risk measures. Also, some popular alternatives to VaR used in other legislative environments are included in Appendix B to better judge the comparative suitability of VaR for measuring risk.

Far more extensive information may be found in other sources, such as Szegö (2004). This particular source is vehemently opposed to the usage of VaR though, to the point of being derogatory. For a more balanced exposition, McNeil et al. (2015) provide an overview with many of the latest advancements in the field of risk management, including risk measures.

3.1 Defining Risk Measures

Before continuing this chapter, a more formal introduction into what constitutes a risk measure is required. Firstly, let M denote the space of random variables representing losses over a fixed time interval. In our setting the losses will derive from an insurer's balance sheet and the time interval will equal one year. It is then assumed that M is a "convex cone", such that when L1, L2 ∈ M then L1 + L2 ∈ M, and λL1 ∈ M holds for all λ > 0. Now, a risk measure may be defined as a real-valued function mapping the losses from the cone to a scalar measure: ρ : M → ℝ. The outcome of the measure of risk ρ is most often interpreted as either the riskiness of a portfolio, or as the amount of excess capital that is needed to make the portfolio "acceptable" from some risk management perspective.

This definition does not yet provide any guidance on what one would expect from a risk measure. Understandably, this is a subject of much debate. Risk, understood in its broadest sense as uncertainty, is naturally an elusive concept and these measures of risk necessarily reflect only some aspects of the available quantitative data. It then becomes a modeler's challenge to choose wisely between possible measures to most adequately represent this information. Clearly, this choice will be highly situational. In what has become a well-known article, Artzner et al. (1999) provide some direction by propounding several theoretical properties which they think a risk measure should possess. A risk measure satisfying all of these conditions is then considered to be "coherent". A brief overview of the article and the properties is provided in the following section.


3.2 Theoretical Properties for Coherent Measures of Risk

This section considers the properties of risk measures proposed in the article by Artzner et al. (1999). An earlier article, Artzner et al. (1997), presents the essentials of the later article with less technical detail. Lastly, Artzner et al. (1999) themselves explicitly point out that their approach does not lead to a single "best" risk measure. Instead, they state that economic considerations will be the decisive factor. Also, complications such as data availability are largely ignored by Artzner et al. (1999).

Having made these points, we continue by providing a simplified representation of the originally proposed properties. To begin with, let X and Y be random variables, representing losses, within the convex cone M, let ρ denote a risk measure and let α and λ be real numbers, with λ ≥ 0. The risk measure ρ maps these random variables to a real number. Artzner et al. (1999) originally state their axioms in terms of future net values; since M here contains losses, the axioms are given below in the equivalent loss-based form (cf. McNeil et al., 2015).

Translation Invariance: For the entire set of risks in M and all real numbers α,

ρ(X + α) = ρ(X) + α.

Subadditivity: For all combinations X and Y ∈ M,

ρ(X + Y) ≤ ρ(X) + ρ(Y).

Positive Homogeneity: For all λ ≥ 0 and all X ∈ M,

ρ(λX) = λρ(X).

Monotonicity: For all X and Y ∈ M with X ≤ Y,

ρ(X) ≤ ρ(Y).

Translation invariance implies that adding a sure loss α to a portfolio increases its risk by exactly that amount; conversely, a sure profit such as a cash injection diminishes it by the amount added. Subadditivity states that it is not possible to decrease a portfolio's risk by splitting it up. Conversely, this also means that diversification is possible and combining portfolios will not increase the total risk position. Positive homogeneity communicates that position size by itself should not affect the riskiness of the portfolio. Furthermore, the measure should be independent of the currency in which it is expressed. Finally, within these definitions, a risk measure satisfying monotonicity assigns higher values to risks that almost surely have larger future losses.

These properties are not conclusive and some of them have also been extensively debated. Subadditivity and positive homogeneity are considered to be the most controversial ones: the first because some find it arguable whether diversification is always possible, the second because it does not penalize risk concentrations or liquidity risks. Artzner et al. (1999) themselves state that especially the second concern should not be incorporated in the properties of risk measures themselves.

Alternatives and adaptations exist, such as the convex risk measures proposed in Föllmer and Schied (2002). Here the subadditivity and positive homogeneity axioms are replaced by the axiom of convexity:

Convexity: For X and Y ∈ M and λ ∈ [0, 1],

ρ(λX + (1 − λ)Y) ≤ λρ(X) + (1 − λ)ρ(Y).

Subsequent authors have also argued that, in certain settings, the axioms of Artzner et al. (1999) fall short of their intention. They find that in the specific case of mergers, subadditivity may lead to unintended results from a regulator's perspective. Therefore, they propose to improve upon the original set of axioms by replacing this criterion with one of their own construction, the "regulator's condition". This discussion underlines the comments made in the original article by Artzner et al. (1999) that economic reality is the decisive factor. Logically, it follows that coherence is in fact not all-important, but rather may serve as a concise signpost.

3.3 Value-at-Risk as a Measure of Risk

The discussion in the previous section allows us to address some of the most frequently encountered criticisms of VaR as a measure of risk.

3.3.1 Possible Shortcomings of VaR

Famously, VaR fails to satisfy the axiom of subadditivity. That is, it does not necessarily reward diversification. This issue had already been identified in the Artzner et al. (1999) article, where it is shown that, within a relatively simple context of diversification over bond issuers, VaR is not able to identify the concentration risk of lending capital to a single borrower.
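The following small sketch (my own construction in the spirit of that example; the default probability, loss amounts and confidence level are all assumed) makes the failure concrete with two independent defaultable bonds:

```python
import numpy as np

# Two independent bonds: each loses 100 on default (4% probability)
# and otherwise yields a small profit (a loss of -5). At the 95% level,
# each bond alone has a negative VaR, yet the pooled position does not.
rng = np.random.default_rng(seed=2)
n, p_default, level = 1_000_000, 0.04, 0.95

loss_x = np.where(rng.random(n) < p_default, 100.0, -5.0)
loss_y = np.where(rng.random(n) < p_default, 100.0, -5.0)

var_x = np.quantile(loss_x, level)             # about -5: looks riskless alone
var_y = np.quantile(loss_y, level)             # about -5
var_sum = np.quantile(loss_x + loss_y, level)  # about 95: a single default
                                               # now lies inside the quantile
print(var_x + var_y, var_sum)                  # VaR(X+Y) > VaR(X) + VaR(Y)
```

Pooling the two bonds pushes the default event inside the 95 percent quantile, so the measured risk of the diversified position exceeds the sum of the stand-alone figures, violating subadditivity.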

In some instances, VaR does satisfy all of the proposed axioms, including that of subadditivity. This class has been identified as encompassing the set of elliptical distributions. Elliptical distributions derive their name from the shape of their iso-density plots for low-dimensional joint distributions. In the two-dimensional case, for instance, all points with the same probability density form ellipses. Elliptical distributions generalize the multivariate normal distribution and include distributions such as the multivariate Student's t or the near-pathological Cauchy distribution. The latter will generally not appear in a financial context, however, as both its variance and its mean are undefined. In mathematics, distributions may be identified through their characteristic functions. For the elliptical family we have the following (univariate) definition from Valdez (2005):

Elliptical Distributions: The random variable, or risk, X is said to have an elliptical distribution with parameters µ and σ² if its characteristic function can be expressed as

E[exp(itX)] = exp(itµ) · ψ(t²σ²),

for ψ itself a characteristic function.

Like the univariate normal distribution, this definition shows that elliptical distributions may be fully characterized through two parameters: one for location and one for scale, or the mean and variance in the case of the normal distribution. For risk measures, this implies that VaR, whilst coherent in this instance, does not provide any additional information that might not be gained from these two measures. For elliptical distributions, a traditional Markowitz mean-variance approach is thus equivalent, largely defeating the purpose of using VaR for determining these risks according to Szegö (2004).
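To illustrate with the normal case (a standard result, see e.g. McNeil et al. (2015), not derived in this thesis): for a loss $L \sim N(\mu, \sigma^2)$, the quantile in equation (2.1) has the closed form

$$\mathrm{VaR}_p[L] = \mu + \sigma\,\Phi^{-1}(p),$$

where $\Phi^{-1}$ denotes the standard normal quantile function. The requirement is a fixed multiple of $\sigma$ above the mean, so VaR indeed conveys nothing beyond the location and scale parameters; analogous location-scale expressions hold across the elliptical family.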

Generally, distributions encountered in financial, and especially in insurance, practice will not be of an elliptical nature. Underlying variables may follow different probability distributions or there may be non-linearity present in the payoff function. Both of these issues would result in the risk measured by VaR deviating from a normal or elliptical distribution (Yamai and Yoshiba, 2004). It may for instance exhibit skewness or excess kurtosis. This possibility of leptokurtic, or "heavy-tailed", distributions leads us to the second major flaw inherent to the usage of VaR.

Importantly, VaR does not consider the distribution of the measured risk beyond the probability level p. This exposes it to what is known as "tail risk", or the existence of heavy tails. In other words, beyond the cut-off probability level, VaR does not provide any information on the distribution of the risk in question. It might therefore be that the last part of this distribution dies out particularly slowly. This would indicate a significant probability of an excessively large loss which is not identified by VaR. This issue could of course be dealt with by determining the VaR for more than one probability level p, analyzing VaR as a function of p. This idea is related to some of the alternative risk measures included in Appendix B. For a formal definition and more precise information on "thick" tails, see Section 7.3, "More dangerous risks", of Kaas et al. (2008).

3.3.2 Possible Benefits of VaR

The previous discussion begs the question why the EuC has chosen to adopt VaR as the basis for the SCR. The reasons for this are mostly of a practical nature. Since its introduction in 1994 in RiskMetrics and the subsequent promotion by J.P. Morgan and Reuters, VaR has become one of the most widely used measures in the financial industry. Furthermore, before it was considered for Solvency II, it had already been adopted by banking regulators. Combined with its, possibly deceptively, simple interpretation and thus communication, this makes it a logical choice from an implementation point of view.

Additionally, the alternative risk measures from Appendix B may suffer from estimation issues. At the 99.5 percent probability level there is likely to be a scarcity of available data. As it is located in the far end of the distribution, there are by definition few data points which may be used for calibration. VaR is then considered to be more robust in comparison, raising confidence in its value. As has been discussed, this value may still be unreliable when there are relevant losses beyond the 99.5 percent level.

At this point, it is also important to note that the concept of risk measures is closely related to the field of premium principles in actuarial science. See for instance Chapter 5 of Kaas et al. (2008), "Premium principles and risk measures", or, as an alternative, the article by Goovaerts et al. (2012), which establishes links between several risk measures more advanced than VaR and additionally shows how these may be extended from premium calculation principles to solvency applications. The first source states that the concept of coherency is not necessarily suited, or ironically, coherent, for an insurance context. Due to the abundance of incomplete market segments in insurance, diversification is not always possible. This may result in risk-averse individuals being willing to pay an extra premium for the combination of risks. Exponential and Esscher premiums are named as examples which actually are super-additive.
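For reference, the standard definitions of these two principles (as given in, e.g., Kaas et al. (2008); they are not restated in this thesis) are, for a risk X and parameters α, h > 0:

$$\pi_{\exp}(X) = \frac{1}{\alpha} \ln \mathbb{E}\big[e^{\alpha X}\big], \qquad \pi_{\mathrm{Esscher}}(X) = \frac{\mathbb{E}\big[X e^{hX}\big]}{\mathbb{E}\big[e^{hX}\big]}.$$

Both load for risk through an exponential weighting of large outcomes, which is what drives the super-additive behavior mentioned above.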

In this light, it may be better to view VaR as a "good enough" risk measure. From the standpoint of a regulator, it may be of greater interest to have a measure which fulfills its objectives of practicality and communicability. Also, VaR does not stand by itself, because it has been embedded in the larger Solvency II framework. On its own, it is indeed an all-or-nothing measure, which does not consider how large the losses may be in the very worst scenarios. Within Solvency II, this is not its intended usage. Rather, it forms a first buffer, which must be somewhat high and, through its computation, provide valuable information to supervisors, the shareholders and indeed to the insurer itself. Once it has been breached, this signals at a relatively early moment to the regulator that more attention is required.

On a more fundamental level, one may also question the feasibility of assessing a one-in-two-hundred probability for an entire insurance operation over the period of one year and accurately capturing this within a single number. Possibly it is more interesting to arrive at a sufficiently prudent buffer which is also somewhat informative. This would then make it understandable from the position of the EuC not to let the conceptual "flaws" of VaR be the decisive factor.

To get some idea of how VaR holds up to its most popular alternatives, Appendix B provides a succinct overview of their comparative attractiveness. From the information in the appendix, it follows that although these alternatives are certainly superior in conceptual terms, they might be more difficult to assess reliably in practice. Combining this with the discussion in the preceding section, we thus find that the usage of VaR for the determination of the SCR is a defensible choice from the perspective of the EuC. In any case, the choice has already been made and is unlikely to be altered in the near future. However, it may have been both interesting and informative if the EuC had decided to implement one of the alternatives as a secondary measure. Having said this, the following sections are dedicated to Solvency II's prescriptions for the actual calculation of the SCR.


Chapter 4

Determining the SCR

At the very start of this thesis, it was already pointed out that regulation implies a trade-off. On the one hand there is the added benefit of smoothing out market imperfections; on the other hand there are the heightened costs inherent to the implementation of regulatory requirements. This is particularly relevant for smaller organizations, for which the extra cost burden may be too great to bear. Solvency II recognizes this issue and seeks to compensate for it so that smaller organizations are not forced out of the market.

One way it does this is by providing numerous allowances for simplifications in the calculation of the SCR. Also, for some of the smallest insurers, Solvency II Basic exists in the Netherlands, specifically tailored to this group. The very smallest insurers do not fall under the supervision of a regulator at all, as they are not considered to be of significant economic consequence. On a larger scale, the Solvency II Directive stipulates the usage of a standard approach. This standardized approach ensures that insurers do not have to start from scratch in assessing their solvency position, thus lightening the regulatory burden. The following section details the mechanics of the prescribed method.

4.1 The Standard Method for SCR Determination

In order to substantiate the standardized approach, Solvency II breaks down an insurer's position into several risk modules. These consist primarily of market, life, non-life and health risk and may in turn be comprised of multiple submodules (DNB, 2014d).

These (sub)modules are then aggregated from the bottom up according to what is known as the standard formula. Here Solvency II stipulates (linear) correlation coefficients between the different modules as a measure of their interdependence, which must be taken into account when adding up these risks. Anything but a perfect correlation indicates some diversification, lowering the aggregate capital requirement.

This notion of correlation-adjusted aggregation is closely related to the concept of elliptical distributions introduced in Section 3.3. As such, it may provide a poor representation of reality due to the abundance of non-normality and asymmetry among distributions in practice. Appendix D elaborates on this issue.

Implementation mainly consists of calculating the effects of predetermined scenarios on the balance sheet, after which the outcomes are aggregated as described above. Additionally, operational risk is quantified through a factor-based approach and added to the aggregated capital requirement in the final step. Other noteworthy modules include the counterparty default risk and concentration risk modules.
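As a minimal illustration of such a scenario calculation, consider the sketch below, which computes an equity risk charge as the loss of net asset value under an instantaneous shock. The balance sheet figures are hypothetical and the 39% shock is merely indicative of the standard formula's calibration for type-1 equities; refinements such as the symmetric adjustment are ignored.

```python
# Minimal sketch of a scenario-based module charge (illustrative figures only).
# The standard formula prescribes an instantaneous shock per (sub)module; the
# capital charge is the resulting loss of net asset value, floored at zero.

def scenario_scr(assets: float, liabilities: float,
                 shocked_assets: float, shocked_liabilities: float) -> float:
    """Capital charge = change in net asset value under the prescribed shock."""
    nav_base = assets - liabilities
    nav_shocked = shocked_assets - shocked_liabilities
    return max(0.0, nav_base - nav_shocked)

# Hypothetical balance sheet: 100 of type-1 equity, 50 of other assets, 120 of liabilities.
equity, other, liab = 100.0, 50.0, 120.0
shock = 0.39  # indicative type-1 equity stress; ignores the symmetric adjustment

scr_equity = scenario_scr(assets=equity + other, liabilities=liab,
                          shocked_assets=equity * (1 - shock) + other,
                          shocked_liabilities=liab)
print(f"Equity module charge: {scr_equity:.1f}")  # 39.0
```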


4.2 Standard SCR Aggregation Formula

The procedure described in the previous discussion can be captured in a mathematical formula. This expression takes the form of what is known as the "square-root formula" due to its typical shape. First, define $d \in \mathbb{N}$ risk modules with corresponding loss random variables $L_i$, $1 \le i \le d$. Now, let the aggregate loss $L$ over one year be $L = \sum_{i=1}^{d} L_i$. When the SCR is defined as the $p = 99.5\%$ VaR level, $L$ is a linear combination of the module losses and the $L_i$ are jointly normally distributed, the following formulation is justified:
$$\mathrm{SCR} = \mathrm{VaR}_p[L] = \sum_{i=1}^{d} \mu_i + \sqrt{\sum_{i=1}^{d} (\mathrm{SCR}_i - \mu_i)^2 + 2 \sum_{1 \le i < j \le d} \varrho_{ij} (\mathrm{SCR}_i - \mu_i)(\mathrm{SCR}_j - \mu_j)}.$$
Here, $\mu_i = \mathrm{E}[L_i]$, $\mathrm{SCR}_i = \mathrm{VaR}_p[L_i]$ and $\varrho_{ij}$ denotes the linear correlation coefficient between risk variables $L_i$ and $L_j$ as imposed by Solvency II.

The expression can be shown to be sensible in the case of VaR when $p > 0.5$ and when the module losses are indeed jointly normally distributed. The derivation is included in Appendix C, alongside Proposition 3.1 from McNeil et al. (2009), which states that the formula is applicable under the more general conditions of jointly elliptically distributed module losses and a risk measure satisfying only the first and third of Artzner et al. (1999)'s axioms as well as law-invariance. Law-invariance implies that two identically distributed risks will result in equivalent values of the risk measure. Both VaR and the main alternatives from Appendix B satisfy these conditions.
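To make the mechanics of the square-root formula concrete, the following sketch aggregates three hypothetical module charges under the common simplification $\mu_i = 0$, for which the formula reduces to $\mathrm{SCR} = \sqrt{\sum_{i,j} \varrho_{ij}\,\mathrm{SCR}_i\,\mathrm{SCR}_j}$. The module charges and correlation coefficients are invented for illustration; Solvency II prescribes its own values per module pair.

```python
import numpy as np

# Hypothetical module charges and correlation matrix (illustrative values only;
# Solvency II prescribes its own coefficients for each module pair).
scr_modules = np.array([80.0, 60.0, 40.0])          # e.g. market, life, non-life
corr = np.array([[1.00, 0.25, 0.25],
                 [0.25, 1.00, 0.25],
                 [0.25, 0.25, 1.00]])

# Square-root formula with zero module means (mu_i = 0):
# SCR = sqrt( sum_{i,j} rho_ij * SCR_i * SCR_j )
scr_aggregate = float(np.sqrt(scr_modules @ corr @ scr_modules))

print(f"Sum of stand-alone charges: {scr_modules.sum():.1f}")   # 180.0
print(f"Aggregated SCR:             {scr_aggregate:.1f}")       # ~129.6
```

The aggregated requirement lies below the sum of the stand-alone charges whenever any correlation is below one, which is precisely the diversification effect described above.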

Crucially, the real-world performance of this aggregation formula will depend on the true underlying distributions and their dependence structure. Some insurers may find that their situation is captured rather poorly by these prescriptions in terms of modules, scenarios and, importantly, linear dependency. This is particularly relevant for larger or more specialized organizations, for which the stakes are higher and whose ability to assess their own risk is more advanced. They stand to benefit most from a more detailed exposition of their structure and exposures, either through lowering overall capital requirements or by obtaining more insight into their risk position.

Solvency II acknowledges this viewpoint as well and accommodates it by providing the possibility of "internal models". These allow organizations to propose their own assessment of their risk positions, albeit within some constraints. From an actuarial or financial engineering point of view, this is one of the most interesting aspects of the new regulatory framework. It is also the component of the Solvency II framework where the LSMC technique comes into play. Therefore, after an introduction of the subject in the following section, this will be the working environment for the remainder of this document.

4.3 Internal Models and Relevant Considerations

As has been mentioned in earlier sections, the option of internal models (IM) allows insurance undertakings to develop risk assessment models specifically tailored to their own organization. However, every IM needs to gain approval from a national regulator before it may be used for the determination of an insurer's SCR. This approval involves meeting stringent requirements but does not enforce a certain form on the IM. Additionally, insurers will also set out their own needs which their IM must meet.

From the supervisor's perspective, requirements concerning the following issues have been put forward by Solvency II: suitability for the company's risk profile, appropriateness of underlying foundations and assumptions in conceptual and mathematical terms, and the level of governance. Organizations must also be able to demonstrate how they will use these models within their business. Lastly, insurers are expected to provide internal model validation, after which supervisors will evaluate the process. Guidance on implementation is provided by DNB (2013), DNB (2014a) and CEIOPS (2009).

The considerations of an insurer may be split into a business segment and a technical segment. From a business point of view, the IM needs to pair flexibility of input with rapid, but accurate, output. Also, its workings need to be sufficiently intelligible so that it may be understood at higher management levels. Finally, it will require repeatability and auditability. Technical considerations involve issues such as assessing the divergence from the standard approach, modeling operational restrictions and allowing efficient capital allocation (Rowland, 2013).

Due to its scope and complexity, the implementation of an IM at the level of an entire insurance undertaking is one of the most challenging tasks the insurance industry faces. Therefore, we do not set out to capture this in its totality and diversity. Rather, several key issues are selected which are not only relevant for the construction of IMs, but also have intrinsic academic and actuarial value.

4.4 The SCR for Internal Models

The standard approach provides a clear-cut definition of the SCR through the standard aggregation formula. This is not the case for IMs, however. Article 101 of the Solvency II Directive only contains a definition in descriptive form, which allows for some leeway in its mathematical translation. The exact meaning of a statement like "the SCR ensures solvency in one year's time with a given probability level" is thus debatable.

Christiansen and Niemeyer (2014) examine this issue and identify several possible interpretations available in the Solvency II literature. The contributions of Barrieu et al. (2012), Bauer et al. (2010), Devineau and Loisel (2009), Kochanski (2010) and Ohlsson and Lauzeningks (2009) are named as some of the few which, in their view, provide definitions with sufficient mathematical substance. Three main definitions are identified in this paper, involving either discount factors, martingale measures or a minimization problem. These approaches do not necessarily converge to the same outcome, as is evidenced by a simple example provided on page 4 of Christiansen and Niemeyer (2014). The main issue that arises is the appropriate discount factor to be used. Christiansen and Niemeyer (2014) find that depending on this choice, some of the approaches may be equivalent, but others may not.

For future purposes, the definition from Bauer et al. (2010) will be used. Both the original article and Christiansen and Niemeyer (2014) conclude that it is an intuitive definition of the SCR. For calculation purposes, Bauer et al. (2010) first define the one-year loss function at $t = 0$ as
$$L \overset{\mathrm{def}}{=} AC_0 - \frac{AC_1}{1 + i}.$$

Here $i$ equals the one-year risk-free rate at $t = 0$ and $AC_t$ retains its previous definition as the difference between assets and liabilities. Using this, they define a simpler version of the SCR introduced in the first chapter, usable for practical applications:
$$\mathrm{SCR} \overset{\mathrm{def}}{=} \operatorname*{argmin}_x \left\{ P\left( AC_0 - \frac{AC_1}{1 + i} > x \right) \le 1 - \alpha \right\}.$$

The same simplification is used in the SST for determining its capital requirements (Bauer et al., 2010). The objective thus becomes to establish the (stochastic) distribution of $AC_1$, the available capital in one year's time. The solvency capital ratio may then be defined as
$$\mathrm{SR} = \frac{AC_0}{\mathrm{SCR}}.$$
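As a sketch of how this definition could be operationalized, the snippet below estimates the SCR as the empirical 99.5% quantile of the discounted one-year loss, given a sample of $AC_1$. The lognormal distribution for $AC_1$ is a placeholder assumption; in a realistic internal model each $AC_1$ scenario would itself require an expensive inner valuation, which is precisely the computational burden motivating techniques such as LSMC.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Placeholder assumption: AC_1 drawn from a lognormal distribution. In practice
# these samples would come from the insurer's internal model.
ac0 = 100.0   # available capital today
i = 0.01      # one-year risk-free rate
ac1 = ac0 * (1 + i) * rng.lognormal(mean=-0.02, sigma=0.2, size=100_000)

# Discounted one-year loss: L = AC_0 - AC_1 / (1 + i)
losses = ac0 - ac1 / (1 + i)

# SCR = smallest x with P(L > x) <= 1 - alpha, i.e. the alpha-quantile of L.
alpha = 0.995
scr = float(np.quantile(losses, alpha))
solvency_ratio = ac0 / scr

print(f"Estimated SCR:  {scr:.2f}")
print(f"Solvency ratio: {solvency_ratio:.2f}")
```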


4.5 Methodology for Implementing Internal Models

In this chapter, the two main options for determining Solvency II capital requirements have been introduced. Insurers may either opt for the simple standard approach, or choose to derive their SCR through a more complex internal model. Furthermore, it has already been observed that the standard approach may provide a poor representation of an insurer's solvency position. This topic is further explored in Appendix D, where empirical research is examined studying the effects of using the standard formula when its assumptions are not satisfied. In summary, the results are that the standard formula becomes both unreliable and uninformative in more realistic settings and thus does not provide a framework for the further applications needed for decision making.

Therefore, many of the larger or more specialized organizations choose to implement an alternative methodology for their internal capital assessments. This method is known as integrated or (full) stochastic modeling. Here, the balance sheet of an organization is projected forward in time in order to obtain the entire distribution of its future capital. The need to project either financial results or capital (requirements) arises in several settings, ranging from Solvency II to IFRS to the pricing of complex products. Methods derived for this purpose will thus serve multiple uses in a business. Solvency II has however provided an extra impetus to this development, because the usage of an IM necessitates stochastic modeling.

In general, projecting the entire balance sheet item by item will not be a feasible approach. The integrated methodology thus identifies several risk factors, or drivers, which are chosen to adequately represent the exposures of the undertaking. These may for instance include relevant equity indices, interest rates or mortality figures. For these, a full stochastic model is constructed which then may be used to assess the state of these risk drivers at future times.

As an illustration, consider $d$ risk factors at time $t$: $\vec{Z}_t = (Z_{t,1}, \ldots, Z_{t,d})$. The multivariate stochastic process is then represented by $\vec{Z} = (\vec{Z}_s,\ s \ge t)$. In order to derive pay-offs or risks from these risk drivers, a mapping from the risk factors to the original risks needs to be constructed. Let $V_s$ be a relevant quantity, such as a portfolio value, at a future time $s$; then at time $t$ we are interested in evaluating the expression
$$V_s = f_s(\vec{Z}_s, s).$$
Here, the mapping $f_s$ needs to be available and contain all relevant information on portfolio composition and valuation formulas. Because the stochastic process $\vec{Z}$ is not generally analytically tractable, an Economic Scenario Generator (ESG) is adopted to evaluate such expressions, employing simulation techniques.
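A stylized, single-driver version of this setup is sketched below: the risk driver $Z$ follows a geometric Brownian motion and a hypothetical mapping $f_s$ translates its simulated values into portfolio values. All parameters and the portfolio composition are invented for illustration; a real ESG would simulate many drivers jointly.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Stylized single risk driver: an equity index Z under geometric Brownian motion.
z0, mu, sigma = 1.0, 0.03, 0.20   # initial level, drift, volatility (invented)
s = 1.0                           # projection horizon (one year)
n_scenarios = 10_000

normals = rng.standard_normal(n_scenarios)
z_s = z0 * np.exp((mu - 0.5 * sigma**2) * s + sigma * np.sqrt(s) * normals)

# Mapping f_s from the risk driver to the portfolio value: here a hypothetical
# portfolio of 70 in the index and 30 in cash accruing at 1% per year.
def f_s(z: np.ndarray, s: float) -> np.ndarray:
    return 70.0 * z + 30.0 * (1.01 ** s)

v_s = f_s(z_s, s)
print(f"Mean portfolio value at s=1: {v_s.mean():.2f}")
```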

The necessary framework for these simulation techniques is further developed in Chapter 5. As we progress through that chapter, it will become apparent why the fully stochastic methodology necessitates advanced techniques such as LSMC. Appendix D contains more information on stochastic modeling, ESGs and their benefit relative to methods such as the standard formula.

As a last remark, it is not generally necessary to replace the standard approach completely when opting for an IM. Solvency II includes options for the development of partial IMs. Here, an insurer pinpoints the aspects of its undertaking which require more accurate risk modeling and integrates these local internal methods into the standard formula. This integration is subject to some conditions itself, but this will not be discussed here.


Chapter 5

Using Simulation to Establish Required Capital

This chapter provides the necessary framework for the simulation techniques employed in this thesis. Also, the problem of determining SCRs will be cast into this setting. From this, it will follow in Section 5.2 that it is necessary to develop further techniques such as LSMC for the approach to be feasible. The simulation techniques used refer to Monte-Carlo (MC) random sampling, the basis for which is provided in Section 5.1.

Furthermore, one of the benefits of stochastic modeling is that it enables the determination of efficient capital allocation. This topic may be reviewed in Appendix G. The following section starts with the mathematical framework underlying MC simulation. Appendix H may be consulted for the history and intuition behind this technique. An interesting topic touched upon in the appendix but not discussed in this chapter is the fact that MC simulation lends itself well to computer systems employing parallel computing; Appendix H contains a small discussion indicative of its potential. The topic has become increasingly relevant over recent years due to the still-increasing capabilities of computers and the availability of cloud computing. The latter allows individual users access to larger processing units while demanding a much smaller investment.
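As a small indication of this potential, the sketch below distributes independent MC batches over worker processes using only Python's standard library; the batch function estimates $\pi$ via the unit-square example discussed below. The batch sizes, seeds and pooling strategy are arbitrary choices, not a prescription.

```python
from concurrent.futures import ProcessPoolExecutor

import numpy as np

def mc_batch(seed: int, n: int = 250_000) -> float:
    # One independent MC batch: estimate pi as 4 times the fraction of points
    # drawn from the unit square that fall inside the unit circle.
    rng = np.random.default_rng(seed)
    x, y = rng.uniform(size=n), rng.uniform(size=n)
    return 4.0 * np.mean(x**2 + y**2 <= 1.0)

if __name__ == "__main__":
    seeds = range(8)  # one batch per worker; distinct seeds keep batches independent
    with ProcessPoolExecutor() as pool:
        estimates = list(pool.map(mc_batch, seeds))
    print(f"Parallel MC estimate of pi: {np.mean(estimates):.4f}")
```

Because the batches are statistically independent, averaging their results is equivalent to one large run, which is what makes MC simulation so amenable to parallel and cloud computing.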

5.1 Monte Carlo Simulation

This section first provides the mathematical foundation of MC simulation. Additionally, basic issues such as faster convergence and estimation bias are discussed.

5.1.1 Mathematical Foundation of MC Simulation

As Glasserman (2003) explains, probability may be formalized mathematically by interpreting it as a set with a volume relative to the universe of all possible outcomes. MC methods, on the other hand, determine volumes by interpreting them as probabilities. The most well-known example of this is using the method to determine $\pi$ by randomly sampling from the unit square and counting how many points fall within the unit circle. From volumes we may extend to integrals by interpreting them as expectations and drawing uniformly and independently from their domain. Glasserman (2003) offers the following simple illustration for an integral over a function $f$:
$$\alpha = \int_0^1 f(x)\,dx.$$
Given $n$ independent drawings $U_i$ from $U \sim \mathrm{Uniform}(0, 1)$ we then obtain the MC estimate
$$\hat{\alpha}_n = \frac{1}{n} \sum_{i=1}^{n} f(U_i).$$
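As a minimal check of this estimator, the sketch below applies it to a toy integrand with a known answer, $\int_0^1 x^2\,dx = 1/3$; the choice of integrand and sample size is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def f(x):
    # Toy integrand with a known integral over [0, 1]: int x^2 dx = 1/3.
    return x ** 2

n = 100_000
u = rng.uniform(0.0, 1.0, size=n)   # n independent draws U_i ~ Uniform(0, 1)
alpha_hat = f(u).mean()             # MC estimate: (1/n) * sum f(U_i)

print(f"MC estimate: {alpha_hat:.4f} (exact: {1/3:.4f})")
```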
