Climate change and insurance : Projection of premium level developments in a competitive property insurance market


Academic year: 2021



UNIVERSITY OF AMSTERDAM

ACTUARIAL SCIENCE MASTER THESIS

Climate change and Insurance

Projection of premium level developments in a competitive property insurance market

Author: M.A. Mattens BSc

Student number: 10532617

Supervisor: dr. T.J. Boonen

Second reader: dr. S.U. Can

July 15, 2017

Periods 5 and 6, 2016/2017


Statement of Originality

This document is written by Melchior Mattens, who declares to take full responsibility for the contents of this document.

I declare that the text and the work presented in this document is original and that no sources other than those mentioned in the text and its references have been used in creating it.

The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.


Abstract

Climate change is an important underlying risk factor for property insurers. Four risk segments for property insurers have been recognized as (possibly) affected by climate change: hail, wind, rainfall and lightning. A model for (aggregate) claim sizes is proposed, together with a competitive market model for insurance. Different market conditions are considered, and premium and default rate developments are projected over a 30-year horizon. In this way, the effect of climate change on marginal costs per policy can be distinguished from the effect on the profit margin for insurers. Simulations in this thesis indicate that climate change contributes to both higher marginal costs and higher default rates, leading to less market competition. Consequently, the model predicts that premiums rise on average by up to almost 20% due to climate change. More market competition (e.g. more competitors or a higher price elasticity) appears to lead to both higher default rates and larger relative premium increases.

Keywords: Climate change, Property insurance, Insurance market competition, Solvency capital requirements, Claim size modelling, Default rates


Contents

Preface 5

1 Introduction 7

2 Literature study 9

2.1 Climate change . . . 9

2.2 Insurance market modeling . . . 12

2.3 Solvency II . . . 15

3 Theoretic framework 17

3.1 Claim model . . . 17

3.1.1 Storm claims . . . 18

3.1.2 Hail claims . . . 19

3.1.3 Rainfall . . . 21

3.1.4 Lightning . . . 23

3.1.5 Assumption support . . . 25

3.2 Insurance market model . . . 27

3.3 Estimating the marginal cost per policy . . . 32

3.4 SCR and premium determination . . . 34

3.5 Parametrization . . . 36

3.6 Research set up . . . 44


5 Conclusion 56

Appendices 61

A Proof of Theorem 3.1 62

B SCR calculation 67

B.1 Premium and reserve risk . . . 67

B.2 Catastrophe risk . . . 68

B.2.1 Storm catastrophe risk . . . 68

B.2.2 Hail catastrophe risk . . . 69

B.2.3 Catastrophe risk aggregation . . . 70

B.3 Aggregation of underwriting risk . . . 70

B.4 Counter party default risk . . . 71

B.5 Total SCR . . . 72

C Claim size distribution characteristics 73

D Temperature model 75

E Model parameter overview 80

F Impact of regulation on default probabilities 82


Preface

Over the last two years, I have been orienting myself on and working in the non-life insurance sector. Given my personal background, e.g. as an active member of the animal rights party (PvdD) in the Netherlands, I was already interested in the way insurers deal with climate change. Sadly, my experience was that insurers seem to acknowledge climate change as a possible risk factor and threat, but many do not try to quantify the risk properly. Especially when it comes to pricing insurance products, I have heard more than once that the non-life insurance market is too competitive to add any safety loading for climate change risk at this moment. In my opinion, in a world where monetary valuation is essential, the externalities of human behavior should be emphasized more. One way to achieve this is by projecting future insurance premiums. This triggered me to write a thesis on this topic.

Based on all the evidence of climate change and its possible consequences that I have come across during my research, I would like to express my hope that this topic will be discussed more often in the actuarial field of research and professional work. I am well aware of some shortcomings of the claim models proposed here, of which the tail risk might be most important. These tail risks are related to the most extreme and unprecedented weather conditions. In the end, insurance can only cover financial losses due to unforeseen events. However, for most people, suffering material losses and especially the loss of life cannot be made up for by financial compensation. Stressing both the financial and the material consequences to the public, more often and more loudly, should become part of the job of the actuarial society and of insurers in particular.

Finally, there have been many challenges in writing this thesis. Through my work experience at Arcturus B.V., I have become more acquainted with solvency capital requirement legislation (Solvency II). Next to this, I acquired more knowledge about claim size modeling methods.


I would like to express my gratefulness for this to all people at the company. I would also like to thank my thesis supervisor dr. Tim Boonen for giving very constructive feedback and assistance whenever needed. I appreciate all the help I received, whether it was advice on the content of my thesis or supporting me in many other ways.


1. Introduction

Over the last few years it has become clearer how climate change is going to affect our daily lives. While the melting of the ice in both polar regions seems to accelerate, the variability of the weather grows at other latitudes as well. This should be of special concern to property insurers and insurers in the agricultural sector, as the risk drivers of their portfolios might become substantially more dangerous than they used to be. However, many insurance companies do not yet explicitly model climate change as part of their pricing procedure, nor incorporate it into an "internal model", which can replace the use of the standard formula under Solvency II regulation. This leaves a void, since insurers generally do not forecast their solvency in the medium to long run in any other way than by a subjective "ORSA" scenario set.

This apparent lack of understanding of how the insurance market might develop over time poses a danger to policyholders, as dividends or discounts are given now, whereas premiums might rise sharply in the coming decades. Moreover, it is questionable whether discounts given on premiums now are actually in the interest of policyholders if the probability of default of insurers goes up. On the other hand, one can imagine that a higher probability of default leads to mergers of insurance companies in order to diversify away risks. This might have some potential benefits, like cost reduction and lower risk premiums. However, it might be the case that less competition leads to higher premiums, so the disadvantages might outweigh the advantages of mergers and bankruptcies. The question arises how insurers deal with these problems if they aim for profit optimization. The purpose of this thesis is to quantify the influence of climate change on default probabilities and market premium trends.

A way of substantiating climate change risk is a Monte Carlo analysis of the individual risk drivers that are affected by climate change. However, as the insurance market is competitive, premiums cannot simply go up after the insurer has suffered a big annual loss. In real life, insurers face many more constraints that they have to satisfy, like capital requirements under the Solvency II framework and keeping shareholders happy by sometimes paying out dividends. This case is quite similar to that of life insurers, where longevity risk is a similar underlying risk driver as climate change risk is for non-life insurers. Therefore, the models considered in this thesis are relevant to the life insurance industry as well. The key notion is that actuarial valuation methods are affected by the underlying risk, whereas actual premiums are set by a management that also considers market conditions.

The exact influence of climate change of course depends on the geographical location in which an insurer is situated. It is not the purpose of this thesis to consider big reinsurance companies, which can diversify their risks on a global scale. Not only is the effect of climate change on a global scale much harder to predict for insurance companies, as so many kinds of risks are involved; their portfolios also consist of so many different risks that climate change risk might only marginally change their result. Next to this, the effect of reinsurance on the underwriting result is left out of the model, as reinsurance premiums usually tend to fluctuate with actual suffered losses on a country scale, and the insurance companies considered are sufficiently large to retain the risk.

For this reason, the only feasible option to substantiate climate change risk is on either a country scale or, at largest, a continental scale. In this thesis, a claim generating model is specified for property insurance, a branch of insurance which is most likely to suffer badly from climate change risk, as houses are fully exposed to natural catastrophes like storm, hail and excessive precipitation. This model is specified in such a way that it can be used for many different countries, but the parameters are calibrated for the Dutch case. To make the model as realistic as possible, the goal is to carry out a (time series) analysis of multiple weather-related variables (like storm, precipitation, hail and lightning, using KNMI data) and, with the help of research papers on climate and weather patterns, to extrapolate trends for the expected number of events, the expected number of claims coming from an event, and extreme event probabilities. By doing research on observed claim characteristics from the past, the claim generating model should be fairly similar to the true risk model. This means that, in line with the aforementioned purpose of the thesis, the influence of climate change for Dutch insurers and their policyholders should be quantified in a careful and prudent way.


2. Literature study

2.1 Climate change

In the most recent climate report of the Royal Dutch Meteorological Institute (KNMI, 2015), many different aspects of the Dutch climate are specified. The report mentions a few risk drivers for property and agricultural insurance, like storm, lightning, hail, drought and excessive precipitation. Next to this, the KNMI provides projections of the change of "the average" between 2015 and 2050 under 4 different climate scenarios. As there is uncertainty about the change in temperature and the change in wind patterns, the KNMI specified two different temperature scenarios and two different scenarios for the changing wind patterns, and let these temperature and wind pattern scenarios overlap. The changes in risk driver characteristics over time are therefore specified per scenario.

However, the projections and characteristics in the KNMI report do not provide full insight into the processes behind the changes in the risk drivers. As was observed in June 2016, an extreme hail storm hit parts of the Netherlands, causing an unprecedented amount of losses for insurers. What was so special about the weather conditions on June 23rd 2016? In order to understand this, a paper by Lenderink et al. (2011) on the intensity of extreme rainfall in a changing climate is very insightful. This paper elaborates on rainfall extremes and their relation to the dew point (or condensation) temperature. If the average annual temperature increases, the daily temperature extremes also increase. The Clausius-Clapeyron relationship states that the maximum humidity level increases by 7% per degree Celsius of temperature increase. However, according to Lenderink et al. (2011), there is no indication that the hourly intensity extremes will change in the same way as the maximum humidity level. As is noted by Lenderink et al. (2011), the average humidity level is not likely to increase much and might even decrease a bit in the summer due to dehydration of the soil. Nonetheless, as was shown by data analysis of the dew point temperature and the hourly rainfall sums (series of a few decades in De Bilt and Hong Kong), there seems to be a stronger relationship between the dew point temperature and hourly rainfall extremes than the Clausius-Clapeyron (CC) relationship, i.e. 14%, or two times the CC relationship. In other words, whenever the condensation temperature is one degree Celsius higher, the quantiles of the distribution of the hourly rainfall sums are shifted by 14%.¹ In Figure 2.1, the results of the data analysis by Lenderink et al. (2011) are replicated, showing the quantiles of hourly rainfall at a given condensation temperature.

Figure 2.1 Hourly rainfall intensity and dew point temperature (stochastic) relationship (Lenderink et al., 2011).

¹ Lenderink et al. (2011) came up with an explanation why this relationship might hold. First of all, as the CC relationship indicates, the maximum humidity level is higher when it is warmer. Next to this, the condensation temperature is higher whenever air contains more water vapor. As a consequence of condensation, warmth is released, reinforcing an upward air current that brings the still quite humid air to higher, colder parts of the cloud. There, all remaining water vapor condenses. The warmer it is, the faster this process goes and the more water vapor condenses. Consequently, extreme hourly rainfall can become even more extreme.
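The two-times-CC (14% per degree) scaling discussed above can be sketched numerically. This is a minimal illustration; the baseline quantile value below is a hypothetical number, not taken from Lenderink et al. (2011):

```python
# Sketch of the super-Clausius-Clapeyron scaling reported by Lenderink et al.
# (2011): quantiles of hourly rainfall sums shift by about 14% per degree
# Celsius of dew point temperature. The 20 mm baseline is hypothetical.

def scaled_rainfall_quantile(base_quantile_mm: float,
                             dew_temp_delta_c: float,
                             rate: float = 0.14) -> float:
    """Shift a rainfall quantile by `rate` per degree of dew point change."""
    return base_quantile_mm * (1.0 + rate) ** dew_temp_delta_c

# A (hypothetical) 20 mm hourly extreme at the reference dew point, evaluated
# at a dew point 2 degrees higher:
q2 = scaled_rainfall_quantile(20.0, 2.0)   # 20 * 1.14**2 = 25.992 mm
```

With `rate=0.07` the same function reproduces the plain CC relationship, which makes the factor-two difference between the two scalings easy to compare.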


Closely related to rainfall is hail. As stated in the KNMI climate report (2015), more condensation and stronger upward air currents can lead to more and heavier hail events.² Lightning events can also cause more damage, as the KNMI report indicates that the number of lightning strikes during a lightning event can increase by 10 to 15 percent per degree Celsius of temperature increase.

On the other hand, the KNMI report states that it is not certain whether storm conditions will become more frequent in the future. In an article published by the KNMI in 2013 on severe storms in Europe in a warmer climate, as well as in Haarsma et al. (2013), it is noted that neither the severity nor the frequency of normal winter storms will increase. As can be seen in the left graph in Figure 2.2, the number of days with storm conditions above the North Sea seems to have fluctuated considerably during the last century.

Nevertheless, Haarsma et al. (2013) based their projections on a denser grid projection model, which enabled them to make projections for small weather systems like tropical cyclones. Much like rainfall extremes coming from local showers (Lenderink et al., 2011), smaller storms with hurricane wind speeds might hit the western European coast more often and more strongly than in the past (Haarsma et al., 2013). The rationale behind this is that it becomes warmer at higher latitudes, so tropical storms remain powerful for longer. Next to this, the research indicates that tropical storms might arise further to the east due to increasing ocean surface temperatures. This means that the distance the storms need to cover to reach the western European coast is shorter. Lastly, the higher humidity levels above the ocean at higher latitudes might reinforce the strength of a tropical storm on its way to Europe. In the middle and right graphs in Figure 2.2, the simulation results from the EC-EARTH model used by Haarsma et al. (2013) are depicted. Clearly, the number of severe storms expected to reach the North Sea area is likely to increase.

² If the upward air current is strong enough (which is more likely on warm days) and the air layer at the top of the cloud is cold enough, water drops start to freeze together. Once they fall to lower parts of the cloud, the strong air currents bring the hailstones up to higher parts again, where condensation water can freeze onto them. If this process continues for long enough (i.e. if the upward air currents are strong enough given the temperature and humidity conditions, by the same reasoning as for rainfall by Lenderink et al. (2011)), hailstones can potentially grow very big.


Figure 2.2 Left: number of storm days per year above the North Sea (KNMI, 2015); Middle: frequency of 11 or 12 bft 3-hour wind averages above the North Sea for the end of the 21st century (red) and now (blue) (Haarsma et al., 2013); Right: average number of hurricanes simulated in a 30 year period in the current climate (blue) and the future climate (red) for multiple regions (Haarsma et al., 2013).

Importantly from an insurance perspective, the KNMI (2015) notes that severe storms above land will become less powerful and less frequent as the metropolitan area of the Randstad expands. If wind currents are broken by obstacles like buildings or forests, the wind speed reduces quickly, making it more unlikely that storms cause damage further away from the coastal area. However, as the EC-EARTH model outcomes indicate that extremely heavy storms might reach the Dutch coast more often in the future, this metropolitan region in particular is at risk. Therefore, insurers with big aggregate insured sums in the provinces of North Holland and South Holland face increasing natural catastrophe risk due to windstorm.

2.2 Insurance market modeling

Insurance is based on the principle of shifting risk to someone who wants to bear that risk in exchange for financial compensation. This compensation is the premium that insurance companies charge their policyholders. A topic to which a lot of attention is paid in actuarial science is determining this premium, i.e. the pricing of insurance products. From an actuarial perspective, a premium could consist of a net premium, which is simply the expected value of the risk, plus a safety and cost loading (Kaas et al., 2008). The safety loading in particular gives a different view on insurance practice, since the question arises how high this safety loading can be.

Just like its policyholders, an insurance company can be assumed to be risk averse, due to solvency capital regulation and shareholders' loss aversion. Therefore, adding a safety loading to the net premium makes sense. This could provide a lower bound for what insurance companies at least want in return for taking on risk. However, the upper bound is more interesting, as rational profit-optimizing insurance companies would rather charge a high premium. This upper bound is determined by a competitive market structure. Many applicable market structures³ have been proposed. Bertrand (1883) and Hotelling (1929) provided very classical market models which are used in many different settings. In the model specified by Bertrand, it is assumed that the total number of products sold depends on the price charged by the seller and on the prices charged by its competitors. All buyers in the market are assumed to be extremely price sensitive: if any one of the sellers decides to sell his products at a price slightly below the price of the others, all consumers want to buy at the lowest price, meaning that everyone but the seller with the lowest price does not sell a single unit. This is the most extreme case of price sensitivity. Hotelling specified a model which takes the costs of buying into account. This model was based on the insight that buyers base their decision not only on the price, but also on matters like goodwill, (alleged) product quality and many other things. This constitutes a so-called distance to each seller, which is costly to bridge. In this model, a relatively elastic market is created, as charging a higher price for your products does not necessarily mean that you lose all your customers.

However, more useful market models for the insurance market have been specified by Taylor (1986), Emms et al. (2007), Warren et al. (2012) and Wu and Pantelous (2017). Taylor looked at the solution to the optimization problem of the net present value of insurance profits over a finite time horizon, as a function of the premium of the insurer. He specified a market demand model which depends on the insurer's premium and the average premium charged by competitors. Emms et al. (2007) contributed to this by making the average premiums of competitors stochastic and by looking at different premium strategies in order to optimize a utility function depending on the wealth of the insurer. Warren et al. (2012) took a more game-theoretical approach, specifying a rather simple market model for which they derive the optimal response functions. Wu and Pantelous (2017) derived analytic results on the insurer's optimal utility function in a game setting satisfying a Nash equilibrium. In their paper, a more complicated market structure is considered, as well as a stochastic market size (total demand). They had to solve for the premium strategies of the insurers numerically.

The aforementioned papers either do not include a game-theoretic approach or have a limited time horizon of only one period. Wu and Pantelous (2017) included a game-theoretic approach, but the model they consider is a one-period model. Emms et al. (2007) projected average market premiums using a Brownian motion process and gave approximately optimal results for the premium strategy. However, the premium charged by an insurance company had no specified influence on the premiums of its competitors.

Another criticism that applies to all aforementioned models is a fixed marginal cost. The concept of a break-even premium is proposed in most of the papers, whereas the existence of such a premium is questionable. First of all, it is by no means certain that insurers are able to accurately estimate the mean claimed amount per risk in a homogeneous risk group. Many different estimation methods exist and, given estimation errors and the efficiency of different estimators, the estimate of the mean claimed amount for the next period is likely to be off from the real mean. As optimal estimators can be chosen, this should not be too much of a problem. Nevertheless, it should be recognized that the estimated marginal cost, which insurers use to determine the optimal premium strategy, is stochastic in nature and so differs period after period. Besides this, it is also possible that the marginal costs change over time due to a trend in (non-)life underwriting risk. A trend related to climate change risk makes estimation methods for the marginal costs more complex, as past losses cannot be assumed to be identically distributed. Regression analysis and/or other methods need to be applied to correct for trends. As many different parameters seem to be changing simultaneously in a changing climate (cf. Section 2.1), classical estimation methods are hard to apply. This stresses the importance of incorporating practical, time-dependent estimation methods of the marginal costs in a profit optimization problem. The difficulty here is to place this in a game setting and derive a multi-period optimum. The insurance market model considered in this thesis lets insurers optimize their profit in a one-year period model, not yet taking future profits into account. This is directly related to the fact that, due to a changing underlying risk structure, determining your optimal strategy for many years ahead can lead to a substantial mismatch between the estimated costs and the real costs, so it might not be optimal after all.
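The trend-correction point made above can be illustrated with a minimal sketch: when mean claim costs drift upward, the plain historical average understates next period's marginal cost, whereas a simple regression on the observation year does not. The synthetic data and the linear-trend assumption are for illustration only:

```python
# Minimal sketch of regression-based trend correction: when past losses are
# not identically distributed because mean costs drift over time, an OLS fit
# of cost on observation year gives a forward-looking estimate. The data
# below are synthetic and the linear trend is an illustrative assumption.

def trend_estimate(years: list[int], mean_costs: list[float],
                   target_year: int) -> float:
    """Ordinary least squares fit of cost on year, evaluated at target_year."""
    n = len(years)
    xbar = sum(years) / n
    ybar = sum(mean_costs) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(years, mean_costs))
    sxx = sum((x - xbar) ** 2 for x in years)
    return ybar + (sxy / sxx) * (target_year - xbar)

# A portfolio whose mean claim cost grows by 10 per year over 2007-2016:
years = list(range(2007, 2017))
costs = [1000 + 10 * (y - 2007) for y in years]
naive = sum(costs) / len(costs)                  # 1045: lags the trend
projected = trend_estimate(years, costs, 2017)   # 1100: follows the trend
```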


A different question is how people react to price changes. As mentioned, Hotelling's (1929) model incorporated different rational aspects of loyalty to a supplier, even if the price of a homogeneous product is (slightly) higher than that charged by others. Wu and Pantelous (2017) specified a market demand model based on a function of premium ratios. This function describes that many policyholders are inclined to change insurer if the relative premium difference becomes too large, amplified by the relative difference in the increase/decrease of the premiums compared to last period. This elasticity function seems plausible, as it takes both relative premium differences (between competitors) and relative premium increases/decreases into account. However, it is unclear whether a relative or an absolute price difference is a bigger driver for policyholders to cancel their insurance contract. For this reason, a similar model is discussed in Section 3.2, though including the absolute price difference as a factor.
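A possible shape for such an elasticity function, combining a relative and an absolute premium gap, can be sketched as follows. The functional form and the parameter values are illustrative assumptions, not the specification of Section 3.2 or of Wu and Pantelous (2017):

```python
import math

# Illustrative retention/demand function: the fraction of policyholders an
# insurer keeps falls in both the relative and the absolute gap between its
# own premium and the market average. Logistic form and the sensitivity
# parameters beta_rel and beta_abs are assumptions for demonstration.

def retention_fraction(own_premium: float, market_premium: float,
                       beta_rel: float = 3.0, beta_abs: float = 0.01) -> float:
    """Retained fraction; 0.5 at the market average, decreasing in the gaps."""
    rel_gap = own_premium / market_premium - 1.0   # relative premium difference
    abs_gap = own_premium - market_premium         # absolute premium difference
    return 1.0 / (1.0 + math.exp(beta_rel * rel_gap + beta_abs * abs_gap))
```

Charging the market average retains half the portfolio; pricing above it loses customers at a rate driven by both gap measures, so the relative and absolute channels can be weighted against each other via the two betas.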

Another possible property of insurance markets, which Wu and Pantelous (2017) incorporated into their insurance pricing model, is a stochastic market size. A stochastic market size makes sense, as economic circumstances can lead to market demand reduction and expansion. However, even though the demand for property insurance might be influenced by economic circumstances, the influence is likely less far-reaching for property insurance than for insurance products related to luxury goods, e.g. travel insurance or car (all risk) insurance. Property insurance is in many cases obligatory for house owners, as the vast majority of (Dutch) proprietors finance their house with a mortgage. Banks usually demand property insurance, as they face the risk of losing the collateral (the property) due to natural or man-made causes (Vereniging Eigen Huis, n.d.). A fixed market demand for property insurance might therefore be a good approximation of reality.

2.3 Solvency II

As of January 1st 2016, the European solvency capital requirement regulation, or Solvency II (SII), came into force. The capital requirements under SII (Solvency Capital Requirements, abbreviated SCR) are set such that the capital buffer of the insurer should be able to cover the losses coming from an event (broader than underwriting risk; including market risk, default risk and operational risk as well) with a return time of 200 years, based on a Value at Risk approach. Although it is questionable whether the standard model to calculate the SCR, which is described in the Delegated Acts of Solvency II (supplementing Directive 2009/138/EC), actually reflects the risks of the portfolios of the majority of insurers properly, it is beyond doubt that capital requirements cannot be neglected in a market model for insurance premiums. Clearly, if insurers fail to meet the capital requirements, the supervisor (i.e. the Dutch central bank, DNB) can intervene. For instance, if the SCR ratio is below 100%, DNB can demand a recovery plan to get back to the required solvency level within 6 months (DNB, 2015). If the capital buffer falls below the Minimum Capital Requirement (MCR), DNB can impose a production stop, take over control by appointing a curator, or withdraw the insurance license. The latter option seems equivalent to the bankruptcy case, in which the insurer ceases to exist.
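The 1-in-200-years idea behind the SCR can be sketched with a small Monte Carlo example. This is a simplified illustration of the Value at Risk concept only; the lognormal loss model and all parameter values are assumptions, not the SII standard formula:

```python
import random

# Simplified illustration of the 99.5% Value-at-Risk idea behind the SCR:
# the capital buffer should cover the gap between the 1-in-200-year annual
# loss and the expected loss. Lognormal losses with mu=10, sigma=0.8 are a
# purely hypothetical portfolio, chosen only for demonstration.

random.seed(2017)
losses = sorted(random.lognormvariate(10.0, 0.8) for _ in range(100_000))

mean_loss = sum(losses) / len(losses)
var_995 = losses[int(0.995 * len(losses)) - 1]   # empirical 99.5% quantile
scr_proxy = var_995 - mean_loss                  # buffer for unexpected loss
```

Because the 99.5% quantile sits far in the right tail, the resulting buffer is several times the expected loss for heavy-tailed portfolios, which is exactly why the requirement binds in the premium-setting problem discussed below.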

As all market parties need to take their SCR level into account when pricing insurance products, especially when the capital buffer gets dangerously close to a 100% SCR ratio or even approaches the MCR level, the market situation in which insurers optimize their profit becomes more complex. In some cases it seems that insurers need to deviate from their optimal response function⁴ because of SCR regulation. This is a new field of research, as measures taken by the supervisor add an extra dimension to optimal response functions. It is evident that the exact repercussions (in case of an SCR shortfall) are difficult to predict and depend on the circumstances.

Another topic which should be addressed is how competitors can take advantage whenever insurers are in distress. The authorities could force an insurer in distress to increase its premium. This information could be of value to insurers who want to set their premiums in such a way that they make a higher profit, as lowering the premium could attract more customers. However, forced recovery by the authorities usually has a short duration, as for example DNB requires insurers to get back to a healthy SCR level within 6 months. Market intervention by authorities is therefore expected to last only a short time, which makes knowledge about a competitor in distress less valuable. Moreover, as catastrophe claim sizes on a local and national scale and extreme losses due to financial market risks tend to be highly correlated between insurers, there is quite a high likelihood that insurers get into financial distress simultaneously. In that case there seems to be no additional value in knowing the competitor's financial situation.

⁴ A concept of a function that describes how insurers can optimally benefit (when optimizing their profits using the


3. Theoretic framework

In order to carry out research on how climate change might affect the Dutch insurance market, a model is specified that generates claims from multiple causes. These causes are hail, wind, lightning and rainfall. Each of them is considered a homogeneous group, i.e. claims coming from one of the groups are assumed to be independent of claims from the other groups. Within the homogeneous groups it is necessary to model dependence structures, as mentioned in the last chapter. Each of the risk groups is affected by climate change in ways that are partly similar and partly very different. To model the change in the systemic risk, a trend will be extrapolated based both on figures produced by the KNMI's scenario analysis of the climate up to 2085 and on fitting models to temperature data (monthly averages in De Bilt (NL)), wind data (daily maximum wind speed in m/s at 9 measuring stations in the Netherlands) and precipitation data (hourly rainfall in mm at 11 measuring stations across the Netherlands) from January 1970 onward. The proposed model to fit climate change trends and general claim-causing weather patterns is specified below. The exact parameter choice is discussed in Section 3.5.

3.1 Claim model

The proposed model that generates aggregate claims for all insurers consists of 4 risk drivers: storm, hail, rainfall and lightning. Each risk driver is built from three independent processes, i.e. the number of events, the number of claims per event and the individual claim sizes. Nonetheless, as stated before, these processes are affected by climate change, which is incorporated into the model specified in the next four subsections.


3.1.1 Storm claims

Number of events:

The mean number of storm events in year $t$, $\phi_t^{\mathrm{Storm}}$, is assumed to follow a random walk with drift:

$$\phi_t^{\mathrm{Storm}} = \phi_{t-1}^{\mathrm{Storm}} + \psi^{\mathrm{Storm}} + \varepsilon_t^{\mathrm{Storm}}. \tag{3.1}$$

Here $\varepsilon_t^{\mathrm{Storm}} \overset{\mathrm{iid}}{\sim} N(0, \psi^{\mathrm{Storm}})$. In (3.1), the error term is assumed to be Normally distributed, with a standard deviation proportional to the drift term (i.e. the climate change effect), which allows for a wide range of possible outcomes of $\phi_t^{\mathrm{Storm}}$. The starting value of this random walk process is given by the parameter $\phi_{2017}^{\mathrm{Storm}}$. For simplicity it is assumed that the number of events in year $t$, $N_t^{\mathrm{Events}} \sim \mathrm{Poi}(\phi_t^{\mathrm{Storm}})$. The Poisson distribution simulates only integer values, which is a useful characteristic for simulating events. However, it might have too low a variance, as the number of storm events in the Netherlands seems to have quite a high variance. This problem is resolved by the structure proposed in (3.1): it captures the changing underlying risk and provides a plausible distribution for the number of events, as a stochastic input parameter for a Poisson distribution increases the variability.

Number of claims:

The mean number of claims coming from event $j$ in year $t$, $\lambda_{t,j}^{Storm}$, is modeled in the following way:

$$E\left[\lambda_{2017}^{Storm}\right] = p_{Severe\,storm,t} \cdot E_{Severe\,storm} + (1 - p_{Severe\,storm,t}) \cdot E_{Small\,storm}, \tag{3.2}$$

$$\lambda_{t,j}^{Storm} = E_{Small\,storm} + \left(E_{Severe\,storm} - E_{Small\,storm}\right) \cdot I_{t,j}^{Storm}, \tag{3.3}$$

where $I_{t,j}^{Storm} \overset{iid}{\sim} \text{Bin}(1, p_{Severe\,storm,t})$. The number of claims in year $t$ and event $j$ is $N_{t,j}^{Claims} \sim \text{Poi}(\lambda_{t,j}^{Storm})$. As the input variable $\lambda_{t,j}^{Storm}$ is stochastic, this structure for claim numbers seems appropriate due to the higher variance compared to a Poisson distribution with fixed parameter, whereas the mean


is preserved. $E_{Small\,storm}$ stands for the expected number of claims coming from a moderate storm and $E_{Severe\,storm}$ for the mean number of claims produced by a severe storm. The probability of having a severe storm could change over time; for simplicity it is assumed later on that $p_{Severe\,storm,t}$ is constant over time. The parametrization choice is discussed in Section 3.5.
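A quick Monte Carlo check makes the link between (3.2) and (3.3) concrete: averaging the two-level event mean reproduces the mixture expectation. The numbers used here are invented for illustration only:

```python
import random

def lambda_storm(e_small, e_severe, p_severe, rng):
    """Eq. (3.3): the per-event claim mean equals the small-storm level,
    jumping to the severe-storm level with probability p_severe."""
    severe = 1 if rng.random() < p_severe else 0   # I_{t,j} ~ Bin(1, p)
    return e_small + (e_severe - e_small) * severe

rng = random.Random(42)
draws = [lambda_storm(200.0, 5000.0, 0.1, rng) for _ in range(100_000)]
mc_mean = sum(draws) / len(draws)
exact_mean = 0.1 * 5000.0 + 0.9 * 200.0    # Eq. (3.2): 680.0
```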

Claim size:

The marginal distribution of the claim size $X_{ij}^{Storm}$ is assumed to be $\text{LogN}(\mu^{Storm}, \sigma^{Storm})$, but there is dependence within an event:

$$X_{ij}^{Storm} := F_{\text{LogN}}^{-1}\left((1 - B_j) \cdot U_{i,j} + B_j\right). \tag{3.4}$$

Here $B_j \overset{iid}{\sim} \theta_1^{Storm} \cdot \text{Bin}(1, \theta_2^{Storm})$ and $U_{i,j} \overset{iid}{\sim} \text{Unif}(0, 1)$. The characteristics of the distribution are further explored in Appendix C. The total claim amount in the segment storm is then equal to

$$CT_t^{Storm} = \sum_{j=1}^{N_t^{Events}} \sum_{i=1}^{N_{t,j}^{Claims}} X_{ij}^{Storm}. \tag{3.5}$$
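The common-shock construction in (3.4) can be made explicit in code: one shock draw $B_j$ is shared by every claim in event $j$, and when the shock fires each claim's uniform is pushed towards 1 before inverting the Log-normal CDF. Parameter values are illustrative, not the thesis calibration:

```python
import math
import random
from statistics import NormalDist

def claim_size(mu, sigma, theta1, shock_fired, u):
    """Eq. (3.4): X = F^{-1}_LogN((1 - B) * U + B), with B = theta1 when the
    event-level shock fires and B = 0 otherwise."""
    B = theta1 if shock_fired else 0.0
    v = (1.0 - B) * u + B                      # shocked uniform in (0, 1)
    return math.exp(mu + sigma * NormalDist().inv_cdf(v))

def event_total(n_claims, mu, sigma, theta1, theta2, rng):
    """One inner sum of Eq. (3.5): a single Bernoulli(theta2) shock draw is
    shared by all claims of the event, inducing within-event dependence."""
    shock_fired = rng.random() < theta2
    return sum(claim_size(mu, sigma, theta1, shock_fired, rng.random())
               for _ in range(n_claims))

rng = random.Random(7)
total = event_total(n_claims=50, mu=7.0, sigma=1.0, theta1=0.9, theta2=0.05, rng=rng)
```

When no shock fires, the construction reduces to ordinary inverse-CDF sampling from the marginal Log-normal, so the marginal distribution is preserved.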

3.1.2 Hail claims

Number of events:

The mean number of hail events in year $t$, $\varphi_t^{Hail}$, is assumed to follow a random walk with drift process:

$$\varphi_t^{Hail} = \varphi_{t-1}^{Hail} + \psi^{Hail} + \varepsilon_t^{Hail}, \quad \text{where } \varepsilon_t^{Hail} \overset{iid}{\sim} N(0, \psi^{Hail}). \tag{3.6}$$

Here the error term is assumed to be Normally distributed, with a standard deviation proportional to the drift term (i.e. the effect of climate change). This process models the mean number of events in a year, and the starting value of the random walk process is given by the parameter $\varphi_{2017}^{Hail}$. The number of events in year $t$ is $N_t^{Events} \sim \text{Poi}(\varphi_t^{Hail})$. This structure both captures the changing underlying risk and provides a plausible distribution for the number of events: the error term may compensate for the climate change trend, but it could also aggravate the expected number of hail events. This leads to an increased variance compared to a Poisson distribution with a fixed input parameter and is therefore more appropriate to use.

Number of claims:

The mean number of claims coming from an event in year $t$, $\lambda_t^{Hail}$, is assumed to follow a random walk with drift process:

$$\lambda_t^{Hail} = \lambda_{t-1}^{Hail} + \eta^{Hail} + \zeta_t^{Hail}, \quad \text{where } \zeta_t^{Hail} \overset{iid}{\sim} N(0, \eta^{Hail}). \tag{3.7}$$

The starting value used to calibrate the model is given by the parameter $\lambda_{2017}^{Hail}$. The number of claims $N_{t,j}^{Claims}$ in year $t$ and event $j$ comes from a Poisson distribution with parameter $\lambda_t^{Hail}$. The stochastic input parameter $\lambda_t^{Hail}$ (the expected number of claims coming from an event), modeled in (3.7) through a random walk with drift whose error standard deviation is proportional to the drift term (representing the climate change effect), both reflects the changing mean claim number and increases the variance of the claim numbers. This makes the specified model more appropriate than a standard Poisson distribution with a constant mean.

Claim size:

The marginal distribution of the claim size $X_{ij}^{Hail}$ is assumed to be $\text{LogN}(\mu_t^{Hail}, \sigma_t^{Hail})$, where both parameters follow random walk with drift processes whose error terms have a standard deviation proportional to the drift term (i.e. the climate change effect); for the location parameter,

$$\mu_t^{Hail} = \mu_{t-1}^{Hail} + \delta^{Hail} + \omega_t^{Hail}, \quad \text{where } \omega_t^{Hail} \overset{iid}{\sim} N(0, \delta^{Hail}), \tag{3.8}$$

and analogously for $\sigma_t^{Hail}$ (3.9). The starting values used to calibrate the model are given by the parameters $\mu_{2017}^{Hail}$ and $\sigma_{2017}^{Hail}$. The assumed dependence structure within an event is given by

$$X_{ij}^{Hail} := F_{\text{LogN}}^{-1}\left((1 - B_j) \cdot U_{i,j} + B_j\right), \tag{3.10}$$

where $B_j \overset{iid}{\sim} \theta_1^{Hail} \cdot \text{Bin}(1, \theta_{2,t}^{Hail})$ and $U_{i,j} \overset{iid}{\sim} \text{Unif}(0, 1)$, with

$$\theta_{2,t}^{Hail} = \min\left(\alpha^{Hail} + \beta^{Hail} d_t\,;\; 0.99\right). \tag{3.11}$$

In (3.11), the minimum is taken, as one could otherwise make $d_t$ extremely large and obtain a $\theta_{2,t}^{Hail}$ larger than 1, which is not possible. Nonetheless, $d_t$ should not be chosen too large, as the linearity assumption is then more likely to be far off from reality. The characteristics of the distribution are further explored in Appendix C. The total claim amount in the segment hail is then equal to

$$CT_t^{Hail} = \sum_{j=1}^{N_t^{Events}} \sum_{i=1}^{N_{t,j}^{Claims}} X_{ij}^{Hail}. \tag{3.12}$$

3.1.3 Rainfall

Number of events:

The mean number of rainfall events in year $t$, $\varphi_t^{Rainfall}$, is assumed to follow a random walk with drift process:

$$\varphi_t^{Rainfall} = \varphi_{t-1}^{Rainfall} + \psi^{Rainfall} + \varepsilon_t^{Rainfall}, \quad \text{where } \varepsilon_t^{Rainfall} \overset{iid}{\sim} N(0, \psi^{Rainfall}). \tag{3.13}$$

In order to calibrate this model, the initial number of rainfall events is given by the parameter $\varphi_{2017}^{Rainfall}$. The standard deviation of the error term is proportional to the drift term (the climate change effect), which allows for a wide variety of outcomes under a Gaussian distribution. The number of events in year $t$ is $N_t^{Events} \sim \text{Poi}(\varphi_t^{Rainfall})$. This structure reflects the changing annual mean number of events and disperses the distribution, due to the stochastic $\varphi_t^{Rainfall}$.

Number of claims:

The expected number of claims coming from a rainfall event in year $t$, $\lambda_t^{Rainfall}$, is assumed to follow a random walk with drift process:

$$\lambda_t^{Rainfall} = \lambda_{t-1}^{Rainfall} + \eta^{Rainfall} + \zeta_t^{Rainfall}, \quad \text{where } \zeta_t^{Rainfall} \overset{iid}{\sim} N(0, \eta^{Rainfall}). \tag{3.14}$$

First, the parameter $\lambda_{2017}^{Rainfall}$ models the initial number of claims that are expected to arise from a rainfall event. The drift represents the effect of climate change on the average number of claims per event. For simplicity, it is assumed that the standard deviation of the mean number of claims is proportional to the drift term. The number of claims $N_{t,j}^{Claims}$ in year $t$ and event $j$ comes from a Poisson distribution with parameter $\lambda_t^{Rainfall}$. Clearly, the variance of the claim numbers is higher than that of a Poisson distribution with fixed input, which might be more realistic. The fitted trend in the mean claim numbers is another advantage of this specified structure.

Claim size:

The marginal distribution of the claim size $X_{ij}^{Rainfall}$ is assumed to be $\text{LogN}(\mu^{Rainfall}, \sigma^{Rainfall})$, but there is dependence within an event:

$$X_{ij}^{Rainfall} := F_{\text{LogN}}^{-1}\left((1 - B_j) \cdot U_{i,j} + B_j\right). \tag{3.15}$$

Here $B_j \overset{iid}{\sim} \theta_1^{Rainfall} \cdot \text{Bin}(1, \theta_{2,t}^{Rainfall})$ and $U_{i,j} \overset{iid}{\sim} \text{Unif}(0, 1)$, with

$$\theta_{2,t}^{Rainfall} = \min\left(\alpha^{Rainfall} + \beta^{Rainfall} d_t\,;\; 0.99\right). \tag{3.16}$$

In (3.16), the minimum is taken, as one could otherwise make $d_t$ extremely large and obtain a $\theta_{2,t}^{Rainfall}$ larger than 1, which is not possible. Nonetheless, $d_t$ should not be chosen too large, as the linearity assumption is then more likely to be far off from reality. The characteristics of the distribution are further explored in Appendix C. The total claim amount in the segment rainfall is then equal to

$$CT_t^{Rainfall} = \sum_{j=1}^{N_t^{Events}} \sum_{i=1}^{N_{t,j}^{Claims}} X_{ij}^{Rainfall}. \tag{3.17}$$

3.1.4 Lightning

Number of claims:

The mean number of claims occurring in year $t$, $\lambda_t^{Lightning}$, is modeled as follows:

$$\lambda_t^{Lightning} = \lambda_{t-1}^{Lightning} + \eta^{Lightning} + \zeta_t^{Lightning}, \quad \text{where } \zeta_t^{Lightning} \overset{iid}{\sim} N(0, \eta^{Lightning}). \tag{3.18}$$

The average number of claims in a year, $\lambda_t^{Lightning}$, is thus assumed to follow a random walk with drift process with a Normally distributed error term. As for the other risks, it is assumed that the standard deviation of the mean number of claims is proportional to the drift term (which reflects the climate change effect). The initial number of claims expected to arise in 2017 is modeled by the parameter $\lambda_{2017}^{Lightning}$. The number of claims in year $t$ is $N_t^{Claims} \sim \text{Poi}(\lambda_t^{Lightning})$. The stochastic input parameter with trend both captures the increasing mean and allows for a higher variance of the claim numbers.


Claim size:

The marginal distribution of the claim size $X_i^{Lightning}$ is assumed to be $\text{LogN}(\mu^{Lightning}, \sigma^{Lightning})$, but there is dependence within a year:

$$X_i^{Lightning} := F_{\text{LogN}}^{-1}\left((1 - B_t) \cdot U_i + B_t\right). \tag{3.19}$$

Here $B_t \overset{iid}{\sim} \theta_1^{Lightning} \cdot \text{Bin}(1, \theta_2^{Lightning})$, i.e. an upward shock¹ for all claim sizes in year $t$, and $U_i \overset{iid}{\sim} \text{Unif}(0, 1)$. The characteristics of the distribution are further explored in Appendix C. The total claim amount in the segment lightning is then equal to

$$CT_t^{Lightning} = \sum_{i=1}^{N_t^{Claims}} X_i^{Lightning}. \tag{3.20}$$

All parameters used in the models proposed in the last four subsections have to be estimated. The results of the estimation and the exact estimation methods are described in Section 3.5 and Appendix E.


3.1.5 Assumption support

In the model specifications above a number of assumptions are made for the claim process in the different branches. Unlike storm, rainfall and lightning, the distribution of the individual claim size for the segment hail is allowed to have an increasing mean and variance over time. The idea behind this is related to the increasing prevalence of heavy hail storms (see Section 3.5 and KNMI, 2015). Not only is it relevant to model the increase of the probability of the common upward shock in the dependence structure; the variability at larger quantiles might increase as well. When the upward shock does not happen, the upward trend in $\sigma_t$ (if sufficiently small) has a fairly negligible effect on the outcome $X_i$. Nonetheless, at very large quantiles, a small increase in $\sigma_t$ can have a significant impact, and so fitting trends for $\mu_t^{Hail}$ and $\sigma_t^{Hail}$ can be deemed appropriate. Fitting a trend for $\mu_t^{Storm}$ and $\sigma_t^{Storm}$ is, given the high uncertainty of future wind patterns, the prevalence of storms and the increasing urbanization of the coastal area in the Netherlands (KNMI, 2015), less appropriate. This is also the reason why the probability of an upward shock in the number of claims per event, $p_{Severe\,storm,t}$, is assumed to be constant over time. For lightning, on which only very little data is available, one can safely say that the damage caused by a lightning strike is not going to be differently distributed from what it is now. Rainfall, on the other hand, will become more dangerous as the number of extreme events might increase. However, given that there is a lot of rain, the claim size will not vary more than if such an event happened today, since the whole area where it happens is affected in the same way. This last note need not hold for hail, which explains the assumption made about the increasing trend of both parameters for the individual claim size.
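The tail-sensitivity claim above can be quantified directly from the Log-normal quantile function $\exp(\mu + \sigma z_p)$. The parameters below are illustrative, not the thesis calibration:

```python
import math
from statistics import NormalDist

def logn_quantile(mu, sigma, p):
    """p-quantile of LogN(mu, sigma): exp(mu + sigma * z_p)."""
    return math.exp(mu + sigma * NormalDist().inv_cdf(p))

# Bump sigma by 5%: the median is untouched, the far tail is not.
median_ratio = logn_quantile(7.0, 1.05, 0.5) / logn_quantile(7.0, 1.0, 0.5)
tail_ratio = logn_quantile(7.0, 1.05, 0.999) / logn_quantile(7.0, 1.0, 0.999)
```

Here `median_ratio` is exactly 1, while the 99.9% quantile grows by roughly 17%, since the quantile ratio equals $\exp(\Delta\sigma \cdot z_p)$ and $z_{0.999} \approx 3.09$.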

Another assumption is the dependence structure of the claim sizes within an event. The idea behind this rather simplistic way of dependence modeling is that it allows for both individual variation and a common shock, while retaining individual variation at larger quantiles. Next to this, it gives freedom to model the shock size and shock probability, which makes this way of dependence modeling very flexible. For hail and rainfall, the KNMI warns that the prevalence of extreme events will increase over time (KNMI, 2015), but leaves unspecified what exactly will happen to the total number of hail events. The paper does give an indication about rainy days, and since hail is in some respects rather similar to rain, it is not unreasonable to assume that the number of hail events increases at approximately the same pace as the number of rainfall events. Nonetheless, the specified model makes it possible to model the increase in the number of events and the number of extreme events differently. Section 3.5 on parametrization further elaborates on how parameters are chosen for the different segments.

Next to this, an implicit assumption of the model is that the number of claims and the average claim size are independent. This is not much of an issue for rain, hail and lightning; storm, however, is modeled such that there can be an upward shock in the number of claims, while the average claim size is independent of it and can be either large or small. The reasoning behind this assumption is that many different kinds of storms can occur, such as whirlwinds (which can be very local, but cause a lot of damage), moderate storms (which can lead to many claims, but with a fairly small average claim size) and severe storms (from which both a large number of claims and a high average claim size arise).

Generally, as both Paolo et al. (2013), based on Danish fire insurance data, and Umbleja and Käärik (2010), based on Estonian traffic insurance data, indicate, claim size distributions are skewed and heavy tailed. Based on the Danish fire insurance losses, Log-normal random variables seem to underestimate the tail risk and usually do not fit losses extremely well. On the other hand, Burnecki et al. (2000) fitted multiple distributions to a US claim size index based on natural catastrophe risk events. The outcome of their research was that a Log-normal distribution fits the quarterly index outcomes best. It should be noted that an index based on quarterly losses is relatively smooth compared to individual losses, as more data is included in the calculation of an index. However, as there seem to be no research papers available on individual losses due to hail, storm, rainfall and lightning in the Netherlands or any other country, the exact claim size distribution remains uncertain. Nonetheless, the main argument in fire insurance against a Log-normal distribution, which does have a fairly thick tail, is that its tail is not thick enough. In the model discussed here, only weather causes are considered, which are far less likely to cause a total loss of the property than fire. Assuming a slightly thinner-tailed Log-normal distribution instead of a thick-tailed Pareto distribution is therefore more appropriate in this case.


3.2 Insurance market model

Next to the claim generating model, a model that represents market demand for home insurance policies needs to be specified. A few issues play an important role here, such as price elasticity, price response functions and the competitive model itself. To start with the latter, one could assume Bertrand competition in a very simple market in which it is clear to everyone that the products offered by the different insurers are quite homogeneous. In that case it is always the logical choice to take out insurance at the company with the lowest premium, which means that the company with the lowest premium receives the complete market demand. Even though the internet is making competition more severe, this is not realistic, since in real life the (home) insurance market is not perceived as a completely homogeneous market. In fact it is not, as different policies can have very different coverage. Furthermore, phenomena such as goodwill and loyalty of policyholders towards the insurance company clearly affect the price sensitivity in the market. Taking this into account, the following model for market demand $\Upsilon_{i,t}$ for insurance policies of insurer $i$ at time $t$ in an $n$-player market (for finite integer values $n > 1$) is proposed:

$$\Upsilon_{i,t}\left(\vec{Pr}_t, \vec{Pr}_{t-1}\right) = \frac{1}{\gamma_i}\,\Xi - \alpha_{i,t}(Pr_{i,t-1}) \cdot Pr_{i,t} + \frac{1}{n-1} \sum_{j \neq i} \alpha_{j,t}(Pr_{j,t-1}) \cdot Pr_{j,t}, \tag{3.21}$$

$$\text{where } \alpha_{i,t}(Pr_{i,t-1}) = \alpha_i^E \cdot \left(1 + \frac{\beta_i^E}{Pr_{i,t-1}}\right) \text{ and } \sum_{i=1}^{n} \frac{1}{\gamma_i} = 1.$$

In (3.21), one sees three terms: the first term subdivides the market into relevant market shares for all companies, the second term models the sensitivity of demand to the current premium level of insurer $i$ and the third term models the price sensitivity of demand to the premiums of $i$'s competitors. This market demand function depends on the premiums of all market players in the current and last period, i.e. $\vec{Pr}_t$ and $\vec{Pr}_{t-1}$, vectors consisting of all market players' premiums at times $t$ and $t-1$.

In this model the total number of policyholders is assumed to be fixed and equal to $\Xi \in \mathbb{Z}^+$, independent of the time period $t$ and of the premiums. Also, the market is subdivided into constant market shares $\frac{1}{\gamma_i}$ ($\gamma_i \in \mathbb{R}^+$). If premiums in the model become absurdly high, this assumption is clearly too restrictive. Nonetheless, in normal market circumstances it is not too unreasonable to say that all people with a house would like to insure themselves against extreme weather conditions², some looking for the lowest premium, while others are a little less price sensitive. Additionally, factors like goodwill do not change over time, as all $\gamma$'s are assumed to be constant. More complicated functions of past premiums and/or other factors could be proposed for $\gamma$, but this is outside the scope of this thesis.

Secondly, the price sensitivity is modeled by two components: the absolute price difference between policies and the difference in relative price increase/decrease over the last period between policies. These two components represent two decision parameters of policyholders and punish the insurer for increasing its premium compared to its competitors in both absolute and relative terms. The parameter $\alpha_i^E$ indicates how sensitive the policyholders of insurance company $i$ are to these price differences. It can be proposed that $\alpha_i^E$ is the same for all insurers. Additionally, $\alpha_i^E$ could be a decreasing function of the number of market participants $n$, as the market becomes more intransparent for consumers if they experience a so-called choice overload³. The parameter $\beta_i^E$ models the (relative) importance of the relative premium increase compared to the absolute difference between premiums of different insurers in the price sensitivity (equally weighted if $\beta_i^E = 1\ \forall i$). The term $\alpha_{i,t}$ can therefore be seen as the price elasticity of insurer $i$ at time $t$.
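The demand function (3.21) with the elasticity term $\alpha_{i,t}(Pr_{i,t-1}) = \alpha_i^E\,(1 + \beta_i^E / Pr_{i,t-1})$ can be sketched as follows. For simplicity $\alpha^E$ and $\beta^E$ are shared by all insurers here, and all numbers are illustrative:

```python
def elasticity(alpha_E, beta_E, prev_premium):
    """alpha_{i,t} = alpha^E * (1 + beta^E / Pr_{i,t-1}); multiplied by the
    current premium this yields an absolute term and a relative-change term."""
    return alpha_E * (1.0 + beta_E / prev_premium)

def demand(i, premiums, prev_premiums, gammas, alpha_E, beta_E, Xi):
    """Market demand for insurer i, Eq. (3.21): base share Xi / gamma_i,
    minus the own-price term, plus the averaged competitor-price term."""
    n = len(premiums)
    a = [elasticity(alpha_E, beta_E, p) for p in prev_premiums]
    own = Xi / gammas[i] - a[i] * premiums[i]
    cross = sum(a[j] * premiums[j] for j in range(n) if j != i) / (n - 1)
    return own + cross

# Symmetric market: equal premiums leave every insurer with Xi / n customers.
base = demand(0, [100.0] * 3, [100.0] * 3, [3.0, 3.0, 3.0], 1.0, 0.5, 3000)
# Unilaterally raising the own premium loses demand to the competitors.
raised = demand(0, [110.0, 100.0, 100.0], [100.0] * 3, [3.0, 3.0, 3.0], 1.0, 0.5, 3000)
```

Note that in the symmetric case the own-price and cross-price terms cancel exactly, so total demand across insurers stays at $\Xi$, consistent with the fixed market size assumption.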

Subsequently, the assumption is made that insurance companies are all rational players that aim to maximize their profits. Given the time value of money, the uncertainty of future marginal costs per policy and the competitive nature of the insurance market, it is rational for insurers to focus on optimizing next year's profit instead of the present value of all future profits, as Taylor (1986) proposed. If all insurers use the same strategy of profit maximization with a one year horizon, a closed form optimal price formula in the Nash equilibrium can be derived by maximizing all profit functions simultaneously. The profit function for every individual insurer $i$ is

²Note that property insurance is for many proprietors obligatory, since banks require property insurance for a mortgage.

³Vos and De Jong (2009) report that in Dutch health care insurance a substantial part of the population, i.e. 15%, indicates that they are unable to find a better health care insurer. Furthermore, it is noted that people tend to switch most often based on the premium level, but forget or are unable to weigh premium against quality of service. This is a sign of complexity, which leads to either not switching insurance or decision making based solely on the premium level.


given in (3.22):

$$\pi_{i,t}\left(Pr_{i,t}, Pr_{-i,t}, \vec{Pr}_{t-1}, C_{i,t}\right) = \left(Pr_{i,t} - C_{i,t}\right) \cdot \Upsilon_{i,t}\left(\vec{Pr}_t, \vec{Pr}_{t-1}\right). \tag{3.22}$$

In (3.22), $Pr_{-i,t}$ denotes all premiums charged by the competitors of insurer $i$; together with $Pr_{i,t}$ it makes up the premium vector $\vec{Pr}_t$. Next to this, $C_{i,t}$ denotes the marginal production cost of insurer $i$ at time $t$. Strictly speaking, the marginal cost $C_{i,t}$ cannot be observed and has to be estimated by the insurer. Therefore, the specified profit function should be seen as the expected profit function. All insurers have to set their premiums in such a way that an a priori deviation from an equilibrium is not profitable. For this reason, the definition of a Nash equilibrium is given below.

Definition 3.1: Nash Equilibrium

Let $\mathbb{R}_+$ be the set of premium strategies available to player $i$ at time $t$ and let $\pi_{i,t}\left(Pr_{i,t}, Pr_{-i,t}, \vec{Pr}_{t-1}, C_{i,t}\right)$ denote the expected profit function as in (3.22) of player $i$ at time $t$ when playing premium strategy $Pr_{i,t} \in \mathbb{R}_+$, whereas $Pr_{-i,t}$ is the set of premium strategies played by all other players except player $i$ at time $t$. Then $\vec{Pr}_t^* \in \mathbb{R}_+^n$ is the set of premium strategies of all insurers in period $t$ for which the estimated profit, based on the period dependent estimate of $C_{i,t}$, is such that a unilateral deviation from this equilibrium premium strategy is not profitable for any insurer:

$$\text{For all insurers } i: \quad \pi_{i,t}\left(Pr_{i,t}^*, Pr_{-i,t}^*, \vec{Pr}_{t-1}, C_{i,t}\right) \geq \pi_{i,t}\left(Pr_{i,t}, Pr_{-i,t}^*, \vec{Pr}_{t-1}, C_{i,t}\right) \quad \forall Pr_{i,t} \in \mathbb{R}_+. \tag{3.23}$$

In the definition of a Nash equilibrium one observes that the premium is the strategy variable, which is part of $\mathbb{R}_+$ (NB the set of feasible strategies can be smaller than $\mathbb{R}_+$ due to solvency capital restrictions, cf. Section 3.4), such that next year's profit is maximized given the premiums of the competitors (the payoff function). Now let $n$ be the number of insurers competing in the insurance market, $\frac{1}{\gamma_i}$ the initial market share of insurance company $i$ and $\alpha_{i,t}(Pr_{i,t-1})$ the price elasticity function (abbreviated to $\alpha_{i,t}$) of insurer $i$ at time $t$, and assume full information (i.e. the vector of marginal costs at time $t$, $\vec{C}_t$, is known). Then, given the aim to maximize the profit function, the Nash equilibrium for the multi period model is given in Theorem 3.1.

Theorem 3.1

Given the profit function (3.22), the Nash equilibrium premium for insurer $i$ at time $t$ is unique and is equal to

$$Pr_{i,t}^*\left(\vec{C}_t, \vec{Pr}_{t-1}\right) = \frac{n + \gamma_i - 1}{(2n-1)\,\gamma_i\,\alpha_{i,t}}\,\Xi + \frac{1}{(2n-1)\,\alpha_{i,t}} \sum_{j \neq i} \alpha_{j,t}\,C_{j,t} + \frac{n}{2n-1}\,C_{i,t} \quad \forall i. \tag{3.24}$$

The proof of Theorem 3.1 is given in Appendix A. The problem with applying the equilibrium premium expression is the asymmetric information supply and the fact that insurers have to set their premiums simultaneously. Insurers have to set their premiums simultaneously because, in the internet market, policies might be sold directly after publication, and differentiating the premium between homogeneous risks depending on the signature date seems unfair. Therefore, insurers have to estimate the Nash equilibrium premium⁴. In general, insurers do not know the marginal cost estimate of their competitors, and so every insurer has to rely on estimates of their competitors' marginal costs to determine the equilibrium premium level. Based on the information insurers have about their own portfolio at time $t$ (the information set is denoted by $\mathcal{F}_{i,t}$), insurers form expectations about the marginal costs of their competitors. This leads to replacing the value $C_{j,t}$ in (3.24) by the expectation of $C_{j,t}$ given the information $\mathcal{F}_{i,t}$ that insurer $i$ has.

⁴This notion does not mean that insurers with imperfect information can now deviate from the stated equilibrium. A priori, insurers have no clue about their competitors' premiums and, given that for this reason all other insurers charge the best estimate of the equilibrium premium, it is unprofitable to deviate from the equilibrium premium.


Definition 3.2: Asymmetric Information Premium

In an incomplete information market, in which the competitors' premiums $Pr_{-i,t}^*\left(\vec{C}_t, \vec{Pr}_{t-1}\right)$ as in (3.24) are stochastic through the marginal costs of the competitors $C_{-i,t}$, insurers instead estimate their competitors' premiums as $\overline{Pr}_{-i,t}^* = E\left[Pr_{-i,t}^* \mid \mathcal{F}_{i,t}\right]$. The asymmetric information premium $\widehat{Pr}_{i,t}^*$ is the premium such that the expected profit in time period $t$ is maximal:

$$\text{For all insurers } i: \quad \pi_{i,t}\left(\widehat{Pr}_{i,t}^*, \overline{Pr}_{-i,t}^*, \vec{Pr}_{t-1}, C_{i,t}\right) \geq \pi_{i,t}\left(Pr_{i,t}, \overline{Pr}_{-i,t}^*, \vec{Pr}_{t-1}, C_{i,t}\right). \tag{3.25}$$

Corollary 3.1

The asymmetric information premium $\widehat{Pr}_{i,t}^*$ for all insurers $i$ at time $t$, based on all available information $\mathcal{F}_{i,t}$, is given by

$$\widehat{Pr}_{i,t}^* = E\left[Pr_{i,t}^*\left(\vec{C}_t, \vec{Pr}_{t-1}\right) \,\middle|\, \mathcal{F}_{i,t}\right] = \frac{n + \gamma_i - 1}{(2n-1)\,\gamma_i\,\alpha_{i,t}}\,\Xi + \frac{1}{(2n-1)\,\alpha_{i,t}} \sum_{j \neq i} \alpha_{j,t}\,E(C_{j,t} \mid \mathcal{F}_{i,t}) + \frac{n}{2n-1}\,C_{i,t}. \tag{3.26}$$

Given the high correlation between portfolio aggregate claim sizes, a rational estimate by insurer $i$ for $E(C_{j,t} \mid \mathcal{F}_{i,t})$ is the observed marginal cost of its own portfolio. In that case, the result in Corollary 3.1 can be simplified to the expression in Corollary 3.2.

Corollary 3.2

If the initial market shares of all players are equal and it can be assumed that $E(C_{j,t} \mid \mathcal{F}_{i,t}) = C_{i,t}$ $\forall i, j, t$, which represents insurer $i$'s best estimate of the marginal production costs of its competitor $j$, then the optimal premium strategy given asymmetric information for insurer $i$ at time $t$ is given by

$$\widehat{Pr}_{i,t}^* = \frac{1}{\alpha_{i,t}\,n}\,\Xi + \left(\frac{\sum_{j \neq i} \alpha_{j,t}}{(2n-1)\,\alpha_{i,t}} + \frac{n}{2n-1}\right) C_{i,t} \quad \forall i. \tag{3.27}$$


One can observe in (3.27) that the information set of all insurers includes the price elasticity of their own and their competitors' demand. In real life this is typically not (entirely) the case. However, the optimization procedure is essentially the same as what insurers try to do in real life: a rational insurer tries to estimate the price elasticity, although it may not always be able to do so accurately. Assuming that the expression above yields the profit maximizing premium, we can conclude that any irrational behavior in real life leads to less than optimal outcomes.
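The two pricing formulas can be cross-checked numerically: with equal initial market shares ($\gamma_i = n$, so that $\sum_i 1/\gamma_i = 1$) and all cost estimates equal, (3.27) must reproduce (3.24). All inputs below are illustrative:

```python
def premium_thm_3_1(n, gamma_i, alphas, i, Xi, costs):
    """Full-information Nash equilibrium premium, Eq. (3.24)."""
    a_i = alphas[i]
    cross = sum(alphas[j] * costs[j] for j in range(n) if j != i)
    return ((n + gamma_i - 1) / ((2 * n - 1) * gamma_i * a_i)) * Xi \
        + cross / ((2 * n - 1) * a_i) + (n / (2 * n - 1)) * costs[i]

def premium_cor_3_2(n, alphas, i, Xi, C_i):
    """Asymmetric-information premium, Eq. (3.27): equal initial market
    shares and E(C_j | F_i) = C_i for all competitors j."""
    a_i = alphas[i]
    sum_others = sum(a for j, a in enumerate(alphas) if j != i)
    return Xi / (a_i * n) + (sum_others / ((2 * n - 1) * a_i)
                             + n / (2 * n - 1)) * C_i

n, Xi, C = 4, 100_000.0, 150.0
alphas = [1.2, 1.0, 0.8, 1.1]
p_full = premium_thm_3_1(n, gamma_i=n, alphas=alphas, i=0, Xi=Xi, costs=[C] * n)
p_asym = premium_cor_3_2(n, alphas, i=0, Xi=Xi, C_i=C)
```

The agreement follows because $\frac{n + n - 1}{(2n-1)\,n\,\alpha_{i,t}} = \frac{1}{n\,\alpha_{i,t}}$ when $\gamma_i = n$, as used in the derivation of Corollary 3.2.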

Perhaps the most important assumption made in the model specified above is the homogeneity of all individual risks. Every policyholder contributes equally to the claim number expectation parameter $\lambda$, and so the marginal costs $C_{i,t}$ for insurer $i$ are the same for every policyholder. Self-selection and adverse selection are assumed to be non-existent in the model. The purpose of this thesis is to create a model that projects only the effects of climate change on the insurance market, not the additional upward effects of other phenomena on the premiums. As adverse selection can have very profound consequences, including it could blur the conclusions of this research.

3.3 Estimating the marginal cost per policy

The estimation of the marginal cost per policy can be very difficult. Pricing is for many insurance companies a long lasting process, as there are many different actuarial methods available to estimate the expected value of the aggregate claim size per policyholder. Next to this, it is debatable whether the expected aggregate claim size per policyholder is a proper estimate of the marginal cost of a policy. As many insurers show some risk averseness (some more than others, depending on shareholders and capital requirements), no insurer might be willing to take on a risk if it just gets the expected value of the risk in return. For this reason, a risk premium might be incorporated into the marginal costs on top of the expected value.

Another complicating matter is the information asymmetry. It is assumed in the proposed model that the insurance companies do not know how the systemic risk evolves over time and so do not know for certain whether their risk increases or decreases. On a year to year basis this is very likely to hold in reality. In order to keep things simple, the marginal costs are estimated based on the estimated expected value of the individual risks plus a risk premium. The expected value of the individual risks is estimated with a rolling window approach: at $t = 0$ a starting value is provided, and every subsequent year the past year's estimate of the expectation is weighted with the past year's observation of the mean claim amount per policy. This approach is summarized by the formulas below.

$$C_{i,t} = (1 + \vartheta) \cdot \aleph_t, \tag{3.28}$$

where $\aleph_t$ represents the expected value of the claim amount per policy at time $t$. The expected claim amount $\aleph_t$ is then given by

$$\aleph_t = \omega \cdot \aleph_{t-1} + (1 - \omega) \cdot \frac{1}{\Upsilon_{i,t}} \sum_{s \in \text{segments}} CT_t^s, \tag{3.29}$$

with segments = {Storm, Hail, Rainfall, Lightning}. As $C_{i,t}$ can include a risk margin $\vartheta$ on top of the estimated net premium, the implicit assumption is made that insurers use actuarial pricing methods to estimate the marginal costs. However, insurance brings risk, and insurers are therefore required to hold capital buffers in case of shocks/catastrophes. In order to cover this, even in a very competitive market in which the profit margin goes to zero, the risk margin in the marginal costs seems necessary.

More restrictive is the way of estimating the expected value of the claim amount per policy. Correcting the initial marginal costs for the observed average loss per policyholder is one of the easiest ways of pricing in a rolling window approach. There is no reason to assume that more complicated pricing methods would produce results that substantially deviate from its outcome and would therefore lead to different model results over a time horizon of more than 10 years. The rolling window approach, depending on the parameter $\omega$, corrects the marginal costs for upward trends in systemic risk, given that the insurer has faced growing losses. In real life, insurers do not tend to increase their premium if they have not observed an upward trend in their loss ratio, as competition keeps the premium low. Hence they seem to ignore possibly growing systemic risks as long as possible. Therefore, in real life, premiums are only corrected upward after observing growing loss ratios, whatever complicated pricing models insurers may have.
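The rolling-window estimator (3.28)-(3.29) is essentially an exponentially weighted average, which is easy to see in code. The loss path and parameter values are invented for illustration:

```python
def update_cost_estimate(aleph_prev, observed_mean_claim, omega, vartheta):
    """Eqs. (3.28)-(3.29): blend last year's estimate with last year's
    observed mean claim per policy, then load the risk margin vartheta."""
    aleph = omega * aleph_prev + (1.0 - omega) * observed_mean_claim
    return aleph, (1.0 + vartheta) * aleph

# A steadily rising observed loss per policy feeds into the marginal cost
# estimate only gradually: the estimate lags the trend, as argued above.
aleph, costs = 100.0, []
for observed in [100.0, 105.0, 110.0, 116.0, 122.0]:
    aleph, cost = update_cost_estimate(aleph, observed, omega=0.8, vartheta=0.05)
    costs.append(cost)
```

With a high $\omega$ the insurer reacts slowly to a growing loss trend, matching the behavior described in the paragraph above: premiums rise only after loss ratios have visibly grown.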

A remark should be made about the model estimation of the marginal costs in the first year. The insurers are provided with the true expected value of their loss; the way to determine this value is explained in Appendix C. This approach has been chosen because it prevents insurers from setting their premium in period one in a wrong way, and it avoids simulating additional data points with which the insurers would have to estimate their marginal costs. This shortens the run time of the model substantially.

3.4 SCR and premium determination

Another important part of operating an insurance company is complying with regulation. In order to model the insurance market properly, the effects of complying with Solvency Capital Requirement (SCR) regulation should be included in the pricing model. If the buffer of an insurance company decreases to a level below the SCR, the Dutch central bank (DNB) expects the insurance company to recover within 6 months. After 6 months without recovery, DNB has a variety of policy instruments to force an insurer to recover or to withdraw/cancel the insurance license. The latter option is quite drastic and always depends on the degree of insufficiency of the buffer. For simplicity, the model used for this research assumes that DNB cancels the license if an insurance company does not meet the SCR four years in a row.

A couple of other simplifying assumptions are made to calculate the Solvency II SCR for the insurers in the model. There is no market risk, as insurers only hold cash or cash equivalents. Holding cash in a bank deposit means that there is counterparty default risk, but the simplifying assumption here is that the credit status of the counterparties of all insurers is the same and does not change over time; the situation in which the counterparty (the bank) actually goes bankrupt is left out of the model. Every year all insurance companies receive a return rate $r$ over their buffer (which is chosen to be 0.5%), which is considered to be of Tier 1 capital quality. For underwriting risk, a homogeneous distribution of all policyholders over the Netherlands is assumed. There is no lapse risk nor reserve risk present⁵, and the expectation of the future premium base for the volume measure of premium risk is assumed to be equal to the current year's premium base (in year $t$)⁶. Given the aggregation matrices specified in the Delegated Acts, the SCRs of catastrophe and premium risk are aggregated, and subsequently the aggregation of the SCRs of underwriting risk and counterparty default risk gives the base SCR (or BSCR; note that this is a reduced version), which will be the leading parameter in this model for determining whether insurers satisfy the capital requirements or not. The calculation formulas for the BSCR and references to the Delegated Acts are summarized in Appendix B.

From the result in Theorem 3.1 it is clear that if an insurance company has to come up with a recovery plan, its marginal costs are likely to deviate substantially from the marginal costs of its competitors. The premium for the next period must at once make up for the lack of available capital. This capital charge is loaded on top of the estimated optimal premium under asymmetric information in (3.26). Nonetheless, due to the specified market structure, an expected profit margin is included in the asymmetric information premium as given by (3.26) in Corollary 3.1, and so the capital charge can be lowered by this expected profit (as it is not optimal to charge a premium any higher than necessary, given that deviation from the estimated optimum on average lowers profits due to the concave nature of the profit function). This leads to the definition of the premium function for insurer $i$ in financial distress at time $t$:

$$
Pr^{SCR}_{i,t} = C_{i,t} + \max\left( \frac{n+\gamma_i-1}{(2n-1)\gamma_i\alpha_{i,t}}\,\Xi + \frac{1}{(2n-1)\alpha_{i,t}} \sum_{j\neq i} \alpha_{j,t}\, E(C_{j,t}\,|\,F_{i,t}) - \frac{n-1}{2n-1}\,C_{i,t} \;;\; \frac{SCR_{i,t-1} - Buffer_{i,t-1}}{\Upsilon_{i,t-1}} \right). \qquad (3.30)
$$

The SCR_{i,t−1} is calculated along the lines described in Appendix B. In (3.30), the premium is equal to the maximum of the normal optimal premium in (3.26) and the marginal costs plus the capital charge. Therefore, in the worst case, there is no longer an expected profit included in the premium. The capital charge is based on making up the shortage based on last year's number of policyholders. This is debatable, since next year's exposure is most likely going to decrease if insurer i has to increase its premium unilaterally above the optimal value. Therefore, the additional premium revenue might still not fully cover the capital shortage. However, given that the SCR decreases if the exposure goes down, this capital charge seems to be a fair approximation of what a rational insurer should charge to recover within one year. After recovery, insurers can use the standard optimal premium formula in (3.26) or (3.27) again. If an insurer does not recover within four years, it is assumed that DNB will withdraw its license, which means the insurer ceases to exist.

⁵For non-life insurance, lapse and reserve risk capital requirements depend on the size of technical provisions. For simplicity, it is assumed that the technical provisions of the insurers are of negligible size. In real life, the duration of liabilities in the segment fire and natural catastrophe insurance is very short (cf. Appendix B, Section 1), which makes this assumption more realistic. Besides, the model generates the whole claim size at once, so no reserve risk is involved.
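The recovery premium in (3.30) can be sketched as below. The function and its arguments are hypothetical illustrations: profit_term stands for the bracketed expected-profit expression from (3.26), and exposure_prev plays the role of Υ_{i,t−1}, last year's number of policyholders.

```python
def recovery_premium(marginal_cost, profit_term, scr_prev, buffer_prev, exposure_prev):
    # Eq. (3.30): marginal cost plus the larger of the normal expected
    # profit margin and the per-policy capital shortfall.
    capital_charge = (scr_prev - buffer_prev) / exposure_prev
    return marginal_cost + max(profit_term, capital_charge)

# If the shortfall per policy (20) exceeds the normal margin (10), the
# insurer gives up its expected profit to restore the buffer within a year.
premium = recovery_premium(100.0, 10.0, 5_000.0, 3_000.0, 100.0)  # 100 + max(10, 20)
```

When the buffer exceeds the SCR, the capital charge is negative and the maximum simply returns the normal expected profit margin, so (3.30) collapses back to (3.26).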

3.5 Parametrization

The model described in the last sections consists of many parameters, which either need to be estimated or chosen in a realistic way. All parameters relating to climate change, such as the number of events or the number of claims coming from an event, can be estimated using various sources of data and realistic estimation models. Parameters relating to claim size and dependence structure generally cannot be determined in an objective way, as no data is available on claims. However, these parameters can be chosen in a realistic way (although still arbitrary to some extent), based on realistic quantiles and mean claim sizes.

For all risk drivers, the parameters considered for the Log-normal distribution with a common shock effect should generate a realistic mean, skewness and tail. In order to assess the appropriateness of the parameter choice, histograms of the distributions with and without shocks are generated. Rainfall and hail events usually do not lead to big claims, and so the idea behind the parameter choice is that the location parameter µ (which for the Log-normal distribution is not the mean, but does affect the mean exponentially) should be low. The volatility parameter σ should therefore be relatively large, so that in the event of a shock the tail is thick enough to produce realistic outcomes (i.e. a high likelihood of a big claim size, while still allowing a relatively small claim value). This approach might be less appropriate for storms, as one might want a slightly bigger expected claim size for moderate storms while keeping the tail relatively heavy. Since both parameters µ and σ shape the tail of a Log-normal distribution, in order to increase the mean without letting the tail become too heavy, one can increase the parameter µ and decrease σ. Lightning is quite different from the other three risk drivers, as a common shock effect is hard to justify in the real world, so lightning as a risk driver should behave more like fire risk in general. Therefore both µ and σ are set relatively large, as it is known from the Danish fire insurance data that a Log-normal distribution tends to underestimate the tail risk of fire losses. Based on the parameters (the chosen values at time t = 0) in Table 3.1, the histograms of the claim size distributions are generated.

Table 3.1: Chosen parameter values per risk driver at time t = 0.

Parameters   Storm   Hail   Rainfall   Lightning
µ            6       0.5    0.5        6
σ            2       3.5    3.5        2.5
θ1           0.5     0.9    0.95       0
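The trade-off between µ and σ described above can be checked with the closed-form mean of a Log-normal distribution, exp(µ + σ²/2): raising µ scales the mean directly, while σ inflates both the mean and the tail. The helper below only illustrates this standard formula applied to the Table 3.1 values; note it gives unconditional means, before any shock truncation or the claim size cap.

```python
import math

def lognormal_mean(mu, sigma):
    # Mean of a Log-normal(mu, sigma) distribution: exp(mu + sigma^2 / 2).
    return math.exp(mu + sigma ** 2 / 2.0)

# Means implied by the t = 0 parameters of Table 3.1:
for name, mu, sigma in [("Storm", 6.0, 2.0), ("Hail", 0.5, 3.5),
                        ("Rainfall", 0.5, 3.5), ("Lightning", 6.0, 2.5)]:
    print(name, round(lognormal_mean(mu, sigma)))
```

For example, storm (µ = 6, σ = 2) and lightning (µ = 6, σ = 2.5) share the same µ, yet the larger σ of lightning roughly triples the implied mean, which is exactly the exponential sensitivity the parametrization discussion relies on.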


Figure 3.1 Graphs of claim size distributions with and without shocks (for different risk drivers), based on a Log-normal distribution and the parameters in Table 3.1.

As one can observe in the plots above, the distribution starts at a value larger than zero whenever a shock occurs. The parameter θ1 (whose value is stated per risk driver in Table 3.1) is the quantile of the claim size distribution at which the distribution starts in case of a shock. This means that the claim size in that case is drawn from the tail of the (Log-normal) claim size distribution. For hail and rainfall, one still sees that the probability of a fairly low claim is high. However, when a shock occurs, it is clear that the distributions become substantially heavier tailed than when no shock occurs. What cannot be seen in the above plots is that the very largest claim sizes become substantially larger compared to what is observed when no shock occurs. This also leads to a very few claims of an unrealistic size of many tens of millions. In order to prevent these very few claims from affecting the outcome of the model, a maximum individual claim size is set at the prespecified level of 5 million euro.
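The shock mechanism described above, drawing from the upper tail above the θ1 quantile and capping the claim at 5 million euro, can be sketched with inverse-CDF sampling. The function below is a hypothetical illustration (not the thesis code) of one such draw:

```python
import math
import random
import statistics

def claim_size(mu, sigma, theta1, shock, cap=5_000_000.0, rng=random):
    # Without a shock, u is uniform on (0, 1); with a shock, u is uniform
    # on (theta1, 1), so the draw comes from the upper (1 - theta1) tail
    # of the Log-normal claim size distribution.
    u = rng.uniform(theta1, 1.0) if shock else rng.random()
    z = statistics.NormalDist().inv_cdf(u)     # Phi^{-1}(u)
    return min(math.exp(mu + sigma * z), cap)  # cap at 5 million euro
```

With the hail parameters of Table 3.1 (µ = 0.5, σ = 3.5, θ1 = 0.9), every shocked draw lies at or above the 90% quantile of the unconditional Log-normal, which reproduces the "distribution starts at a value larger than zero" behaviour visible in the histograms.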

The next step is to determine the parameters related to climate change. As mentioned in the section on model assumptions, for the risk driver hail it is plausible to model a stochastic increase in the future parameter values for the individual claim size as well. Parameters δ_Hail and γ_Hail are
