Master’s Thesis EORAS Comparison of Macro- and Micro-Level Reserving Models: an Insurance Type Specific Analysis

Academic year: 2021




Master’s Thesis EORAS

Comparison of Macro- and Micro-Level Reserving Models: an Insurance Type Specific Analysis

January 15, 2018

Author: J. Kool
Supervisors: Prof. dr. R.H. Koning (University of Groningen), E. Roelse MSc (EY Actuarial Services)
Second Assessor: Prof. dr. P.A. Bekker (University of Groningen)

Abstract


Contents

1 Introduction
2 Methodology
   2.1 Description of variables
   2.2 Macro-Level Model
   2.3 Micro-Level Model
       2.3.1 Position Dependent Marked Poisson Process
       2.3.2 Observed Likelihood Function
       2.3.3 Further Development of the Claim
   2.4 Model Comparison Criteria
3 Data Generation
   3.1 Claim Occurrence Time
   3.2 Reporting Delay
   3.3 Development Process
   3.4 Claim Size
   3.5 Parameters per Insurance Type
   3.6 Distributions in Estimation
4 Simulation of the Future Development of Claims
   4.1 IBNR Claims
   4.2 IBNeR Claims
   4.3 Predicted Outstanding Liabilities
   4.4 Simulation of the True Reserve Distribution
   4.5 Comment on the Fairness of the Model Comparison
5 Prediction Comparison per Insurance Type
   5.1 Loss Reserve Distribution
   5.2 Loss Reserve Error Distributions
       5.2.1 Personal Fire or Other Damage
       5.2.2 Motor Vehicle Liability
       5.2.3 Personal Liability
   5.3 Sensitivity Analysis
       5.3.2 IBNR Claims
       5.3.3 Variance in Payment Sizes
       5.3.4 Increased Settlement Delay
6 Conclusions
7 Discussion and Further Research


1 Introduction

In insurance, claims can develop over a long period: reporting, payments and settlement may all involve long delays. Insurers need to reserve capital for outstanding claims that are not reported or settled before a valuation time. In practice, actuaries call these claims Incurred But Not (enough) Reported, or IBN(e)R, claims. An accurate predictive distribution of the future IBNR and IBNeR claim payments is therefore essential: underprediction of the reserves causes solvency problems, while overprediction has a negative impact on the insurer’s profitability.

There is a difference between an IBNR and an IBNeR claim. An example of a claim process is illustrated in Figure 1. The claim process starts at the occurrence time of the claim event, defined as $T$. Later, the insured client reports the claim at time $W$. If the valuation time falls between the occurrence and reporting time, for example at $\tau_1$, the claim is called an IBNR claim. As an IBNR claim is not yet reported, it is unobserved at the valuation time. Subsequently, the insurer compensates the client’s losses with payments until the claim is settled at time $S$. If the valuation time falls between $W$ and $S$, such as at $\tau_2$, we call it an IBNeR claim. In general, actuaries use techniques based on aggregate

Figure 1: An example of a claim development


papers investigate the use of the chain-ladder method based on generalized linear models (GLMs) to incorporate trends or environmental changes. One of them is Renshaw and Verrall (1998), who suggest that the chain-ladder technique is unsuitable for modeling claim amounts.

In the era of ‘big data’, other techniques become applicable, since a lot of individual claim data is available. A model based on individual claim data, also known as a micro-level model, allows much closer modeling of the claim process: it simulates the development of open individual claims, including the reporting, payment and settlement structure. A macro-level model, by contrast, aggregates payments per development and accident period and thereby assumes similar development patterns. The framework of the micro-level model is formulated in Norberg (1993) and followed up in Norberg (1999). In recent years, only a few papers have used empirical data to evaluate the in- and out-of-sample performance of micro-level models.

One of them is Antonio and Plat (2014), who find that the reserve prediction of the micro-level model is closer to the true required reserves than the reserve predictions of traditional methods. They also conclude that the micro-level model yields a more realistic predictive distribution of the reserves. Jin (2013) continues their research with a new case study, in which the micro-level model again outperforms the macro-level models in loss reserve prediction. A Pareto distribution including two covariates is used for modeling the payment amounts. This improves the fit of large claims substantially, though room for improvement remains; at the same time it deteriorates the fit of the small claims. Antonio et al. (2016) present a discrete-time multi-state framework in which a spliced distribution is used for the payment amounts, so that small and large claims are modeled by separate distributions. This does not lead to very different results than in Antonio and Plat (2014), where the same data set is used. However, their results emphasize the importance of accurate modeling of extreme payments.


of macro- and micro-level models under multiple scenarios. The scenarios reflect changes in the claim environment that could have occurred. Especially in an unstable environment, the micro-level models outperform the macro-level models. Another advantage of a simulation study is that the comparison of macro- and micro-level model performance can be based on a reserve error distribution rather than on a single reserve error, since multiple pseudo-datasets can be generated. While they allow for environmental changes in their data generating process, the type of insurance is fixed. It is not clear whether the simulated datasets are representative of a real insurance portfolio and, if so, of what kind of insurance portfolio. Therefore, it may be worthwhile to investigate for which insurance types the application of a micro-level model is useful. The study also does not investigate the effect of more extreme payment amounts, which are common in many insurance portfolios. Moreover, most results that the researchers present assume that reporting delays do not exist; in other words, there are no IBNR claims. In certain portfolios IBNR claims have a great impact on the outstanding liabilities, so it is useful to include reporting delays in the data. We generate three types of pseudo-datasets with characteristics of claims in main insurance lines, which differ in the claim development elements described above.

In this paper we compare the performance of macro- and micro-level models per type of insurance. For which insurance characteristics in claim developments does a micro-level model predict outstanding liabilities more accurately than traditional macro-level models? To investigate this, we generate multiple pseudo-datasets, while allowing for changes in the parameter set of the data generating process. We consider parameter sets that reflect the characteristics of a specific type of insurance. Subsequently, we perform a sensitivity analysis on the parameter set, so that the claim development characteristics for which micro-level modeling is beneficial can be identified. In this way, we can investigate the performance of micro-level models independently of a particular case study, while taking insurance characteristics into account. As a result, we give actuaries insight into the usefulness of a micro-level reserving model per type of insurance.

The remainder of this paper is structured as follows. In Section 2 the methodology is outlined. The data generating process that generates the pseudo-datasets is described in Section 3. The simulation procedure of future claim developments using the micro-level model is explained in Section 4. In Section 5 the loss reserve prediction results are evaluated. First, the model results for three parameter sets are examined. Thereafter, results are compared



by performing a sensitivity analysis in the parameter set. In Section 6 the conclusions of this paper are described, followed up by possible future research directions in Section 7.

2 Methodology

In this section we first define the variables that are used in the models. Subsequently, the macro- and micro-level reserving models are described.

2.1 Description of variables

Nowadays, insurers possess individual claim data such as the claim occurrence time, reporting delay and settlement time. We outline the variables that are likely to occur in an individual claim dataset, including their notation. An overview of these variables is presented in Table 5 in the Appendix. For an individual claim $i$, we denote the occurrence time by $T_i$ and the reporting delay by $U_i$, both in monthly time units. Lower case letters are used for variables that are no longer random but a realization. Between the reporting time $W_i := T_i + U_i$ and the settlement time, there is a development of the claim. The development consists of payment times, corresponding event types and payments. In that process three types of events are distinguished, with the following meanings:

• Type 1 event: the claim is settled without a payment, which is treated as a zero payment;

• Type 2 event: the claim is settled with a payment;

• Type 3 event: the claim is not settled, but there is an intermediate payment; more payments may follow.

An event type is denoted by $E_{ij}$, where $j = 1, \dots, J_i$ is the payment number of claim $i$. The corresponding payment amount and time are denoted by $P_{ij}$ and $M_{ij}$, respectively, where $M_{i0} = W_i$. Subsequently, the interarrival times of payments are defined by $V_{ij} := M_{ij} - M_{i,j-1}$. The development process of individual claim $i$ can then be formulated as

$$D_i := \{(V_{ij}, E_{ij}, P_{ij}) \text{ for } j = 1, \dots, J_i\}$$

and the total claim process as

$$C_i := \{T_i, U_i, D_i\}.$$

Furthermore, we define the settlement delay since notification as $SD_i := M_{iJ_i} - W_i$. At the valuation time $\tau$, three categories of claims can be distinguished:


• IBNR claims: $W_i > \tau$ and $T_i \le \tau$. $D_i$ is not observed. Although the occurrence time is before the valuation time, the claim is not observed because it is not yet reported;

• IBNeR claims: $W_i \le \tau$ and $W_i + SD_i > \tau$. However, the exact value of $SD_i$ is not known because $M_{iJ_i}$ is not observed. In other words, $D_i$ is not or only partly observed, i.e. only the part $D_i^o := \{(E_{ij}, V_{ij}, P_{ij}) \text{ for } j = 1, \dots, J_i^o\}$, with $J_i^o < J_i$, is observed;

• settled claims: $W_i \le \tau$ and $W_i + SD_i \le \tau$. This means that $D_i$, and consequently $C_i$, is completely observed.

The described variables will be used in the models, which are presented in the next subsections.
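The three categories above can be expressed as a small classification helper. This is an illustrative sketch, not code from the thesis: the function and argument names are assumptions, and integer monthly times are used as in the definitions of this subsection.

```python
def classify_claim(t, u, sd, tau):
    """Classify a claim at valuation time tau.

    t  : occurrence time T_i
    u  : reporting delay U_i (so the reporting time is W_i = t + u)
    sd : settlement delay since notification SD_i
    """
    w = t + u  # reporting time W_i
    if t <= tau < w:
        return "IBNR"      # occurred, but not yet reported at tau
    if w <= tau < w + sd:
        return "IBNeR"     # reported, but not yet settled at tau
    if w + sd <= tau:
        return "settled"   # development fully observed
    return "not incurred"  # occurs after tau; outside the three categories
```

For example, a claim occurring at month 10 with a 5-month reporting delay and a 20-month settlement delay is IBNR at $\tau = 12$, IBNeR at $\tau = 20$ and settled at $\tau = 40$.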

2.2 Macro-Level Model

In macro-level models such as the chain-ladder, the loss data is cumulatively aggregated over the development periods for several accident periods. Therefore, many of the described variables are eliminated. In Table 1, the $L_{aq}$ cells are the observed cumulative losses and the $\hat{L}_{aq}$ cells are the cumulative losses to predict.

[Table 1 shows the run-off triangle: accident periods $a = 1, \dots, A$ in the rows and development periods $q = 1, \dots, Q$ in the columns; the upper-left triangle contains the observed $L_{aq}$, the lower-right triangle the predicted $\hat{L}_{aq}$.]

Table 1: Run-off triangle

We define these cumulative loss amounts of accident period $a = 1, \dots, A$ and development period $q = 1, \dots, Q$, in terms of the previous subsection’s variable notation, by

$$L_{aq} = \sum_{i: T_i \in T_a} \; \sum_{j: M_{ij} \in M_{aq}} P_{ij},$$


seasonalities more optimally. Seasonal effects are ignored in the data generating process and therefore in the macro-level model too.

The chain-ladder method in Mack (1993) assumes the existence of development factors $f_q$ for $q = 2, \dots, Q$, which are estimated by

$$\hat{f}_q = \frac{\sum_{a=1}^{A-q+1} L_{aq}}{\sum_{a=1}^{A-q+1} L_{a,q-1}}.$$

These estimated development factors are subsequently used to estimate future payments. Our primary interest lies in the sum of the future payments, because that represents the total outstanding liabilities. We estimate the outstanding liabilities up to and including development period $Q$ by

$$\hat{R}_{macro,Q} = \sum_{a=2}^{A} \hat{L}_{aQ} = \sum_{a=2}^{A} L_{a,Q-a+1} \prod_{q=Q-a+2}^{Q} \hat{f}_q.$$
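The deterministic chain-ladder computation above can be sketched as follows. This is an illustrative implementation, not the thesis’s code: the triangle is assumed square ($A = Q$) with unobserved cells stored as NaN, the tail factor of Mack (1999) is omitted, and the reserve per accident period is taken as the estimated ultimate minus the latest observed cumulative loss, a common bookkeeping convention.

```python
import numpy as np

def chain_ladder_reserve(triangle):
    """Chain-ladder on a square (A x Q) cumulative run-off triangle.

    Observed cells sit in the upper-left triangle; unobserved cells are NaN.
    Returns the development factors and the reserve per accident period.
    """
    A, Q = triangle.shape
    f = np.ones(Q)                        # f[q] develops column q-1 -> q
    for q in range(1, Q):
        rows = A - q                      # rows with both columns observed
        f[q] = triangle[:rows, q].sum() / triangle[:rows, q - 1].sum()
    reserves = np.empty(A)
    for a in range(A):
        latest_q = A - 1 - a              # last observed column of row a
        ult = triangle[a, latest_q]
        for q in range(latest_q + 1, Q):  # roll forward to ultimate
            ult *= f[q]
        reserves[a] = ult - triangle[a, latest_q]
    return f, reserves
```

On a toy 3 × 3 triangle this reproduces the familiar hand calculation of development factors and reserves.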

Since claim payments may occur even after development period $Q$ in some of the considered lines of business, a tail factor is included. We follow the approach of Mack (1999), which means that the development factors after $Q$ are first estimated by a linear extrapolation of $\log(\hat{f}_q - 1)$ over a straight line $\gamma_1 \cdot b + \gamma_2$, with $\gamma_1 < 0$. Subsequently, we find the estimated tail factor

$$\hat{f}_{tail} = \prod_{q=Q+1}^{\infty} \hat{f}_q$$

and the macro-level model’s estimated outstanding liabilities

$$\hat{R}_{macro} = \hat{f}_{tail} \cdot \hat{R}_{macro,Q}.$$

Mack’s model is a distribution-free model for the computation of development factors and their standard errors. However, distributional assumptions are required to compute ultimate loss reserve distributions. This will be based on an Overdispersed Poisson (ODP) chain-ladder model, whose bootstrap procedure is outlined in England and Verrall (2002). The point predictions are equivalent to those of the distribution-free method. Because we simulate data, many datasets are available, so we can also provide point prediction error distributions.

2.3 Micro-Level Model


It differs from a macro-level model by modeling claims on an individual level instead of aggregated per period. The micro-level model can be used to simulate unreported and open claim developments such that the outstanding liabilities are estimated. First, the PDMPP is described. Subsequently, it is used to set the likelihood function of the micro-level model.

2.3.1 Position Dependent Marked Poisson Process

In a PDMPP, the occurrence time of the claim is a point and its associated mark consists of the reporting delay and the development of the claim after notification. In the model, monthly time units are used, denoted by $t = 1, \dots, \tau$. The claim occurrence intensity of the Poisson process is denoted by $\lambda(t)$ and the associated mark by $Z := (U, D)$ with distribution $F_{Z|T} := F_{U|T} \times F_{D|T,U}$. In the remainder of this section the subscript $i$ is left out for convenience. The full development Poisson process has intensity measure

$$\lambda(t) \times \mathbb{P}(U = u \mid T) \times \mathbb{P}(D = d \mid T, U), \quad (t, u, d) \in \mathcal{C}.$$

This intensity measure cannot be directly used for the optimization of a likelihood function, since the process of IBNR claims is not observed and the process for IBNeR claims is partly observed.

We define the claim process set of IBNR claims by $\mathcal{C}^{ibnr} := \{(t, u, d) \in \mathcal{C} \mid t \le \tau,\; t + u > \tau\}$ and the claim process set of IBNeR and settled claims by $\mathcal{C}^{r} := \{(t, u, d) \in \mathcal{C} \mid t + u \le \tau\}$. According to the theory in Kaas et al. (2008), the processes on the two sets are independent, because the sets are disjoint. We denote by $1_{\{x\}}$ an indicator function which equals 1 if statement $x$ is true and 0 otherwise. Consequently, the process of IBNeR and settled claims has intensity measure

$$\lambda(t) \times \mathbb{P}(U = u \mid T) \times \mathbb{P}(D = d \mid T, U) \times 1_{\{(t,u,d) \in \mathcal{C}^r\}}$$

and can be rewritten as

$$\lambda(t)\, F_{U|T}(\tau - t)\, 1_{\{t \in [1,\tau]\}} \times \frac{\mathbb{P}(U = u \mid T)\, 1_{\{u \le \tau - t\}}}{F_{U|T}(\tau - t)} \times \mathbb{P}(D = d \mid T, U). \quad (1)$$


development of the claim, on which we will focus later in this section. In a similar way we can write the intensity measure of the process of the IBNR claims as

$$\lambda(t)\, (1 - F_{U|T}(\tau - t))\, 1_{\{t \in [1,\tau]\}} \times \frac{\mathbb{P}(U = u \mid T)\, 1_{\{u > \tau - t\}}}{1 - F_{U|T}(\tau - t)} \times \mathbb{P}(D = d \mid T, U), \quad (2)$$

which can be interpreted part by part in the same way as the other intensity measure (1). It will be used for the simulation of IBNR claims.

2.3.2 Observed Likelihood Function

As mentioned earlier, IBNR claims are not reported. Hence, their intensity measure is not used in the likelihood optimization. Yet all the parameters can be estimated by optimizing the observed likelihood function. Based on the specification of (1), we write the likelihood of the observed claim developments as

$$L^o \propto \Big[ \prod_{i: t_i + u_i \le \tau} \lambda(t_i)\, F_{U|T}(\tau - t_i) \Big] \exp\Big( -\sum_{s=1}^{\tau} \lambda(s)\, F_{U|T}(\tau - s) \Big) \times \prod_{i: t_i + u_i \le \tau} \frac{\mathbb{P}(U = u_i \mid T)}{F_{U|T}(\tau - t_i)} \times \mathbb{P}(D^{\tau - t_i - u_i} = d_i \mid T, U),$$

where the superscript on $D^{\tau - t_i - u_i}$ stands for the censoring of $D$ at time $\tau - t_i - u_i$. The likelihood of the observed claims can be simplified to

$$L^o \propto \exp\Big( -\sum_{s=1}^{\tau} \lambda(s)\, F_{U|T}(\tau - s) \Big) \Big[ \prod_{i: t_i + u_i \le \tau} \lambda(t_i)\, \mathbb{P}(U = u_i \mid T) \Big] \times \Big[ \prod_{i: t_i + u_i \le \tau} \mathbb{P}(D^{\tau - t_i - u_i} = d_i \mid T, U) \Big].$$

The two parts can be optimized separately, since none of the parameters coincide across these parts of the likelihood.

2.3.3 Further Development of the Claim


which consists of four parts without common parameters, so that the likelihood function can be optimized in four separate parts. We have

$$\mathbb{P}(V_1 = k) = h^F(k) \prod_{n=0}^{k-1} (1 - h^F(n)), \quad (3)$$

$$\mathbb{P}(V_j = k + s \mid V_{j-1} = k, E_{j-1} = 3) = h^L(s) \prod_{n=0}^{s-1} (1 - h^L(n)), \quad (4)$$

for $j \ge 2$, where $h^F(\cdot)$ and $h^L(\cdot)$ are the first and later payment time hazard rates, respectively, as functions of time. Detailed information about discrete survival models can be found in Klein and Moeschberger (2005). In our model we use piecewise constant hazard rates on specific time intervals. The parts (3) and (4) both depend on the right-censoring of the observations in the likelihood. We use the theory of Klein and Moeschberger (2005) to find the maximum likelihood estimators of the parameters in these censored distributions. The event type distribution, with $e_j = 1, 2, 3$ and $z_j$ a vector containing a one and possibly relevant covariates,

$$\mathbb{P}(E_j = e_j; \beta) = \frac{\exp(\beta'_{e_j} z_j)}{\sum_{i=1}^{3} \exp(\beta'_i z_j)},$$

is assumed to be independent of censoring after controlling for $z_j$. As a result, we can optimize the likelihood by a multinomial logistic regression. Moreover, in our model the payment distribution does not depend on the censoring of the observations either. We have $\mathbb{P}(P_j = 0 \mid E_j = 1) = 1$. The positive payment observations are used to optimize the payment size likelihood $\mathbb{P}(P_j = p_j \mid E_j = 2, 3)$, where an appropriate distribution is selected based on the data.

The micro-level model’s prediction of the outstanding liabilities, $\hat{R}_{micro}$, is computed by simulating the development of IBNR and IBNeR claims using the parameters fitted by maximum likelihood. The simulation procedure is outlined in Section 4.

2.4 Model Comparison Criteria

The evaluation of the models is mainly based on two criteria: the accuracy of the loss reserve point prediction and the predictive distribution. Because the data is simulated, the true outstanding liabilities are available. These true outstanding liabilities are denoted by $R_b$, where $b = 1, \dots, B$ and $B$ is the number of simulated datasets. Their prediction is denoted by $\hat{R}_b$, more specifically $\hat{R}_{b,macro}$ or $\hat{R}_{b,micro}$, such that the reserve error is given by $\hat{R}_b - R_b$.


Moreover, another measure of the quality of the reserve prediction is the root mean squared error of prediction (RMSEP), described in England and Verrall (2002). This measure and the mean absolute error of prediction (MAEP) are estimated by

$$\mathrm{RMSEP}(\hat{R}) = \sqrt{\frac{1}{B} \sum_{b=1}^{B} (\hat{R}_b - R_b)^2}, \qquad \mathrm{MAEP}(\hat{R}) = \frac{1}{B} \sum_{b=1}^{B} |\hat{R}_b - R_b|.$$

The MAEP is also used because it is easier to interpret: it is the average error of the model’s loss prediction and gives a clear view of the average amount that is wrongly reserved.

From the perspective of solvency, predictive distributions and especially their high quantiles are important for the insurer. An often used measure for the high quantiles is the Value-at-Risk (VaR), defined as

$$\mathrm{VaR}_\alpha(R_b^*) = \inf\{ r \in \mathbb{R} : \mathbb{P}(R_b^* > r) \le 1 - \alpha \}.$$

Here, $R_b^*$ stands for the random losses that follow the assumed population distribution. We first draw one observation, which gives the true outstanding liabilities $R_b$. By using more simulations, the true distribution is constructed numerically. The procedure is described in detail in the next sections.

The bootstrap procedure of the macro-level model provides a predictive distribution. Hence, one can take its $\alpha$-quantile as an estimate of $\mathrm{VaR}_\alpha(R_b^*)$. The micro-level model uses simulation and can consequently also provide a predictive distribution; similarly, its $\alpha$-quantile represents the estimate of $\mathrm{VaR}_\alpha(R_b^*)$. Again, a more detailed description of the procedure can be found in the next sections.

We evaluate the accuracy of the macro- and micro-level models’ estimates of $\mathrm{VaR}_\alpha(R_b^*)$ in a similar way as the point predictions. That is, we consider the measure

$$VE_{b,\alpha} = \mathrm{VaR}_\alpha(\hat{R}_b^*) - \mathrm{VaR}_\alpha(R_b^*),$$


and the corresponding root mean squared and mean absolute errors

$$\mathrm{RMSEP}\big(\mathrm{VaR}_\alpha(\hat{R}^*)\big) = \sqrt{\frac{1}{B} \sum_{b=1}^{B} VE_{b,\alpha}^2}, \qquad \mathrm{MAEP}\big(\mathrm{VaR}_\alpha(\hat{R}^*)\big) = \frac{1}{B} \sum_{b=1}^{B} |VE_{b,\alpha}|.$$

Using the statistics described in this section, we can evaluate the performances of the models.
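The comparison statistics of this section can be sketched as follows; the function names are illustrative, and the empirical $\alpha$-quantile of a simulated loss sample is assumed to serve as the VaR estimate.

```python
import numpy as np

def rmsep(pred, true):
    """Root mean squared error of prediction over the B simulated datasets."""
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    return float(np.sqrt(np.mean((pred - true) ** 2)))

def maep(pred, true):
    """Mean absolute error of prediction: the average amount wrongly reserved."""
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    return float(np.mean(np.abs(pred - true)))

def var_alpha(sample, alpha):
    """Empirical VaR_alpha: the alpha-quantile of a simulated loss sample."""
    return float(np.quantile(np.asarray(sample, float), alpha))
```

For instance, predictions (3, 1) against true values (1, 1) give RMSEP √2 and MAEP 1, illustrating how the squared criterion penalizes the single large error more heavily.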

3 Data Generation

In this section we describe the structure of the data generating process, first for a general parameter set. Subsequently, we interpret three parameter sets that correspond to characteristics of real insurance lines. Recall that an overview of the description and notation of the variables is given in the Appendix.

The generated pseudo-datasets are used to compare the models’ loss reserve predictions. For each parameter set considered in this paper, $B = 500$ pseudo-datasets containing individual claim data are generated. We make distributional assumptions for the reporting delay, occurrence time, development process and claim size. The distributions and their parameters are chosen to coincide with real data of a specific insurance line as much as possible. This is based on papers about related topics², expert judgment³ and Verbond van Verzekeraars, the Dutch national institute for insurers. In this paper we focus on three types of insurance: a personal fire or other damage, a motor vehicle liability and a personal liability insurance. These are main insurance lines which differ in characteristics, as discussed later in this section.

We simulate 96 accident months of individual claim data including the further development. The development of the claims after the 96th month is not in the estimation pseudo-datasets, i.e. $\tau = 96$. This equals 8 years, which in practice usually gives a fair mix of sufficient observations, an upper bound for development years and data that is not outdated. We outline the structure of the pseudo-dataset generation in the next subsections.

² The papers are mentioned in the next subsections.
³ Expert judgment is from actuaries of EY Actuarial Services.


3.1 Claim Occurrence Time

The number of claim occurrences in a month is assumed to follow a Poisson distribution with an arrival rate $\lambda(t)$, for $t = 1, \dots, \tau$. That is,

$$N(t) \sim \text{Poisson}(\lambda(t)),$$

where $N(t)$ is the number of occurrences in monthly time interval $t$. Furthermore, we assume that the parameter follows a random walk, i.e. $\lambda(t) = \lambda(t-1) + \epsilon_t$, with $\epsilon_t \sim N(0, \sigma^2)$. The reason to assume a random walk is that there are constantly changes in the claim environment that impact the current and future occurrence intensity in a gradual way. We assume a 1% expected change of the parameter per month relative to the start value $\lambda(0)$, which is based on expert judgment and claim frequency statistics of Verbond van Verzekeraars. The 1% change can be found by solving $\mathbb{E}[|\epsilon_t|] = \sqrt{2/\pi} \cdot \sigma = 1\% \cdot \lambda(0)$ for $\sigma$. This is equivalently interpretable as a $\sqrt{36} \cdot 1\% = 6\%$ expected change over three years. The selection of $\lambda(0)$ is based on the claim frequency and the usual number of policies in the specific insurance portfolio.
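A sketch of this occurrence simulation, under the interpretation that $\sigma$ is chosen so the expected absolute monthly change equals 1% of $\lambda(0)$; flooring the intensity at zero is an added assumption the thesis does not spell out.

```python
import numpy as np

def simulate_occurrences(lam0, tau, rel_change=0.01, seed=0):
    """Simulate monthly claim counts N(t) with a random-walk intensity.

    sigma solves E|eps_t| = sqrt(2/pi) * sigma = rel_change * lam0,
    i.e. a 1% expected monthly change by default.
    """
    rng = np.random.default_rng(seed)
    sigma = rel_change * lam0 / np.sqrt(2 / np.pi)
    lam = lam0
    counts = np.empty(tau, dtype=int)
    for t in range(tau):
        lam = max(lam + rng.normal(0.0, sigma), 0.0)  # keep intensity nonnegative
        counts[t] = rng.poisson(lam)
    return counts
```

With $\lambda(0) = 400$ and $\tau = 96$ (the PFOD setting of Section 3.5) this yields 96 monthly counts drifting gradually around 400.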

3.2 Reporting Delay

The reporting delays have an impact on the number of IBNR claims. We assume that these reporting delays follow a geometric distribution, which is equivalent to an exponential distribution with a discretized random variable:

$$U \mid T = t \sim \text{Geometric}(p(t)),$$

on the support $u \in \{0, 1, 2, \dots\}$. Again, the expectation is assumed to follow a random walk: $\frac{1-p(t)}{p(t)} = \frac{1-p(t-1)}{p(t-1)} + \kappa_t$, with disturbance $\kappa_t \sim N(0, \sigma_\kappa^2)$ and $\sigma_\kappa$ such that $\mathbb{E}[|\kappa_t|] = \sqrt{2/\pi} \cdot \sigma_\kappa = 1\% \cdot \frac{1-p(0)}{p(0)}$. The parameter $p(0)$ is set such that the mean of the geometric distribution, $\frac{1-p(0)}{p(0)}$, is representative of the expected reporting delay in months per insurance portfolio. Verrall and Wüthrich (2016) wrote a paper about detailed distributions of reporting delays in claim processes. We use a simplification of their findings in combination with expert judgment to select an appropriate $p(0)$.

3.3 Development Process


of a month with a maximum of one payment, which is more convenient in modeling. If more payments were to occur within a month, they are added up and subsequently treated as one payment. The hazard rates of the first and later payments are distinguished. For the hazard rate of the first payment we use three piecewise constants, which change after months $b_1$ and $b_2$. In practice, insurers hand over claims to another department after a fixed time, which may influence the hazard rate. Hence, the values of $b_1$ and $b_2$ are assumed to be known and thus nonrandom. The hazard rate of the first payment occurrence is then formulated as

$$h^F(v_1) = \begin{cases} 0 & v_1 = 0 \text{ or } v_1 > 120 \\ h^F_1(v_1) & v_1 \in [1, b_1] \\ h^F_2(v_1) & v_1 \in [b_1 + 1, b_2] \\ h^F_3(v_1) & v_1 \in [b_2 + 1, 120]. \end{cases}$$

Note that the intervals are in discrete monthly time and $v_1$ is the time after reporting of the claim. In some lines of business, later payment hazard rates are likely to differ from first payment hazard rates. Therefore, we also use three piecewise constants for the hazard rate of later payments, i.e. $j \ge 2$:

$$h^L(v_j) = \begin{cases} 0 & v_j = 0 \text{ or } v_j > 120 \\ h^L_1(v_j) & v_j \in [1, b_1] \\ h^L_2(v_j) & v_j \in [b_1 + 1, b_2] \\ h^L_3(v_j) & v_j \in [b_2 + 1, 120]. \end{cases}$$

Survival theory can be used to find the probabilities of payment occurrences. The probability of the first payment occurring $v_1$ months after reporting is

$$\mathbb{P}(V_{i1} = v_1) = c_1 \cdot h^F(v_1) \prod_{i=0}^{v_1 - 1} (1 - h^F(i)). \quad (5)$$

The later payment occurrence probabilities at month $v_j \in [1, 2, \dots, 120]$ are

$$\mathbb{P}(V_{ij} = v_j \mid E_{ij-1} = 3) = c_2 \cdot h^L(v_j) \prod_{i=0}^{v_j - 1} (1 - h^L(i)), \quad (6)$$

for $j = 2, \dots, J_i$. The specification of $c_1$ and $c_2$, which ensure that the probabilities (5) and (6) add up to 1, can be found in the Appendix.
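Equations (5) and (6) can be turned into a sampler by tabulating the hazard, computing the survival products and normalizing; `payment_time_pmf` and its arguments are illustrative names, not from the thesis.

```python
import numpy as np

def payment_time_pmf(h1, h2, h3, b1, b2, max_m=120):
    """Probability mass of a payment delay implied by a piecewise-constant
    hazard: h1 on [1, b1], h2 on [b1+1, b2], h3 on [b2+1, max_m]."""
    h = np.zeros(max_m + 1)              # h[0] = 0: no payment at delay 0
    h[1:b1 + 1] = h1
    h[b1 + 1:b2 + 1] = h2
    h[b2 + 1:] = h3
    surv = np.cumprod(1 - h)             # surv[v] = prod_{i<=v} (1 - h(i))
    pmf = np.zeros(max_m + 1)
    pmf[1:] = h[1:] * surv[:-1]          # h(v) * prod_{i=0}^{v-1} (1 - h(i))
    return pmf[1:] / pmf[1:].sum()       # normalization plays the role of c1/c2

def sample_payment_delay(pmf, size, seed=0):
    """Draw delays v = 1, ..., len(pmf) from the tabulated mass function."""
    rng = np.random.default_rng(seed)
    return rng.choice(np.arange(1, len(pmf) + 1), size=size, p=pmf)
```

With the PFOD hazards of Section 3.5 ($h_1 = 0.28$, $h_2 = 0.16$, $h_3 = 0.08$, $(b_1, b_2) = (6, 12)$) almost all mass falls within the 120-month window, so the normalizing constant is close to 1.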


the settlement of the claim. As a consequence, the value of $J_i$ is determined by the number of consecutive Type 3 events, including the settlement by either a Type 1 or Type 2 event. The events are determined by a multinomial logit model. That is, the probability of a Type $e$ event is given by

$$\mathbb{P}(E_{ij} = e; \beta) = \frac{\exp(\beta_e)}{\sum_{e=1}^{3} \exp(\beta_e)},$$

for $e = 1, 2, 3$. Since no covariates are used, we write the constant event type probabilities as $p_1$, $p_2$ and $p_3$ for $e = 1, 2, 3$, respectively.
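A sketch of this event mechanism, plus a back-of-envelope check of the expected number of positive payments: the claim settles with probability $p_1 + p_2$ per event, and the settling event is a zero payment with probability $p_1 / (p_1 + p_2)$. The closed-form expression is a derived check, not stated in the thesis; with the MVL/PL probabilities $(.05, .25, .7)$ of Section 3.5 it gives about 3.17, consistent with the 3.2 expected positive payments reported there.

```python
import numpy as np

def simulate_event_sequence(p, seed=0):
    """Draw event types until settlement: a Type 3 event continues the
    claim, Types 1 and 2 settle it; p = (p1, p2, p3)."""
    rng = np.random.default_rng(seed)
    events = []
    while True:
        e = int(rng.choice([1, 2, 3], p=p))
        events.append(e)
        if e != 3:
            return events

def expected_positive_payments(p1, p2, p3):
    """E[# Type 2 or 3 events]: the number of events until settlement is
    geometric with success probability p1 + p2, and the settling event is
    a zero payment (Type 1) with probability p1 / (p1 + p2)."""
    return 1.0 / (p1 + p2) - p1 / (p1 + p2)
```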

3.4 Claim Size

The payments at an event of Type 2 or 3 are positive. For general insurance portfolios we assume that the positive payments follow a log-normal distribution. This distribution satisfies the properties of payments, since they are positive and fat right-tailed. Moreover, this distribution is often representative of payment data in practice, see e.g. Antonio and Plat (2014). As a result, we have

$$P_{ij} = 0 \quad \text{for } E_{ij} = 1,$$
$$\log(P_{ij}) \mid T_i = t, SD_i = d \sim N(\mu_{t,d}, \sigma^2) \quad \text{for } E_{ij} = 2, 3.$$

The log-normal distribution has moments

$$\mathbb{E}[P_{ij}] = \exp(\mu_{t,d} + \sigma^2/2), \quad (7)$$
$$\mathrm{Var}[P_{ij}] = [\exp(\sigma^2) - 1] \exp(2\mu_{t,d} + \sigma^2). \quad (8)$$

We assume that the parameter $\sigma$ is constant over time, while the parameter $\mu_{t,d}$ changes over time in the following way:

$$\exp(\mu_{t,d} + \sigma^2/2) = \exp(\mu_{0,0} + \sigma^2/2) + \sum_{i=1}^{t} \epsilon^{acc}_i + \sum_{i=1}^{d} \epsilon^{dev}_i,$$

with $\epsilon^{acc}_i$ and $\epsilon^{dev}_i$ normally distributed variables with zero expectation. They are both set such that $\mathbb{E}[|\epsilon^{acc}_i|]$ and $\mathbb{E}[|\epsilon^{dev}_i|]$ are equal to $2\% \cdot \mu_{0,0}$, in a similar way as in Sections 3.1 and 3.2.


for each portfolio, since a mean payment below 200 also seems unrealistic. Although $\sigma^2$ is constant over time, this does not mean that the variance of the payments is constant: equation (8) demonstrates that $\mu_{t,d}$ affects the variance.

The structure of the data generating process is now outlined. We discuss the selection of the parameter sets in the next subsection.
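Equations (7) and (8) can be checked directly; with the Table 2 parameters this reproduces the payment means quoted in Section 3.5 (e.g. $\mu = 7.8$, $\sigma = 0.85$ gives a mean of about €3,503, and $\mu = 6.3$, $\sigma = 1.7$ about €2,310).

```python
import math

def lognormal_moments(mu, sigma):
    """Mean and variance of a log-normal payment, equations (7) and (8)."""
    mean = math.exp(mu + sigma ** 2 / 2)
    var = (math.exp(sigma ** 2) - 1) * math.exp(2 * mu + sigma ** 2)
    return mean, var
```

Note how the PL parameters, despite a lower mean payment than MVL, produce a far larger variance because of the larger $\sigma$.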

3.5 Parameters per Insurance Type

For each insurance type a specific parameter set is chosen, see Table 2 for an overview. Information about claim frequencies and average claim payments is obtained from Verbond van Verzekeraars. As there are approximately 100 insurers with 6,000,000 policies in total in the Netherlands, we consider an insurer with 60,000 policies in this study.

          PFOD                               MVL                                PL
  N(t)    λ(0) = 400                         λ(0) = 50                          λ(0) = 50
  U       p = 0.4                            p = 0.3                            p = 0.25
  V_j     h^F_1 = h^L_1 = 0.28               h^F_1 = 0.16, h^L_1 = 0.18         h^F_1 = 0.16, h^L_1 = 0.18
          h^F_2 = h^L_2 = 0.16               h^F_2 = 0.08, h^L_2 = 0.14         h^F_2 = 0.08, h^L_2 = 0.14
          h^F_3 = h^L_3 = 0.08               h^F_3 = 0.05, h^L_3 = 0.08         h^F_3 = 0.05, h^L_3 = 0.08
          (b_1, b_2) = (6, 12)               (b_1, b_2) = (12, 24)              (b_1, b_2) = (12, 24)
  E_j     (p_1, p_2, p_3) = (.35, .5, .15)   (p_1, p_2, p_3) = (.05, .25, .7)   (p_1, p_2, p_3) = (.05, .25, .7)
  P_j     μ = 5.5, σ = 1.7                   μ = 7.8, σ = 0.85                  μ = 6.3, σ = 1.7

Table 2: Selected parameters per insurance portfolio.

Personal Fire and Other Damage to Property Insurance


Motor Vehicle Liability Insurance

Motor Vehicle Liability (MVL) insurances have a much lower claim frequency of 1%, resulting in $\lambda(0) = 50$. The average reporting delay is equal to 2 1/3 months, which leads to $p(0) = 0.3$. The hazard rates and event type probabilities are again derived from the plots in Antonio and Plat (2014), but this time from the injury claims. The specification of the event probabilities implies 3.2 expected positive payments. The expected first payment delay is equal to 7 months and the expected delay of later intermediate payments is equal to 6 months. The settlement delay since reporting is then 20 months on average. The expectation of a payment is equal to €3,503 and its variance is equal to €1.3 · 10^7.

Personal Liability Insurance

Personal Liability (PL) insurances have a claim arrival rate similar to MVL insurances. The average reporting delay is equal to 3 months, i.e. $p(0) = 0.25$. The payment delays and expected number of positive payments are the same as for an MVL insurance. The payments have a higher variance than in an MVL portfolio. The expectation of a payment is equal to €2,310 and its variance is equal to €90.7 · 10^6.

3.6 Distributions in Estimation

Without prior knowledge about the distributional structure of the data generating process, one would perform a data analysis. Despite of the random walk disturbances over time in the parameters of the distributions, the true underlying distributions in the variables would fit the most optimal on the data. In the data generating process, the expected amounts of claims and reporting delay change over time. However, in this setting it is hardly observable, as parameters fluctuate over time without clear trends. Moreover, the parameter fluctuations in the data generating process are not necessarily reflected in the 96 observations. Therefore, in the maximum likelihood estimation λ(t) = λ and p(t) = p. Covariates may be added to the distributions of Ej and Pj. In the distribution of Ej, data analysis would result in zj = 1, that

is to say, no covariates will be used. In the data generating distribution of Pj, no dependencies


year 8 is taken as the reference group, i.e. $\alpha_8 = 0$. We put hats on parameters that are estimated. As a result,

$$\hat{\mu}_a = \hat{\mu} + \hat{\alpha}_a,$$

where $\hat{\mu}_a$ is used as the estimate of $\mu_a$ in accident year $a$, with $\hat{\mu}_8 = \hat{\mu}$. The estimate of $\sigma$, denoted $\hat{\sigma}$, is fitted over the entire sample instead of per accident year.

4 Simulation of the Future Development of Claims

In this section, we outline the simulation procedure using the micro-level model. We observe the claim developments in the data up until valuation time $\tau$. Thereafter, the developments of the IBNR and IBNeR claims are simulated. In Figure 2, four examples of claim development simulation are shown. The variables in each claim development carry a subscript $i$, which is left out for convenience. Occasionally, we refer to the figure to visualize the procedure.

Figure 2: Simulation of claim developments: four examples

4.1 IBNR Claims

The number of IBNR claims in time period t is simulated from a Poisson distribution corrected with the probability that a claim is not yet reported. This is found in the first part of intensity measure (2), where 1 − F_{U|T}(τ − t) = P(U ≥ τ − t; p). That is,

N*_IBNR(t) ~ Poisson(λ̂ · P(U ≥ τ − t; p̂)),    N*_IBNR = Σ_{t=1}^{τ} N*_IBNR(t),

resulting in n*_IBNR claims at valuation time τ. Subsequently, for each IBNR claim i = 1, . . . , n*_IBNR, a reporting delay is simulated. In the category of IBNR, U*_i > τ − t_i. Hence we simulate U*_i from a truncated geometric distribution with parameter p̂ and density f_{U|U>τ−t_i}, which is equivalent to the second part of intensity measure (2). An IBNR claim can thereupon be treated as a claim reported at time w*_i = t*_i + u*_i. Hence, the further development D_i of an IBNR claim is simulated as an IBNeR claim, for which the reporting time is known. See also the third parts of intensity measures (1) and (2), which are the same. The first claim in Figure 2 is an example of such an IBNR claim simulation. Its development after w* is described in the next subsection.
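A minimal sketch of this IBNR step, with hypothetical parameter values (λ̂, p̂ and the 12-period horizon are assumptions for illustration): the Poisson rate is thinned by the probability of being unreported, and the truncated geometric delay is drawn via the memoryless property.

```python
import numpy as np

rng = np.random.default_rng(42)
lam_hat, p_hat, tau = 8.0, 0.4, 12  # hypothetical estimates and valuation time

def prob_not_reported(u, p):
    """P(U >= u) for a geometric reporting delay on {0, 1, 2, ...}."""
    return (1.0 - p) ** max(u, 0)

# Number of IBNR claims per occurrence period t: a Poisson draw thinned by the
# probability that a claim from period t is still unreported at time tau.
n_ibnr_t = [rng.poisson(lam_hat * prob_not_reported(tau - t, p_hat))
            for t in range(1, tau + 1)]

# For each IBNR claim from period t, draw a reporting delay u > tau - t from the
# truncated geometric distribution, using its memoryless property.
delays = []
for t, n in zip(range(1, tau + 1), n_ibnr_t):
    for _ in range(n):
        u = (tau - t) + rng.geometric(p_hat)  # rng.geometric is supported on {1, 2, ...}
        delays.append((t, u))
```

Every simulated delay then satisfies u > τ − t, so each claim is indeed unreported at the valuation time, as required for the IBNR category.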

4.2 IBNeR Claims

For the simulation of the first payment time, event and size, the IBNeR claims with V_{i1} > τ are also added to the simulation sample. We simulate the payment time since the reporting month from the probability

P(V*_{i1} = k | V*_{i1} ≥ v^τ_{i1}) = ĥ_F(k) ∏_{i=0}^{k−1} (1 − ĥ_F(i)) / [ Σ_{k=v^τ_{i1}}^{120} ĥ_F(k) ∏_{i=0}^{k−1} (1 − ĥ_F(i)) ],

where k ∈ {1, . . . , 120} and v^τ_{i1} = max(1, τ + 1 − w_i), such that we work with unconditional probabilities in case of IBNR claims. Remember that the result also produces m*_{ij} = w*_i + Σ_{k=1}^{j} v*_{ik}.
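The conditional draw above can be sketched as follows; the hazard values here are hypothetical, and the indexing follows the standard discrete-time survival form rather than the thesis's exact convention:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete payment hazards h_F(k) for k = 1..120.
h = np.full(120, 0.10)
h[:3] = [0.42, 0.24, 0.12]

def sample_payment_delay(vtau, rng):
    """Draw k from P(V = k | V >= vtau), k in {1, ..., 120}, via the discrete hazard."""
    # Unconditional mass: h(k) * prod_{i<k} (1 - h(i)).
    surv_before = np.concatenate(([1.0], np.cumprod(1.0 - h[:-1])))
    mass = h * surv_before
    mass[: vtau - 1] = 0.0   # condition on V >= vtau
    mass /= mass.sum()       # renormalize, mirroring the denominator in the formula
    return rng.choice(np.arange(1, 121), p=mass)

draws = [sample_payment_delay(vtau=5, rng=rng) for _ in range(1000)]
```

The renormalization step plays the role of the denominator in the displayed probability: mass below the truncation point v^τ is set to zero and the remainder is rescaled to a proper distribution.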

The corresponding event type E is thereupon simulated from P(E*_{ij} = e_{ij}; β̂). The claim is settled together with a zero payment at a Type 1 event, which is the case in the second example of Figure 2. In case of a Type 2 or 3 event, the corresponding payment size is simulated from

log(P*_{ij}) | T_i ∈ T_a ~ N(μ̂_a, σ̂²)    for E*_{ij} = 2, 3.

A Type 2 event implies settlement, as in the third example in Figure 2. After a Type 3 event, the claim is not settled. Claims that are not settled iteratively follow a similar procedure of payment time, event type and payment size simulation. A difference is that the later payment hazard probabilities are used. Moreover, V*_{i1} is replaced by V*_{ij} and v^τ_{i1} by v^τ_{ij} = max(1, τ + 1 − m_{ij−1}). In the j-th iteration, claims with V_{ij} > τ enter the procedure, provided that the claim did not already enter. The fourth example in Figure 2 is a claim that enters at the second iteration. Subsequently, that claim is not settled at m_2 and therefore follows the same procedure again.

4.3 Predicted Outstanding Liabilities

The procedure above is performed s = 1, . . . , S times with S = 500 for each dataset b. Subsequently, we take the average of the sum of simulated payments. That is,

R̂_micro,s = Σ_i Σ_j p*_{ij},    R̂_micro = (1/S) Σ_{s=1}^{S} R̂_micro,s,

with R̂_micro as the micro-level model's prediction of the outstanding liabilities. The VaR_0.95(R*_b) is estimated by the 95% quantile of the R̂_micro,s distribution. The final results per dataset b attach a corresponding subscript, i.e. R̂_b,micro and VaR_0.95(R̂_b,micro).
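The aggregation over simulations reduces to a sample mean and an empirical 95% quantile. A small sketch with stand-in numbers (the lognormal scale is purely illustrative, not the thesis output):

```python
import numpy as np

rng = np.random.default_rng(1)

S = 500
# Stand-in for the S simulated totals R_hat_micro,s = sum_i sum_j p*_ij.
r_sim = rng.lognormal(mean=13.0, sigma=0.3, size=S)

r_hat = r_sim.mean()              # point prediction of the outstanding liabilities
var95 = np.quantile(r_sim, 0.95)  # 95% quantile of the simulated reserve distribution
```

For a right-skewed reserve distribution such as this one, the empirical 95% quantile lies well above the mean, which is why the point prediction and the VaR estimate are reported separately.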

4.4 Simulation of the True Reserve Distribution

The dataset that is generated from the assumed population distribution provides the true outstanding liabilities, denoted by R_b. It is used to compare the accuracy of the loss reserve estimates of the models. We are also interested in the performance of the models in the high quantiles of the reserve distribution. A true reserve distribution is required for this evaluation. It is constructed by simulating the IBNR and IBNeR claims at valuation time τ. We do this S = 500 times with the true assumed population distribution and time-varying parameter set

{λ(t), p(t), h_F, h_L, β, μ_{t,d}, σ²},

with t = 1, . . . , τ and d = 1, . . . , ∞. Its 95% quantile is regarded as the true VaR_0.95, which is used for the comparison of the models in the high quantiles.

4.5 Comment on the Fairness of the Model Comparison

One could argue that the micro-level model has a head start on the macro-level model, as similar distributional assumptions are used in both the data generating process and in the model. Due to this possible bias, relative micro- and macro-level performances per parameter set are also presented in the results in the next section. Moreover, the random walk disturbances in the parameters imply that the generated data do not follow exactly the distributions that are used in the micro-level model, such that the bias is diminished. It may even be the case that the macro-level model has the advantage when the disturbances in the data generating process are large, since it is a more robust method. A micro-level model may suffer more due to its many parameters. This can be reflected in both the point and tail prediction. One should note that the size of the added errors may influence the results. Nevertheless, they are in our view reasonable fluctuations over time.

5 Prediction Comparison per Insurance Type

In this section we evaluate the performances of the macro- and micro-level model based on the statistics described in Section 2.4.

5.1 Loss Reserve Distribution

Figure 3 demonstrates the distribution of the loss reserve prediction per dataset for both the macro- and micro-level model, i.e. the distribution of R̂_b,macro and R̂_b,micro. The true distribution of the reserves per dataset, R_b, is also depicted. The figure provides an overview of the average outstanding liabilities for each type of insurance. It also demonstrates the variability in outstanding liabilities per pseudo-dataset, which is mainly caused by the differences in the parameter random walk movements in the data generating process.

Figure 3: Loss reserve distributions, in terms of €1,000. (a) PFOD, (b) MVL, (c) PL.

5.2 Loss Reserve Error Distributions

In Figures 4, 5 and 6, the distributions of the reserve error RE_b and the VaR_0.95 error VE_b,0.95 are demonstrated per type of insurance. Table 3 presents some relevant model comparison statistics in addition to these figures. We analyze the results per type of insurance.

              RMSEP(R̂)   MAEP(R̂)   RMSEP(VaR_0.95(R̂))   MAEP(VaR_0.95(R̂))
PFOD - Macro    292.4      228.4           303.6                 227.1
PFOD - Micro    236.3      184.3           130.7                 104.0
MVL  - Macro    703.2      549.2           714.7                 532.9
MVL  - Micro    706.2      567.2           638.4                 499.4
PL   - Macro   1151.8      758.7          1565.1                 986.4
PL   - Micro    710.2      562.9           623.1                 484.2

Table 3: Model comparison statistics, all in terms of €1,000
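The RMSEP and MAEP statistics above are plain root mean squared and mean absolute errors over the pseudo-datasets; a sketch with a toy error vector (the numbers below are illustrative, not thesis results):

```python
import numpy as np

def rmsep(errors):
    """Root mean squared error of prediction over the pseudo-datasets."""
    e = np.asarray(errors, dtype=float)
    return float(np.sqrt(np.mean(e ** 2)))

def maep(errors):
    """Mean absolute error of prediction over the pseudo-datasets."""
    e = np.asarray(errors, dtype=float)
    return float(np.mean(np.abs(e)))

# Toy reserve errors RE_b = R_hat_b - R_b for four pseudo-datasets (in €1,000).
re = [3.0, -4.0, 0.0, 5.0]
print(rmsep(re), maep(re))  # 3.5355..., 3.0
```

Because the RMSEP squares the errors, it penalizes occasional large misses more heavily than the MAEP, which is why the two statistics are reported side by side.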

5.2.1 Personal Fire or Other Damage

In Figure 4(a) it can be observed that the micro-level model performs better, as it has more mass in the small error area. The root mean squared and mean absolute errors in Table 3 agree with the figure. On average, the micro-level model's estimate of the losses is €44,100 more accurate. This amount is obtained by subtracting the models' MAEP(R̂)s. In the high tails, the micro-level model is also more accurate, which is reflected in the high density around the small errors in Figure 4(b), as well as in the smaller RMSEP(VaR_0.95(R̂)) and MAEP(VaR_0.95(R̂)) in Table 3.

Figure 4: Error distributions of loss reserving in PFOD, in terms of €1,000. (a) Reserve point prediction error, (b) VaR_0.95 error.

5.2.2 Motor Vehicle Liability

Figure 5(a) and Table 3 show that the prediction results of both models are quite similar, in contrast to the results for a PFOD insurance. The point prediction errors of the macro-level model are slightly smaller, namely by €28,000 on average. Yet, the micro-level model is more accurate in the VaR_0.95 prediction, reflected in a smaller RMSEP(VaR_0.95(R̂)) and MAEP(VaR_0.95(R̂)). This is mainly caused by a slight overestimation of the macro-level model, which Figure 5(b) suggests.

Figure 5: Error distributions of loss reserving in MVL, in terms of €1,000. (a) Reserve point prediction error, (b) VaR_0.95 error.

5.2.3 Personal Liability

Analyzing Figure 6(a), one can state that the micro-level model indeed slightly overpredicts and the macro-level model underpredicts the outstanding liabilities. This was already suggested in Figure 3(c). Furthermore, the point prediction error distribution corresponding to the macro-level model has fatter tails, which means that the macro-level model performs worse. This is also indicated by Table 3, where the average prediction error is €195,000 larger in case of the macro-level model. Moreover, Figure 6(b) shows that the macro-level model strongly overestimates the 95% quantile of the loss distribution. The extreme positive errors are also reflected in the high RMSEP(VaR_0.95(R̂)) in Table 3.

Figure 6: Error distributions of loss reserving in PL, in terms of €1,000. (a) Reserve point prediction error, (b) VaR_0.95 error.

Overall, we observe that the macro-level model overestimates the high loss quantiles in all three insurances. Moreover, we observe that the results depend strongly on the parameter set in the data generating process. Although the input parameters in the data generating process are quite similar to those of an MVL insurance, the results of the PL insurance are different. Probably, the macro-level model has more difficulty predicting loss reserves in case of extreme payments or many IBNR claims in the portfolio. We can gain more clarity about this by performing a rough sensitivity analysis on the parameter set of the data generating process.

5.3 Sensitivity Analysis

With the sensitivity analysis, we aim to identify characteristics in the claim process for which the micro-level model is well applicable. For this sensitivity analysis, we use the parameter set corresponding to the PFOD insurance, apart from λ(0) = 100, as the baseline set. Its insurance characteristics are the most common relative to the other types of insurance, except for the number of claim occurrences. For a relative comparison between the models, we set the RMSE ratio of the results for the baseline data generating process equal to 1.00 in Table 4. The results are analyzed in the next subsections.

                   RMSE(R̂)                 RMSE(VaR_0.95(R̂))
              Macro    Micro   Ratio     Macro    Micro   Ratio
λ(0) = 400    292.4    236.3    0.92     303.6    130.7    0.91
λ(0) = 100    143.1    106.9    1.00     160.3     62.7    1.00
λ(0) = 25      75.9     47.0    1.21      99.4     36.9    1.05
p(0) = 1      107.2     81.6    0.98     122.7     46.3    1.04
p(0) = 0.2    207.8    131.9    1.18     287.8     90.1    1.25
σ = 0.85       57.1     47.9    0.89      66.3     33.8    0.77
σ = 1.3        84.7     68.6    0.92      90.2     44.5    0.79
ISD           300.4    225.6    0.99     360.5    132.4    1.06
MVL           702.3    706.2    0.74     714.7    638.4    0.44
PL           1151.8    710.2    1.21    1565.1    623.1    0.98

Table 4: Sensitivity analysis, with RMSE in terms of €1,000. The ratio stands for the RMSE of the macro-level model divided by the RMSE of the micro-level model, normalized to 1.00 for λ(0) = 100. The baseline parameter set (the row λ(0) = 100) is equivalent to that of PFOD, apart from λ(0). Alternatively, MSE ratios can be obtained by squaring the RMSE ratios.

5.3.1 Number of Occurrences

We analyze the effects of λ(0) equal to 400, 100 and 25; the resulting error distributions corresponding to the latter two are demonstrated in Figures 7 and 8, respectively. The error distributions with λ(0) = 400 coincide with Figure 4. The differences in the models' reserve point and quantile prediction results are not clearly visible in the figures. The RMSE ratio gives more insight into the performances. It indicates that the micro-level model is relatively more accurate than the macro-level model as the number of occurrences decreases, reflected in an RMSE ratio that decreases with λ(0).

The results suggest that the micro-level model's advantage of taking the known number of IBNeR claims into account outweighs the disadvantage of its many parameters. Therefore, micro-level models could be especially effective for small insurers or for insurances with low claim frequencies.

Figure 7: Error distributions of loss reserving with λ(0) = 100, in terms of €1,000. (a) Reserve point prediction error, (b) VaR_0.95 error.

Figure 8: Error distributions of loss reserving with λ(0) = 25, in terms of €1,000. (a) Reserve point prediction error, (b) VaR_0.95 error.

5.3.2 IBNR Claims

The IBNR claims have a great impact on the outstanding liabilities, in which all the corresponding payments are included. Besides the fact that the share of IBNR claim payments in the outstanding liabilities is likely to be large, the sum of IBNR claim payments is difficult to estimate, since the number of IBNR claims is unobservable. We analyze the performances of the models in case of no IBNR claims and many IBNR claims. We generate no IBNR claims by setting p(0) equal to 1. In the other parameter set, we let p(0) = 0.2, which results in an expected reporting delay of 4 months. Thereby, approximately 38% of the open claims are IBNR. Remember that in the baseline parameter set p(0) = 0.4, which leads to an expected reporting delay of 1.5 months; in the baseline, on average 19% of the open claims are IBNR. These percentages are obtained by taking the average of the IBNR fractions in the pseudo-datasets.

Figures 9 and 10 show that IBNR claims have a great impact on the accuracy of both the micro- and the macro-level model. Especially the macro-level model has difficulty predicting outstanding liabilities in portfolios with long reporting delays, reflected in the large tails of its reserve point and VaR_0.95 error distributions in Figure 10. We also observe this in Table 4. The RMSE of both models increases significantly as the number of IBNR claims increases. Yet, especially the macro-level model seems to fail relative to the micro-level model when the reporting delays are long. Namely, the RMSE ratios for p(0) = 0.2 are 0.18 and 0.25 ratio points higher than for the baseline parameter set. This indicates that micro-level models are particularly useful for insurances in which the reporting delays are long.
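The quoted expected reporting delays follow directly from the geometric distribution with monthly reporting probability p, whose mean delay is (1 − p)/p months; a quick check:

```python
# Mean of a geometric reporting delay on {0, 1, 2, ...} with monthly
# reporting probability p, matching the delays quoted in the text.
def expected_delay(p):
    return (1.0 - p) / p

print(expected_delay(0.4))  # baseline p(0) = 0.4: 1.5 months
print(expected_delay(0.2))  # many IBNR, p(0) = 0.2: 4 months
print(expected_delay(1.0))  # p(0) = 1: every claim reported immediately
```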

Figure 9: Error distributions of loss reserving with no IBNR, p(0) = 1, in terms of €1,000. (a) Reserve point prediction error, (b) VaR_0.95 error.

Figure 10: Error distributions of loss reserving with many IBNR, p(0) = 0.2, in terms of €1,000. (a) Reserve point prediction error, (b) VaR_0.95 error.

5.3.3 Variance in Payment Sizes

In the baseline data generating process, the variance in the payments is enormous. In practice, some insurers distinguish small and large claims in their macro-level model, such that the variance in payments decreases. We analyze the performances of both models as the variance of payments decreases. That is, we reduce the value of σ to 1.3 and 0.85. Thereafter, we increase the value of μ_{0,0} such that equal expected payment sizes are obtained. Consequently, we can compare the loss reserve predictions on the same scale. This results in standard deviations of the payments that are approximately halved for each reduction of σ. More detailed information about the values of μ_{0,0} and Var[P_ij] can be found in the Appendix.

Figures 7(a), 11(a) and 12(a) demonstrate that the macro- and micro-level reserve error distributions become more similar as the variance in payment size decreases. Furthermore, the tails of the VaR_0.95 error distribution of the macro-level model thin as the variance decreases, also relative to the micro-level model's distributions. These observations are confirmed in Table 4: the ratios of the point and VaR_0.95 prediction RMSEs decrease as σ decreases. Therefore, micro-level models appear particularly valuable when the variance in payment sizes is large.

Figure 11: Error distributions of loss reserving with σ = 1.3, in terms of €1,000. (a) Reserve point prediction error, (b) VaR_0.95 error.

Figure 12: Error distributions of loss reserving with σ = 0.85, in terms of €1,000. (a) Reserve point prediction error, (b) VaR_0.95 error.

5.3.4 Increased Settlement Delay

In many types of insurance, settlement delays are longer than in the baseline insurance. Therefore, we double the expected settlement delay since reporting. This is obtained with (h_F(1), h_F(2), h_F(3)) = (h_L(1), h_L(2), h_L(3)) = (0.21, 0.12, 0.06) and (p_1, p_2, p_3) = (0.25, 0.4, 0.35), which results in an expected settlement delay of 10 months. We refer to this Increased Settlement Delay parameter set with the abbreviation ISD.

The RMSE ratios corresponding to the ISD parameter set in Table 4 are close to 1. Nonetheless, it could be that a different payment structure has an impact on the relative model performances. The effects of the ISD could cancel out: the ISD parameter set also causes a larger sample size in payments and a higher variance in the sum of individual claim payments. Therefore, one should be careful with drawing conclusions from these results.

Figure 13: Error distributions of loss reserving with ISD, in terms of €1,000. (a) Reserve point prediction error, (b) VaR_0.95 error.

6 Conclusions

In this study we compare macro- and micro-level model performances over many pseudo-datasets. We allow for changes in the parameter set of the data generating process. First, three parameter sets are constructed that correspond to the characteristics of common insurance types. In addition, a sensitivity analysis on the parameter set is performed, which reflects the model performances per characteristic of the claim development structure. Performances are evaluated in terms of quantifying the best estimate and the VaR_0.95 of the outstanding liabilities.

In generated datasets of PFOD and PL insurances, the micro-level model outperforms the macro-level model. In pseudo-datasets of MVL insurances, the results of the models are similar. In other words, it is not worthwhile to replace the traditional macro-level model in MVL insurances.

The sensitivity analysis shows which claim process characteristics drive the differences in performances between micro- and macro-level reserving. It explains the moderate micro-level performance in MVL insurances: reporting delays are average and there is a low variance in payments.

Overall, the results of the micro-level model are promising, as its performances are equivalent to or better than those of the macro-level model. Particularly in insurance portfolios where sample sizes are small, reporting delays are long or payment variances are high, it is worthwhile to apply a micro-level model. These three characteristics usually occur in liability insurances with large unpredictable payments, such as asbestos liability insurance and the examined PL insurance. Yet, micro-level model applications in other types of insurance are certainly not discouraged. The results encourage further research in micro-level reserving.

7 Discussion and Further Research

There are some limitations to this study. First, as mentioned earlier, the micro-level model could have taken advantage of using distributions similar to those in the data generating process. This bias is reduced by implementing errors in the parameters, which causes the generated distributions to no longer be exact. How much of an advantage the micro-level model still retains is not clear. It would be interesting to investigate the model performances under an alternative data generating process, for example one that is distribution-free.

Furthermore, the data generating process can be extended. In this study, seasonality effects and correlations between variables are ignored. Such seasonalities and correlations can easily be dealt with in the micro-level model by adding covariates. In earlier research, e.g. Renshaw and Verrall (1998), macro-level models are stated to be unsuitable for GLM applications. Therefore, results may be even more in favor of micro-level models. This is worthwhile to investigate in further research.

In addition, it would be interesting to investigate the gain of implementing claim analytics results in the models. Claim analytics can provide initial claim estimates in categories or amounts. A micro-level model can use these as a covariate in the payment distribution. A macro-level model can be applied separately to categorized claims.


References

Katrien Antonio and Richard Plat. Micro-level stochastic loss reserving for general insurance. Scandinavian Actuarial Journal, 2014.

Katrien Antonio, Els Godecharle, Robin Van Oirbeek, et al. A multi-state approach and flexible payment distributions for micro-level reserving in general insurance. Technical report, KU Leuven, Faculty of Economics and Business, Department of Accounting, Finance and Insurance (AFI), 2016.

Peter D England and Richard J Verrall. Stochastic claims reserving in general insurance. British Actuarial Journal, 8(3):443–518, 2002.

Xiaoli Jin. Micro-level loss reserving models with applications in workers compensation insurance. University of Wisconsin-Madison, Empirical Paper, 2013.

Xiaoli Jin and Edward W (Jed) Frees. Comparing micro- and macro-level loss reserving models. University of Wisconsin-Madison, Working Paper, 2013.

Rob Kaas, Marc Goovaerts, Jan Dhaene, and Michel Denuit. Modern actuarial risk theory: using R, volume 128. Springer Science & Business Media, 2008.

John P Klein and Melvin L Moeschberger. Survival analysis: techniques for censored and truncated data. Springer Science & Business Media, 2005.

Christian Roholte Larsen. An individual claims reserving model. ASTIN Bulletin: The Journal of the IAA, 37(1):113–132, 2007.

Thomas Mack. Distribution-free calculation of the standard error of chain ladder reserve estimates. ASTIN Bulletin, 23(2):213–225, 1993.

Thomas Mack. The standard error of chain ladder reserve estimates: Recursive calculation and inclusion of a tail factor. ASTIN Bulletin: The Journal of the IAA, 29(2):361–366, 1999.

Ragnar Norberg. Prediction of outstanding liabilities in non-life insurance. ASTIN Bulletin: The Journal of the IAA, 23(1):95–115, 1993.


Arthur E Renshaw and Richard J Verrall. A stochastic model underlying the chain-ladder technique. British Actuarial Journal, 4(4):903–923, 1998.

Verbond van Verzekeraars. Verzekerd van cijfers. https://www.verzekeraars.nl/publicaties/verzekerd-van-cijfers. Accessed: December 2017.


8 Appendix

Overview of Variable Notation

Table 5 lists the variables and the notation that are used regularly in this paper. The first block of the table contains variables with subscript i; these correspond to an individual claim i. Time variables are expressed in months. Payment amounts are in euros (€).

Variable   Mathematical Equivalent                       Definition
T_i                                                      Claim occurrence time
U_i                                                      Reporting delay
W_i        T_i + U_i                                     Reporting time
J_i                                                      # Payments, a final zero payment may be included
J_i^o                                                    # Observed payments
C_i        {T_i, U_i, D_i}                               Total claim development
D_i        {(V_ij, E_ij, P_ij), j = 1, . . . , J_i}      Claim development after reporting
M_ij                                                     Time of payment j
V_i1       M_i1 − W_i                                    Months between payment 1 and reporting
V_ij       M_ij − M_ij−1                                 Months between payment j and payment j − 1
E_ij                                                     Event type corresponding to payment j
P_ij                                                     Size of payment j
SD_i       M_iJ_i − W_i                                  Settlement delay since reporting
τ                                                        Time of valuation
f_q                                                      Development factor of losses from q − 1 to q
T_a        [12(a − 1) + 1, 12a]                          Set of periods in accident year a
M_aq       [1, 12(a + q − 1)]                            Set of periods up until development period q
L_aq       Σ_{i: T_i ∈ T_a} Σ_{j: M_ij ∈ M_aq} P_ij      Losses in accident year a up until development period q
R_b                                                      True outstanding liabilities in dataset b
R*                                                       Outstanding liabilities following the true distribution
R̂*                                                       Outstanding liabilities following the predicted distribution


Values of c_1 and c_2

c_1 = 1 / [ Σ_{k=1}^{120} h_F(k) ∏_{i=0}^{k−1} (1 − h_F(i)) ],    c_2 = 1 / [ Σ_{k=1}^{120} h_L(k) ∏_{i=0}^{k−1} (1 − h_L(i)) ].

Variance in Payment Sizes

For the baseline parameter set we have

(μ_{0,0}, σ) = (5.5, 1.7),    Var[P_ij] = €18.3 · 10^6.

Moreover, in this paper we also consider

(μ_{0,0}, σ) = (6.1, 1.3),    Var[P_ij] = €4.76 · 10^6,

and

(μ_{0,0}, σ) = (6.583, 0.85),    Var[P_ij] = €1.14 · 10^6.
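Assuming the payments are lognormal with log-mean μ_{0,0} and log-standard deviation σ, as the sampling step in Section 4.2 suggests, the quoted variances follow from the standard lognormal moment formula Var[P] = (e^{σ²} − 1) e^{2μ + σ²}. A quick numerical check:

```python
import math

def lognormal_variance(mu, sigma):
    """Variance of a lognormal variable with parameters (mu, sigma)."""
    return (math.exp(sigma ** 2) - 1.0) * math.exp(2.0 * mu + sigma ** 2)

# The three (mu_00, sigma) pairs considered in this appendix, variance in €10^6.
for mu, sigma in [(5.5, 1.7), (6.1, 1.3), (6.583, 0.85)]:
    print(mu, sigma, round(lognormal_variance(mu, sigma) / 1e6, 2))
```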
