
Modelling expert opinions on operational risk in pension funds

Master thesis Rijksuniversiteit Groningen

Martijn Westra s1615459 November 8, 2011


Abstract

In this thesis we model expert opinions on operational risk in pension funds. The approach used in this research allows the use of a very simple questionnaire which, in theory, reduces psychological biases in eliciting expert opinions. However, we cannot test its success in doing so. The judgements of 11 experts are aggregated to find an estimate of a 97.5% VaR. The resulting VaR can be used to find a formula for operational risk in the solvability assessment of the FTK. We find that the point estimate of this VaR is inaccurate and that its reliability cannot be tested. Nevertheless, it is the first estimate ever made for operational risk in the FTK, and the lack of real-world data makes it difficult to find a better estimate. A case study shows that the effect of operational risk on the required capital (VEV) of a pension fund is small.


Contents

1 Introduction
2 Operational risk
2.1 Operational risk
2.1.1 Definition
2.1.2 Implications
2.1.3 Implementation
2.1.4 Mitigation
2.2 Existing regulatory frameworks and their operational risk management
2.2.1 Basel II
2.2.2 Solvency II
2.2.3 The FTK
2.2.4 FIRM
2.2.5 An overview
2.3 The committee on investment policy and risk management
2.4 Operational risk in pension funds
2.4.1 Loss types
2.4.2 An indicator variable for S9
3 Data
3.1 Types of operational loss data
3.1.1 Difficulties and restrictions in collecting operational loss data
3.2 Collecting expert opinions: a questionnaire
3.2.1 Psychological biases in making judgements
3.3 The questionnaire on operational risk within pension funds
3.4 Collected data
4 Mathematical models
4.1 Frequency and severity distributions for operational risk modelling
4.1.1 Frequency distributions
4.1.2 Severity distributions
4.2 Fitting distributions to expert opinions
4.3 Aggregating distributions: the linear opinion pool
4.4 Aggregating distributions: the copula approach
4.4.1 Copulas
4.4.2 The class of Archimedean copulas and their dependence measures
4.4.3 The Frank copula
4.4.4 The Gumbel copula
4.4.5 A copula aggregation algorithm
4.5 The loss distribution approach
4.5.1 A simulation algorithm for the loss distribution approach
4.6 Infinite mean models and their implications for VaR calculation
4.7 Bootstrapping
5 Results
5.1 Total loss distribution
5.2 Expert dependence
5.3 A sensitivity analysis on λ
5.4 A sensitivity analysis on the number of experts
5.5 Operational risk in the FTK
6 Conclusions and topics for further research
6.1 Summary
6.2 Subquestion 1
6.3 Subquestion 2
6.4 Subquestion 3
6.5 Main question
7 References
A Questionnaire
B Proof of exponentially distributed waiting times
C Proof of Frank Copula


1 Introduction

The mathematical modelling of people's judgements is often a tedious job. Making estimates of statistics such as probabilities or quantiles of some distribution is very difficult, and bad assumptions lead to irrelevant results. In the financial world the modelling of so-called expert opinions is done on a regular basis. Especially when there is a scarcity of data, this kind of modelling can be useful. In the case of operational risk management, scarcity of data is a very common issue; see, for example, Antonini et al. (2009) and Ouchi (2004). Operational risk is defined as the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events. Whereas the banking sector has developed some decent models and collected a lot of data (see Moscadelli, 2004), the pension fund industry has made no quantitative progress (yet). The financial review framework (the FTK) to which Dutch pension funds are subject recognizes operational risk, but since it has not been quantified yet, it is to be managed qualitatively (Pensioen- en Verzekeringskamer, 2004). Dutch pension funds are therefore not required to hold additional capital to cover (un)expected operational losses. However, firms can benefit from good quantitative operational risk management, because it highlights the critical parts of an organization's processes. This research is focused on the modelling of expert opinions on operational risk in pension funds.

The most important question in modelling expert opinions is how to obtain a capital figure for operational risk in the least subjective way. In this research an approach devised by Frachot et al. (2004) and simplified by Alderweireld et al. (2006) is used. The questionnaire proposed by Alderweireld et al. (2006) tries to reduce the difficulties in eliciting expert opinions by asking experts very simple questions. These difficulties (i.e. psychological biases) could lead to bad estimates (see section 3). The questions are simple because they do not ask the expert to give estimates of statistics such as probabilities or quantiles; Buck et al. (2009) show that these types of estimates are sensitive to error. However, the setup of the questionnaire does not allow us to test whether the bias reduction is successful. We can only assume that it is. Therefore, we should interpret the results with care.


approximately 70%. This number of responses is very small, and as a result the estimated capital figure is sensitive to poor-quality judgements.

For every expert a (statistical) severity distribution is found using a weighted least squares approach. The resulting 11 severity distributions are aggregated using the so-called linear opinion pool. This approach averages the obtained severity distributions and thus assumes each expert's input to be equally likely. To find the distribution which describes the total amount of operational losses the sample fund could face within one year (the total loss distribution), the loss distribution approach (LDA) is used. The total loss distribution is found by combining the aforementioned aggregated severity distribution with a frequency distribution, which describes the number of operational losses occurring within a year. Input for the frequency distribution is provided by the experts. Finally, a capital charge for operational risk for the sample fund is found by calculating a Value-at-Risk (VaR) of the total loss distribution. For example, the standardized approach of the FTK assumes a one-year-ahead 97.5% Value-at-Risk. This means that with a probability of 97.5% a pension fund is able to absorb the (un)expected (operational) losses it incurs within one year.
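The LDA step described above can be sketched as a small Monte Carlo simulation. The Poisson frequency and lognormal severity below, and all parameter values, are hypothetical stand-ins for the fitted and pooled distributions of this thesis:

```python
import math
import random

def poisson(rng, lam):
    """Draw from Poisson(lam) by multiplying uniforms (Knuth's method)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_annual_loss_var(lam=4.0, mu=10.0, sigma=1.5,
                             n_sims=50_000, confidence=0.975, seed=42):
    """LDA by Monte Carlo: draw an annual loss count from the frequency
    distribution, add up that many severity draws, repeat, and read off
    an empirical quantile of the resulting total loss distribution."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_sims):
        n_losses = poisson(rng, lam)
        totals.append(sum(rng.lognormvariate(mu, sigma)
                          for _ in range(n_losses)))
    totals.sort()
    return totals[int(confidence * n_sims)]  # one-year 97.5% VaR

var_975 = simulate_annual_loss_var()
```

Replacing the two sampling distributions with the pooled expert distributions of chapter 4 gives the capital charge studied in this thesis.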

Experts at pension funds across the Netherlands provide estimates on the frequency and severity of operational losses based on their experience and business knowledge. Of course, risk averseness and the quality of operational risk management differ per fund, so one expert might expect more severe losses than another. This is reflected in the obtained data. In a sense, the data collected in this research represents the operational loss experiences and operational risk management quality across the Dutch pension fund sector, but was scaled down to represent the sample fund. The solvability assessment of the FTK is a standardized approach, applicable to every pension fund in the Netherlands. Since we have collected data from across the pension fund sector, the results of this research can be applied to a simple formula regarding a capital requirement for operational risk in the FTK. The resulting VaR measure, i.e. the capital requirement, thus contains the experience and business knowledge of 11 Dutch pension funds. Dividing this VaR measure by the value of the AUM of the sample fund, we find the percentage of AUM funds could maintain as a capital charge for operational risk. However, the spread in the data and the usage of only 11 experts require a study of the parameter uncertainty of the severity and frequency distribution parameters. The bootstrap procedure is a useful tool in modelling parameter uncertainty when there are few observations and when the theoretical distribution of our statistic of interest (the 97.5% VaR) is complicated or unknown. By randomly generating 'new' sets of expert opinions, we are able to construct a 95% confidence interval around the calculated VaR measure.
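The bootstrap idea is simple to sketch: resample the expert opinions with replacement, recompute the VaR on each resample, and take empirical quantiles of the recomputed values. Everything below, the toy opinions and the stand-in VaR estimator, is hypothetical:

```python
import random

def bootstrap_var_ci(opinions, var_estimator, n_boot=1_000, ci=0.95, seed=7):
    """Nonparametric bootstrap: resample the opinions with replacement,
    recompute the statistic each time, and return a ci-level interval."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        resample = [rng.choice(opinions) for _ in opinions]
        estimates.append(var_estimator(resample))
    estimates.sort()
    lower = estimates[int((1 - ci) / 2 * n_boot)]
    upper = estimates[int((1 + ci) / 2 * n_boot)]
    return lower, upper

# toy stand-in: 11 'opinions' (mln EUR) and a crude empirical-quantile VaR
opinions = [0.2, 0.5, 0.8, 1.1, 1.5, 2.0, 2.6, 3.3, 4.5, 6.0, 9.0]
toy_var = lambda xs: sorted(xs)[int(0.975 * len(xs))]
lower, upper = bootstrap_var_ci(opinions, toy_var)
```

In the thesis the estimator inside the loop is the full fit-pool-simulate pipeline, which makes the bootstrap computationally heavier but conceptually identical.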


Given dependence between experts, aggregating the severity distributions using a dependence structure can result in a lower capital charge for operational risk. This is called a diversification benefit. A copula approach proposed by Clemen and Jouini (1996) will be used to aggregate the elicited severity distributions and to analyze the effects of dependence on the VaR measure. This dependence might arise due to the same training experts might have had or the experiences they have shared. As was shown by Cooke and Kallen (2002) and Clemen and Winkler (1986), dependence among experts exists. Clemen and Jouini (1996) propose to use a Frank copula for aggregation purposes, because of its symmetric properties. Besides aggregating the distributions using a Frank copula, we will look at the effect of a Gumbel copula. As was shown by Antonini et al. (2009), low-frequency high-impact losses cause the most trouble and are situated in the right tail of a distribution. Because it puts more weight on the right tail of a distribution, a Gumbel copula might be more appropriate when modelling operational losses; the symmetric Frank copula could underestimate these tail risks. Buck et al. (2009) argue that mathematically complex methods do not necessarily outperform simple approaches when modelling expert opinions. As we will show later on, infinite mean models can cause some trouble when aggregating distributions using a copula approach.
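The full Clemen and Jouini (1996) aggregation is beyond a short sketch, but its core ingredient, drawing dependent uniform pairs from a Frank copula, can be illustrated with the conditional-distribution method (invert the partial derivative of the copula with respect to the first argument). The parameter value θ = 5 below is an arbitrary choice for illustration:

```python
import math
import random

def frank_pair(theta, rng):
    """Draw (u, v) from a Frank copula with parameter theta != 0.
    u is uniform; v is obtained by inverting t = dC(u, v)/du, which for
    the Frank copula is possible in closed form."""
    u = rng.random()
    t = rng.random()
    a = math.exp(-theta * u) - 1.0        # e^{-theta u} - 1
    d = math.exp(-theta) - 1.0            # e^{-theta}   - 1
    b = t * d / (1.0 + a * (1.0 - t))     # e^{-theta v} - 1, solved from t
    return u, -math.log(1.0 + b) / theta

rng = random.Random(0)
pairs = [frank_pair(5.0, rng) for _ in range(2000)]  # positively dependent
```

A Gumbel copula has no closed-form conditional inverse, so sampling it requires a numerical root search or the Marshall-Olkin method based on positive stable variates.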

Whenever a pension fund adopts the approach presented in this research to find a fund-specific VaR estimate, that fund will probably not have 11 employees who can serve as reliable experts regarding operational risk. It is therefore interesting to investigate how the VaR estimate behaves when we use fewer than 11 experts.

As of today, a lot of research exists on the modelling of expert opinions; an overview of the methodology is presented by Clemen and Winkler (1999) and Ouchi (2004). The method discussed in this introduction faces a lot of difficulties. Because there is no real-world data for pension funds, we are not able to backtest the resulting VaR measure. Furthermore, the usage of 11 experts across the whole pension fund sector causes a lot of spread in the data. On the other hand, the setup of the questionnaire makes it easy to elicit data. Although we are unable to test whether psychological biases are reduced, it seems that the estimated parameters of the severity distributions are in line with empirical evidence, as will be shown in the results section. Furthermore, no research exists on the quantification of operational risk within pension funds. Although the results of this research should be handled with care, it serves as a first view on the amount of operational risk faced by pension funds and provides, to my knowledge, the first estimate of a capital charge for operational risk for the standardized approach of the FTK.


Historical internal operational risk loss data have limited ability to predict future behaviour. Scenario analysis is forward looking and can reflect changes in the pension fund environment. Therefore, expert data should not be used on a standalone basis, but should be combined with real data. In this thesis we will investigate, with the help of a simple questionnaire, how a capital charge for operational risk in pension funds can be obtained and how it behaves using expert opinions.

Considering the above, the following questions are answered in this research:

• Main question: By eliciting expert opinions and using a simple questionnaire, can we obtain an accurate and reliable Value-at-Risk estimate for operational risk in pension funds?

– Subquestion 1: How can DNB implement a capital charge for operational risk in the FTK?

– Subquestion 2: Does the model benefit from diversification effects when the severity distributions are aggregated with the help of copulas?

– Subquestion 3: How sensitive is the calculated capital charge to the number of experts used?


2 Operational risk

2.1 Operational risk

At the beginning of this millennium people began to notice that risks other than market and credit risk play a huge role in the financial stability of firms. Several somewhat neglected risk types are now considered to be important. Among these 'neglected' risk types is the exposure to operational failure, i.e. operational risk. The amount of exposure to this type of risk became, once again, apparent when a very large real estate fraud was discovered in 2007, which affected the fund of Philips. A complex transaction scheme caused the difference in the buying and selling price of some real estate to end up in the hands of fraudulent board members of the fund and several intermediaries. The Philips pension fund immediately seized the possessions of several suspects. Internal controls and monitoring processes within the fund were examined and improved upon. The annual report of the pension fund of Philips (2010) explains that lawsuits against the fraudsters to recover a loss of approximately 165 million euro are still ongoing.

The concept of operational risk is very broad and requires a detailed introduction. Besides such an introduction, we will have a look at other regulatory frameworks and their operational risk management. These other frameworks will aid in the process of defining the operational risk exposure of a pension fund.

2.1.1 Definition

The concept of operational risk is very broad and subject to a lot of discussion. It has been defined as a non-financial risk, meaning that its quantitative impact cannot be observed directly. What is operational risk exactly? There is no clear and all-embracing definition of this type of risk. The Basel Committee on Banking Supervision (BCBS, 2001) has developed a commonly used definition regarding the Basel II framework (see section 2.2.1) which states that:

Operational risk is defined as the risk of loss resulting from inade-quate or failed internal processes, people and systems or from external events.


In 2003, the Securities and Exchange Commission (SEC) proposed its own rules on capital requirements for investment banks and broker-dealers instead of using the Basel II framework (see "SEC redefines operational risk", 2003). These proposals were inevitable, since the subsidiaries of US banks in the European Union are forced to implement the Basel II framework while the US offices are subject to domestic capital requirements. We will use the Basel description of operational risk when defining the operations to which pension funds are exposed, since it is a clear definition and it is used by other regulatory frameworks as well. One example of such a framework is Solvency II, the regulatory framework for insurance companies in the European Union; see section 2.2.2.

2.1.2 Implications

As discussed above, it is very difficult to define the operational risk a financial institution faces. There are no mathematical grounds or proofs on how to segment the operational risks to which firms are exposed. Therefore, segmentation of a firm and its corresponding operational risk is a very subjective process. To measure operational risk exposure an institution can adopt, depending on the regulatory requirements, qualitative or quantitative approaches (or both).

Currie (2004) discusses another important problem concerning operational risk: context dependency. This describes whether the size or likelihood of an incident varies in different situations, and it is determined by how fast the underlying operational processes are changing. For instance, the IT environment nowadays is much more sophisticated than the systems of ten years ago, meaning that different losses and/or IT failures would have been observed in the past. The modelling of stock prices, for example, does not appear to have these problems and seems to have the same statistical properties over time.


terrorism (an external event), the stock market fell by more than ten percent. Clearly, the operational failure due to an external event influenced the market prices. King (1998) provides an example of the overlap between events and operational losses in terms of book and market capital:

Figure 1: Example of causes, events and losses to a firm

Figure 1 provides a nice overview of the potential difficulties operational losses can cause. Business losses, for example, can result from practically every cause listed in the figure. This once again shows the complexity of operational risk. The operational losses caused by the 9/11 attacks are very difficult to measure.

2.1.3 Implementation

Operational risk management can be done both qualitatively and quantitatively and depends on the regulatory framework imposed by the pension regulator of a country. The BCBS proposes four different quantitative approaches for banks, whereas DNB has created a qualitative approach called FIRM and a separate quantitative approach concerning the required capital for pension funds. In the insurance industry, the Solvency II framework also acknowledges operational risk. In section 2.2 the operational risk management techniques used or proposed by these frameworks will be discussed.


The more sophisticated the operational risk management approach, the more time it will take an institution to implement it accordingly. More sophisticated approaches result in better operational risk management as well. The BCBS (2001-2) has devised principles on developing an appropriate risk management environment. The practice of operational risk management should consist of the following steps: (i) risk identification; (ii) risk measurement; (iii) risk monitoring; (iv) control. Firms should find a trade-off between costs and benefits.

Even though the implementation of operational risk management is difficult and time consuming, the aforementioned frameworks cause firms to incorporate better internal and external controls. An increased focus on operational risk by institutions and regulators should result in better risk management. In this way a market for information on operational risk is created.

As a lot of operational risk management tools have emerged rapidly, financial institutions no longer rely solely on internal control and the audit function to manage operational risk. A lot of these institutions have concluded that implementing a sound and risk-sensitive approach provides better operational risk management. In turn, this enhances shareholder value (see BCBS (2001-2)).

2.1.4 Mitigation

Unlike market and credit risk, operational risk cannot be mitigated or hedged by buying options or entering into swaps and futures contracts. Currie (2004) questions whether holding additional capital is the best solution to control operational risk. For example, introducing position limits for traders prevents them from going rogue. Additionally, restructuring an organization which is on the edge of bankruptcy might be more efficient than holding more capital. Also, a lot of operational risks can be mitigated by insurance contracts: an institution might insure its buildings against fire, natural disasters or vandalism. The usage of insurance contracts might, however, result in additional counterparty risk. Operational risk is an integral part of every organization and should be managed accordingly. It is impossible to 'hedge' all operational risks. A capital charge should be present to capture, to some extent, the impact of large unforeseen operational losses. The financial crisis of 2008 has taught us that much.

2.2 Existing regulatory frameworks and their operational risk management

The financial sector in the Netherlands is regulated by De Nederlandsche Bank (DNB). The Financieel Toetsingskader (FTK) is one of the regulatory frameworks devised by DNB. Besides the FTK, DNB has created a qualitative risk analysis method. This method is called Financiële Instellingen Risicoanalyse Methode (FIRM) and proves to be an extremely useful method for analysing the operations at pension funds. The operational exposures faced by banks are managed by the requirements of the Basel II accord, whereas the operational risks faced by insurance companies are going to be regulated by the Solvency II framework. In this section we will show how operational risk is managed by these regulatory frameworks.

2.2.1 Basel II

When the importance of proper risk management came to light in 1999, the BCBS proposed to create a new capital accord which would provide guidelines on the minimal regulatory capital banks should keep in order to cope with losses resulting from financial and operational risks. This was the first time that the importance of operational risk was noted, and a lot of research in that field has been done since. In 2007, the new Basel Capital Accord was implemented (Basel II). This accord is built upon three pillars: minimum capital requirements, supervisory review and market discipline. Due to the financial crisis in 2008, it became obvious that the capital requirements proposed by Basel II were insufficient. Therefore, in 2009, the BCBS started working on the next level of banking supervision: Basel III. This new framework is still 'under construction'. For more information on the Basel II accord, have a look at the consultative documents provided by the committee or the Dutch Basel III website, which has information on the history of previous Basel accords. All the information in this subsection on operational risk is obtained from the consultative document on Operational Risk (BCBS, 2001), the committee's document on Sound Practices for the Management and Supervision of Operational Risk (BCBS, 2001) and the book "Quantitative Risk Management" by Embrechts et al. (2005).

To assess the areas of operational risk exposure, Basel II subdivides a bank into eight lines of business: corporate finance, trading & sales, retail banking, commercial banking, payment and settlement, agency services, asset management and retail brokerage. Figure 2 provides a clear overview of the loss types per business line and the exposure of these loss types:

(In English FIRM reads: Financial Institutions Risk Analysis Method.)


Figure 2: Business lines, loss types and suggested exposure indicators

The BCBS provides some additional information on how to measure these loss types:

1. Write-downs: direct reduction in value of assets due to theft, fraud, unauthorized activity or market and credit losses arising as a result of operational events

2. Loss of Recourse: payments or disbursements made to incorrect parties and not recovered

3. Restitution: payments to clients of principal and/or interest by way of restitution, or the cost of any other form of compensation paid to clients

4. Legal Liability: judgements, settlements and other legal costs

5. Regulatory and Compliance (incl. Taxation Penalties): fines, or the direct cost of any other penalties, such as license revocations

6. Loss of or Damage to Assets: direct reduction in value of physical assets, including certificates, due to some kind of accident (e.g. neglect, accident, fire, earthquake)


The BCBS indicated that the Loss Distribution Approach (LDA) could be a useful measurement approach, but left its application to be researched by the industry. As of today the LDA is one of the most useful approaches; it will be explained in section 4.5. The Scenario-based AMA working group (2003) has written a document on a qualitative scenario-based approach. The three quantitative approaches initially proposed by the BCBS will be briefly discussed:

1. The Basic Indicator Approach uses gross income as a proxy for a bank's overall risk exposure. Each bank should hold capital for operational risk equal to a fixed percentage, α, of its gross income. This approach is very easy to implement and is best suited for smaller banks with a simple range of business activities.

2. The Standardized Approach considers the eight business lines as represented in figure 2. This approach is better able to reflect the different risk profiles across banks as reflected by their business activities. The capital charge for each business line is calculated by multiplying gross income by a factor, β, which serves as a rough proxy for the relationship between the industry’s operational risk loss experience for a given business line and the broad financial indicator representing the bank’s activity in that business line, calibrated to a desired supervisory soundness standard. Summing over all capital charges per business line returns the total capital charge for operational risk.

3. The Advanced Measurement Approach takes the Standardized Approach one step further by subdividing each line of business into seven loss event types: internal fraud; external fraud; employment practices and workplace safety; clients, products and business practices; damage to physical assets; business disruptions and system failures; and execution, delivery and process management. This is the most advanced and risk-sensitive approach introduced by the BCBS. The implementation of this internal method causes a lot of difficulties; banks have to create an internal loss database. Nowadays this approach has been replaced by the previously mentioned loss distribution approach. The sbAMA working group (2003) notes that an operational risk model should draw on all available information, such as expert experience, internal and relevant external loss histories, as well as key operational risk indicators and the quality of the control environment.
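The first two approaches reduce to simple arithmetic on gross income. The sketch below applies the Standardized Approach; the β factors are the values published in the Basel II accord, while the gross income figures are purely illustrative:

```python
# Basel II standardized-approach beta factors per business line
BETA = {
    "corporate finance": 0.18, "trading & sales": 0.18,
    "retail banking": 0.12, "commercial banking": 0.15,
    "payment and settlement": 0.18, "agency services": 0.15,
    "asset management": 0.12, "retail brokerage": 0.12,
}

def standardized_charge(gross_income):
    """Capital charge: sum of beta times gross income over the lines."""
    return sum(BETA[line] * gi for line, gi in gross_income.items())

# illustrative gross income (mln EUR) per business line
gi = {line: 100.0 for line in BETA}
charge = standardized_charge(gi)  # total operational risk capital
```

The Basic Indicator Approach is the special case of a single α applied to total gross income; in the actual accord, gross income is averaged over the previous three years.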

2.2.2 Solvency II


The Solvency II risk management framework can be represented in the following way:

Figure 3: Solvency II risk management framework

Figure 3 is obtained from Bloemkolk and Van Grinsven (2009). The first pillar relates to the quantitative requirement for insurers to understand the nature of their risk exposure. Insurers are required to hold sufficient capital to protect themselves against unexpected losses which, statistically, happen only once every 200 years. Under the second pillar insurers deal with the qualitative aspects of operational risk and have to adhere to the requirements for governance and risk management. The third pillar deals with disclosure, transparency requirements and reporting issues.


2.2.3 The FTK

The financial review framework Dutch pension funds are subject to is called Het Financieel Toetsingskader (FTK). This framework consists of a solvability assessment and a continuity assessment. The solvability assessment focuses on the current financial position of a pension fund. It measures whether a fund has sufficient capital available in the next year to meet its obligations and whether it is able to cover unexpected losses, which on average occur once every 40 years, resulting from several risk exposures. The FTK uses a standardized approach, which means that it is applicable to every pension fund and every fund uses the same predetermined parameters provided by DNB. However, it is possible for a fund to use more sophisticated internal (operational) risk models; in order to do so, approval of DNB is required. On the other hand, the possibility exists for small institutions to adopt a simplified approach, in which the value of the assets and liabilities of such funds is estimated in an easy way.

The required capital for each risk type in the standardized approach of the FTK is calculated in a particular way. Most capital charges are represented by a value change due to a predetermined scenario, for example a 25% decrease in a fund's mature market equity investments; the corresponding capital charge is then equal to 25% times the initial equity value. From Pensioen- en Verzekeringskamer (2004) we find that the capital charge for a combination of risk factors is given by:

S = √( S1² + S2² + S1S2 + S3² + S4² + S5² + S6² + S7² + S8² + S9² )    (1)

For more information on the parameters of (1), have a look at Pensioen- en Verzekeringskamer (2004) and De Nederlandsche Bank (2006).

One of the goals of this research is to quantify the operational risk exposure of pension funds. These results are applicable to S9 in (1). The standardized approach does not require pension funds to hold additional capital to cover operational losses. The standardized approach under the regulatory framework for banks (see Basel Committee on Banking Supervision, 2001) models operational risk as a fixed percentage of gross income. Such an approach would be most appropriate for the standardized approach of the FTK as well. Let α be some fixed percentage and let I denote an indicator variable. The methodology of this research can be used to find a value for α. The indicator variable, I, might represent a fund's gross income or assets under management; the choice of the appropriate indicator will be discussed in section 2.4.2. In conclusion, the capital requirement for operational risk will be expressed in the following way:

S9 = α × I (2)
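To see how such an S9 term would propagate through the solvability assessment, the sketch below evaluates formula (1) with and without an operational risk charge. All per-risk charges, the value of α and the AUM figure are hypothetical; because the terms add up under a square root, a small S9 changes the total only marginally, in line with the case study in this thesis:

```python
import math

def ftk_required_capital(s):
    """Formula (1): aggregate the per-risk charges S1..S9 (mln EUR)."""
    return math.sqrt(s[1]**2 + s[2]**2 + s[1] * s[2]
                     + sum(s[i]**2 for i in range(3, 10)))

# hypothetical per-risk charges for a sample fund; index 0 is unused
s = dict(enumerate([0.0, 120.0, 40.0, 15.0, 10.0, 8.0, 5.0, 0.0, 0.0, 0.0]))
base = ftk_required_capital(s)          # required capital without S9

alpha, aum = 0.001, 5_000.0             # hypothetical alpha, 5 bln AUM
s[9] = alpha * aum                      # S9 = alpha * I, formula (2)
with_oprisk = ftk_required_capital(s)   # only marginally larger than base
```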


approach underestimates the risks associated with once-in-forty-year events. How and when the FTK will be revised is unknown at this point. Therefore, we will focus on the current (old) FTK solvability assessment.

2.2.4 FIRM

The central bank and supervisor of the financial system in the Netherlands, DNB, has created a qualitative risk analysis method called FIRM. All information in this section is obtained from the FIRM manual (2005). This risk analysis method is integrated at every financial institution regulated by DNB. FIRM has been created in order to:

• be applicable for all financial institutions which are regulated by DNB;

• where possible, align with current developments in laws and regulations such as Basel II and the FTK;

• improve the comparability of risk analyses to be more uniform across institutions;

• etc. (see FIRM manual)

The objectives stated above have been accomplished through the FIRM software application. This software allows financial institutions to assess their risk exposure in a qualitative way. The scorecard approach and the scenario loss approach (see section 2.2.2) are used to accomplish this. We will focus here on the risk types that FIRM acknowledges and measures.

FIRM acknowledges financial as well as non-financial risks. Since operational risk is termed as a non-financial risk, we will ignore the financial risks and focus only on the non-financial risks and their corresponding risk items as recognized by FIRM:

Non-financial risks:
• Environmental risk
• Operational risk
• Outsourcing risk
• IT-risk
• Integrity risk
• Legal risk

Table 1: FIRM’s non-financial risks


risk items above. Every risk category is weighted to indicate its importance and exposure to the firm.

Integrity risk is posted as a separate risk, because integrity is an important regulation objective of DNB. The risk of outsourcing is also classified as a separate risk category, because of its increasing importance within the financial sector and the increasing attention DNB pays to it from a regulatory perspective. Outsourcing is important due to the increasing complexity of, for instance, risk management. Companies might lack the experience and specific knowledge and therefore contract a third party. The risk types presented in table 1 will be discussed briefly:

1. Environmental risk: Environmental risk is defined as the risk arising from changes outside the institution and includes competition, stakeholders, reputation and business climate. This category also deals with losses resulting from external events such as natural disasters, vandalism or electrical failure.

2. Operational risk: FIRM defines operational risk as the risk associated with inefficient or insufficiently effective process design or process execution. This risk type mainly occurs due to human errors.

3. Outsourcing risk: Outsourcing risk can be defined as the risk that the continuity, integrity and/or quality of the work outsourced to third parties (whether within a group or to the sponsor), or of the equipment and staff made available, is violated or harmed by the third party. Outsourcing risk is receiving more attention as most pension funds have outsourced their main activities (asset management and pension administration).

4. IT-risk: IT-risk is defined as the risk that business processes and information suffer from a lack of integrity, continuity and protection due to IT. Examples are hardware/software failure and IT-downtime.

5. Integrity risk: FIRM defines integrity risk as the risk that the integrity of the institution or the financial system is affected due to a lack of integrity or unethical behavior of the organization, its employees or the management regarding laws and regulations, social values or values created by the institution itself.

6. Legal risk: Legal risk can be defined as the risk related to (changes in and compliance with) laws and regulations, the possible threat to a firm's legal position, including the possibility that agreements in contracts are not enforceable or not correctly documented. Fines given by the regulator serve as an example.

For more information on the underlying loss events, have a look at the FIRM manual (2005).


These findings will be used in the next section to define the operational risk environment in which pension funds operate. This environment is a necessity in order to elicit expert opinions, because it enables each expert to address the operational risk exposures of a pension fund and their corresponding loss types in the same way. It serves as a building block of the questionnaire.

2.2.5 An overview

Regarding the discussed frameworks, a summarizing overview of their operational risk management is as follows:

Framework     For who?                           Qualitative?   Quantitative?
Basel II      Banks worldwide                    Yes            Yes
Solvency II   Insurers in the EU                 Yes            Yes
FTK           Dutch pension funds and insurers   No             No
FIRM          Dutch financial institutions       Yes            No

Table 2: Discussed regulatory frameworks and their operational risk management

The next subsection shows why sound operational risk management is a necessity for pension funds in the Netherlands.

2.3 The committee on investment policy and risk management

On request of the Minister of Social Affairs and Employment, the Committee on Investment Policy and Risk Management (henceforth: the committee) was installed. The goal of the committee was to investigate the way in which the investment policy, risk management, execution and governance of pension funds have developed since 1990, compared to the objectives and risk acceptance of pension funds.


investment policy is called ‘implementation shortfall’.

The committee recommends that funds should be in control of their risk management during all phases of the investment policy. This applies to the performance of strategic, tactical and operational investment policy. The committee also recommends that the risks of asymmetric information in the information processes should be reduced to a controllable level. The findings of the committee which have been discussed here can be attributed to operational risk.

2.4 Operational risk in pension funds

In order to elicit expert opinions on operational risk it is important to have a clear view on how it impacts pension funds in the Netherlands. Providing the experts with the operational processes of pension funds helps them to assess the risks in a quantifiable way. Using the regulatory frameworks discussed in this section, we are able to provide an overview of possible operational losses. This overview aids in the process of setting up an appropriate questionnaire that is understood and interpreted in the same way by all participants.

The non-financial risks as stated by FIRM are: environmental, operational, outsourcing, IT, integrity and legal risk. Comparing these with the definition of operational risk as stated in section 2.1.1, we are able to classify all non-financial risks as operational risk. As an illustration, environmental risks are due to external events. Operational, outsourcing, IT, integrity and legal risks are mainly due to inadequate or failed internal processes and people. IT-risk is caused by the failure of systems. Going back to the FTK, there are no separate S-values for the other non-financial risks (as defined by FIRM) besides operational risk (S9). Therefore, we will consider all non-financial risks as operational risk in terms of the FTK.

Most pension funds have outsourced their main activities: asset management and pension administration. It is also possible for a fund to carry out its own asset management or pension administration, but very few funds do so (see the statistical bulletin of DNB, 2010). Therefore, outsourcing risk is an important risk exposure for pension funds. This was discussed in section 2.3.

2.4.1 Loss types


These two activities are exposed to different kinds and severities of operational risk. Since most funds have one or both of these activities outsourced to a third party, we will follow the recognition by FIRM and regard a pension fund as one line of business.

Figure 2 and the documentation in section 2.2.4 allow us to identify possible losses per category:

Risk category             Possible loss events

Environmental risk        - Write-downs of value in assets due to vandalism, accidents, natural disasters or any other uncontrollable external event which can not be observed in the financial markets;
                          - Losses caused by political/regulatory decisions, unions, participants or employers.

Operational risk          - Losses due to payments or disbursements made to incorrect parties and not recovered;
                          - Payments to clients of principal and/or interest by way of restitution;
                          - The cost of any other form of compensation paid to clients;
                          - Typically, every unintentional loss resulting from administrative errors caused by people;
                          - Losing experienced people due to illness or other circumstances, where external people have to compensate the loss of knowledge;
                          - Losses due to fraud (internal and external) and theft;
                          - Losses originating from prediction errors (wrongly calculated expected values of future cash flows).

Outsourcing risk          - The quality of services provided by a third party is insufficient or results in additional costs;
                          - Bankruptcy of the third party resulting in losses for the pension fund.

IT-risk                   - Failing hardware or software;
                          - Hiring a third party to repair a down IT-system;
                          - All other losses resulting from IT-failures.

Integrity and legal risk  - Judgements, settlements and other legal costs;
                          - Taxation penalties, fines, or the direct cost of any other form of penalties, such as license revocations.

Table 3: Loss event types per risk category


Lately, a lot of attention has been paid to the risk of outsourcing. Outsourcing activities brings along several difficulties such as asymmetric information (see section 2.3). An example of a loss due to outsourcing is the claim worth 40 million euro made by pension fund OPG following the bankruptcy of Lehman Brothers. Fortunately, the fund was able to recover all funds plus interest. This is called a “near miss” and assists the organization in assessing its operational risk exposures.

Note that integrity risk and legal risk are taken as one risk category. The risk items for integrity risk as recognized by FIRM will result in fines or legal action most of the time. The fines given to ABP (1.3 million euro) and PGGM (900 thousand euro) due to illegal commercial activities are examples in this category.

Several risk items are not included. For example, we will not measure reputational risk, since it is difficult to express this kind of risk in quantifiable terms. Also, reputational damage is often the consequence of another event. The (pre) acceptation/transaction risk is also not considered, because it does not cause an observable loss of money. When a pension fund loses the signing of a client to a competitor, the fund does not incur a loss for which it should hold additional capital; it just misses income. Losses resulting from a bad strategy are also difficult to price. These quantitatively non-observable risk items should be monitored and controlled by qualitative methods. For example, bad strategy decisions made by a board will be observed by the inspection committee and the supervisory board. These bodies might then decide that the current board should be replaced.

The risk categories and their corresponding loss types are not the same for every pension fund. For example, funds which have outsourced their main activities (asset management and pension administration) might have a different amount of exposure to operational risk and outsourcing risk than other funds. The data used in this research is collected for a sample fund and is applicable to the standardized approach of the FTK. This will be explained in section 3.

2.4.2 An indicator variable for S9

In section 2.2.3 a proposal for the calculation of an operational risk capital charge, S9, in the FTK model was made. This function was represented by equation (2)


indicator.


3 Data

In this chapter the questionnaire and the way it is constructed is presented as well as the collected data.

3.1 Types of operational loss data

In general, a distinction between three types of operational loss data is considered:

• External data: External losses are publicly available and consist of low-frequency/high-impact losses.

• Internal data: Internal losses are firm-specific operational losses.

• Expert data: Risk managers who ‘know’ the business are so-called experts in their field of expertise and are able to provide expectations on frequency and severity of operational losses. This thesis makes use of this kind of data.

Currie (2004) and Muzzy (2003) note that setting up a solid data collection process is difficult and very time consuming. For example, a firm would have to make sure that employees do not try to hide their ‘mistakes’. Financial institutions are reluctant to set up a proper operational loss database as this would cause an additional capital charge. Instead of recording operational losses, qualitative approaches are very popular to manage operational risk as they do not cause a firm to hold additional funds.

The next subsection elaborates on the collection of expert data. For more information on the collection of internal and external data, please consult Antonini et al. (2009), Currie (2004) and Muzzy (2003).

3.1.1 Difficulties and restrictions in collecting operational loss data

The difficulty in collecting expert data is to overcome as much as possible the subjectivity and uncertainty resulting from the chosen approach. Expert opinions provide subjective results and bad assumptions would lead to irrelevant losses. Experts might base their opinions on their business knowledge or possible losses they have observed in the past. An advantage of expert data on operational risk is that internal losses, external losses as well as key risk indicators are taken into account. A disadvantage is that the collected data can not be validated, but reflects an expert's opinion on the current and future operational risk environment. Since internal data does not reflect future exposure, expert opinions should be combined with internal and/or external data to get a better view of the (future) operational risk exposure of a firm.


funds. Expert opinions can be elicited in several ways. One might ask an expert to grade something on a scale of 1-10 or to assess the quality of some component with a color. From Buck et al. (2009) we find that the statistic required by the researcher can have a huge impact on the quality of the results. The quality of the various measurement techniques differs and will be discussed in the next subsection along with the format of the questionnaire used for this research.

3.2 Collecting expert opinions: a questionnaire

The method of collecting expert opinions is also called scenario analysis. From the Scenario based AMA working group (2003) we find that a scenario is a potential future event. Assessing a scenario involves questions on the frequency and severity of operational losses. The relevant scenarios are determined beforehand and should take all relevant risk drivers into account. The risk categories defined under operational risk and their corresponding loss types presented in section 2.4.1 will serve as the relevant risk drivers.

Based on the predetermined scenarios, experts have to be asked questions in an understandable and easy way. The way in which questions are asked has a huge impact on the answers provided by operational risk managers. The data used in this research have been collected using a questionnaire. An advantage of using a questionnaire is that every expert receives the same information and has to answer the same questions. A difficult task in designing a questionnaire is that experts should interpret the questions in the exact same way. They should also answer those in the way intended by the researcher. Buck et al. (2009) note that people have a selective and associative memory. Therefore, a respondent may fail to recall useful information while less important/relevant information intrudes his or her thinking process. Judgements are not already available in one's mind, but are formed using the experience and knowledge as well as the selectiveness of the expert's memory. If the experts were able to use all the relevant information they could acquire in an unbiased way and avoid all known psychological biases, the elicited opinions would result in coherent distributions. Buck et al. (2009) call this set of distributions good. As this is impossible in practice, the goal of a questionnaire is to get results from the experts which are as reliable and consistent as possible and approximate the set of good distributions as accurately as possible. Eliciting opinions from several experts aids in the process of minimizing uncertainty. The next subsection is concerned with the psychological biases we should try to reduce or, even better, try to avoid.

3.2.1 Psychological biases in making judgements

Buck et al. (2009) acknowledge the following important psychological issues:

• Availability: Experts tend to make judgements according to how quickly an event comes to mind: the more easily an expert can recall an event, the more probable he or she will rate it. For example, the real estate fraud concerning the pension fund of Philips has gained a lot of media attention. Even a book has been written about it. This might cause the example to come to mind quickly and be rated as too probable.

• Representativeness: The representativeness of an event similar to an event about which a probability has to be assessed is often used as a benchmark. The problem with this approach is that people tend to ignore variability of small samples. In other words, one should not base his or her opinion on just one observation.

• Anchoring: When people are asked to estimate a quantity or to assess an uncertainty, they often start with an initial estimate (an ’anchor’) and then adjust up or down. Unfortunately, people tend to stick too closely to the initial value, not adjusting sufficiently. As an illustration consider person X and person Y . Both persons are shown a number independent of each other. Person X is shown a number which is strictly smaller than the number presented to person Y . Then X and Y are asked to provide an estimate of, for example, the number of tennis balls fitting into a Boeing 747. The estimate provided by X, who was presented the smallest number, is likely to be smaller than the estimate provided by Y (who was presented the larger number). This is because X and Y tend to anchor on the number they were presented, while that number is independent of the number of tennis balls fitting into a Boeing 747.

Peters and Hübner (2009) note that outcomes from questionnaires (i.e. probability estimates) are sensitive to the phrasing and the ordering of questions. However, Buck et al. (2009) refer to several experiments which have shown that representing probabilities using frequencies rather than single-event probabilities can improve performance.

Now that the possible issues concerning the elicitation of expert opinions and the collection of loss data have been discussed, we are able to present the format of an appropriate questionnaire. The goal is to create a questionnaire which is as simple as possible and tries to avoid the biases presented in this subsection.

3.3 The questionnaire on operational risk within pension funds


questions of interest are concerned with frequency and severity estimations based on an average, worst-case and intermediate scenario. First, the expert is asked to provide his or her judgement on the average number of operational loss events per year and, given that an event occurs, what the average impact of that event could be. Next, the respondent is asked to provide a worst-case scenario figure and, given that an event of this magnitude occurs, once in how many years it can be observed. Finally, the participating expert is asked the latter two questions concerning an operational loss event which is larger than the average, but smaller than the worst-case event.

Now that the setup of the questionnaire is clear, how does this setup overcome the issues presented so far? The questions regarding frequency do not ask the respondent to come up with a probability estimate (which is perceived as less reliable than a frequency estimate). The questions are relatively simple and should be interpreted by every expert in the same way. Furthermore, experts do not need statistical knowledge in order to answer these questions. In the scenario of average frequency and severity the experts are asked to provide estimates considering losses of at least 500 euro. In this way every expert maintains the same threshold value. Furthermore, experts base their judgement on a sample fund. In short, this sample fund is defined as an average pension fund with 1 billion euro AUM. Every respondent will provide judgements based on the same benchmark. Using a sample pension fund also has the advantage that experts do not have to provide fund-specific loss information. This will reduce the reluctance to participate in the questionnaire. The likelihood of responsiveness is further improved as the research participants will remain anonymous. The downside of a prespecified sample fund is that experts can anchor on the value of AUM. We will elaborate on this further on.


Baule and Steinhoff (2006) propose to provide experts with severities and only ask them about the corresponding frequency. Although this approach reduces uncertainty, it brings along two problems. First, how does one determine the right values to represent the severities? Second, Buck et al. (2009) note that experts are likely to use their previous answers as an anchor for their next answers as they feel that there should be some relation.

The judgements of one expert are not enough to be able to describe the operational risk faced by the entire pension fund industry on average. Therefore, the responses of several experts will be used and pooled together. The usage of more than one expert also raises a question on the dependence between them. Some experts might have had the same experience or have the same kind of business knowledge. This should be taken into account when aggregating over all responses. In the next chapter it is shown how this can be done. The questionnaire is presented in Appendix A. In the next subsection the obtained results are presented.

3.4 Collected data

Distribution of the questionnaire among a group of selected experts yielded 11 responses. Most respondents were enthusiastic about the idea of this research. This resulted in a response rate of 70%. The pension funds at which the selected experts are employed have a bias towards the larger pension funds. Looking at the annual reports of pension funds, we find that operational risk management is mainly addressed by the larger funds. These funds have larger operations and require sound operational risk management. We therefore expect to find the most experienced risk managers at the larger funds. Smaller funds might not have enough money available to set up proper operational risk management. However, larger funds that focus on operational risk management do not necessarily have to be good at it. The 11 responses were collected within a time frame of 2.5 months.

Before presenting the acquired data, the introduction of some notation is in order. Recall that every expert has given his or her opinions on an average, intermediate and worst-case severity of an operational loss and its corresponding frequency. The scenario of an average loss will be denoted as scenario 1. The scenarios of an intermediate loss and a worst-case loss will be denoted by scenario 2 and 3, respectively. Let there be E experts. Let x_{i,j} denote a loss figure in euros obtained from expert i, with i = 1, ..., E, corresponding to scenario j, j = 1, 2, 3. Thus, x_{2,3} denotes a worst-case loss figure in euros given by the second expert. In a similar way, let d_{i,j} denote a frequency estimation provided by expert i regarding scenario j. Thus d_{4,1} denotes the fourth expert's estimation of once per how many years an operational loss would occur. Since the frequency figure depends on a loss figure, we have that, for example, the set (x_{1,2}; d_{1,2}) denotes an intermediate loss figure and its corresponding frequency. In total, we obtain three sets (x_{i,j}, d_{i,j}), j = 1, 2, 3, from every expert. The following table presents the collected data:

Expert            (x_{i,j}; d_{i,j})   Scenario 1   Scenario 2   Scenario 3
                                       (j = 1)      (j = 2)      (j = 3)
Expert 1 (i=1)    x_{1,j}              7,500        50,000       10,000,000
                  d_{1,j}              0.143        3            10
Expert 2 (i=2)    x_{2,j}              10,000       25,000       10,000,000
                  d_{2,j}              0.033        2            40
Expert 3 (i=3)    x_{3,j}              50,000       1,000,000    5,000,000
                  d_{3,j}              0.1          10           25
Expert 4 (i=4)    x_{4,j}              5,000        50,000       50,000,000
                  d_{4,j}              0.25         5            200
Expert 5 (i=5)    x_{5,j}              50,000       500,000      2,000,000
                  d_{5,j}              0.5          4            8
Expert 6 (i=6)    x_{6,j}              50,000       250,000      1,500,000
                  d_{6,j}              0.05         1            10
Expert 7 (i=7)    x_{7,j}              2,000        10,000       10,000,000
                  d_{7,j}              0.005        0.1          100
Expert 8 (i=8)    x_{8,j}              2,000        10,000       200,000
                  d_{8,j}              1            5            10
Expert 9 (i=9)    x_{9,j}              25,000       25,000,000   50,000,000
                  d_{9,j}              0.1          40           100
Expert 10 (i=10)  x_{10,j}             1,000        200,000      5,000,000
                  d_{10,j}             0.2          7            60
Expert 11 (i=11)  x_{11,j}             2,000        20,000       1,000,000
                  d_{11,j}             0.2          20           30

Table 4: Collected data

The way in which the first question of the questionnaire is formulated might cause some confusion. Note that the first question of the questionnaire is about the average number of operational losses per year. Consider for example d_{1,1}. Expert 1 believes that, on average, once per 0.143 years an operational loss event could occur. This is the same as, on average, 1/0.143 ≈ 7 losses per year. The reason that the first question is formulated in this way is that it is easier to answer. The answer to the first question has been transformed into a once in 0.143-year event. The judgements of d_{1,2} and d_{1,3} are also denoted as once in 15 and once in 40 years events. In the next section it is shown why the frequencies should be formulated as once in k-year events, where k is some number to be specified by an expert.
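The transformation just described, from an average number of losses per year to a once-per-d-years figure and back, is a simple reciprocal. A minimal sketch:

```python
def events_per_year_to_waiting_time(n_per_year: float) -> float:
    """Convert an average of n losses per year into a once-per-d-years judgement."""
    return 1.0 / n_per_year

def waiting_time_to_events_per_year(d_years: float) -> float:
    """Convert a once-per-d-years judgement back into an average number of losses per year."""
    return 1.0 / d_years

# Expert 1 reported roughly 7 average losses per year, recorded in Table 4 as d = 1/7 ≈ 0.143.
print(round(events_per_year_to_waiting_time(7), 3))   # 0.143
print(round(waiting_time_to_events_per_year(0.143)))  # 7
```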


10 years. Furthermore, a loss of 50,000 euro could be observed once in every 3 years.

Although every expert was asked to make judgements based on a sample fund, we see that the spread in loss figures and frequencies is pretty large. Since experts are making judgements based on their own experience, it seems that not every fund is able to manage its operational risk exposure as effectively as another. In other words, estimations might be based on different risk profiles. For example, the number of operational losses per year expected by expert 8 equals 1, while the number of operational loss events per year as estimated by expert 7 equals 200 (1/d_{7,1} = 1/0.005 = 200) events. Besides different risk profiles, there are other unobserved factors which play a role in the data collection. For example, the risk averseness of an expert is of influence. Because the degree of risk averseness per expert is unknown to us, we cannot measure the influence of an expert's risk averseness on the collected data. We would therefore expect a lot of parameter uncertainty regarding the aggregated severity distribution. Whenever the questionnaire of this research were used to assess the operational risk exposure of just one fund, we would obtain less data. However, we expect the quality of the input to be better, because the risk managers of a fund do not have to make estimates based on a sample fund, but can project their experiences onto their own fund. Furthermore, these risk managers would make judgements of operational risk based on the same risk profile. This reduces the spread in the data.


4 Mathematical models

This section will explain the methodology used to model the expert data and to calculate a capital requirement. The most important question in modelling expert data is how to obtain a capital figure in the least subjective way. The first step is to create a simple questionnaire which is easily understood by all participants. This was explained in the previous section. The second step is to use the results from this questionnaire and transform these into a capital figure. In the questionnaire, the experts were asked to provide their expectations on frequency and severity of operational losses for several scenarios. We will thus have to combine the number of operational loss events (frequency) with the loss figures for each event (severity) to obtain a capital requirement measure. The capital charge resulting from this aggregation is called a Value-at-Risk (VaR) measure and is obtained with the help of the Loss Distribution Approach (LDA). See also section 4.5. A bootstrap algorithm to assess the uncertainty surrounding the resulting VaR figures is presented in the final subsection.
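To make the LDA concrete before the formal treatment, the following sketch simulates the aggregate annual loss distribution by Monte Carlo and reads off an empirical 97.5% VaR. The Poisson and lognormal parameters below are placeholders for illustration, not the values fitted in this thesis.

```python
import math
import random

def poisson_sample(lam: float, rng: random.Random) -> int:
    """Draw from a Poisson(lam) distribution (Knuth's multiplication algorithm)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def annual_loss(lam: float, mu: float, sigma: float, rng: random.Random) -> float:
    """One simulated year: Poisson(lam) loss count, each loss drawn Lognormal(mu, sigma)."""
    n = poisson_sample(lam, rng)
    return sum(math.exp(rng.gauss(mu, sigma)) for _ in range(n))

def var_975(lam: float, mu: float, sigma: float, n_sims: int = 20_000, seed: int = 1) -> float:
    """Empirical 97.5% quantile of the simulated aggregate annual loss distribution."""
    rng = random.Random(seed)
    losses = sorted(annual_loss(lam, mu, sigma, rng) for _ in range(n_sims))
    return losses[int(0.975 * n_sims)]

# Placeholder parameters: on average 7 losses per year with lognormal severities.
print(var_975(lam=7.0, mu=9.0, sigma=1.5))
```

The sections below replace the placeholder parameters with values derived from the expert data.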

As stated before, a method proposed by Frachot et al. (2004) and simplified by Alderweireld et al. (2006) has been used to find the operational loss severity distributions. Since 11 experts participated in the questionnaire, 11 severity distributions will result. The linear opinion pool of section 4.3 is used to aggregate these distributions. In order to investigate the effect of dependence between the experts, a copula approach proposed by Clemen and Jouini (1996) is used to aggregate the severity distributions. Section 4.4 presents this methodology as well as the copulas that have been used in this research. The severity distributions used in this research might result in so-called infinite mean models. These models cause some difficulties regarding VaR measures (see section 4.6). We start by presenting commonly used frequency and severity distributions in operational risk modelling.
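The linear opinion pool mentioned above aggregates the experts' severity distributions as a weighted average of their distribution functions, F_pool(x) = Σ_i w_i F_i(x). A minimal sketch with two assumed exponential severity CDFs (illustrative only, not the fitted distributions of this thesis):

```python
import math
from typing import Callable, Sequence

def linear_opinion_pool(cdfs: Sequence[Callable[[float], float]],
                        weights: Sequence[float]) -> Callable[[float], float]:
    """Weighted average of expert CDFs; the weights must sum to one."""
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to one")
    return lambda x: sum(w * cdf(x) for w, cdf in zip(weights, cdfs))

# Two illustrative exponential severity CDFs with mean losses of 10,000 and 50,000 euro.
expert_1 = lambda x: 1.0 - math.exp(-x / 10_000)
expert_2 = lambda x: 1.0 - math.exp(-x / 50_000)
pooled = linear_opinion_pool([expert_1, expert_2], [0.5, 0.5])
print(round(pooled(20_000), 4))
```

A copula-based aggregation, treated in section 4.4, replaces this simple average when dependence between experts is taken into account.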

4.1 Frequency and severity distributions for operational risk modelling

4.1.1 Frequency distributions


operational loss events per year. In terms of our questionnaire, we would have that:

λ̂_i = 1 / d_{i,1}    for all i    (3)

If we assume that a random variable N follows a Poisson distribution with parameter λ, the probability that there will be n events in a prespecified time interval, is given by:

P(N = n) = e^{−λ} λ^n / n!,    n = 0, 1, 2, ...    (4)

Moscadelli (2004) has done research on the loss data collection exercise performed by the BCBS in 2002. He found that frequencies were best modelled by a Negative Binomial distribution. Assuming that a random variable N follows a Negative Binomial distribution with parameters r and p, where r is a predefined number of successes and p is a probability of success, the probability of observing r successes after n trials with success rate p is given by:

P(N = n | p, r) = C(n − 1, r − 1) p^r (1 − p)^{n−r},    n = r, r + 1, r + 2, ...    (5)

where C(n − 1, r − 1) denotes the binomial coefficient.

Unlike the Poisson distribution, the Negative Binomial distribution does not impose the mean to be equal to the variance.

Examples of a Poisson distribution (λ = 5) and a Negative Binomial distribution (r = 1, p = 0.2), each with an expected value of 5, are shown in the figure below:

Figure 4: Examples of Poisson and Negative Binomial distributions
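As a cross-check of these two parameter choices, both probability mass functions can be evaluated directly from equations (4) and (5); the sketch below confirms that each distribution has mean 5 (the summation ranges are truncated where the remaining probability mass is negligible):

```python
import math

def poisson_pmf(n: int, lam: float) -> float:
    """P(N = n) for a Poisson(lam) distribution, as in equation (4)."""
    return math.exp(-lam) * lam**n / math.factorial(n)

def negbin_pmf(n: int, r: int, p: float) -> float:
    """P(N = n): probability that the r-th success occurs on trial n, as in equation (5)."""
    return math.comb(n - 1, r - 1) * p**r * (1 - p) ** (n - r)

# Means of Poisson(lam = 5) and Negative Binomial(r = 1, p = 0.2); both should equal 5.
mean_poisson = sum(n * poisson_pmf(n, 5.0) for n in range(100))
mean_negbin = sum(n * negbin_pmf(n, 1, 0.2) for n in range(1, 400))
print(round(mean_poisson, 6), round(mean_negbin, 6))  # 5.0 5.0
```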


The Negative Binomial distribution has a fatter right tail and more probability mass on the left side compared to a Poisson distribution with the same expected value. An important result regarding the choice of a frequency distribution was derived by Aue and Kalkbrener (2007). They show that the choice of a Poisson or Negative Binomial frequency distribution is almost irrelevant in terms of capital figures. The capital charge for operational risk is mainly driven by the choice of the severity distribution. In this research a Poisson distribution is used to model the frequencies. The choice for this distribution will be explained in section 4.2.

4.1.2 Severity distributions

The severities of operational losses are, trivially, always positive. Furthermore, low-frequency/high-impact losses are of most interest and are located in the (far) right tail of a distribution. With this information in mind, heavy-tailed distributions seem to be likely candidates to serve as severity distributions in operational risk modelling. Commonly used distributions are the Lognormal, Gamma, Weibull, Pareto and the Generalized Pareto Distribution (GPD). For more information have a look at, for example, Aue and Kalkbrener (2007), Domínguez et al. (2009), Fontnouvelle et al. (2004) and Moscadelli (2004). The density functions of the severity distributions we will use in this research are given by:

Distribution  Parameters      Density function

Lognormal     µ, σ > 0        f(x; µ, σ) = 1/(xσ√(2π)) · exp(−(ln x − µ)² / (2σ²)),   x > 0
Gamma         k, θ > 0        f(x; k, θ) = x^{k−1} e^{−x/θ} / (θ^k Γ(k)),             x ≥ 0
Weibull       γ, δ > 0        f(x; γ, δ) = (γ/δ)(x/δ)^{γ−1} e^{−(x/δ)^γ},             x ≥ 0
Pareto        α, x_m > 0      f(x; α, x_m) = α x_m^α / x^{α+1},                       x > x_m > 0
GPD           ξ ≥ 0, β > 0    f(x; ξ, β) = (1/β)(1 + ξx/β)^{−1/ξ−1},                  x ≥ 0

Table 5: Severity Distributions
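Two of these densities, written out directly from Table 5, can be evaluated as follows; the parameter values are illustrative only:

```python
import math

def lognormal_pdf(x: float, mu: float, sigma: float) -> float:
    """Lognormal density f(x; mu, sigma) for x > 0."""
    coef = 1.0 / (x * sigma * math.sqrt(2.0 * math.pi))
    return coef * math.exp(-((math.log(x) - mu) ** 2) / (2.0 * sigma**2))

def pareto_pdf(x: float, alpha: float, xm: float) -> float:
    """Pareto density f(x; alpha, xm) for x > xm > 0."""
    return alpha * xm**alpha / x ** (alpha + 1)

# Density of a 1 million euro loss under illustrative parameter choices.
print(lognormal_pdf(1_000_000, mu=9.0, sigma=2.0))
print(pareto_pdf(1_000_000, alpha=1.5, xm=500.0))
```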


Figure 5: Possible shapes of the severity distributions

4.2 Fitting distributions to expert opinions

One of the issues in expert modelling, i.e. scenario analysis, is to extract a capital figure from the obtained data. Frachot et al. (2004) note that scenarios can be translated into restrictions on the parameters of frequency and severity distributions. Once those restrictions have been identified, an optimization strategy such as Maximum Likelihood estimation can be applied to find parameter estimates for the frequency and severity distribution.

Let there be E experts. Let us assume that frequencies are Poisson distributed with parameter λ_i for every expert i, where i = 1, 2, ..., E. Making such an assumption seems restrictive, but as noted before, the choice of a frequency distribution is almost irrelevant in terms of capital requirements and, as we will show, the Poisson distribution has some appealing properties. Recall that λ_i denotes the average number of losses per year as expected by the i-th expert. Let F_i denote the severity distribution function describing the data provided by expert i. This could be any of the distributions discussed in 4.1.2. As noted before, heavy losses are of most interest in operational risk modelling. We are therefore interested in the probability of extreme losses. Continuing the example of the worst-case loss x_{i,3}, we find that the probability of observing a loss larger than this worst-case scenario is given by:

$$P(X > x_{i,3}) = 1 - P(X \leq x_{i,3}) = 1 - F_i(x_{i,3};\, \psi_{i,1}, \psi_{i,2}) \qquad (6)$$

where ψ_{i,1} and ψ_{i,2} are the two parameters of the severity distribution corresponding to expert i and have to be estimated using the data. If the opinions of expert i were modelled by a lognormal severity distribution, we would have that ψ̂_{i,1} = μ̂_i and ψ̂_{i,2} = σ̂_i are the estimated parameters. The average number of losses per year larger than x_{i,3} is then equal to:

$$\lambda_i \left(1 - F_i(x_{i,3};\, \psi_{i,1}, \psi_{i,2})\right) \qquad (7)$$

For Poisson distributed frequencies it then follows that the average waiting time between two consecutive losses exceeding, for example, a worst-case scenario loss figure x_{i,3} is equal to:

$$\frac{1}{\lambda_i \left(1 - F_i(x_{i,3};\, \psi_{i,1}, \psi_{i,2})\right)} \qquad (8)$$

A proof of this result can be found in Appendix B. The frequencies provided by the experts are denoted as once-in-k-year events, where k has been provided by the expert. Consider, for example, the first expert. He or she believes that we would observe a worst-case loss once every ten years. Thus on average, we would have to wait ten years until another worst-case event occurs. Therefore, the frequencies provided by our experts can be denoted as waiting times between consecutive loss events. Let d_{i,j} denote the waiting time between two consecutive events of scenario j, j = 1, 2, 3, as provided by expert i, where i = 1, 2, ..., E. As a result, for a given scenario (x_{i,j}; d_{i,j}), the parameters of the severity distribution are restricted to satisfy:

$$d_{i,j} = \frac{1}{\lambda_i \left(1 - F_i(x_{i,j};\, \psi_{i,1}, \psi_{i,2})\right)} \qquad (9)$$
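Equation (9) is easy to evaluate once a severity distribution has been chosen. A minimal sketch with a lognormal severity; all numbers are hypothetical, not expert data from this thesis:

```python
import math

def lognormal_cdf(x, mu, sigma):
    """CDF of a Lognormal(mu, sigma)."""
    return 0.5 * math.erfc((mu - math.log(x)) / (sigma * math.sqrt(2.0)))

def waiting_time(x, lam, mu, sigma):
    """Expected number of years between consecutive losses exceeding x,
    as in equation (9): 1 / (lam * (1 - F(x)))."""
    return 1.0 / (lam * (1.0 - lognormal_cdf(x, mu, sigma)))

# Hypothetical expert: 2 losses per year on average, Lognormal(10, 2) severities.
# A loss above the severity median is exceeded about once a year; a loss two
# standard deviations out (on the log scale) roughly once every 22 years.
print(waiting_time(math.exp(10.0), lam=2.0, mu=10.0, sigma=2.0))  # 1.0
print(waiting_time(math.exp(14.0), lam=2.0, mu=10.0, sigma=2.0))
```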

The goal is now to find a severity distribution F_i which describes the expert's judgements. This can be done by estimating the parameters ψ_{i,1} and ψ_{i,2} of the severity distributions discussed in 4.1.2 by solving the following problem using a weighted least squares approach:

$$Z_i = \min_{\hat{\psi}_{i,1}, \hat{\psi}_{i,2}} \sum_{j=1}^{3} w_{i,j} \left( d_{i,j} - \frac{1}{\lambda_i \left(1 - F_i(x_{i,j};\, \hat{\psi}_{i,1}, \hat{\psi}_{i,2})\right)} \right)^2 \quad \text{for all } i \qquad (10)$$

We will solve (10) for all severity distributions discussed in 4.1.2. The severity distribution, F, which produces the lowest value of (10) is assumed to best describe an expert's judgements. The variable w_{i,j} is expert i's weight associated with the j-th scenario and is equal to:

$$w_{i,j} = \frac{1}{d_{i,j}^2} \qquad (11)$$
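A minimal sketch of solving (10) for a single expert with a lognormal severity, using a plain grid search rather than a numerical optimizer. The three (loss, waiting-time) scenarios below are hypothetical, constructed to be exactly consistent with Lognormal(10, 2) at λ = 2:

```python
import math
import numpy as np

def lognormal_sf(x, mu, sigma):
    """Survival function P(X > x) of a Lognormal(mu, sigma)."""
    return 0.5 * math.erfc((math.log(x) - mu) / (sigma * math.sqrt(2.0)))

def wls_objective(scenarios, lam, mu, sigma):
    """Objective (10) with weights w_j = 1/d_j^2 as in (11)."""
    return sum((1.0 / d ** 2) * (d - 1.0 / (lam * lognormal_sf(x, mu, sigma))) ** 2
               for x, d in scenarios)

# Hypothetical scenarios (x_j, d_j): typical, severe and worst-case losses
# with their once-in-d_j-year waiting times.
scenarios = [(math.exp(10.0), 1.0), (math.exp(12.0), 3.1515), (math.exp(14.0), 21.978)]

# Grid search over (mu, sigma); a finer grid or a proper optimizer would
# sharpen the estimates further.
best = min(((wls_objective(scenarios, 2.0, mu, sigma), mu, sigma)
            for mu in np.linspace(8.0, 12.0, 81)
            for sigma in np.linspace(1.0, 3.0, 41)),
           key=lambda t: t[0])
print(best[1], best[2])  # close to (10, 2)
```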

Estimating the parameters by Maximum Likelihood instead would introduce a lot of error. This is because in the ML approach we could only use the x_{i,j} values and would have to disregard the d_{i,j} values (i.e. use less information).

Frachot et al. (2004) used (10) to find an estimate of λ_i as well. However, the questionnaire setup as provided by Alderweireld et al. (2006) allowed us to elicit λ̂_i (the estimated value of λ_i) directly from expert i. We will use this value as an estimate for λ_i. Making this assumption seems reasonable, as we expect experienced risk managers at pension funds to be able to provide reliable intuitions on the frequency of operational losses per year. Alderweireld et al. (2006) also assume x_{i,1} to be true. This allows them to rewrite (10) such that only one parameter has to be estimated. However, this would impose too many restrictions on the model, making it difficult to find an appropriate severity distribution which best describes an expert's judgements.

4.3 Aggregating distributions: the linear opinion pool

A very simple solution to the aggregation problem is to average over the experts' severity distributions. Let f_i(ψ_{i,1}, ψ_{i,2}) denote the density function corresponding to expert i's severity distribution. We can then average over all experts' distributions with the help of the so-called linear opinion pool (Buck et al., 2009):

$$f(\psi_1, \psi_2) = \sum_{i=1}^{E} w_i f_i(\psi_{i,1}, \psi_{i,2}) = \frac{1}{E} \sum_{i=1}^{E} f_i(\psi_{i,1}, \psi_{i,2}) \qquad (12)$$

The weights w_i have been set equal to 1/E for all i, i.e. the opinion of every expert is weighted equally. Note that this method does not account for correlations between experts.
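A sketch of (12) for two hypothetical experts whose severities were fitted as lognormals (the (μ, σ) pairs are assumed values). A pool of proper densities is again a proper density, which the snippet checks by numerical integration:

```python
import math
import numpy as np

def lognormal_pdf(x, mu, sigma):
    """Density of a Lognormal(mu, sigma)."""
    return (math.exp(-(math.log(x) - mu) ** 2 / (2.0 * sigma ** 2))
            / (x * sigma * math.sqrt(2.0 * math.pi)))

# Fitted (mu, sigma) pairs for two hypothetical experts.
experts = [(10.0, 2.0), (11.0, 1.5)]

def pooled_pdf(x):
    """Equal-weight linear opinion pool, equation (12): w_i = 1/E."""
    return sum(lognormal_pdf(x, mu, sigma) for mu, sigma in experts) / len(experts)

# Trapezoidal check on a log-spaced grid that the pool integrates to one.
grid = np.exp(np.linspace(-5.0, 30.0, 20001))
vals = np.array([pooled_pdf(x) for x in grid])
mass = float((0.5 * (vals[1:] + vals[:-1]) * np.diff(grid)).sum())
print(mass)
```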

4.4 Aggregating distributions: the copula approach

In this subsection an aggregation method using copulas will be explained. This method is able to aggregate the elicited experts’ severity distributions while taking dependence between experts into account. The procedure was designed by Clemen and Jouini (1996). We will start with an introduction to the concept of copulas.

4.4.1 Copulas


Let C : [0, 1]^m → [0, 1] be an m-dimensional copula, which is a mapping from the unit hypercube into the unit interval. A function C(u) = C(u_1, u_2, ..., u_m) is a copula if the following properties are satisfied:

1. C(u_1, u_2, ..., u_m) is increasing in each component u_i.

2. C(1, ..., 1, u_i, 1, ..., 1) = u_i for all i ∈ {1, ..., m}, u_i ∈ [0, 1].

3. For all (a_1, ..., a_m), (b_1, ..., b_m) ∈ [0, 1]^m with a_i ≤ b_i we have

$$\sum_{i_1=1}^{2} \cdots \sum_{i_m=1}^{2} (-1)^{i_1 + \cdots + i_m} C(u_{1 i_1}, \ldots, u_{m i_m}) \geq 0, \qquad (13)$$

where u_{j1} = a_j and u_{j2} = b_j for all j ∈ {1, ..., m}.

The first property is a requirement which should hold for every cumulative distribution function. The univariate marginal distributions of a copula should be uniform on [0, 1]. Property 2 makes sure that this requirement is satisfied. Equation (13) is the so-called rectangle inequality. It ensures that if the random vector (U_1, ..., U_m)' has distribution function C, then P(a_1 ≤ U_1 ≤ b_1, ..., a_m ≤ U_m ≤ b_m) is non-negative.

An important result in the field of copulas was found by Sklar (1959). From Embrechts et al. (2005) we find Sklar's theorem:

Let F be a joint distribution function with margins F_1, ..., F_m. Then there exists a copula C : [0, 1]^m → [0, 1] such that, for all x_1, ..., x_m in R̄ = [−∞, ∞],

$$F(x_1, \ldots, x_m) = C(F_1(x_1), \ldots, F_m(x_m)) = C(u_1, \ldots, u_m) \qquad (14)$$

If the margins are continuous, then C is unique; otherwise C is uniquely determined on Ran F_1 × Ran F_2 × · · · × Ran F_m, where Ran F_i = F_i(R̄) denotes the range of F_i. Conversely, if C is a copula and F_1, ..., F_m are univariate distribution functions, then the function F defined in (14) is a joint distribution function with margins F_1, ..., F_m.

Hence, Sklar's theorem shows that the distribution functions obtained from the experts can be combined into a copula. Since there are E experts, an E-dimensional copula should be used. Evaluating (14) at x_i = F_i^←(u_i), u_i ∈ [0, 1], i = 1, ..., E, where F^← denotes the generalized inverse of F, we find the result:

$$C(u_1, \ldots, u_E) = F(F_1^{\leftarrow}(u_1), \ldots, F_E^{\leftarrow}(u_E)) \qquad (15)$$

This result shows how copulas express dependence on a quantile scale, since the value C(u_1, ..., u_E) is the joint probability that each X_i lies below its u_i-quantile.
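The construction in (15) can be checked numerically for the simplest case: independent margins, whose copula is the product copula. The exponential margins below are an illustrative assumption, not part of the thesis's model:

```python
import math

RATE1, RATE2 = 1.0, 0.5  # rates of two hypothetical exponential margins

def F(x, rate):
    """Exponential CDF."""
    return 1.0 - math.exp(-rate * x)

def F_inv(u, rate):
    """Generalized inverse (quantile function) of the exponential CDF."""
    return -math.log(1.0 - u) / rate

def joint(x1, x2):
    """Joint CDF of two independent exponentials: it factorizes."""
    return F(x1, RATE1) * F(x2, RATE2)

def copula(u1, u2):
    """Equation (15): C(u1, u2) = F(F1^{-1}(u1), F2^{-1}(u2))."""
    return joint(F_inv(u1, RATE1), F_inv(u2, RATE2))

# For independent margins Sklar's theorem recovers the product copula.
print(copula(0.3, 0.7))  # 0.3 * 0.7 = 0.21
```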

An important set of bounds on copulas are the so-called Fréchet bounds. These bounds hold for every copula C(u_1, ..., u_E) and turn out to have important dependence interpretations (see Embrechts et al., 2005):

$$\max\left(\sum_{i=1}^{E} u_i + 1 - E,\ 0\right) \leq C(u_1, \ldots, u_E) \leq \min(u_1, \ldots, u_E)$$

The Fréchet lower bound implies perfect negative dependence, whereas the upper bound implies perfect positive dependence between the E experts.

4.4.2 The class of Archimedean copulas and their dependence measures

There exist a number of families of copulas. Every family has its own dependence structures and properties. The copulas used in this research belong to the family of Archimedean copulas. As will be shown later on, the copulas from this family have appealing properties concerning the aggregation of expert opinions. From Clemen and Jouini (1996) and Embrechts et al. (2005) we find that an E-dimensional Archimedean copula, C_E, is given by:

$$C_E(u_1, \ldots, u_E) = \phi^{-1}\left(\phi(u_1) + \cdots + \phi(u_E)\right) \qquad (16)$$

The function φ : [0, 1] → [0, ∞] satisfying φ(1) = 0 is called the generator function of an Archimedean copula. If this function satisfies φ(0) = ∞, it is called a strict generator. Clemen and Jouini (1996) note that the generating function φ^{-1} is unique up to a multiplicative constant. This means that if a function φ^{-1} can be parameterized with some α ∈ [α^-, α^+], where α^- is the minimum value and α^+ is the maximum value that α can attain, we have that for all (u_1, ..., u_E) ∈ [0, 1]^E,

$$C_{E,\alpha}(u_1, \ldots, u_E) = \phi_{\alpha}^{-1}\left(\phi_{\alpha}(u_1) + \cdots + \phi_{\alpha}(u_E)\right) \qquad (17)$$

is an E-dimensional Archimedean copula generated by φ_α^{-1}. The generating function φ_α^{-1} is a generating function φ^{-1} parameterized by some constant α as described before. The lower and upper bounds of E-dimensional Archimedean copulas are given by:

$$\Pi^E(u_1, \ldots, u_E) \leq C_{E,\alpha}(u_1, \ldots, u_E) \leq M^E(u_1, \ldots, u_E) \qquad (18)$$

where

$$\Pi^E(u_1, \ldots, u_E) = \prod_{i=1}^{E} u_i \qquad (19)$$

$$M^E(u_1, \ldots, u_E) = \min(u_1, \ldots, u_E) \qquad (20)$$

Hence, the copulas used in this research can only capture expert dependence ranging from independence up to perfect positive dependence.
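As a concrete instance of (16)-(17), the snippet below implements the Clayton copula, one standard member of the Archimedean family, with strict generator φ_α(u) = (u^(−α) − 1)/α for α > 0. It is used here purely as an illustration of the generator construction, not as the copula chosen in this research:

```python
def clayton(us, alpha):
    """E-dimensional Clayton copula: an Archimedean copula as in (16) with
    strict generator phi(u) = (u**(-alpha) - 1) / alpha, alpha > 0."""
    phi = lambda u: (u ** (-alpha) - 1.0) / alpha
    phi_inv = lambda t: (1.0 + alpha * t) ** (-1.0 / alpha)
    return phi_inv(sum(phi(u) for u in us))

u = [0.3, 0.6, 0.8]
# Copula property 2: setting all other arguments to 1 recovers u_i.
print(clayton([0.7, 1.0, 1.0], alpha=2.0))  # 0.7
# Larger alpha means stronger positive dependence: C moves from the
# independence bound Pi towards the comonotonicity bound M in (18).
for alpha in (0.5, 2.0, 8.0):
    print(alpha, clayton(u, alpha))
```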

The Archimedean family of copulas uses Kendall's tau, ρ_τ, as a dependence measure
