
University of Groningen

Underlying risk factors of asset classes

Marcel Haan

Master's Thesis Econometrics, Operations Research and Actuarial Studies
Specialisation: Actuarial Studies


13 March 2015

Abstract

An institutional investor invests directly or indirectly in different asset classes to obtain a diversified portfolio. The purpose of diversification over the various asset classes, such as equities, bonds, and commodities, is to reduce portfolio risk. The concept of diversification over merely the various asset classes has been increasingly criticised. This thesis examines if and how diversification on the basis of the underlying risk factors of asset classes can be achieved. We propose various factor-based investment strategies that take the expected factor exposures into account. We show that for many factor-based investment strategies, diversification over the expected risk factor exposures would have outperformed an equal diversification over solely the asset classes in bear markets. Some factor-based investment strategies take advantage of bull markets too, although their performance in bull markets would have been inferior to that of the 1/N portfolio. We also propose a factor-based strategy with revised characteristics. The performance of this strategy would have been superior to the 1/N portfolio in bull markets, but inferior in bear markets.

Keywords: Rolling window regression, principal components analysis, factor risk budgeting,


Contents

1 INTRODUCTION
2 LITERATURE REVIEW
3 LINEAR FACTOR MODELS
3.1 DEFINITION
3.2 FACTOR TAXONOMY
3.3 TIME SERIES REGRESSION
3.4 CONDITIONAL FACTOR LOADINGS
3.5 FORECASTING ASSET CLASS RETURNS
4 RISK ANALYSIS
4.1 COHERENT MEASURES OF RISK
4.2 RISK MEASURES
4.3 FACTOR MODEL RISK ANALYSIS
4.4 BACKTESTING
4.5 RISK DECOMPOSITION
5 STRATEGIC SELECTION OF INVESTMENTS
5.1 FACTOR-BASED INVESTMENT STRATEGIES
5.2 TRANSACTION COSTS
6 EMPIRICAL RESULTS
6.1 INVESTMENT ENVIRONMENT
6.2 DATA DESCRIPTION
6.3 FUNDAMENTAL FACTORS
6.4 PRINCIPAL COMPONENTS ANALYSIS
6.5 REGRESSIONS
6.6 FORECASTING ASSET CLASS RETURNS
6.7 RISK ANALYSIS
6.8 FACTOR-BASED INVESTMENT STRATEGIES
CONCLUSIONS
OPEN ISSUES
BIBLIOGRAPHY
APPENDIX
A1 FACTOR RISK PARITY PORTFOLIO

1 Introduction

A pension fund or institutional investor invests directly or indirectly in different asset classes to obtain a diversified portfolio. The idea is that an investor reduces the volatility of the portfolio's return by combining various investment categories. However, since the financial crisis of 2007-2009, the concept of diversification over merely the various asset classes has been increasingly criticised. The investments turned out to be not as diversified as investors would have expected. For example, Ang, Goetzmann and Schaefer (2009) conclude that the exposure to risk factors accounted for the results of the Norwegian Government Pension Fund, even in 2008, when ten years' worth of cumulative outperformance was wiped out. In this perspective it would be wise to look at investing in a different way. Investing in factors is such an approach. It is one of the fundamental laws of economics that returns come at the price of risk. The idea of factor investing is that an institutional investor takes implicit positions in the underlying risk factors of the different asset classes. Investors have long been aware of this approach and have identified many factors over the years.

The main purpose of this thesis is to examine if and how institutional investors can reduce portfolio risk by taking the underlying risk factors of asset classes into account, while still being able to take advantage of bull markets. This raises further questions:

- What are the underlying risk factors of asset classes, and how can we obtain an understanding of the exposure of asset classes and portfolios to these factors?

- Are the underlying risk factors useful for predicting asset class returns?

- How can we measure portfolio risk and the contribution of the underlying risk factors to portfolio risk?

- How can we define investment strategies that take the underlying risk factors into account and reduce portfolio risk, while remaining able to take advantage of bull markets in the presence of transaction costs?

- How do the factor-based investment strategies perform compared to an investment strategy with merely an equal spread over the various asset classes?

This report provides answers to these questions, which play an important role in achieving the main purpose of this thesis. We have structured this report in the following manner. Chapter 2 gives a brief literature overview of factor models. We describe factor models in chapter 3, where we give a summary of the well-known risk factors and describe how we extract statistical risk factors from observable asset class returns. Moreover, we show how we estimate the exposure to the risk factors and describe the methodology for predicting asset class returns in chapter 3. In chapter 4 we explain the risk measures we use, how we estimate and backtest them, and how we decompose them into factor contributions. Chapter 5 describes the factor-based investment strategies and the treatment of transaction costs, and chapter 6 presents the empirical results.

2 Literature Review

As a general rule, the variance of the return of a portfolio can be reduced by including additional assets in the portfolio, a process referred to as diversification. Diversification is one of the key fundamental investment concepts, both in theory and in practice. According to Koedijk, Slager and Stork (2013) it even has the status of being the only truly free lunch in the world of investments. The first mathematical model that leads to minimum-variance portfolios was formulated by Markowitz (1952). The model minimises the variance of the portfolio subject to the restriction that the expected portfolio return should be higher than a predetermined value. Expanding on the Markowitz framework, Sharpe (1964) derived what is now known as the CAPM (Capital Asset Pricing Model). The CAPM postulates that expected excess returns of securities are linearly related to the excess return of the market portfolio. The CAPM can be considered a one-factor model, with the market portfolio as the factor. It was generalised by Chen, Roll and Ross (1986), who suggested working with multiple factors. This factor model framework leads to an alternative theory of asset pricing, termed arbitrage pricing theory (APT). The APT states that, under certain assumptions, the expected returns of securities are linearly related to factor exposures. The APT remains silent about the number of factors in the model and how these factors should be chosen. Empirical work and tests showed that the market model is not very accurate. The so-called anomaly studies started, and many variables were tested for statistical explanatory power. Variables such as dividend-price ratios, dividend yields and earnings-price ratios could be included in a factor model. Common risk factors were explored by Fama and French (1993); their model is now known as the Fama-French three-factor model. Out-of-sample performances of these variables were explored by Goyal and Welch (2008) using the Diebold-Mariano (DM) test. The DM test was intended for comparing forecasts. Diebold (2013) gives his personal perspective on the use and abuse of Diebold-Mariano tests.

Factor models have a long history in finance. They are often distinguished into three types: macroeconomic, fundamental and statistical factor models. Observable economic and financial time series are classified as macroeconomic factors. Time series created from observable asset class characteristics belong to the group of fundamental factors. Statistical factors are unobservable and are extracted from asset class returns. Connor (1995) examined the explanatory power of the three types. He concludes that the fundamental and statistical factor models outperform the macroeconomic factor model in explaining asset class returns.

Artzner et al. (1999) present and justify a set of four desirable properties for measures of risk, and call measures satisfying these properties "coherent". Zivot (Factor Model Risk Analysis, 2011) clearly shows how one can estimate risk measures from factor models. He describes how individual asset or portfolio return risk measures can be decomposed into additive factor contributions, a process called "factor risk budgeting". It allows portfolio managers to know the sources of factor risk, which is convenient for allocation and hedging purposes. Moreover, it allows risk managers to evaluate a portfolio from a factor risk perspective. The methodology of factor risk budgeting is based on Euler's theorem. A proof of, and other insights on, Euler's theorem are summarised by Border (2012). Analytical solutions for the risk measures Value-at-Risk and expected shortfall were provided by Boudt, Peterson and Croux (2008) under some (strong) assumptions. Approximations for the marginal contributions to these risk measures can be computed based on the theorem of Scaillet (2004). However, McNeil, Frey and Embrechts (2005) show that the marginal contributions to Value-at-Risk and expected shortfall are proportional to the marginal contribution to volatility for elliptical distributions. As an analytical solution for the marginal contribution to volatility is available, the implied contributions to Value-at-Risk and expected shortfall can be computed for elliptical distributions. We explain the theory of factor models in risk analysis in more depth in chapter 4.

3 Linear factor models

Both Straumann and Garidi (2007) and Zivot (Factor Models for Asset Returns, 2011) present a broad introduction to linear factor models. They apply factor models to individual asset returns; we differ slightly in that we apply factor models to asset class returns. In the remaining part of this chapter we present the definition of a factor model and a short overview of well-known factors in the academic literature, show how we calculate the statistical factors, how we estimate the factor exposures in a rolling window context, and discuss the importance of models for conditional factor exposures. Finally, we explain the forecast methodology.

3.1 Definition

As there are many possible definitions of a linear factor model, we first introduce some notation. Consider a universe of $N$ asset classes, indexed by $i = 1, \dots, N$. For $t = 1, \dots, T$, we write $R_{i,t}$ for the return on asset class $i$ over the return period $t$. We are interested in the underlying risk factors in the various asset class returns. We assume that the different asset classes have $K$ factors in common, where we denote the value of the $k$-th common factor at time $t$ by $f_{k,t}$. The factor loading or factor beta for asset class $i$ on the $k$-th factor is denoted by $\beta_{i,k}$. Our asset class specific factor is denoted by $\varepsilon_{i,t}$. Finally, we can write our linear factor model for asset class $i$ as follows:

$$R_{i,t} = \alpha_i + \beta_{i,1} f_{1,t} + \dots + \beta_{i,K} f_{K,t} + \varepsilon_{i,t} \qquad (3.1)$$

$$R_{i,t} = \alpha_i + \beta_i' f_t + \varepsilon_{i,t} \qquad (3.2)$$

Please note that $f_t$ is a column vector with elements $f_{k,t}$ and $\beta_i'$ is a row vector with elements $\beta_{i,k}$. We can write $f_t = (f_{1,t}, \dots, f_{K,t})'$ and $\beta_i = (\beta_{i,1}, \dots, \beta_{i,K})'$. Thus we have a factor model for each asset class.

The definition of factor models does not restrict the number of factors to a specific number. Equation (3.1) is simply a hypothesis about the structure of asset class returns, and it is the task of the econometrician to propose and test potential factors and to select the number of factors to be included in the model. Any factor can in theory be included; however, in our approach they preferably have the following properties:

1. The factor realisations, $f_t$, are stationary with unconditional moments

$$E[f_t] = \mu_f, \qquad \operatorname{Cov}(f_t) = E[(f_t - \mu_f)(f_t - \mu_f)'] = \Omega_f$$

2. Asset class specific error terms, $\varepsilon_{i,t}$, are uncorrelated with each of the common factors, $f_{k,t}$:

$$\operatorname{Cov}(f_{k,t}, \varepsilon_{i,t}) = 0 \quad \text{for all } k, i \text{ and } t$$

3. Error terms are serially uncorrelated and contemporaneously uncorrelated across asset classes:

$$\operatorname{Cov}(\varepsilon_{i,t}, \varepsilon_{j,s}) = \sigma_i^2 \ \text{for } i = j,\ t = s, \ \text{and } 0 \text{ otherwise}$$

Assumptions 2 and 3 are commonly referred to as exogeneity and homoscedasticity respectively. When the latter two assumptions are met, we can apply ordinary least squares as an efficient estimation technique. If they are violated, we should consider refinements of the OLS estimator for efficient estimates. We describe the estimation techniques in section 3.3 in some more detail.

The factors represent common sources of risk. From (3.1) we infer that the expected return of an asset class equals

$$E[R_{i,t}] = \alpha_i + \beta_i' E[f_t]$$

The second term in the expected return of asset class $i$, $\beta_i' E[f_t]$, is often labelled the explained expected return due to systematic risk factors, while the first term, $\alpha_i$, is labelled the unexplained expected return.

3.2 Factor taxonomy

Like Zivot (Factor Models for Asset Returns, 2011), we distinguish three types of factor models: macroeconomic, fundamental, and statistical factor models. We classify observable factors as macroeconomic factors, factors created from observable asset class characteristics as fundamental factors, and factors that are unobservable and extracted from asset class returns as statistical factors. The explanatory power of the three types of models was examined by Connor (1995). He concluded that the macroeconomic factor model has no marginal explanatory power when added to the fundamental factor model. This implies that the risk attributes in the fundamental factor model capture all the risk characteristics captured by the macroeconomic factor betas. Yet it is not clear how to rotate the fundamental risk attributes to equate some combination of them to the macroeconomic factor betas; Connor leaves this as a problem for future research. Likewise, the statistical factor model substantially outperforms the macroeconomic factor model. However, by other important criteria, such as theoretical consistency and intuitive appeal, a macroeconomic factor model might be the strongest of the three approaches. One can, of course, construct a factor model that contains several types of factors. We exemplify the factors in the following brief subsections.

3.2.1 Macroeconomic factors

Macroeconomic factors are considered the simplest and most intuitive type of factors. A description of commonly used macroeconomic factors was given by Chen, Roll and Ross (1986). Factors typically used are inflation, the percentage change in industrial production, the excess return on long-term government bonds, and the realised return premium of low-grade corporate bonds relative to high-grade bonds.

3.2.2 Fundamental factors

Fundamental factors are constructed from observable asset-specific characteristics. Like Koedijk, Slager and Stork (2013), we distinguish three types of fundamental factor premiums: factors that arise from exposure to the risk of a broad (1) asset class, (2) style or (3) strategy. The asset class factor premium derives from passive investing in the traditional sources of risk: equities, bonds, real estate or commodities. The style factor premium covers the expected returns from assets with comparable fundamental or technical characteristics. In terms of equities, examples include value, small cap and momentum. For bonds, examples are yield-curve spread, credit spread and high-yield spread. The third type, the strategy factor premium, is generated by implementing a certain strategy, such as merger arbitrage, convertible arbitrage or 'carry'. Briand, Nielsen and Stefek (2009) suggest that, together with the alpha of an investment, these components can explain the total return of an investment.

There is no consensus on which factors offer investors the best opportunities. Some factors have gathered greater attention in both academic research and in practice. Table 1, copied from Koedijk, Slager and Stork (2013), summarises the key factors. We briefly describe the factors listed in table 1 below.

Table 1: Factors identified in the investment literature, from Koedijk, Slager and Stork (2013)

Type      Factor in academic literature        Applicable to
Asset     Inflation                            Equities, Bonds, Currencies, Commodities
          Economic growth                      Equities, Bonds, Currencies, Commodities
Style     Value                                Equities, Bonds, Currencies, Commodities
          Growth                               Equities, Bonds, Currencies, Commodities
          Momentum                             Equities, Bonds, Currencies, Commodities
          Low Volatility                       Equities, Commodities
          Term                                 Bonds
          Credit                               Bonds
          Short Treasury, Short Credit         Bonds
          Short Term and Long Term Reversal    Equities
          Volatility                           Equities
          Liquidity                            Equities, Bonds
          Emerging Equity Market               Equities
          Convexity                            Bonds
Strategy  Carry                                Equities, Bonds, Currencies, Commodities
          Trending                             Equities, Bonds, Currencies, Commodities
          Anomaly factors                      Mainly equities

Note: the asset class factor premium derives from passive investing in the traditional sources of risk. The style factor premium covers the expected returns from assets with comparable fundamental or technical characteristics. The strategy factor premium is generated by implementing a certain strategy.

The factors Market, Value, Size and Momentum are the most well-known among investors, mainly due to Fama and French (1993). These were the first anomalies to be identified and are therefore also the most widely cited in the debate on efficient markets. These factors continue to generate positive returns; they have not been arbitraged away. The LowVol factor has been identified in the academic literature for a long time, but is less well-known than the Value, Size and Momentum factors. The factors Term and Credit can explain returns in the bond markets. We refer to Koedijk, Slager and Stork (2013) for an extensive description of all factors in table 1 and of other less well-known factors.

3.2.3 Statistical factors

The factors above have been chosen ad hoc. Statistical factors, by contrast, are extracted from the observable returns using statistical methods. The primary methods for constructing statistical factors are maximum likelihood estimation and principal components analysis (PCA). Both methods are summarised by Zivot (Factor Models for Asset Returns, 2011). We take the PCA approach and briefly describe its methodology below.

3.2.3.1 Principal components analysis

The aim of PCA is to reduce the dimensionality of highly correlated data by finding a small number of uncorrelated linear combinations that account for most of the variability of the original data, in some appropriately defined sense. PCA is not itself a model, but rather a data-rotation technique, as described by McNeil, Frey and Embrechts (2005). PCA can be used as a way of constructing appropriate factors for a factor model. The key mathematical result behind PCA is the spectral (eigenvalue) decomposition theorem, by which we can write any symmetric matrix $A$ as $A = \Gamma \Lambda \Gamma'$, where (1) $\Lambda$ is the diagonal matrix of eigenvalues $\lambda_1, \dots, \lambda_N$ of $A$ which, without loss of generality, are ordered so that $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_N$, and (2) $\Gamma$ is an orthogonal matrix satisfying $\Gamma'\Gamma = I$ whose columns are standardised eigenvectors of $A$ (i.e. eigenvectors with length 1). Let $\Sigma = \operatorname{Cov}(R_t)$ be the covariance matrix of the returns. We denote the $k$-th eigenvector of $\Sigma$ by $\gamma_k$, with elements $\gamma_{1k}, \dots, \gamma_{Nk}$. We take the $K$ principal components that explain the largest part of the covariance matrix to construct the principal component factors. By variance decomposition, one can show that the ratio

$$\frac{\lambda_k}{\sum_{i=1}^{N} \lambda_i}$$

gives the proportion of the total variance $\sum_{i=1}^{N} \operatorname{var}(R_{i,t}) = \operatorname{tr}(\Sigma) = \sum_{i=1}^{N} \lambda_i$ attributed to the $k$-th principal component factor return. Naturally, the cumulative variance explained by the first $K$ factors equals $\sum_{k=1}^{K} \lambda_k / \sum_{i=1}^{N} \lambda_i$. The principal component factors are linear combinations of the returns and are calculated as

$$f_{k,t} = \gamma_k' R_t$$

Naturally, we consider the principal component factors that correspond to the principal components that explain a sufficient amount of the total variance $\sum_{i=1}^{N} \lambda_i$.
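The construction above is straightforward to implement. Below is a minimal sketch, assuming a T x N matrix of asset class returns; all names are illustrative and the simulated data merely demonstrate the mechanics.

```python
import numpy as np

def pca_factors(returns, n_factors):
    """Construct principal component factors from a T x N matrix of returns."""
    sigma = np.cov(returns, rowvar=False)            # N x N sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(sigma)         # eigh returns ascending eigenvalues
    order = np.argsort(eigvals)[::-1]                # reorder so lambda_1 >= ... >= lambda_N
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    factors = returns @ eigvecs[:, :n_factors]       # f_{k,t} = gamma_k' R_t
    explained = eigvals[:n_factors] / eigvals.sum()  # proportion of total variance
    return factors, eigvecs[:, :n_factors], explained

# Illustrative use: one strong hypothetical common driver across five asset classes
rng = np.random.default_rng(42)
common = rng.normal(size=(250, 1))                   # simulated common risk factor
returns = common @ rng.normal(size=(1, 5)) + 0.5 * rng.normal(size=(250, 5))
factors, loadings, explained = pca_factors(returns, n_factors=2)
print(explained)                                     # the first factor should dominate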

3.2.3.2 Factor mimicking portfolios

Portfolios that mimic the principal component factors can be constructed, as each principal component factor is a linear combination of returns. If short positions are allowed, the weights of the factor mimicking portfolio are obtained by renormalising the weights in the vectors $\gamma_k$ so that they sum to unity. The weights in the factor mimicking portfolios have the form

$$w_k = \frac{\gamma_k}{\sum_{i=1}^{N} \gamma_{ik}}$$

Note that the returns of the factor mimicking portfolio are given by $f^{*}_{k,t} = w_k' R_t$.

However, short positions are not always allowed. We can circumvent short positions as follows. We define the vector of weights as $w_k = a\gamma_k + b\mathbf{1}$, where $\mathbf{1}$ is a vector which contains only ones. Please note that for constants $a > 0$ and $b$, the vectors $\gamma_k$ and $a\gamma_k + b\mathbf{1}$ are perfectly correlated. It is easily seen that the weights sum to one when it holds that $b = \tfrac{1}{N}\big(1 - aN\bar{\gamma}_k\big)$, where $\bar{\gamma}_k = \tfrac{1}{N}\sum_{i=1}^{N} \gamma_{ik}$. All weights are then of the form

$$w_{ik} = a\gamma_{ik} + \tfrac{1}{N} - a\bar{\gamma}_k = a\big(\gamma_{ik} - \bar{\gamma}_k\big) + \tfrac{1}{N}$$

and they are non-negative provided $a$ is chosen such that $a(\gamma_{ik} - \bar{\gamma}_k) \geq -\tfrac{1}{N}$ for every $i$.

We can invest indirectly in the mimicking portfolios which contain short positions, as we describe in chapter five. In the sequel we consider the factor mimicking portfolios as investable risk factors, but note that we can only invest directly in the mimicking portfolios with short-sales constraints.
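A sketch of the mimicking-portfolio weights follows, covering both the renormalised construction (short positions allowed) and the long-only construction $w_k = a\gamma_k + b\mathbf{1}$ derived above; the choice of the largest $a$ that keeps all weights non-negative is our assumption, made to stay as close as possible to the original eigenvector.

```python
import numpy as np

def mimicking_weights(gamma_k, allow_short=True):
    """Weights of a portfolio mimicking the k-th principal component factor."""
    n = gamma_k.size
    if allow_short:
        # Renormalise the eigenvector so the weights sum to one
        # (assumes the eigenvector entries do not sum to zero).
        return gamma_k / gamma_k.sum()
    # Long-only variant: w = a*gamma_k + b*1 with b = (1 - a*n*mean(gamma_k))/n,
    # so that w_i = a*(gamma_ik - mean(gamma_k)) + 1/n. Our assumption: pick the
    # largest a that keeps every weight non-negative.
    dev = gamma_k - gamma_k.mean()
    neg = dev < 0
    a = np.min((1.0 / n) / -dev[neg]) if neg.any() else 1.0
    return a * dev + 1.0 / n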

3.3 Time series regression

We can estimate the parameters in model (3.2) quickly using OLS or refinements such as GLS. However, stylised facts provide evidence of time-varying risk premiums, as described by Darolles, Eychenne and Martinetti (2010). A natural way to capture time variation in the factor loadings is by way of rolling window regressions. Like Straumann and Garidi (2007), we denote our window size by $T_w$. Hence we introduce the vectors

$$R_i^{(t)} = [R_{i,t-T_w+1}, \dots, R_{i,t}]', \qquad \varepsilon_i^{(t)} = [\varepsilon_{i,t-T_w+1}, \dots, \varepsilon_{i,t}]'$$

and a matrix

$$F^{(t)} = [\mathbf{1}, f_1^{(t)}, \dots, f_K^{(t)}], \qquad f_k^{(t)} = [f_{k,t-T_w+1}, \dots, f_{k,t}]'$$

Accounting for time variation in the parameters of models (3.1) and (3.2), we rewrite the models as

$$R_i^{(t)} = F^{(t)}\, \theta_{i,t} + \varepsilon_i^{(t)}, \qquad \theta_{i,t} = (\alpha_{i,t}, \beta_{i,1,t}, \dots, \beta_{i,K,t})' \qquad (3.3)$$

We estimate the parameters in model (3.3) for $t = T_w, \dots, T$, such that we end up with $T - T_w + 1$ estimates of the parameters for different time periods. The OLS estimates of the parameters are given by $\hat{\theta}_{i,t} = (F^{(t)\prime} F^{(t)})^{-1} F^{(t)\prime} R_i^{(t)}$. With standard OLS, the data at the outset of the window are weighted as equally important as the data at the end of the window. We incorporate more time variation by following the EWMA approach, which is considered to be an ad hoc approach.3 We simply transform the data by the weight matrix

$$W = \operatorname{diag}\big(\lambda^{T_w-1}, \dots, \lambda, 1\big), \qquad 0 < \lambda \leq 1$$

and estimate the model

$$\tilde{R}_i^{(t)} = \tilde{F}^{(t)}\, \theta_{i,t} + \tilde{\varepsilon}_i^{(t)} \qquad (3.4)$$

where $\tilde{R}_i^{(t)} = W^{1/2} R_i^{(t)}$, $\tilde{F}^{(t)} = W^{1/2} F^{(t)}$ and $\tilde{\varepsilon}_i^{(t)} = W^{1/2} \varepsilon_i^{(t)}$. The classical assumption of homoscedasticity of the idiosyncratic returns is often violated. Heteroscedasticity makes OLS inefficient and invalidates the standard estimator of its variance; hence, standard inference ($t$ tests, $F$ tests) is invalid. The OLS estimates remain consistent, however. Moreover, by applying Newey-West heteroscedasticity and autocorrelation consistent (HAC) covariance matrix estimators, we can obtain (asymptotically) consistent standard errors for the OLS coefficients.

3 The commonly taken approach is based on Kalman filtering techniques, as described by Swinkels and Van der Sluis (2006).


For calculations of the HAC covariance matrix we refer to den Haan and Levin (1996) or Zeileis (2004). We denote this covariance matrix by $\hat{\Sigma}_{\hat{\theta}_{i,t}}$ and its $j$-th diagonal element by $\hat{\sigma}^2_{\hat{\theta}_{i,j,t}}$. For the null hypothesis $H_0: \theta_{i,j,t} = 0$, we calculate $t$-values as

$$t_{i,j,t} = \frac{\hat{\theta}_{i,j,t}}{\hat{\sigma}_{\hat{\theta}_{i,j,t}}}$$

Under the null, this should be a value of the $t$-distribution with $T_w - K - 1$ degrees of freedom.
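The rolling EWMA-weighted regression can be sketched as follows, assuming plain arrays for the return and factor series; the decay value $\lambda = 0.97$ is illustrative, and HAC standard errors (not shown) could be added along the lines of the references above.

```python
import numpy as np

def rolling_ewma_ols(r_i, factors, window, lam=0.97):
    """Rolling-window regression of one asset class on the factors, EWMA-weighted.

    r_i: (T,) return series; factors: (T, K) factor series; lam = 1 recovers
    equally weighted OLS. Returns (T - window + 1, K + 1) estimates of
    theta = (alpha, beta_1, ..., beta_K), one row per window end.
    """
    T, K = factors.shape
    X = np.column_stack([np.ones(T), factors])           # design matrix with intercept
    sqw = np.sqrt(lam ** np.arange(window - 1, -1, -1))  # square-root EWMA weights
    thetas = np.empty((T - window + 1, K + 1))
    for t in range(window - 1, T):
        Xw = X[t - window + 1 : t + 1] * sqw[:, None]    # transform data by W^{1/2}
        yw = r_i[t - window + 1 : t + 1] * sqw
        thetas[t - window + 1], *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return thetas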

3.4 Conditional factor loadings

As we have seen in the previous subsection, factor loadings depend on time; clear illustrations are presented in the empirical example. In chapters four and five we describe how factor models can be used in risk analysis and portfolio selection respectively. In practice, however, factor loadings are not known, so in the application of factor models, estimates of the factor loadings for the next period will be necessary. The simplest approach is to use the factor loading estimates of the previous period as estimator for the loadings of the coming period. Naturally, better estimates of these loadings will result in more reliable portfolio and risk analysis. As we are dealing with $N$ asset classes and $K + 1$ factor loadings each, we would need many conditional models; as this number is large, an efficient algorithm would be required. It is, therefore, interesting to examine whether conditional models for factor loadings are developed to such an extent that the quality of both the risk estimations and the investment strategy would improve.

3.5 Forecasting asset class returns

In sections 3.3 and 3.4 we related the returns of asset classes at time $t$ to the values of the underlying risk factors at time $t$: we explain the returns of asset classes by the underlying risk factors, and we can estimate the exposures of the various asset classes using the methods described in the previous sections. On the other hand, one might be interested in forecasting asset class returns, that is, in relating the values of the underlying risk factors at time $t$ to the returns of the various asset classes at time $t+1$. We examine whether the values of the underlying risk factors are meaningful for future asset class returns. That is, we estimate the vector of parameters $\theta_{i,t}$ in the equation

$$R_{i,t+1} = \alpha_{i,t} + \beta_{i,t}'\, f_t + \varepsilon_{i,t+1} \qquad (3.5)$$

by OLS, analogous to section 3.3, so that the parameters depend on time as in (3.4). Please note that the estimators for time $t+1$ are not yet known at time $t$, simply because the asset class returns are not known yet. Similarly to section 3.4, we apply the parameters $\hat{\theta}_{i,t}$ as a proxy for $\hat{\theta}_{i,t+1}$ at time $t$. However, more accurate estimators may be achieved by considering more advanced conditional models from the ARMA-GARCH methodology, as already noted in section 3.4. The one-period-ahead asset class return forecasts are calculated as

$$\hat{R}_{i,t+1} = \hat{\alpha}_{i,t} + \hat{\beta}_{i,t}'\, f_t \qquad (3.6)$$

We compare these forecasts to the realised returns. Moreover, we examine whether the forecasting technique in (3.6) outperforms the following two forecasting techniques:

$$\hat{R}_{i,t+1} = R_{i,t} \qquad (3.7)$$

$$\hat{R}_{i,t+1} = \sum_{s=t-T_w+1}^{t} \omega_s\, R_{i,s} \qquad (3.8)$$

where the $\omega_s$ in (3.8) are normalised EWMA weights over the previous window.


Please note that (3.7) sets the forecast of the asset class return for period $t+1$ at the return of the previous period, and (3.8) sets the forecast of the asset class return for period $t+1$ equal to the weighted average of the returns over the previous window.6 We claim that if the factors in (3.6) contain any useful information about the returns for the upcoming period, method (3.6) should significantly outperform the forecasting methods described in (3.7) and (3.8). Of course, the question arises how one decides whether one method outperforms another. There is, however, a long history of forecasting asset returns in finance.

We use the DM approach to compare forecast errors.7 DM relies on assumptions made directly on the forecast error loss differential. Denote the loss associated with forecast error $e_{i,t}$ of asset class $i$ by $L(e_{i,t})$. We take the quadratic loss function $L(e_{i,t}) = e_{i,t}^2$. The time-$t$ loss differential between forecast 1 and forecast 2 is then $d_{12,t} = L(e^{(1)}_{i,t}) - L(e^{(2)}_{i,t})$. DM assumes that the loss differential is covariance stationary. The null hypothesis of equal predictive accuracy corresponds to $E(d_{12,t}) = 0$, in which case, under the covariance stationarity assumption,

$$DM_{12} = \frac{\bar{d}_{12}}{\hat{\sigma}_{\bar{d}_{12}}} \xrightarrow{d} N(0, 1)$$

where $\bar{d}_{12} = \frac{1}{T}\sum_{t=1}^{T} d_{12,t}$ is the sample mean loss differential and $\hat{\sigma}_{\bar{d}_{12}}$ is a consistent estimate of the standard deviation of $\bar{d}_{12}$. We calculate $\bar{d}_{12}$ by a regression of the loss differential on an intercept; subsequently we estimate the HAC standard deviation of $\bar{d}_{12}$.
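A compact sketch of the DM computation follows, with the long-run variance of the mean loss differential estimated by a Newey-West (Bartlett kernel) sum rather than the intercept-regression route described above; the truncation-lag rule of thumb is our assumption.

```python
import numpy as np
from scipy import stats

def diebold_mariano(e1, e2, lags=None):
    """DM test of equal predictive accuracy under quadratic loss.

    e1, e2: forecast error series of the two competing methods.
    Returns the DM statistic and a two-sided p-value from N(0, 1).
    """
    d = e1**2 - e2**2                            # loss differential d_t
    T = d.size
    if lags is None:
        lags = int(np.floor(T ** (1 / 3)))       # rule-of-thumb truncation lag
    dc = d - d.mean()
    lrv = dc @ dc / T                            # lag-0 autocovariance
    for j in range(1, lags + 1):                 # Newey-West (Bartlett) correction
        lrv += 2 * (1 - j / (lags + 1)) * (dc[j:] @ dc[:-j] / T)
    dm = d.mean() / np.sqrt(lrv / T)
    return dm, 2 * stats.norm.sf(abs(dm))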

6 Please note that we are essentially regressing the weighted returns on a constant for each window.

7 Diebold (2013) describes the methodology for three cases: comparison of forecast errors when the underlying models are unknown, and comparison of forecast errors when the underlying models are known, the 'old-school' and 'new-school' approaches. He concludes that for comparing forecasts, DM is the only game in town. The situation is more nuanced if one uses models with estimated parameters, which is the case in our application. DM-style tests are still relevant, but the issue arises as to appropriate critical values. The new-school approach considers comparison of models with estimated parameters and finds that asymptotic normality of DM is a trustworthy approximation.


4 Risk Analysis

The main purpose of this chapter is to explain the risk models that we apply in the empirical analysis. In risk management we are mainly concerned with the probability of large losses, so we change our notation and denote the loss of asset class $i$ at time $t$ by $L_{i,t} = -R_{i,t}$. We are interested in the right tail of the loss distribution. We assume that the factor exposures are known at time $t$; in practice, however, estimates of the factor loadings will be necessary, as discussed in section 3.4.

4.1 Coherent measures of risk

An overview of properties that a good risk measure should have was presented by McNeil, Frey and Embrechts (2005); we summarise the list below. Such a list was proposed for applications in financial risk management by Artzner et al. (1999). A risk measure is called coherent if it has the properties translation invariance, sub-additivity, positive homogeneity and monotonicity. We briefly explain these properties in this section. First we have to give a formal definition of risk measures.

Let us fix some probability space $(\Omega, \mathcal{F}, P)$ and a time horizon $\Delta$. Denote by $L^0(\Omega, \mathcal{F}, P)$ the set of all random variables on $(\Omega, \mathcal{F})$ which are almost surely finite. Financial risks are represented by a set $\mathcal{M} \subset L^0(\Omega, \mathcal{F}, P)$ of random variables, which we interpret as portfolio losses over the time horizon $\Delta$. We often assume that $\mathcal{M}$ is a convex cone, i.e. that $L_1 \in \mathcal{M}$ and $L_2 \in \mathcal{M}$ imply that $L_1 + L_2 \in \mathcal{M}$ and $\lambda L_1 \in \mathcal{M}$ for every $\lambda > 0$. Risk measures are real-valued functions $\varrho: \mathcal{M} \to \mathbb{R}$. We interpret $\varrho(L)$ as the amount of capital that should be added to a position with loss given by $L$, so that the position becomes acceptable to an external or internal risk controller. Positions with $\varrho(L) \leq 0$ are acceptable without injection of capital; if $\varrho(L) < 0$, capital may even be withdrawn. Please note that we follow the notation of McNeil, Frey and Embrechts (2005). Now we can introduce the axioms that a risk measure should meet in order to be called coherent.

4.1.1 Translation invariance

The translation invariance axiom states that for all $L \in \mathcal{M}$ and every $l \in \mathbb{R}$ we have $\varrho(L + l) = \varrho(L) + l$. This property ensures that by adding or subtracting a deterministic quantity $l$ to a position with loss $L$, we alter our capital requirement by exactly that amount.

4.1.2 Sub-additivity

For all $L_1, L_2 \in \mathcal{M}$ we have $\varrho(L_1 + L_2) \leq \varrho(L_1) + \varrho(L_2)$.

Sub-additivity reflects the idea that risk can be reduced by diversification. In particular, McNeil, Frey and Embrechts (2005) showed that the use of non-subadditive risk measures in a Markowitz-type portfolio optimisation problem may lead to optimal portfolios that are highly concentrated and that would be deemed quite risky by normal economic standards. Furthermore, sub-additivity makes decentralisation of risk-management systems possible. Consider as an example two trading desks with positions leading to losses $L_1$ and $L_2$. Imagine that a risk manager wants to ensure that the risk of the overall loss $L_1 + L_2$ is smaller than some number $M$. If he uses a risk measure $\varrho$ that is sub-additive, he may simply choose bounds $M_1$ and $M_2$ in such a way that $M_1 + M_2 \leq M$ and impose on each of the desks the constraint that $\varrho(L_i) \leq M_i$. Sub-additivity of $\varrho$ then ensures that $\varrho(L_1 + L_2) \leq M_1 + M_2 \leq M$.


4.1.3 Positive homogeneity

For all $L \in \mathcal{M}$ and every $\lambda > 0$ we have $\varrho(\lambda L) = \lambda\, \varrho(L)$.

Since there is no netting or diversification between the losses in a portfolio consisting of $\lambda$ copies of the same position, it is natural to require that equality holds, which leads to positive homogeneity. Please note that sub-additivity already implies $\varrho(nL) \leq n\,\varrho(L)$ for positive integers $n$; positive homogeneity strengthens this inequality to an equality.

4.1.4 Monotonicity

For $L_1, L_2 \in \mathcal{M}$ such that $L_1 \leq L_2$ almost surely, we have $\varrho(L_1) \leq \varrho(L_2)$.

The axiom of monotonicity is obvious from an economic viewpoint: positions that lead to higher losses in every state of the world require more risk capital. It has also been shown in the book of McNeil, Frey and Embrechts (2005) that for a risk measure that has the sub-additivity and positive homogeneity properties, the monotonicity axiom is equivalent to the requirement that $\varrho(L) \leq 0$ for all $L \leq 0$.

4.2 Risk measures

In our analysis, we prefer expected shortfall and volatility as risk measures, for several reasons. Let $L$ be a random variable, representing the loss on an asset class at time $t$, with probability density function $f_L$, cumulative distribution function $F_L$, expectation $\mu$ and variance $\sigma^2$. The most common risk measures associated with $L$ are the volatility, Value-at-Risk and the expected tail loss. Formally, the volatility is $\sigma(L) = \sqrt{\operatorname{var}(L)}$, and

$$\operatorname{VaR}_\alpha(L) = \inf\{\, l \in \mathbb{R} : P(L > l) \leq 1 - \alpha \,\} = \inf\{\, l \in \mathbb{R} : F_L(l) \geq \alpha \,\}$$

Thus the risk measure $\operatorname{VaR}_\alpha$ is a quantile of the loss distribution. Typical values for $\alpha$ are $0.95$ or $0.99$. Note that the $\operatorname{VaR}_\alpha$ at confidence level $\alpha$ does not give any information about the severity of losses which occur with a probability less than $1 - \alpha$. The expected shortfall is defined as

$$\operatorname{ES}_\alpha(L) = E\big(L \mid L \geq \operatorname{VaR}_\alpha(L)\big) = \frac{1}{1-\alpha} \int_\alpha^1 \operatorname{VaR}_u(L)\, du$$

assuming a continuous loss distribution. This risk measure is also referred to as the expected tail loss. $\operatorname{ES}_\alpha$ averages $\operatorname{VaR}_u$ over all levels $u \geq \alpha$. It has been shown that $\operatorname{ES}_\alpha$ is a coherent risk measure. Volatility and $\operatorname{VaR}_\alpha$ are not coherent risk measures: volatility lacks the monotonicity property and $\operatorname{VaR}_\alpha$ lacks the property of sub-additivity.

4.3 Factor model risk analysis

An overview of how to use factor models in risk analysis was presented by Zivot (Factor Model Risk Analysis, 2011). From (3.2) we can deduce that the loss of asset class $i$ at time $t$ is given by

$$L_{i,t} = -R_{i,t} = -\alpha_i - \beta_i'\, f_t - \varepsilon_{i,t}$$


4.3.1 Unconditional parametric models

In estimating the unconditional tail risk measures, we assume that $E(f_t) = \mu_f$, $\operatorname{Cov}(f_t) = \Omega_f$ and $\operatorname{var}(\varepsilon_{i,t}) = \sigma_i^2$. As we estimated model (3.2) with a rolling window (section 3.3), we can write our model as

$$R_{i,t} = \alpha_{i,t} + \beta_{i,t}'\, f_t + \varepsilon_{i,t} \qquad (4.1)$$

Following Zivot (Factor Model Risk Analysis, 2011), we write $\bar{\beta}_{i,t} = -(\alpha_{i,t}, \beta_{i,t}', \sigma_i)'$ and $z_{i,t} = \varepsilon_{i,t}/\sigma_i$, such that $\tilde{f}_{i,t} = (1, f_t', z_{i,t})'$. We can rewrite model (4.1) in loss terms as

$$L_{i,t} = -R_{i,t} = \bar{\beta}_{i,t}'\, \tilde{f}_{i,t}$$

Please note that under our assumptions, the mean vector of $\tilde{f}_{i,t}$ is given by

$$\mu_{\tilde{f}} = E(\tilde{f}_{i,t}) = (1, \mu_f', 0)'$$

The covariance matrix of $\tilde{f}_{i,t}$ is given under our assumptions by9

$$\tilde{\Omega} = \operatorname{Cov}(\tilde{f}_{i,t}) = \begin{pmatrix} 0 & 0' & 0 \\ 0 & \Omega_f & 0 \\ 0 & 0' & 1 \end{pmatrix}$$

We could write our initial assumptions as $\tilde{f}_{i,t} \sim (\mu_{\tilde{f}}, \tilde{\Omega})$. Hence, we calculate the mean and volatility of the loss of asset class $i$ at time $t$ as

$$\mu_{i,t} = \bar{\beta}_{i,t}'\, \mu_{\tilde{f}}, \qquad \sigma_{i,t} = \sqrt{\bar{\beta}_{i,t}'\, \tilde{\Omega}\, \bar{\beta}_{i,t}} \qquad (4.2)$$

and define the standardised loss $Z_{i,t} = (L_{i,t} - \mu_{i,t})/\sigma_{i,t}$. We denote the cdf of $Z_{i,t}$ by $F_Z$. For a continuous and strictly increasing distribution function, the $\alpha$-quantile of $Z_{i,t}$ equals $q_\alpha = F_Z^{-1}(\alpha)$, where $F_Z^{-1}$ is the ordinary inverse of $F_Z$. If we assume that $F_Z$ is a member of the location-scale family, we can calculate the tail risk measures as

$$\operatorname{VaR}_\alpha(L_{i,t}) = \mu_{i,t} + \sigma_{i,t}\, q_\alpha \qquad (4.3)$$

$$\operatorname{ES}_\alpha(L_{i,t}) = \mu_{i,t} + \sigma_{i,t}\, E\big(Z_{i,t} \mid Z_{i,t} \geq q_\alpha\big) \qquad (4.4)$$

9 Please note that the first row and column of $\tilde{\Omega}$ contain only zeros; they do not depend on the reliability of our assumptions. The last row and column are only zero in theory, but are non-zero in practice, where they represent the covariance of the standardised residuals and the factors.


In the following subsections we show how we calculate the tail risk measures of the loss distribution under different distributional assumptions on the standardised losses $Z_{i,t}$. Please note that we treat these random variables as if they do not contain any serial correlation when estimating the unconditional risk measures. Results can clearly be improved by taking serial correlation into account; we describe how one can do this in section 4.3.2.

4.3.1.1 Normal distribution function

We assume that $Z_{i,t} \sim N(0, 1)$, that is, we assume that $\tilde{f}_{i,t} \sim N(\mu_{\tilde{f}}, \tilde{\Omega})$.11 Let $\phi$ and $\Phi$ be the pdf and cdf of a standard normal distribution, respectively. Analogous to (4.3) and (4.4), we calculate the tail risk measures as

$$\operatorname{VaR}_\alpha(L_{i,t}) = \mu_{i,t} + \sigma_{i,t}\, \Phi^{-1}(\alpha), \qquad \operatorname{ES}_\alpha(L_{i,t}) = \mu_{i,t} + \sigma_{i,t}\, \frac{\phi\big(\Phi^{-1}(\alpha)\big)}{1-\alpha}$$
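As a small sketch, the Gaussian tail risk measures follow directly from the mean and volatility in (4.2); `mu` and `sigma` below stand for those factor-model moments.

```python
from scipy import stats

def normal_var_es(mu, sigma, alpha=0.99):
    """Gaussian VaR and ES for a loss with mean mu and volatility sigma."""
    z = stats.norm.ppf(alpha)
    var_a = mu + sigma * z                               # VaR_alpha under normality
    es_a = mu + sigma * stats.norm.pdf(z) / (1 - alpha)  # ES_alpha under normality
    return var_a, es_a

# mu and sigma would come from the factor model moments in (4.2):
# mu = beta_bar' mu_f_tilde, sigma = sqrt(beta_bar' Omega_tilde beta_bar).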

Stylised facts suggest that return series are heavy-tailed, so we also consider non-normal distribution functions for estimating the tail risk measures. Commonly used non-normal distributions are the skewed Student's $t$ and the generalised hyperbolic distribution. We further consider Cornish-Fisher approximations and the modelling of extreme events by a generalised Pareto distribution.

4.3.1.2 Student's $t$ distribution function

We assume that $Z_{i,t}$ follows a Student's $t$ distribution scaled to unit variance, that is, $Z_{i,t} \sim \sqrt{(\nu - 2)/\nu}\; t_\nu$, which requires $\nu > 2$. Please note that we do not make distributional assumptions about the factors. Under these assumptions we can calculate the risk measures as

$$\operatorname{VaR}_\alpha(L_{i,t}) = \mu_{i,t} + \sigma_{i,t}\, \sqrt{\tfrac{\nu-2}{\nu}}\; t_\nu^{-1}(\alpha)$$

$$\operatorname{ES}_\alpha(L_{i,t}) = \mu_{i,t} + \sigma_{i,t}\, \sqrt{\tfrac{\nu-2}{\nu}}\; \frac{g_\nu\big(t_\nu^{-1}(\alpha)\big)}{1-\alpha} \left( \frac{\nu + \big(t_\nu^{-1}(\alpha)\big)^2}{\nu - 1} \right)$$

where $t_\nu$ denotes the distribution function and $g_\nu$ the density of a standard $t$ distribution with $\nu$ degrees of freedom.

4.3.1.3 Generalised hyperbolic distribution function

We assume that $Z_{i,t}$ follows a generalised hyperbolic distribution (GHD). No analytical expressions for quantiles are known for this distribution. Tail quantile approximations for the GHD have been derived, which might be useful for fast and effective calculation of risk measures, as described by Schlüter and Fischer (2009). However, we choose to approximate the tail risk measures numerically, as described by Luethi and Breymann (2015). As the GHD is a member of the location-scale family, we use (4.3) and (4.4) to calculate the tail risk measures.

4.3.1.4 Cornish-Fisher approximation

Above we assumed a normal distribution and gave approximations for the risk measures. These approximations can be improved by adjusting them for the higher moments in the data. This can be done using the second-order Edgeworth expansion of $F_Z$ around the standard normal distribution function, as described by Boudt, Peterson and Croux (2008):11

$$F_Z(z) \approx \Phi(z) - \phi(z)\, P(z)$$

where $P(z)$ is a polynomial in $z$ whose coefficients depend on the skewness and the excess kurtosis of $F_Z$.

11 Let $X$ be a multivariate random vector and $a$ a vector of constants. By means of characteristic functions it can be shown that $X$ is a multivariate normal random vector if and only if $a'X$ is a univariate normal random variable for all vectors $a$.


The corresponding second-order Cornish-Fisher expansion of the $\alpha$-quantile around the Gaussian quantile $z_\alpha = \Phi^{-1}(\alpha)$ is obtained by inverting the Edgeworth expansion, giving the quantile estimate

$$q_\alpha^{CF} = z_\alpha + \frac{s}{6}\big(z_\alpha^2 - 1\big) + \frac{k}{24}\big(z_\alpha^3 - 3 z_\alpha\big) - \frac{s^2}{36}\big(2 z_\alpha^3 - 5 z_\alpha\big)$$

where $s$ is the skewness and $k$ is the excess kurtosis of the distribution function $F_Z$. We base the estimates of the skewness and kurtosis on the rolling window defined in section 3.3. Please note that when the skewness and excess kurtosis equal zero, as under normality, the Cornish-Fisher quantile equals the normal quantile. We calculate the Cornish-Fisher $\operatorname{VaR}_\alpha$ using (4.3), and approximate the expected shortfall by numerically approximating the integral in the definition of expected shortfall given in section 4.2.
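A sketch of the Cornish-Fisher calculation, with the expected shortfall obtained by numerically integrating the quantile function as described above; the grid size is an arbitrary choice.

```python
import numpy as np
from scipy import stats

def cornish_fisher_var_es(mu, sigma, skew, exkurt, alpha=0.99, grid=2000):
    """Cornish-Fisher VaR plus an ES obtained by integrating VaR_u over u."""
    def cf_quantile(a):
        z = stats.norm.ppf(a)
        return (z + (z**2 - 1) * skew / 6
                  + (z**3 - 3 * z) * exkurt / 24
                  - (2 * z**3 - 5 * z) * skew**2 / 36)
    var_a = mu + sigma * cf_quantile(alpha)
    # ES_alpha = (1 - alpha)^{-1} * integral of VaR_u for u in (alpha, 1)
    u = np.linspace(alpha, 1 - 1e-6, grid)       # stop short of the ppf pole at 1
    es_a = mu + sigma * np.trapz(cf_quantile(u), u) / (1 - alpha)
    return var_a, es_a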

4.3.1.5 Generalised Pareto distribution function

For an extensive introduction to extreme value theory (EVT) we refer to McNeil, Frey and Embrechts (2005). The role of the generalised Pareto distribution (GPD) in extreme value theory is as a natural model for the excess distribution over a high threshold; it is, however, not the only method. As an alternative, McNeil, Frey and Embrechts (2005) describe the Hill method. The distribution function of the GPD is given by

$$G_{\xi,\beta}(x) = \begin{cases} 1 - \big(1 + \xi x / \beta\big)^{-1/\xi}, & \xi \neq 0 \\ 1 - \exp(-x/\beta), & \xi = 0 \end{cases}$$

where $\beta > 0$, and $x \geq 0$ when $\xi \geq 0$ and $0 \leq x \leq -\beta/\xi$ when $\xi < 0$. The parameters $\xi$ and $\beta$ are referred to, respectively, as the shape and scale parameters. We denote the distribution function of losses that exceed the threshold $u$ by

$$F_u(x) = P(L - u \leq x \mid L > u)$$

and we assume that $F_u = G_{\xi,\beta}$ for some $\xi$ and $\beta$.


Please note that we consider several different thresholds. Above a suitable threshold, the mean excess of a GPD is a linear function of the threshold, which is often used as a graphical diagnostic; Käärik and Žegulova (2012) present an overview of threshold selection methods. However, we choose to select the threshold ad hoc: we choose three different thresholds, such that five, ten or fifteen percent of the data exceed the threshold. Let $u$ be our threshold. McNeil, Frey and Embrechts (2005) show that, for $\alpha \geq F_L(u)$,

$$\operatorname{VaR}_\alpha = u + \frac{\beta}{\xi} \left( \left( \frac{1-\alpha}{\bar{F}_L(u)} \right)^{-\xi} - 1 \right)$$

They also show that, for $\xi < 1$,

$$\operatorname{ES}_\alpha = \frac{\operatorname{VaR}_\alpha}{1-\xi} + \frac{\beta - \xi u}{1-\xi}$$

Let $N_u$ denote the number of observations that exceed the threshold and $n$ the sample size. A natural estimator for $\bar{F}_L(u) = 1 - F_L(u)$ is $N_u/n$. However, Smith (1987) proposes an estimator for tail probabilities: for $x \geq u$ we approximate $\bar{F}_L(x)$ by

$$\widehat{\bar{F}_L}(x) = \frac{N_u}{n} \left( 1 + \hat{\xi}\, \frac{x - u}{\hat{\beta}} \right)^{-1/\hat{\xi}}$$

With estimated parameters $\hat{\xi}$ and $\hat{\beta}$ we obtain estimators for the tail risk measures, and we can calculate the tail risk measures for the loss distribution by plugging these results into (4.3) and (4.4).
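A sketch of the peaks-over-threshold calculation under the stated assumptions ($\xi \neq 0$, $\xi < 1$), using maximum likelihood to fit the GPD to the exceedances; the default threshold lets ten percent of the data exceed it, matching one of the three ad hoc choices above.

```python
import numpy as np
from scipy import stats

def gpd_var_es(losses, alpha=0.99, tail_frac=0.10):
    """Peaks-over-threshold VaR and ES; assumes a fitted shape 0 != xi < 1."""
    u = np.quantile(losses, 1 - tail_frac)           # tail_frac of data exceed u
    exc = losses[losses > u] - u                     # exceedances over the threshold
    xi, _, beta = stats.genpareto.fit(exc, floc=0)   # ML fit, location fixed at 0
    fbar_u = exc.size / losses.size                  # empirical estimate of P(L > u)
    var_a = u + beta / xi * (((1 - alpha) / fbar_u) ** (-xi) - 1)
    es_a = var_a / (1 - xi) + (beta - xi * u) / (1 - xi)
    return var_a, es_a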

4.3.2 Conditional parametric models

McNeil, Frey and Embrechts (2005) have shown that unconditional methods for calculating $\operatorname{VaR}_\alpha$ and $\operatorname{ES}_\alpha$ are generally outperformed by conditional methods. This result is intuitive, as conditional models use more information: they exploit the serial correlation in the time series. We define the information set $\mathcal{F}_{t-1}$, which contains the information available up to time $t-1$, and assume that the time series considered are adapted to the natural filtration. In section 3.3 we already conditioned on the information set for calculating the time-varying parameters of model (3.3). Therefore, the approach in section 4.3.1 can be considered a 'semi' unconditional approach. In this section we discuss methods to estimate the conditional risk measures.

If the assumptions on the factors $f_t$ or the residuals $\varepsilon_{i,t}$ are violated, it makes sense to use conditional models for these random variables. One can test whether the assumptions are violated for marginal series by making correlograms of the original, absolute or squared series. As we are dealing with many different series ($N + K$ is large), we use a formal test of the strict white noise (SWN) hypothesis. A popular test is that of Ljung and Box. Let $\hat{\rho}(j)$ be the lag-$j$ autocorrelation of a stationary series of length $n$. Under the null of SWN, the statistic

$$Q = n(n+2) \sum_{j=1}^{h} \frac{\hat{\rho}(j)^2}{n - j}$$

has an asymptotic chi-squared distribution with $h$ degrees of freedom. As a rule of thumb we select $h = \lfloor \sqrt{n} \rfloor$ lags. We perform this test on the original and absolute values of the different series.

If the Ljung-Box test is rejected for the original series, conditional models for the expected value of the series will improve the reliability of the estimated risk measures. Likewise, if the Ljung-Box test is rejected for the absolute or squared series, conditional models for the variance of the series improve the reliability of the estimated risk measures. Generally one builds such conditional models with the help of the ARMA-GARCH methodology, which results in ARMA-GARCH models in our case. Such models have proven suitable for these kinds of problems. If the assumption of homoscedasticity of the OLS residuals holds, and if there is zero correlation between the OLS residuals and the factors, the number of conditional models can be reduced via the structure of the factor model: under these assumptions, it suffices to make conditional models for the expected value and covariance matrix of the factors, so that the number of conditional models is reduced from $N + K$ to $K$. Unfortunately, these assumptions do not hold in practice, so we need conditional models for the residuals as well.

So we require conditional models for the residuals and the factors. In the empirical example we attempt to create conditional models for $\tilde{f}_{i,t}$ and $\tilde{\Omega}$. As we plan to estimate conditional means and volatilities in a rolling window context, we do not consider GARCH models, to avoid technical problems such as non-converging estimations or non-invertible Hessian matrices. Instead we apply the (ad hoc) EWMA approach for the conditional mean and volatility. Zivot (Factor Model Risk Analysis, 2011) refers to this approach as a method for the short term. First we calculate the standardised OLS residuals, taking the time-varying OLS residual variance into account; that is, for the residuals of asset class $i$ we recursively estimate

$$\operatorname{var}(\varepsilon_{i,t} \mid \mathcal{F}_{t-1}) = \lambda\, \operatorname{var}(\varepsilon_{i,t-1} \mid \mathcal{F}_{t-2}) + (1-\lambda)\, \big(\varepsilon_{i,t-1} - \bar{\varepsilon}_{i,t-1}\big)^2$$

where $\bar{\varepsilon}_{i,t}$ denotes the sample mean of the residuals over the window. For starting values we simply use the sample variance. We calculate the time-varying standardised OLS residuals as $z_{i,t} = \varepsilon_{i,t}\, \big(\operatorname{var}(\varepsilon_{i,t} \mid \mathcal{F}_{t-1})\big)^{-1/2}$, and define $\bar{\beta}_{i,t}$ and $\tilde{f}_{i,t}$ accordingly, with $\sigma_i$ replaced by the conditional residual standard deviation. Now we have to estimate the conditional mean and the conditional covariance matrix of $\tilde{f}_{i,t}$:

$$E(\tilde{f}_{i,t} \mid \mathcal{F}_{t-1}) = \lambda\, E(\tilde{f}_{i,t-1} \mid \mathcal{F}_{t-2}) + (1-\lambda)\, \tilde{f}_{i,t-1}$$

$$\operatorname{Cov}(\tilde{f}_{i,t} \mid \mathcal{F}_{t-1}) = \lambda\, \operatorname{Cov}(\tilde{f}_{i,t-1} \mid \mathcal{F}_{t-2}) + (1-\lambda)\, \big(\tilde{f}_{i,t-1} - \bar{\tilde{f}}_{i,t-1}\big)\big(\tilde{f}_{i,t-1} - \bar{\tilde{f}}_{i,t-1}\big)'$$

where $\bar{\tilde{f}}_{i,t}$ denotes the sample mean over the window. For starting values we simply use the sample covariance matrix of $\tilde{f}_{i,t}$. We compute the conditional loss mean and volatility as

$$\mu_{i,t|t-1} = \bar{\beta}_{i,t}'\, E(\tilde{f}_{i,t} \mid \mathcal{F}_{t-1}), \qquad \sigma_{i,t|t-1} = \sqrt{\bar{\beta}_{i,t}'\, \operatorname{Cov}(\tilde{f}_{i,t} \mid \mathcal{F}_{t-1})\, \bar{\beta}_{i,t}}$$

and define the conditional standardised loss $Z_{i,t|t-1} = (L_{i,t} - \mu_{i,t|t-1})/\sigma_{i,t|t-1}$.

We expect the assumption that the $Z_{i,t|t-1}$ are independent and identically distributed to be more realistic than before. We calculate the conditional tail risk measures, as defined in (4.3) and (4.4), by making distributional assumptions as in section 4.3.1.
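A univariate sketch of the EWMA recursions used above; the thesis applies the analogous multivariate recursion to $\tilde{f}_{i,t}$, and the decay $\lambda = 0.94$ is merely the classic RiskMetrics choice, not a value taken from the thesis.

```python
import numpy as np

def ewma_mean_var(x, lam=0.94):
    """One-step-ahead EWMA conditional mean and variance of a single series."""
    T = x.size
    m = np.empty(T)
    v = np.empty(T)
    m[0], v[0] = x.mean(), x.var()               # initialise with sample moments
    for t in range(1, T):
        m[t] = lam * m[t - 1] + (1 - lam) * x[t - 1]
        v[t] = lam * v[t - 1] + (1 - lam) * (x[t - 1] - m[t - 1]) ** 2
    return m, v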

4.3.3 Non-parametric models

To make the list of commonly used risk measure approaches complete, we will examine the method of historical simulation to estimate risk measures as well.

4.3.3.1 Historical simulation

Unlike the unconditional (4.3.1) or conditional (4.3.2) parametric approaches, historical simulation makes no distributional assumptions. By the strong law of large numbers we have that, as $n \to \infty$, the empirical distribution function

$$\hat{F}_n(x) = \frac{1}{n} \sum_{t=1}^{n} \mathbf{1}\{L_t \leq x\} \to F_L(x)$$

(under i.i.d. assumptions). Tail risk measures are easily derived from the empirical distribution function:15 the empirical $\alpha$-quantile estimates $\operatorname{VaR}_\alpha$, and the average of the observations exceeding it estimates $\operatorname{ES}_\alpha$. The method has some drawbacks, however. First of all, the assumption of i.i.d. data is generally not justified (certainly not for the unconditional series). Another drawback is that one needs long time series to obtain reliable risk estimates. Furthermore, the method has been likened to 'driving a car while looking through the rear view mirror'.

4.4 Backtesting

We described several methods for estimating risk in the preceding sections. No claims are made as to which model is superior. Model risk is the risk of wrong modelling, due to violations of the initial assumptions. We are not claiming that one of the above models is superior to models that are not described in this thesis. However, we hope to obtain risk estimations that are reliable, to such an extent that we can decompose them into factor contributions. By backtesting, we test how reliable our risk estimations are. We apply the different backtesting approaches described by McNeil, Frey and Embrechts (2005) to measure the risk of wrong modelling. In backtesting, we distinguish between conditional models and unconditional models. We denote the true one-period risk measures by $\operatorname{VaR}_\alpha$ and $\operatorname{ES}_\alpha$; estimates are indicated with a hat.

4.4.1 Backtesting VaR

By definition of $\operatorname{VaR}_\alpha$, and assuming a continuous loss distribution, we have $P\big(L_t > \operatorname{VaR}_\alpha(L_t)\big) = 1 - \alpha$. This is the probability of a so-called violation of $\operatorname{VaR}_\alpha$. We introduce indicator random variables which count the violations:

$$\hat{I}_t = \mathbf{1}\big\{ L_t > \widehat{\operatorname{VaR}}_{\alpha,t} \big\}$$

We expect that if our estimation method is reasonable, these indicators behave like Bernoulli random variables with expectation $1 - \alpha$. Moreover, we expect the violation indicators to behave like independent Bernoulli random variables with expectation $1 - \alpha$ if we estimate $\operatorname{VaR}_\alpha$ in a conditional way, as was shown by McNeil, Frey and Embrechts (2005).

15 We compute continuous sample quantiles from the empirical distribution function. We refer to Hyndman and Fan (1996) and R Core Team (2014) for details.

So we test whether, for estimates at times $t = 1, \dots, m$,

$$\frac{1}{m} \sum_{t=1}^{m} \hat{I}_t \approx 1 - \alpha$$

holds. We test this with a standard two-sided binomial test. In the case of conditional models, we also check for independence of the Bernoulli indicators using a runs test. A runs test counts runs of successive zeros or ones in the realisations of the indicator variables and compares the realised number of runs with the known sampling distribution of the number of runs in i.i.d. Bernoulli data.
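A sketch of both checks, using a two-sided binomial test for the violation frequency and a normal approximation to the runs test for independence:

```python
import numpy as np
from scipy import stats

def backtest_var(losses, var_hat, alpha=0.99):
    """Binomial coverage test plus a runs test on the violation indicators."""
    viol = (losses > var_hat).astype(int)        # VaR violation indicators
    n, k = viol.size, int(viol.sum())
    p_cov = stats.binomtest(k, n, 1 - alpha).pvalue  # two-sided coverage test
    runs = 1 + int(np.count_nonzero(np.diff(viol)))  # observed number of runs
    n1, n0 = k, n - k
    mu_r = 1 + 2 * n1 * n0 / n                   # mean of the runs distribution
    var_r = 2 * n1 * n0 * (2 * n1 * n0 - n) / (n**2 * (n - 1))
    z = (runs - mu_r) / np.sqrt(var_r) if var_r > 0 else 0.0
    p_runs = 2 * stats.norm.sf(abs(z))           # normal approximation
    return p_cov, p_runs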

4.4.2 Backtesting ES

For an integrable loss $L$ with continuous distribution function we have $E\big(L \mid L > \operatorname{VaR}_\alpha(L)\big) = \operatorname{ES}_\alpha(L)$; a proof is given by McNeil, Frey and Embrechts (2005). Therefore, we expect that

$$E\Big( \big(L_t - \widehat{\operatorname{ES}}_{\alpha,t}\big)\, \mathbf{1}\big\{ L_t > \widehat{\operatorname{VaR}}_{\alpha,t} \big\} \Big) = 0$$

We look at the residuals $L_t - \widehat{\operatorname{ES}}_{\alpha,t}$ on the time periods where the estimated $\operatorname{VaR}_\alpha$ is violated, and refer to these values as the non-zero violation residuals. These should come from a distribution with mean zero: we expect them to behave like realisations of i.i.d. variables from a distribution with mean zero and an atom of probability mass of size $\alpha$ at zero. Note that the test of i.i.d. realisations has already been dealt with in the VaR backtesting approach using the runs test, as has the probability mass of size $\alpha$ at zero (binomial test).

To test for mean-zero behaviour we perform a bootstrap test on the non-zero violation residuals. We subtract the sample mean of the non-zero violation residuals, repeatedly draw bootstrap samples of the same size from the centred series, and compute the mean of each sample. Our null hypothesis is that of a zero mean, and we estimate the $p$-value as the fraction of bootstrap means at least as large in absolute value as the observed sample mean. A low $p$-value is evidence against the null of a zero mean.
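A sketch of the bootstrap test; the number of bootstrap replications and the two-sided rejection rule are our assumptions.

```python
import numpy as np

def es_bootstrap_test(losses, var_hat, es_hat, n_boot=10000, seed=1):
    """Bootstrap p-value for the null that ES violation residuals have mean zero."""
    rng = np.random.default_rng(seed)
    mask = losses > var_hat                      # periods with a VaR violation
    resid = (losses - es_hat)[mask]              # non-zero violation residuals
    t_obs = resid.mean()
    centred = resid - t_obs                      # impose the null of a zero mean
    boot = np.array([rng.choice(centred, resid.size, replace=True).mean()
                     for _ in range(n_boot)])
    return float(np.mean(np.abs(boot) >= abs(t_obs)))  # bootstrap p-value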

4.5 Risk Decomposition

In section 4.1 we gave a short review of coherent risk measures, in section 4.2 we defined our risk measures, and in section 4.3 we showed different methods to calculate them. In the empirical example we test which method gives the most reliable estimates. Using these estimates, we describe in the following two subsections how we additively decompose asset class (or portfolio) return risk measures into factor contributions. The methodology exploits the fact that the risk measures volatility, $\operatorname{VaR}_\alpha$ and $\operatorname{ES}_\alpha$ are linearly homogeneous risk functions, together with Euler's theorem, which is stated below.

Let $f: \mathbb{R}^n \setminus \{0\} \to \mathbb{R}$ be continuous, and also differentiable on $\mathbb{R}^n \setminus \{0\}$. Then $f$ is homogeneous of degree $k$ if and only if, for all $x$,

$$k\, f(x) = \sum_{i=1}^{n} x_i\, \frac{\partial f(x)}{\partial x_i}$$

A proof of, and other insights on, Euler’s theorem are summarised by Border (2012).

4.5.1 Additive Risk Decompositions

The tail risk measures of section 4.3 are linearly homogeneous functions of the augmented loading vector; that is, $\rho(c\, \bar{\beta}_{i,t}) = c\, \rho(\bar{\beta}_{i,t})$ for $c > 0$. We can apply Euler's theorem and obtain additive risk decompositions for linearly homogeneous functions of $\bar{\beta}_{i,t}$:

$$\rho(\bar{\beta}_{i,t}) = \sum_{k} \bar{\beta}_{i,k,t}\, \frac{\partial \rho(\bar{\beta}_{i,t})}{\partial \bar{\beta}_{i,k,t}}$$

The risk decomposition has a plain interpretation. At time $t$, the marginal contribution to risk of 'factor' $k$ for asset class $i$ is

$$\operatorname{MCR}_{k,i,t} = \frac{\partial \rho(\bar{\beta}_{i,t})}{\partial \bar{\beta}_{i,k,t}}$$

At time $t$, the total contribution to risk of factor $k$ for asset class $i$ is

$$\operatorname{TCR}_{k,i,t} = \bar{\beta}_{i,k,t}\, \operatorname{MCR}_{k,i,t}$$

Finally, at time $t$, the contribution to risk of factor $k$ for asset class $i$, in percentage terms, is

$$\operatorname{PCR}_{k,i,t} = \frac{\operatorname{TCR}_{k,i,t}}{\rho(\bar{\beta}_{i,t})}$$

So Euler's theorem can be used to additively decompose individual asset class return risk measures into factor contributions. These factor contributions to risk allow portfolio managers to identify the sources of factor risk for allocation and hedging purposes, and allow risk managers to evaluate a portfolio from a factor risk perspective. We consider the known results for the risk measures volatility, $\operatorname{VaR}_\alpha$ and $\operatorname{ES}_\alpha$ below.

Results for the risk measure volatility are easily derived. For the unconditional volatility risk measure:17

$$\frac{\partial \sigma(\bar{\beta}_{i,t})}{\partial \bar{\beta}_{i,t}} = \frac{\partial}{\partial \bar{\beta}_{i,t}} \sqrt{\bar{\beta}_{i,t}'\, \tilde{\Omega}\, \bar{\beta}_{i,t}} = \frac{\tilde{\Omega}\, \bar{\beta}_{i,t}}{\sqrt{\bar{\beta}_{i,t}'\, \tilde{\Omega}\, \bar{\beta}_{i,t}}}$$

Based on arguments of Scaillet (2004), Meucci (2007) showed that

$$\frac{\partial \operatorname{VaR}_\alpha(\bar{\beta}_{i,t})}{\partial \bar{\beta}_{i,t}} = E\big(\tilde{f}_{i,t} \mid L_{i,t} = \operatorname{VaR}_\alpha(L_{i,t})\big), \qquad \frac{\partial \operatorname{ES}_\alpha(\bar{\beta}_{i,t})}{\partial \bar{\beta}_{i,t}} = E\big(\tilde{f}_{i,t} \mid L_{i,t} \geq \operatorname{VaR}_\alpha(L_{i,t})\big)$$

17 Decomposition of the conditional volatility is similar; one has to replace $\tilde{\Omega}$ and $\bar{\beta}_{i,t}$ with their conditional counterparts.
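For the volatility risk measure the Euler decomposition can be computed directly from the formula above; a minimal sketch, with `beta_bar` and `omega_tilde` standing for the augmented loading vector and covariance matrix of section 4.3.1:

```python
import numpy as np

def factor_vol_decomposition(beta_bar, omega_tilde):
    """Euler decomposition of factor-model volatility into factor contributions."""
    sigma = np.sqrt(beta_bar @ omega_tilde @ beta_bar)   # sigma = sqrt(b' O b)
    mcr = omega_tilde @ beta_bar / sigma                 # marginal contributions
    tcr = beta_bar * mcr                                 # total contributions; sum equals sigma
    return sigma, mcr, tcr, tcr / sigma                  # last entry: percentage shares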

Coleman (2013) has shown that the simulation approach, with or without a bootstrap, gives unreliable estimates of these marginal contributions. We work around this problem by exploiting the fact that the marginal contribution to $\operatorname{ES}_\alpha$ is proportional to the marginal contribution to volatility for elliptical distributions, as McNeil, Frey and Embrechts (2005) have shown. In that case, the percentage marginal contribution to volatility equals the percentage marginal contribution to $\operatorname{ES}_\alpha$. Thus the marginal contribution to $\operatorname{ES}_\alpha$ equals the percentage marginal contribution to volatility times $\operatorname{ES}_\alpha$, which is referred to as the 'implied contribution to $\operatorname{ES}_\alpha$'.

Analytical results are available under normality and the Cornish-Fisher expansion, as Boudt, Peterson and Croux (2008) have shown. We show here the analytical results under normality. Recall from section 4.3.1.1 that under normality the tail risk measures are given by

$$\operatorname{VaR}_\alpha = \mu_{i,t} + \sigma_{i,t}\, \Phi^{-1}(\alpha), \qquad \operatorname{ES}_\alpha = \mu_{i,t} + \sigma_{i,t}\, \frac{\phi\big(\Phi^{-1}(\alpha)\big)}{1-\alpha}$$

Since the tail risk measures only depend on the mean and the volatility, the marginal risk contributions can be computed using the partial derivatives of the mean and the standard deviation. Recall that the mean and volatility are written as

$$\mu_{i,t} = \bar{\beta}_{i,t}'\, \mu_{\tilde{f}}, \qquad \sigma_{i,t} = \sqrt{\bar{\beta}_{i,t}'\, \tilde{\Omega}\, \bar{\beta}_{i,t}}$$

It is easily seen that under normality assumptions the marginal contributions of 'factor' $k$ to the tail risks can be written as

$$\frac{\partial \operatorname{VaR}_\alpha}{\partial \bar{\beta}_{i,k,t}} = \mu_{\tilde{f},k} + \frac{(\tilde{\Omega}\, \bar{\beta}_{i,t})_k}{\sigma_{i,t}}\, \Phi^{-1}(\alpha), \qquad \frac{\partial \operatorname{ES}_\alpha}{\partial \bar{\beta}_{i,k,t}} = \mu_{\tilde{f},k} + \frac{(\tilde{\Omega}\, \bar{\beta}_{i,t})_k}{\sigma_{i,t}}\, \frac{\phi\big(\Phi^{-1}(\alpha)\big)}{1-\alpha}$$

4.5.2 Portfolio Risk Decomposition

Please bear in mind that $L_t = (L_{1,t}, \dots, L_{N,t})'$ denotes the vector of losses of the different asset classes at time $t$. The set of (known) portfolio weights is given in the vector $w = (w_1, \dots, w_N)'$. We can write the total loss of the portfolio at time $t$ as $L_{p,t} = w' L_t$. For the risk measures volatility, $\operatorname{VaR}_\alpha$ and $\operatorname{ES}_\alpha$, we have that $\rho(w)$ is a linearly homogeneous function of $w$, that is, $\rho(cw) = c\, \rho(w)$ for $c > 0$. Following Zivot (Factor Model Risk Analysis, 2011), we decompose the portfolio risk measures into asset class contributions:

$$\rho(w) = \sum_{i=1}^{N} w_i\, \frac{\partial \rho(w)}{\partial w_i}$$

As in the previous section, the risk decomposition has an intuitive interpretation. The marginal contribution to risk of asset class $i$ to the portfolio equals $\partial \rho(w) / \partial w_i$.


The total contribution to risk of asset class $i$ equals

$$w_i\, \frac{\partial \rho(w)}{\partial w_i}$$

and the percentage contribution to risk of asset class $i$ is given by

$$\frac{w_i\, \partial \rho(w)/\partial w_i}{\rho(w)}$$

So Euler's theorem can be used to additively decompose portfolio return risk measures into asset class contributions. These asset class contributions to risk allow portfolio managers to identify the sources of asset class risk for allocation and hedging purposes, and allow risk managers to evaluate a portfolio from an asset class risk perspective. Moreover, in combination with section 4.5.1, we can decompose the portfolio risk into factor risk contributions.
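The portfolio-level decomposition is the same computation with the weight vector and the return covariance matrix in place of the loadings; a brief sketch:

```python
import numpy as np

def portfolio_vol_decomposition(w, sigma_mat):
    """Additive asset class contributions to portfolio volatility."""
    sigma_p = np.sqrt(w @ sigma_mat @ w)         # portfolio volatility
    mcr = sigma_mat @ w / sigma_p                # marginal contribution per asset class
    tcr = w * mcr                                # total contributions; sum equals sigma_p
    return sigma_p, tcr, tcr / sigma_p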

We derive a decomposition of the risk measure volatility as follows. We return to the representation in (4.1). The loss of the portfolio with known weights is given by

$$L_{p,t} = -w' R_t = -\alpha_{p,t} - \beta_{p,t}'\, f_t - \varepsilon_{p,t} \qquad (4.5)$$

where $\alpha_{p,t} = \sum_{i=1}^{N} w_i \alpha_{i,t}$, $\beta_{p,t} = \sum_{i=1}^{N} w_i \beta_{i,t}$ and $\varepsilon_{p,t} = \sum_{i=1}^{N} w_i \varepsilon_{i,t}$. Following the approach in section 4.3.1, we write $\bar{\beta}_{p,t}$ for the augmented vector of portfolio loadings and $\tilde{f}_{p,t}$ for the corresponding augmented factor vector, such that $L_{p,t} = \bar{\beta}_{p,t}'\, \tilde{f}_{p,t}$. We calculate the mean and volatility of the portfolio loss at time $t$ as

$$\mu_{p,t} = \bar{\beta}_{p,t}'\, \mu_{\tilde{f}}, \qquad \sigma_{p,t} = \sqrt{\bar{\beta}_{p,t}'\, \tilde{\Omega}\, \bar{\beta}_{p,t}}$$

and define the standardised portfolio loss accordingly.

5 Strategic selection of investments

In the previous chapters we gave an overview of common risk factors for different asset classes. We showed how a reliable understanding of the size of the underlying risk factors can be obtained for individual asset classes and for portfolios, how we calculate expected returns based on the risk factors, and how we estimate the size of risk measures and the marginal contributions of the risk factors to those risk measures. In this chapter we describe several strategies that take the underlying risk factors into account. In the first subsection we propose investment strategies that account for these underlying risk factors, under simplifying assumptions such as perfect liquidity, no transaction costs and the ability to reinvest the profits we make; we refer to the resulting portfolios as target portfolios. In the second subsection we describe how pension funds can take transaction costs into account. Finally, we give a description of our simulation approach.

5.1 Factor-based investment strategies

In the context of asset pricing theory, Amenc and Martellini (2014) argue that the ultimate goal of portfolio construction techniques is to invest in risky assets while ensuring an efficient diversification of specific and systematic risks. If we focus on diversification of systematic risks, 'diversification' means efficient allocation to factors that bring positive, long-term rewards.

In the context of the previous chapters and diversification of systematic risks, a natural investment strategy is to indirectly invest in the underlying risk factors such that the exposure to the different risk factors is equal. We show how one can obtain such a portfolio and how one could adjust the approach.

5.1.1 Diversification of systematic risks

For the sake of simplicity, we assume that we can invest in three different asset classes, which have two underlying risk factors $f_{1,t}$ and $f_{2,t}$. It has been shown in chapter three that we can write the return series as a linear combination of the risk factors:

$$R_{i,t} = \alpha_i + \beta_{i,1} f_{1,t} + \beta_{i,2} f_{2,t} + \varepsilon_{i,t}, \qquad i = 1, 2, 3$$

Our goal is to find a set of portfolio weights such that the exposure to the different factors is equal. The portfolio return at time $t$ equals

$$R_{p,t} = \sum_{i=1}^{3} w_i \big( \alpha_i + \beta_{i,1} f_{1,t} + \beta_{i,2} f_{2,t} + \varepsilon_{i,t} \big)$$

It is easily seen that the exposure of the portfolio to the different factors is equal if the weights are chosen such that

$$\sum_{i=1}^{3} w_i\, \beta_{i,1} = \sum_{i=1}^{3} w_i\, \beta_{i,2}$$

or, equivalently, $\sum_{i=1}^{3} w_i (\beta_{i,1} - \beta_{i,2}) = 0$.


An equal exposure to the various risk factors means that we have an equal diversification over the specific risk factors. Please recall that the risk factors are theoretically independent, while asset classes are correlated; an equal diversification over the risk factors will therefore be more efficient. We add the constraint $\sum_i w_i = 1$ so that the portfolio weights sum to unity, and we prevent short positions by adding the constraints $w_i \geq 0$. We maximise the exposure to the factors, as they generate positive returns in the long run, and write the problem as a linear program:

$$\max_{w}\ \Big\{\, w'\alpha \ :\ w'\beta_1 = w'\beta_2 = w'\alpha,\ \textstyle\sum_i w_i = 1,\ w_i \geq 0 \,\Big\} \qquad (5.1)$$

where $\alpha = (\alpha_1, \alpha_2, \alpha_3)'$ and $\beta_k = (\beta_{1,k}, \beta_{2,k}, \beta_{3,k})'$. So in (5.1) we maximise the exposure to the unexplained risks, to such an extent that the exposures to the unexplained risks and to the risk factors are equal.20 We maximise the exposure to the various risk factors and unexplained risks since an investor has to take risks to generate returns; at the same time, we diversify the risky position equally over the various, theoretically independent, risk factors.

Please note that (5.1) is a linear program and we can solve it using the simplex algorithm. We have to solve such a problem for each point in time, which might result in a diverse set of 'optimal' portfolios, i.e. target portfolios. The generalised factor risk parity portfolio, for $N$ asset classes and $K$ underlying risk factors, is described in the appendix.

5.1.2 Maximisation of the Sharpe ratio

Please note that in the systematic risk diversification strategy (5.1) we maximise the exposure to the risk factors; we do not take expected returns or other risk measures into account. In the empirical example we compare the strategy that maximises the portfolio's risk factor exposure (5.1) with a strategy that maximises a Sharpe ratio.

20 So we are basically maximising the 'alpha' of the portfolio, subject to the equal exposure constraints.

(29)

29 The strategy that maximises a Sharpe ratio can be written as

{ ( ) ( ) ∑ } 22 (5.2)

Obviously we will need a non-linear optimisation technique. Finally, one could easily combine the two strategies: by adding the equal exposure constraints of (5.1) to strategy (5.2), one obtains portfolio weights that maximise the Sharpe ratio in such a way that the exposure to the various risk factors is equal. That is,

$$\max_{w} \left\{ \frac{w^T \mu}{\sqrt{w^T \Sigma w}} \;:\; \beta_1^T w = \beta_2^T w = s^T w, \;\; \mathbf{1}^T w = 1, \;\; w \geq 0 \right\}. \qquad (5.3)$$

In the next section we show how an institutional investor can take transaction costs into account; the resulting weights of the maximisation programs (5.1), (5.2) and (5.3) will be referred to as target weights. Finally, we will compare the results of the several strategies in the empirical example, where we analyse the portfolio risks and returns.
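As an illustration, the sketch below maximises the Sharpe ratio numerically with scipy's SLSQP solver; passing the equal-exposure constraints of (5.1) as the optional matrix A turns (5.2) into (5.3). The expected returns mu and covariance Sigma are hypothetical placeholders for the rolling-window estimates.

```python
# A minimal sketch of strategies (5.2) and (5.3); mu and Sigma stand in for
# the rolling-window estimates and are purely illustrative.
import numpy as np
from scipy.optimize import minimize

def max_sharpe(mu, Sigma, A=None):
    """Maximise w'mu / sqrt(w'Sigma w) under full investment and no short
    positions; A (optional) adds homogeneous constraints A w = 0, e.g. the
    equal-exposure constraints of (5.1), which gives strategy (5.3)."""
    n = len(mu)
    neg_sharpe = lambda w: -(w @ mu) / np.sqrt(w @ Sigma @ w)
    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
    if A is not None:
        cons.append({"type": "eq", "fun": lambda w: A @ w})
    res = minimize(neg_sharpe, np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(0.0, 1.0)] * n, constraints=cons)
    return res.x

mu = np.array([0.06, 0.04, 0.05])         # hypothetical expected returns
Sigma = np.array([[0.040, 0.010, 0.000],  # hypothetical return covariance
                  [0.010, 0.020, 0.010],
                  [0.000, 0.010, 0.030]])
print(max_sharpe(mu, Sigma))              # strategy (5.2)
```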

5.2 Transaction costs

Obviously, an institutional investor cannot change its whole portfolio every period: practical problems such as illiquidity and transaction costs stand in the way. In this section we consider how investors can account for transaction costs. We assume that all asset classes are perfectly liquid and that the institutional investor is a price taker, which may not hold for large investment institutions. Moreover, we assume that the investor can and will invest all its earned wealth.

We will assume that we have an initial portfolio at time $t_0$, whose weights we denote by $w_{t_0}$. Furthermore, we have for each period a "target portfolio", with weights resulting from the strategies described in (5.1), (5.2) and (5.3). We denote the target portfolio weights by $w^*_t$. Before trading at time $t$, the portfolio is the same as the portfolio at time $t-1$, with the weights changed by the returns from $t-1$ to $t$. Similar to Brandt, Santa-Clara and Valkanov (2009) we call this the "hold" portfolio,

$$w^{h}_{i,t} = \frac{w_{i,t-1} (1 + r_{i,t})}{\sum_{j=1}^{3} w_{j,t-1} (1 + r_{j,t})}.$$

Let $c_{i,t}$ denote the proportional transaction costs for asset class $i$ at time $t$. The return of the portfolio, net of trading costs, will be equal to

$$r_{p,t+1} = \sum_{i=1}^{3} w_{i,t} r_{i,t+1} - \sum_{i=1}^{3} c_{i,t} \left| w_{i,t} - w^{h}_{i,t} \right|.$$
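For concreteness, a direct transcription of the two definitions above could look as follows (all inputs hypothetical):

```python
# A minimal sketch of the hold portfolio and the net-of-costs return,
# directly transcribing the two formulas above; inputs are hypothetical.
import numpy as np

def hold_weights(w_prev, r):
    """Previous weights drifted by the realised returns r from t-1 to t."""
    grown = w_prev * (1.0 + r)
    return grown / grown.sum()

def net_return(w, w_hold, r_next, c):
    """Portfolio return over the next period minus proportional (one-way)
    trading costs c on the turnover |w - w_hold|."""
    return w @ r_next - c @ np.abs(w - w_hold)
```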

22 Please note that we could, in practice, estimate $\mu$ and $\Sigma$ for each time period, with the estimation based on the last window. However, as we do not have observations of the OLS residuals, we cannot estimate the covariance matrix of the residuals based on the previous window. As we will start investing at $t_0$, we estimate the covariance matrix of the returns by applying the EWMA method. Let $\Sigma_t$ denote the covariance matrix of the returns at time $t$. Then we estimate $\hat{\Sigma}_t = (1 - \lambda) \sum_{s=0}^{t-1} \lambda^{s} (r_{t-s} - \bar{r})(r_{t-s} - \bar{r})^T$ with decay parameter $\lambda \in (0, 1)$, where $\bar{r} = \frac{1}{t} \sum_{s=1}^{t} r_s$.
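A sketch of this EWMA estimator could look as follows; the value of the decay parameter is an assumption (0.94 is the classic RiskMetrics choice).

```python
# A minimal sketch of the EWMA covariance estimator of footnote 22; the
# decay parameter lam is an assumption (0.94 is the RiskMetrics value).
import numpy as np

def ewma_covariance(returns, lam=0.94):
    """returns: (T, n) matrix of returns up to time t, oldest row first."""
    T, _ = returns.shape
    demeaned = returns - returns.mean(axis=0)          # r_s - r_bar
    weights = (1.0 - lam) * lam ** np.arange(T - 1, -1, -1)
    weights /= weights.sum()                           # weights sum to one
    return (weights[:, None] * demeaned).T @ demeaned  # weighted outer products
```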

Please note that we should use estimates of one-way trading costs in the equation above, since $|w_{i,t} - w^{h}_{i,t}|$ includes both the buys and the sells. However, studies have shown that the optimal policy is characterised by a boundary around the target weights23. When the current weights are within this boundary, it is optimal not to trade. When the current weight is outside the boundary, however, it is optimal to trade to the boundary, but not to the target.24 Unfortunately, the optimal trade region is not known. As a practical approximation we will define the no-trade zone as follows. We will only perform trades in an asset class if the deviation between the corresponding weight in the hold portfolio and the weight in the target portfolio is larger than a constant. We denote this constant by $\delta$. More specifically, we will not perform any trades in asset class $i$ if $|w^*_{i,t} - w^{h}_{i,t}| \leq \delta$. We denote

$$\tilde{w}_{i,t} = w^{h}_{i,t} \, 1\{|w^*_{i,t} - w^{h}_{i,t}| \leq \delta\} + \left( w^*_{i,t} + \delta \, \mathrm{sign}(w^{h}_{i,t} - w^*_{i,t}) \right) 1\{|w^*_{i,t} - w^{h}_{i,t}| > \delta\},$$

where $1\{\cdot\}$ represents an indicator function. Please note that $\tilde{w}_{i,t}$ equals $w^{h}_{i,t}$ when the deviation between the corresponding weight of the hold portfolio and the target portfolio is small enough. Moreover, when this deviation is too large, we change the value such that the deviation equals $\delta$. So ideally, we would set the portfolio weights equal to $\tilde{w}_{i,t}$. It is also easily seen that the restriction $\tilde{w}_{i,t} \geq 0$ holds by construction of $\tilde{w}_{i,t}$. Unfortunately, it is not generally true that $\sum_{i} \tilde{w}_{i,t} = 1$. We define a variable $m_t$ such that

$$\sum_{i=1}^{3} \tilde{w}_{i,t} = 1 + m_t. \qquad (5.4)$$

Obviously we have to consider, at each point in time, three cases: $m_t = 0$, $m_t < 0$ and $m_t > 0$.25 We describe the three cases below and show how we define our portfolio weights.

Suppose that $m_t = 0$ holds. This implies that $\sum_{i} \tilde{w}_{i,t} = 1$, so we can simply define our portfolio weights as

$$w_{i,t} = \tilde{w}_{i,t}.$$

Please note that, for the asset classes in which we trade, we are exactly on the boundary of the no-trade zone. Moreover, the weights are non-negative and sum to unity.

In the second case, where $m_t < 0$, it should hold for some weights that $w_{i,t} > \tilde{w}_{i,t}$, so that the weights sum to unity. After all, we made the assumption that we would invest all our capital. There is more than one way to spread $-m_t$ over the portfolio weights such that we stay in the no-trade zone.

23 As Brandt, Santa-Clara and Valkanov (2009) point out, Leland (2000) studies the optimal portfolio problem with multiple risky assets and proportional transaction costs. He again finds that the optimal policy has a no-trade zone, with partial adjustment of the portfolio weights to the border when the current holdings are outside the no-trade zone.

24 Please note that Brandt, Santa-Clara and Valkanov (2009) explain this result as follows: when the weight is close to the target, there is only a second-order small gain from rebalancing to the target, but a first-order cost from trading.

25 A small non-negative value for the constant $\delta$ implies that $|m_t|$ is small too, since $|\tilde{w}_{i,t} - w^*_{i,t}| \leq \delta$ implies that $|m_t| = \left| \sum_i \tilde{w}_{i,t} - \sum_i w^*_{i,t} \right| \leq \sum_i |\tilde{w}_{i,t} - w^*_{i,t}| \leq 3\delta$.


For example, one could set the weights equal to $w_{i,t} = \tilde{w}_{i,t} + \gamma_{i,t}$, where the $\gamma_{i,t}$ could follow by maximisation of a return/risk ratio under the following constraints:

$$\sum_{i=1}^{3} \gamma_{i,t} = -m_t, \qquad |\tilde{w}_{i,t} + \gamma_{i,t} - w^*_{i,t}| \leq \delta, \qquad \gamma_{i,t} \geq 0.$$

Please note that these constraints guarantee that the weights stay in the no-trade zone and that the weights sum to unity. However, we will choose the $\gamma_{i,t}$ in a pragmatic way so that we can reduce the calculation time27. The intuition of the approach is that we trade closer to the target portfolio until the portfolio weights sum to unity. We define $\gamma_{i,t}$ as follows:

$$\gamma_{i,t} = \min\left\{ \max\{w^*_{i,t} - \tilde{w}_{i,t},\, 0\},\ -\frac{m_t}{3} \right\}.$$

We will define the weights as $w_{i,t} = \tilde{w}_{i,t} + \gamma_{i,t}$. Please note that these weights do not necessarily sum to unity; they will only add up to one when $\sum_{i} \gamma_{i,t} = -m_t$. If they do not sum to unity, we will repeat the following procedure such that the weights will eventually sum to unity.

1. We redefine $m_t = \sum_{i} w_{i,t} - 1$. If $m_t = 0$ we can stop the procedure. Otherwise we increase the weights for which $w_{i,t} < w^*_{i,t} + \delta$ as follows.
2. We define $n_t = \#\{i : w_{i,t} < w^*_{i,t} + \delta\}$ and define $\epsilon_{i,t} = \min\{-m_t / n_t,\ w^*_{i,t} + \delta - w_{i,t}\}$.
3. We define $w_{i,t} = w_{i,t} + \epsilon_{i,t}$ for these asset classes and return to step one.

Please note that eventually the weights will sum to unity.

Finally, when $m_t > 0$, it is obvious that $w_{i,t} < \tilde{w}_{i,t}$ should hold for some weights, so that the portfolio weights sum to unity. There are several ways to spread $m_t$ over the portfolio weights, as in the previous case. We set the portfolio weights equal to $w_{i,t} = \tilde{w}_{i,t} - \gamma_{i,t}$, where the $\gamma_{i,t}$ could follow from minimisation of a return/risk ratio with some no-trade zone constraints. Again, we will follow the more pragmatic approach. The intuition of the approach is that we trade closer to the target portfolio until the portfolio weights sum to unity. We define $\gamma_{i,t}$ as follows:

$$\gamma_{i,t} = \min\left\{ \max\{\tilde{w}_{i,t} - w^*_{i,t},\, 0\},\ \frac{m_t}{3} \right\}.$$

We will define the weights as $w_{i,t} = \tilde{w}_{i,t} - \gamma_{i,t}$. Please note that these weights do not necessarily sum to unity; they will only add up to one when $\sum_{i} \gamma_{i,t} = m_t$. If they do not sum to unity, we will repeat the following procedure such that the weights will eventually sum to unity.

1. We redefine $m_t = \sum_{i} w_{i,t} - 1$. If $m_t = 0$ we can stop the procedure. Otherwise we decrease the weights for which $w_{i,t} > \max\{w^*_{i,t} - \delta,\, 0\}$ as follows.
2. We define $n_t = \#\{i : w_{i,t} > \max\{w^*_{i,t} - \delta,\, 0\}\}$ and define $\epsilon_{i,t} = \min\{m_t / n_t,\ w_{i,t} - \max\{w^*_{i,t} - \delta,\, 0\}\}$.
3. We define $w_{i,t} = w_{i,t} - \epsilon_{i,t}$ for these asset classes and return to step one.
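A compact implementation of the whole rebalancing step (the no-trade zone plus both iterative adjustments) might look as follows; the per-iteration equal split of the remainder mirrors the pragmatic procedure above, and the boundary delta is a free parameter.

```python
# A minimal sketch of the full no-trade-zone rebalancing step described in
# this section; the boundary delta and the example weights are hypothetical.
import numpy as np

def rebalance(w_hold, w_target, delta, tol=1e-12):
    """Trade only in asset classes whose hold weight deviates more than
    delta from the target, and only to the boundary of the no-trade zone;
    then move weights toward (and at most delta beyond) the target until
    they sum to one again."""
    trade = np.abs(w_target - w_hold) > delta
    w = np.where(trade, w_target + delta * np.sign(w_hold - w_target), w_hold)
    m = w.sum() - 1.0
    while abs(m) > tol:
        if m < 0:   # under-invested: raise weights that still have room up
            room = (w_target + delta) - w
        else:       # over-invested: lower weights that still have room down
            room = w - np.maximum(w_target - delta, 0.0)
        movable = room > tol
        # Split the remainder equally, capped at each asset's remaining room.
        step = np.minimum(abs(m) / movable.sum(), room[movable])
        w[movable] += np.sign(-m) * step
        m = w.sum() - 1.0
    return w

w_hold = np.array([0.42, 0.35, 0.23])    # hypothetical drifted weights
w_target = np.array([0.30, 0.40, 0.30])  # hypothetical target weights
print(rebalance(w_hold, w_target, delta=0.05))
```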

27 We will not determine the $\gamma_{i,t}$ by maximisation of a return/risk ratio, since the impact of the $\gamma_{i,t}$ will be small, as $|m_t|$ is small whenever $\delta$ is small.
