
The Correlation Structure of Security Returns within the Eurozone

A study evaluating Mean Models in terms of their forecasting accuracy and impact on constructing minimum variance portfolios.

Frank Lanting
Student number: 2520338
Master thesis MSc Finance
Faculty of Economics and Business, University of Groningen
June 2015
Words: 11,118
Supervisor: prof. dr. T.K. (Theo) Dijkstra

Abstract:

In order to create the minimum variance portfolio as defined by Markowitz (1952), an investor needs to calculate correlation coefficients. Since these correlations appear to vary over time, it seems sensible to try to forecast them. This study evaluates six forecasting models, namely the Full Historical Model, the Overall Mean Model, the National Mean Model, the Industry Mean Model, the Supersector Mean Model and the Sector Mean Model, in terms of their forecasting accuracy. In addition, their impact on constructing minimum variance portfolios for securities within the Eurozone is evaluated. The results emerging from the tests on statistical significance show that significant differences exist between all six models. Under the statistical significance test, the Sector Mean Model performs best. Economically, the Full Historical Model performs best. The Overall Mean Model performs the worst both economically and statistically.

JEL Classification: G11

Keywords: Portfolio Theory, Correlation Structure, Mean Models

I. Introduction

According to Markowitz's (1952) pioneering article, if an investor obtains accurate expectations about the future mean return for each security, the variance of return for each security and the correlation of returns between each pair of securities, then it is possible for the investor to produce the Markowitz efficient set. The Markowitz efficient set includes all of the portfolios on the efficient frontier, or those that generate the largest expected return for a given level of risk. The minimum variance portfolio is the starting point of the efficient frontier. It is the portfolio that has the lowest risk of any efficient portfolio (Elton, Gruber, Brown and Goetzmann, 2009). The Markowitz efficient set is based on the key assumptions that investors are rational and risk averse. Therefore an investor will prefer the highest return for a given level of risk. So in order to create a set of efficient portfolios, the investor must obtain accurate estimates for the inputs underlying the work of Markowitz, namely: the expected return for each security, the variance of return for each security and the correlation of returns between each pair of securities. However, there is still an ongoing debate among scholars about how to forecast these inputs in order to obtain accurate estimates for the inputs underlying Markowitz's work.

According to Elton, Gruber and Spitzer (2006), the first attempts to forecast correlation coefficients are contained in Elton and Gruber (1973) and Elton, Gruber and Urich (1976). They found that for intra-country securities, the Overall Mean Model1 performed as well as or better than the widely accepted forecasting techniques of the time. Eun and Resnick (1984) placed this in an international context and argued that it is, at an international level, even harder to generate accurate estimates of correlation coefficients between all pairs of domestic and foreign securities. They argued that this problem mainly arises from governmental interventions that affect the capital markets, fluctuating exchange rates, varying accounting standards and disclosure requirements across countries, and language barriers. The effects of these differences among countries lead to a lack of comparable information with which to produce accurate estimates of future correlation coefficients. Consequently, Eun and Resnick (1984) argued that some type of forecasting model is likely to be the investor's best method for obtaining accurate estimates of the future correlation structure of international security prices. In their study they found that there is a strong country factor influencing the return generating process and that the National Mean Model2 strictly dominates all the other models in terms of forecasting accuracy.

In addition, more recently, several studies have shown that the industry factor seems to have displaced the country factor as the dominant explanatory variable in security returns (Baca et al., 2000; Brooks and Catão, 2000; Cavaglia et al., 2000; Brooks and Del Negro, 2002; L'Her et al., 2002; Vedd et al., 2014). More specifically, Baca et al. (2000) concluded that the influence of the country factor was 2 to 3 times larger than that of the industrial factor up to 1995, but that this ratio had dropped to 1.23 in 1999. According to Vedd et al. (2014) this phenomenon appears to be tied to the increase in international investment in general, as well as the ever-increasing globalisation of the world economy.

1 For the Overall Mean Model, all future pairwise correlation coefficients are set equal to the mean of these pairwise correlations. Elton, Gruber, Spitzer (2006)

2 For the National Mean Model, every intra-country pairwise correlation coefficient is calculated as the average of all pairwise correlation coefficients within the country. Further, every inter-country pairwise correlation coefficient between securities from two different countries is calculated as the average of all inter-country pairwise correlations between securities from the two countries. Eun and Resnick (1984)

The main objective of this study is to evaluate different correlation forecasting models in terms of their forecasting accuracy and impact on constructing minimum variance portfolios for securities within the Eurozone. This research objective is formalized in the following research questions:

Which forecasting model provides the best estimates of the future correlation structure for security returns within the Eurozone in terms of forecasting accuracy?

How much volatility is added by the forecasting models when constructing minimum variance portfolios?

The study evaluates six forecasting models in terms of forecasting accuracy and their impact on the characteristics of minimum variance portfolios for securities within the Eurozone. Using the returns of the securities listed on the EURO STOXX, a ten-year period of monthly returns will be used, starting in January 2004 and ending in December 2014. From this period, the first seven years will be used as an estimation period and the last three years as a forecast period. The division of the data into a seven-year estimation period and a three-year forecast period follows Eun and Resnick (1984), and will be used to test for statistically significant differences by evaluating the predictive power of the different forecasting models. In addition to the statistical significance, the economic significance of the models will be evaluated. To test the economic significance, a set of minimum variance portfolios will be constructed using the actual mean returns and volatility per security, and each of the estimates of the future correlation matrix as constructed under the various models. The actual portfolio volatility in the forecasting period will be used to examine how efficient these portfolios really are. This procedure was described by Elton and Gruber (1973) as: "given the best possible estimates of means and volatility, which estimates of the correlation matrix gives rise to the selection of the most efficient portfolios?" This will be done for the three-year forecasting period in order to test the economic significance of the various models.

This study contributes to the existing literature in several ways. First, as far as is known, no research has been done on the correlation structure of security returns within the Eurozone. The correlation structure of security returns is expected to differ from the existing literature on the correlation structure of international security returns, since the Eurozone represents a monetary union. The Eurozone therefore faces some of the problems pointed out by Eun and Resnick (1984) (i.e. governmental interventions that affect the capital markets, varying accounting standards and disclosure requirements across countries, and language barriers), but the country factor is expected to be less influential since fluctuating exchange rates are not a driver of this country factor in the case of the Eurozone.

Secondly, this study will examine a very recent dataset. As was mentioned by, among others, Brooks and Catão (2000), the industry factor seems to have displaced the country factor. This study will contribute to this ongoing discussion by testing different industry-based models against the National Mean Model.

Thirdly, this study is relevant for practice since it will evaluate the impact of the various forecasting models on constructing minimum variance portfolios. This is extremely relevant for an investor as was shown by Elton and Gruber (1973), since the return on a portfolio could be increased by as much as 50 per cent by selecting portfolios on the basis of techniques which forecast most accurately.

Fourthly, earlier work has mainly focused on the statistical differences between various models, and did not test for economic significance. Since statistical significance does not imply economic significance, it is relevant to test for both. In addition, Elton and Gruber (1973) showed that the statistical results might deviate from the economic results.

The remainder of this study is organized as follows. The first section of this paper will give a brief review of the theoretical discussions about the correlation structure of security returns. The second part of this paper describes the methodology and data that are used for this study. In the third section of this paper the research results will be analyzed and interpreted. In the fourth and last section of this paper the research results will be summarized and the research question will be answered.

II. Literature Review

As became apparent when exploring the research topic, a vast amount of literature has been written on the correlation structure of security returns since the pioneering article of Markowitz (1952). Elton, Gruber, Brown and Goetzmann (2009) argued that because of both the large number of forecasts required and the necessary restrictions on the organizational structure of security analysts, it was not feasible for analysts to directly estimate correlation coefficients. Instead, a structural or behavioural model of how securities move together should be developed. The parameters of these models can be estimated either from historical data or by attempting to get subjective estimates from security analysts. Because the returns of individual securities tend to have common components, also known as systematic risk, this has implications for the calculation of correlations between securities. Various models have been developed to capture this common component and its implications for security returns and correlations. Most of these models can be categorized into four main groups: the Full Historical Model, Single Index Models, Multi-Index Models and Mean Models. The remainder of this section discusses the underlying assumptions of each model as well as how these models can be constructed.

A. Full Historical Model

According to Elton and Gruber (1973) the Full Historical Model is the simplest and most disaggregated model. The future correlation coefficients are estimated from historical data by assuming that the future correlation coefficients are identical to the past coefficients. No assumptions are made on how and why any pair of securities might move together. The correlation between securities x and y can be calculated using equation 1:

\rho_{xy} = \frac{\sum_{t}(x_t - \bar{x})(y_t - \bar{y})}{\sqrt{\sum_{t}(x_t - \bar{x})^2}\,\sqrt{\sum_{t}(y_t - \bar{y})^2}} \qquad (1)

B. Single Index Models

The Single Index Model, developed by Sharpe (1963), assumes that security returns are related to each other only through a common relationship with the market. This key assumption implies that the only reason securities move together is a common comovement with the market. There are no effects beyond the market (e.g., industry effects) that account for correlation among securities. The model of Sharpe uses historical betas (i.e., responsiveness to the market return) in order to calculate expected security returns. Mathematically the Single Index Model can be expressed as:

R_{it} - R_{ft} = \alpha_i + \beta_i (R_{mt} - R_{ft}) + \varepsilon_{it} \qquad (2)

where:

R_{it} = the return on security i in period t
R_{ft} = the risk-free rate
\alpha_i = security i's abnormal return
\beta_i = the security's responsiveness to the market return
R_{mt} = the return on the market portfolio in period t
\varepsilon_{it} = the residual in period t

When \varepsilon_{it} is assumed to be i.i.d., the correlation coefficient for securities x and y under the Single Index Model can be calculated using the following equation:

\rho_{xy} = \frac{\beta_x \beta_y \sigma_m^2}{\sigma_x \sigma_y} \qquad (3)

where:

\beta_x, \beta_y = the securities' responsiveness to the market return
\sigma_x, \sigma_y = the standard deviations of securities x and y
\sigma_m^2 = the variance of the market
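To make this calculation concrete, the sketch below (an illustration added for this text, not part of the thesis; the function and variable names are hypothetical) estimates the betas of equation 2 from excess returns and then applies equation 3:

```python
import numpy as np

def single_index_correlation(r_x, r_y, r_m, r_f=0.0):
    """Correlation between securities x and y implied by the Single Index Model.

    r_x, r_y : arrays of periodic returns for securities x and y
    r_m      : array of market returns for the same periods
    r_f      : risk-free rate per period (a scalar, for simplicity)
    """
    r_x, r_y, r_m = map(np.asarray, (r_x, r_y, r_m))
    ex_x, ex_y, ex_m = r_x - r_f, r_y - r_f, r_m - r_f
    var_m = np.var(ex_m, ddof=1)                          # sigma_m^2
    beta_x = np.cov(ex_x, ex_m, ddof=1)[0, 1] / var_m     # slope of equation 2 for x
    beta_y = np.cov(ex_y, ex_m, ddof=1)[0, 1] / var_m     # slope of equation 2 for y
    sigma_x, sigma_y = np.std(r_x, ddof=1), np.std(r_y, ddof=1)
    return beta_x * beta_y * var_m / (sigma_x * sigma_y)  # equation 3
```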

However, studies have been conducted by, among others, Blume (1970) and Levy (1971) to test betas over time. Levy (1971) has shown that the beta is remarkably stationary for large portfolios over time, and according to Blume (1975) the systematic risk of security portfolios shows a tendency to regress towards the mean of all betas, which is one. So historical betas might not be the most accurate predictor of future returns. Therefore, Blume developed a technique to adjust the beta to take this tendency into account. A widely evaluated model is the Single Index Model with the beta adjusted using Blume's technique. Under Blume's technique the betas are adjusted as follows. Assume that we want to forecast the betas for a period running from today until two years into the future. We collect the betas of all securities over the ten-year period prior to today, and divide this period into two samples of five years each. We then regress the betas of the last five-year period against those of the first five-year period. Blume (1970) did this for the period 1948-1961, which led to the following equation:

\beta_{i2} = 0.343 + 0.677\,\beta_{i1} \qquad (4)

where \beta_{i1} is the beta of security i in the last five years of the estimating period and \beta_{i2} is the forecasted beta.

Another technique to adjust the beta for its tendency to regress to one is Vasicek's technique (1973). Using Vasicek's technique, the beta for a future period is estimated by taking a weighted average of the estimated beta for a security in a previous period and the average beta of all securities in that previous period. Vasicek imposed the following weighting for forecasting a beta:

\beta_{i2} = \frac{\sigma^2_{\beta_{i1}}}{\sigma^2_{\bar\beta_1} + \sigma^2_{\beta_{i1}}}\,\bar\beta_1 + \frac{\sigma^2_{\bar\beta_1}}{\sigma^2_{\bar\beta_1} + \sigma^2_{\beta_{i1}}}\,\beta_{i1} \qquad (5)

where:

\bar\beta_1 = the average of the historical betas over the sample of securities
\beta_{i1} = the current value of beta for security i
\beta_{i2} = the forecasted value of beta
\sigma^2_{\bar\beta_1} = the variance of the distribution of the historical estimates of beta over the sample of securities
\sigma^2_{\beta_{i1}} = the square of the standard error of the estimate of beta for security i

Once the adjusted beta is calculated using either Blume's technique or Vasicek's technique, one can use equation 3 to calculate the correlation between securities x and y.
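A minimal sketch of both adjustment techniques is given below (an illustration, not from the thesis; the default Blume coefficients are the 1948-1961 estimates of equation 4 and would normally be re-estimated on the data at hand):

```python
import numpy as np

def blume_adjusted_beta(beta_hist, a=0.343, b=0.677):
    """Blume's adjustment (equation 4). In practice a and b are re-estimated by
    regressing the betas of the later five-year subperiod on those of the earlier one."""
    return a + b * np.asarray(beta_hist)

def vasicek_adjusted_beta(beta_hist, se_beta):
    """Vasicek's adjustment (equation 5): shrink each historical beta towards the
    cross-sectional average beta, weighting by estimation uncertainty."""
    beta_hist = np.asarray(beta_hist, dtype=float)
    se2 = np.asarray(se_beta, dtype=float) ** 2   # squared standard error per security
    var_cross = np.var(beta_hist, ddof=1)         # cross-sectional variance of the betas
    w_own = var_cross / (var_cross + se2)         # weight on the security's own beta
    return w_own * beta_hist + (1.0 - w_own) * beta_hist.mean()
```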

C. Multi Index Models

Multi Index Models base their output on the assumption that there are fundamental indexes (e.g. firm-specific characteristics such as size, earnings, etc.) that cause correlation among security returns. An important condition for Multi Index Models is that the factors are uncorrelated. This assumption implies that there are no factors beyond the factors specified in the model that cause comovement between security returns (Elton, Gruber, Brown and Goetzmann, 2009). The variance of returns and the covariance of returns are the necessary inputs to calculate the correlation between securities x and y, as was shown in equation 3. Using a multi index model, these inputs can be calculated using the following procedure:

The variance of return for security x is:

\sigma_x^2 = \beta_{x1}^2 \sigma_{I_1}^2 + \beta_{x2}^2 \sigma_{I_2}^2 + \cdots + \beta_{xL}^2 \sigma_{I_L}^2 \qquad (6)

The covariance between securities x and y is:

\sigma_{xy} = \beta_{x1}\beta_{y1} \sigma_{I_1}^2 + \beta_{x2}\beta_{y2} \sigma_{I_2}^2 + \cdots + \beta_{xL}\beta_{yL} \sigma_{I_L}^2 \qquad (7)

where:

\beta_{x1} = security x's responsiveness to index 1
\sigma_{I_1}^2 = the variance of index 1
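As an illustration of equations 6 and 7, the following sketch (hypothetical names; residual variances are optional and default to zero, mirroring equation 6 as stated) computes the implied pairwise correlation from factor loadings and index variances:

```python
import numpy as np

def multi_index_correlation(betas_x, betas_y, index_vars, resid_x=0.0, resid_y=0.0):
    """Correlation between securities x and y implied by a multi index model
    with uncorrelated indexes (equations 6 and 7).

    betas_x, betas_y : loadings of x and y on each index
    index_vars       : variances of the indexes
    resid_x, resid_y : residual variances (left at zero to mirror equation 6 as stated)
    """
    bx, by = np.asarray(betas_x), np.asarray(betas_y)
    v = np.asarray(index_vars)
    var_x = np.sum(bx ** 2 * v) + resid_x          # equation 6
    var_y = np.sum(by ** 2 * v) + resid_y
    cov_xy = np.sum(bx * by * v)                   # equation 7
    return cov_xy / np.sqrt(var_x * var_y)
```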

One of the most famous fundamental Multi Index Models is the model of Fama and French (1993). In their study, Fama and French found that value securities outperform growth securities and that small cap securities tend to outperform large cap securities. Based on these findings, Fama and French developed the Fama-French 3-Factor Model, which includes an overall market factor and factors related to firm size and book-to-market equity. This is formulated in equation 8. The correlation between securities x and y under this model can be calculated by plugging the inputs of this model into equation 7.

R_{it} - R_{ft} = \alpha_i + \beta_{i,m}(R_{mt} - R_{ft}) + \beta_{i,SMB}\,SMB_t + \beta_{i,HML}\,HML_t + \varepsilon_{it} \qquad (8)

where:

\alpha_i = security i's abnormal return
R_{it} = the return on security i
R_{ft} = the risk-free rate
\beta_{i,m} = the security's responsiveness to the market return
\beta_{i,SMB} = the security's responsiveness to the SMB factor
\beta_{i,HML} = the security's responsiveness to the HML factor
SMB = Small Minus Big (market capitalization)
HML = High Minus Low (book-to-market ratio)
\varepsilon_{it} = the residual

In addition to the Fama-French 3-Factor Model, the theory of Chen, Roll and Ross (1986) is widely used to explain the correlation in security returns. The work of Chen et al. (1986) states that security returns are exposed to systematic economic news, that they are priced in accordance with their exposures, and that the news can be measured as innovations in state variables whose identification can be accomplished through simple and intuitive financial theory. Based on this theory Burmeister and McElroy (1992) developed a fundamental Multi Index Model, including the following five factors: default risk, the term structure, unexpected inflation, the change in the expected growth rate and the residual market factor.

D. Mean Models

Another way of forecasting the future correlations is to take the mean of the data in the historical correlation matrix. This class of models is called Mean Models and was tested by, among others, Elton and Gruber (1973) and Elton, Gruber and Urich (1978). These Mean Models can be calculated in various ways: the Overall Mean Model does not discriminate between securities and simply takes the mean of all pairwise correlation coefficients over some past period as a forecast of each pairwise correlation coefficient for the future. Another version of the Mean Model distinguishes between firms within the same industry. These Mean Industry Models assume that firms within the same group have a common correlation structure with all other firms. The Traditional Mean Model assumes that an industry classification3 represents homogeneous groups, and takes the average within these groups to predict future correlation coefficients. In addition to accepting traditional industries as homogeneous groups, Elton and Gruber (1973) developed another model by forming pseudo industries, using multivariate techniques to determine which groups behave as homogeneous units.

3 In the case of Elton and Gruber (1973) the SIC industrial classification was used.

Eun and Resnick (1984) placed this for the first time in an international context, and tested the correlation structure of international securities using the National Mean Model. Under the National Mean Model, every intra-country pairwise correlation coefficient is calculated as the average of all pairwise correlation coefficients within the country. Further, every inter-country pairwise correlation coefficient between securities from two different countries is calculated as the average of all inter-country pairwise correlations between securities from the two countries. A generalization of the procedure to calculate a Mean Correlation Matrix that distinguishes securities into groups is displayed in Table I.

Table I
Structure of Mean Models

      x        y        z
x     μ_x
y     μ_xy     μ_y
z     μ_xz     μ_yz     μ_z

where μ_x is the average of all pairwise correlation coefficients within group x, and μ_xy is the average of all pairwise correlation coefficients between securities from group x and group y.
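The construction in Table I can be illustrated with the following sketch (not part of the thesis; names are hypothetical). Given a historical correlation matrix and a vector of group labels (countries, industries, supersectors or sectors), it returns the corresponding Mean Model forecast; a single constant label reproduces the Overall Mean Model:

```python
import numpy as np

def mean_model_forecast(hist_corr, groups):
    """Mean Model forecast of the correlation matrix, following the structure of Table I.

    hist_corr : (n x n) historical correlation matrix (the Full Historical estimate)
    groups    : length-n sequence of group labels
    """
    hist_corr = np.asarray(hist_corr, dtype=float)
    groups = np.asarray(groups)
    n = len(groups)
    forecast = np.ones((n, n))
    for a in np.unique(groups):
        for b in np.unique(groups):
            rows, cols = np.ix_(groups == a, groups == b)
            block = hist_corr[rows, cols]
            if a == b:
                # mu_a: mean of the off-diagonal pairwise correlations within group a
                off_diag = ~np.eye(block.shape[0], dtype=bool)
                mean_ab = block[off_diag].mean() if off_diag.any() else 1.0
            else:
                # mu_ab: mean of all pairwise correlations between groups a and b
                mean_ab = block.mean()
            forecast[rows, cols] = mean_ab
    np.fill_diagonal(forecast, 1.0)
    return forecast
```

For a group with only one member the within-group mean is left at one, which corresponds to the Luxembourg case discussed later in the data description.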

E. The performance of the various models

After evaluating the characteristics of the various models, the obvious question that arises is: how well do these models forecast correlation coefficients? Elton and Gruber (1973) found that statistically significant differences exist in the ability of techniques to forecast 5-year correlation matrices. They showed that choosing the best rather than the worst technique can account for up to a 50 per cent increase in the earned rate of return at certain risk levels. In particular, they found that in the forecast of 5-year estimates the Mean Models outperformed all other techniques, with the Overall Mean Model performing best. Up until that time, the Single Index Model developed by Sharpe (1963) and the Historical Model were mainly used to generate the inputs for portfolio theory. Elton and Gruber (1973) suggested the following possible explanation for their unexpected insight: "since these models have a particularly simple structure, their good relative performance suggests that simplified portfolio algorithms might be possible and heuristics, which lead to optimum or near optimum portfolios, can be developed".

Elton, Gruber, Brown and Goetzmann (2009) summarized that the Overall Mean Model has been extensively tested against Single Index Models, general Multi Index Models and the historical correlation matrix itself, and that tests have been performed using three different samples of securities over a total of four different time periods. In every case the Overall Mean Model outperformed the other models. Furthermore, for most risk levels the differences in portfolio performance were large enough to have real economic significance. However, when some disaggregation is introduced into the model by using the traditional mean or pseudo-mean model, the results are much more ambiguous. Another study, by Eun and Resnick (1984), also showed that the Mean Models outperformed the Single- and Multi Index Models as well as the historical covariance matrix. However, in their study they argued that "because of the developments towards a greater integration of capital markets, as well as a greater awareness on the part of individual and institutional investors of the potential gains from international diversification, it is a necessity to make modern portfolio theory amenable to implementation in an international setting, since there is a strong country factor influencing the return generating process." Their empirical findings on estimating the correlation structure of international share prices were in line with this theory, since the most important result emerging from their empirical tests showed that the National Mean Model strictly dominates all the other models in terms of forecasting accuracy.

However, more recently, several studies have shown that the industry factor seems to have displaced the country factor as the dominant explanatory variable in equity returns (Vedd et al., 2014). Baca, Garbe and Weiss (2000) argued that historically country effects have been dominant in explaining variations in global security returns in the developed markets, and investors have segmented their allocations accordingly. However, they found a significant shift in the relative importance of national influences in the security returns of the world's largest equity markets, and showed that in these markets the impact of industrial sector effects is roughly equal to that of country effects. This led them to the conclusion that a "country-based approach to global investment management may be losing its effectiveness". Brooks and Catão (2000) showed that the share of variation in security returns explained by global industry factors has grown sharply since the mid-1990s at the expense of country-specific factors. These findings of Brooks and Catão are supported by, among others, Beckers et al. (1996), Solnik and Roulet (1999), Brooks and Del Negro (2002), L'Her et al. (2002) and Cavaglia et al. (2000). However, Brooks and Catão (2000) are more conservative with their findings regarding the correlation: "it may be tempting to interpret this finding as an indication that equity markets have become more integrated in recent years, it is also possible that the greater return variation explained by global factors is simply capturing that stock markets become more tightly correlated during crisis periods". The view that the industry factor is an important driver of security returns is confirmed by a recent paper of Elton and Gruber (2009), which states: "forming homogenous groups of firms on the basis of industry membership improves forecasting accuracy".

As shown in this literature overview, empirical findings show that Mean Models outperform all other models with respect to forecasting accuracy. However, within these Mean Models the findings are much more ambiguous. Basically, three different views can be derived from the literature overview. The first view is that the Overall Mean Model is the most accurate in forecasting future correlation matrices, as was shown by Elton and Gruber (1973). The second view is that, because of a strong country factor driving the returns of international securities, the National Mean Model as developed by Eun and Resnick (1984) is most accurate in forecasting future correlation matrices. The final view, derived from the finding by, among others, Brooks and Catão (2000) that the share of variation in security returns explained by global industry factors has grown sharply since the mid-1990s at the expense of country-specific factors, is that industry-based mean models might be most accurate in forecasting future correlation matrices.

F. Hypotheses

As described in the introduction, the main goal of the study is to evaluate different forecasting models in terms of their forecasting accuracy and impact on constructing minimum variance portfolios for securities within the Eurozone. The literature study shows that Mean Models outperform Single Index Models and Multi Index Models in terms of forecasting accuracy. Therefore these models will not be included in this study, and only Mean Models will be tested. As was shown in the literature overview, the results within the Mean Models are much more ambiguous. Therefore six different models will be tested. These models can be seen in Table II. During this study the Full Historical Model will be used as a benchmark since it is the most disaggregated model. Furthermore, the Overall Mean, National Mean and Industry Mean Model will be tested. In addition, two more disaggregated versions of the Industry Mean Model will be tested: the Supersector Mean Model and the Sector Mean Model. These models will be used to test to what extent disaggregated industry-based models are preferable to the more aggregated Industry Mean Model.

This theory can be formalized in the following hypotheses:

Hypothesis 1:

H1: There is a significant difference, at common/traditional levels, between the models under consideration in terms of forecasting accuracy.

Table II
The Various Forecasting Models

Name                        Description
I.  Full Historical Model   Assumes future correlation coefficients are identical to past coefficients
II. Mean Models
    A. Overall Mean Model   Assumes the same correlation coefficient between all firms

Hypothesis 2:

H1: There is a significant difference between the volatility added by the models under consideration when constructing minimum variance portfolios.

III. Methodology

A similar research approach is used as in the earlier studies by Eun and Resnick (1984) and Elton and Gruber (1973). The forecasting models stated in the previous section will be tested for statistical significance and economic significance, as in the study of Elton and Gruber (1973). The test for statistical significance is used to evaluate and examine the ability of each of the selected models to forecast future correlation matrices. The economic significance is analyzed by using the forecasted correlation coefficients as inputs to construct minimum variance portfolios. The ex-post volatility is evaluated to see if the differences between models are of economic importance. An estimation period of 7 years and a forecast period of 3 years are used, as displayed in Table III.

A. Statistical Significance

The approach for testing the statistical significance is similar to Eun and Resnick (1984). The forecasting models are evaluated by calculating the forecast error, which measures the difference between the actual value and the forecast value for the corresponding period. To be more precise, the aggregate error will be measured using the following approach: first the performance of the forecasting models is evaluated using the Mean Squared Error (MSE). The MSE measures the expected squared distance between an estimator and the true underlying parameter. It is thus a measurement of the quality of an estimator. Mathematically this is represented by equation 9:

MSE = \frac{1}{n}\sum_{i=1}^{n} (F_i - A_i)^2 \qquad (9)

where F_i is the forecasted value of the ith entry of the correlation matrix and A_i is the actual value of the ith entry of the correlation matrix. To calculate the MSE, F_i - A_i has to be squared in order to capture all the variance in the MSE, i.e. the signs have to be removed so that the 'magnitudes' of the errors influence the MSE. Hence, in order to obtain a measure on the same scale as the variable in the forecast, the Root Mean Squared Error (RMSE) is computed by simply taking the square root of the MSE.
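A minimal sketch of this error measure (an illustration with hypothetical names, not from the thesis) is:

```python
import numpy as np

def mse_rmse(forecast_corr, actual_corr):
    """MSE (equation 9) and RMSE over the entries of the forecasted and actual
    correlation matrices (the thesis uses all 56,169 = 237^2 entries)."""
    f = np.asarray(forecast_corr).ravel()
    a = np.asarray(actual_corr).ravel()
    mse = np.mean((f - a) ** 2)
    return mse, np.sqrt(mse)
```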

In order to evaluate the performance of the various forecasting models relative to one another under the MSE criterion, the models are paired, and the difference between each model is measured by evaluating the MSE between each pair of forecasting models for each entry in the correlation matrix:

D_i = (F_{1,i} - A_i)^2 - (F_{2,i} - A_i)^2 \qquad (10)

Model 1 is judged to dominate Model 2 if the mean of these differences is negative and significantly different from zero at the 5 per cent significance level. Since each pair of forecasting models produces a "paired" forecast for each entry of the correlation matrix, an ordinary two-tailed t-test can be applied to a "single" mean calculated from the n values of D_i (Eun and Resnick, 1984).
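The pairwise comparison of equation 10 can be sketched as follows (scipy's one-sample t-test is used here as an assumption about implementation; the thesis only specifies an ordinary two-tailed t-test):

```python
import numpy as np
from scipy import stats

def mse_domination_test(forecast_1, forecast_2, actual):
    """Paired comparison of equation 10: a two-tailed t-test on the mean of
    D_i = (F_1i - A_i)^2 - (F_2i - A_i)^2 over all matrix entries."""
    f1, f2, a = (np.asarray(m).ravel() for m in (forecast_1, forecast_2, actual))
    d = (f1 - a) ** 2 - (f2 - a) ** 2
    res = stats.ttest_1samp(d, popmean=0.0)
    # Model 1 is judged to dominate Model 2 if d.mean() < 0 and res.pvalue < 0.05
    return d.mean(), res.statistic, res.pvalue
```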

In addition to examining differences in their mean errors, a cumulative frequency function of each forecasting model's squared forecast errors is used. In this way it is possible to determine whether one model is less likely than another to make an error of any given size. Taking a pair of models, one model is said to "dominate" the other if its cumulative frequency function of squared forecast errors, (F_i - A_i)^2, is larger than or equal to that of the other (Eun and Resnick, 1984).

Since the distribution of the data is relatively skewed towards zero, intervals of increasing width are taken, as in Eun and Resnick (1984). Fifteen intervals were taken with steps similar to those of Eun and Resnick (1984), which covered at least 95 per cent of the observations for each model. A model is considered "dominant" if its cumulative frequency function is larger than that of the other model in a majority (in this case eight) of the intervals.
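A sketch of this cumulative-frequency comparison is given below (illustrative only; the exact interval edges used in the thesis are not reproduced here and must be supplied):

```python
import numpy as np

def cdf_domination_count(forecast_1, forecast_2, actual, interval_edges):
    """Count the intervals in which Model 1's cumulative frequency of squared
    forecast errors is at least as large as Model 2's (the CDF domination criterion).

    interval_edges : increasing upper bounds of the 15 intervals (placeholder values;
                     the thesis follows the interval scheme of Eun and Resnick, 1984)
    """
    f1, f2, a = (np.asarray(m).ravel() for m in (forecast_1, forecast_2, actual))
    err1, err2 = (f1 - a) ** 2, (f2 - a) ** 2
    wins = sum(np.mean(err1 <= edge) >= np.mean(err2 <= edge) for edge in interval_edges)
    # Model 1 "dominates" Model 2 if wins covers a majority (eight or more) of the intervals
    return wins
```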

Table III
Sample period

Estimating period: January 2004 until December 2011
Forecast period: January 2012 until December 2014

To evaluate the forecasting models relative to the benchmark of the Full Historical Model, the Theil Inequality Coefficient (TIC) is calculated for each of the forecasting models:

TIC = \left[\frac{\sum_{i}(F_i - A_i)^2}{\sum_{i}(H_i - A_i)^2}\right]^{1/2} \qquad (11)

where H_i is the value of the ith entry of the Full Historical correlation matrix. It is clear from equation 11 that TIC > 1 if the forecasts are less accurate than those provided by the Full Historical Model, and TIC = 0 in the event of perfect forecasting (Eun and Resnick, 1984).
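Equation 11 translates directly into code; the sketch below (hypothetical names) compares a model's squared errors with those of the Full Historical benchmark:

```python
import numpy as np

def theil_inequality_coefficient(forecast_corr, historical_corr, actual_corr):
    """Theil Inequality Coefficient (equation 11) of a forecasting model relative
    to the Full Historical benchmark."""
    f, h, a = (np.asarray(m).ravel() for m in
               (forecast_corr, historical_corr, actual_corr))
    return float(np.sqrt(np.sum((f - a) ** 2) / np.sum((h - a) ** 2)))
```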

B. Economic Significance

In addition to statistical significance, the economic significance of the models is evaluated. To test economic significance, the same assumptions underlying the work of Markowitz (1952) are made, i.e. investors are rational and risk averse. An investor will take the portfolio with the highest return given a certain level of risk. As a starting point, a buy and hold strategy is used for the three-year forecasting period. From the dataset under consideration, ten random samples of 30 securities4 are selected. For each of these samples the minimum variance portfolios5 are constructed using the actual returns, variances and correlation matrix. This procedure can be seen as having "perfect" forecasting abilities and thus produces the most efficient portfolios. These portfolios will be used as a benchmark to evaluate the accuracy of the different forecasting models. A short-sale constraint is applied to preclude undesirable extreme positions. Once the perfect forecasting portfolios are obtained, efficient6 portfolios with the same expected returns as the benchmark are constructed using the actual returns and variances in the forecasting period, and the various forecasted correlation matrices. This procedure was described by Elton and Gruber (1973) as: "Given the best possible estimates of means and variances, which estimates of the correlation matrix gives rise to the selection of the most efficient portfolios?"

4 According to Statman (1987) 30 securities are needed to form a well-diversified portfolio. See Statman (1987) for an extensive discussion.

5 A minimum variance portfolio is constructed by estimating the expected return, the variance of return for each security, and the correlation of returns between each pair of securities in order to minimize the portfolio's ex ante risk while maximizing the return for that risk level. See Markowitz (1952) for an extensive discussion.

6 I.e. minimizing the portfolio volatility for this level of return.

Since the actual returns are used, the portfolios will always reach the same returns as the benchmark, but their volatility will be higher. The portfolios produced by the various models are then evaluated by calculating the actual portfolio volatility in the three-year forecasting period using the actual correlation matrix. The additional volatility of a model is evaluated by taking the difference between model i and the benchmark:7

\sigma_{added,i} = \sigma_{model,i} - \sigma_{benchmark} \qquad (12)

See Figure I for a graphical representation of this procedure. An ordinary t-test is applied to test if the volatility added by the models significantly differs from zero at the 5 per cent level.
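The portfolio construction step can be sketched as follows (an illustration under stated assumptions: the thesis does not specify a solver, so a standard constrained optimizer is used here, with a full-investment constraint, the benchmark's target return and a short-sale constraint):

```python
import numpy as np
from scipy.optimize import minimize

def min_variance_weights(mean_returns, vols, corr, target_return):
    """Long-only portfolio that minimizes variance subject to a target return
    (a sketch; the choice of the SLSQP solver is an assumption)."""
    mu, vols = np.asarray(mean_returns), np.asarray(vols)
    cov = np.outer(vols, vols) * np.asarray(corr)        # covariance from vols and correlations
    n = len(mu)
    constraints = (
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},           # fully invested
        {"type": "eq", "fun": lambda w: w @ mu - target_return},  # match the benchmark return
    )
    bounds = [(0.0, 1.0)] * n                            # short-sale constraint
    result = minimize(lambda w: w @ cov @ w, x0=np.full(n, 1.0 / n),
                      method="SLSQP", bounds=bounds, constraints=constraints)
    return result.x

def realized_volatility(weights, vols, actual_corr):
    """Ex-post portfolio volatility using the actual correlation matrix (input to eq. 12)."""
    cov = np.outer(np.asarray(vols), np.asarray(vols)) * np.asarray(actual_corr)
    return float(np.sqrt(weights @ cov @ weights))
```

The added volatility of equation 12 then follows by subtracting the benchmark portfolio's realized volatility.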

In addition to the volatility criterion, the Sharpe Ratio is used to evaluate the efficiency of the portfolios constructed by the various models. The Sharpe Ratio is a measure of risk-adjusted return, and is calculated by dividing a portfolio's excess return by the portfolio's volatility:

Sharpe\ Ratio = \frac{R_p - R_f}{\sigma_p} \qquad (13)

where R_p is the return of the portfolio, R_f is the risk-free rate and \sigma_p is the portfolio's volatility. The Sharpe Ratio thus measures the return earned in excess of the risk-free rate per unit of volatility.

7 Please note that the difference will only be zero if a model has perfect forecasting ability, and that the volatility of the portfolio will never be lower than the benchmark, since the benchmark has perfect forecasting abilities.

Figure I. Graphical representation of the portfolio construction and evaluation procedure.

The Sharpe Ratios produced by the various models will be tested against the benchmark using the following ratio:

\frac{Sharpe\ Ratio_{model}}{Sharpe\ Ratio_{benchmark}} \qquad (14)

This ratio can be described as: how much of the portfolio's potential is utilized by the model under consideration?

Furthermore, in order to evaluate the performance of the various forecasting models relative to one another, a similar approach is used as was used for the MSE dominance criteria. The portfolios of the various models are paired and the difference in their volatilities is calculated:

D_i = \sigma_{model\,1,i} - \sigma_{model\,2,i} \qquad (15)

Model 1 is judged to dominate Model 2 if the mean of these differences is negative and significantly different from zero at the 5 percent significance level.

IV. Data description

A. Sample selection and data sources

During this study the securities of the EURO STOXX Index are evaluated. The EURO STOXX Index is a broad subset of the STOXX Europe 600 Index and consisted, at the time of writing this thesis, of 294 securities. The index represents large, mid and small capitalisation companies of 12 Eurozone countries: Austria, Belgium, Finland, France, Germany, Greece, Ireland, Italy, Luxembourg, the Netherlands, Portugal and Spain.8 The composition of the EURO STOXX Index is obtained from the Stoxx Ltd. website. Securities had to be listed continuously in the EURO STOXX from January 2004 through December 2014. After applying this restriction, 237 securities remained in the sample.

8 Source: Stoxx Ltd. website

To analyse the returns of these 237 securities over the period of interest, a Return Index is used to analyse the month-to-month returns of the securities that are part of the EURO STOXX Index. A Total Return Index is available from Thomson Reuters DataStream for individual securities. It shows the theoretical growth in value of a security holding over a specified period, assuming that dividends are re-invested to purchase additional units of equity at the closing price applicable on the ex-dividend date9. Mathematically this looks like:

RI_t = RI_{t-1} \times \frac{P_t}{P_{t-1}} \qquad (16)

except when t is the ex-date of the dividend payment D_t, then:

RI_t = RI_{t-1} \times \frac{P_t + D_t}{P_{t-1}} \qquad (17)

where P_t is the price on the ex-date, P_{t-1} is the price on the previous day and D_t is the dividend payment associated with ex-date t. These data are transformed into monthly returns and are used as inputs for the various models.
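A minimal sketch of this transformation (assuming the Return Index is available as a pandas Series with a daily date index; names are hypothetical) is:

```python
import pandas as pd

def monthly_returns_from_ri(ri_series):
    """Convert a DataStream-style Total Return Index into month-to-month returns.

    ri_series : pandas Series of RI values with a DatetimeIndex (hypothetical input).
    """
    ri_monthly = ri_series.resample("M").last()   # end-of-month index levels
    return ri_monthly.pct_change().dropna()       # monthly total returns
```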

For the National Mean Model, the country information provided on the Stoxx Ltd. website is used. Furthermore, for the Industry Mean Model, the Supersector Mean Model and the Sector Mean Model, the Stoxx Ltd. website is used to identify the industry, supersector and sector of the individual securities, based on the Industry Classification Benchmark (ICB)10.

9 Source: Thomson Reuters DataStream

10 The Industry Classification Benchmark (ICB) is a definitive system categorizing over 70,000 companies and 75,000 securities worldwide, enabling the comparison of companies across four levels of classification and national boundaries. The ICB system is supported by the ICB Database, an unrivalled data source for global sector analysis, which is maintained by FTSE International Limited. Source: ICB website.

Table IV
Sample Firms Classified by Country and Industry Membership

Country         No. of Firms    Industry               No. of Firms
1  Austria       6              1  Basic Materials     17
2  Belgium      10              2  Consumer Goods      36
3  Finland      15              3  Consumer Services   28
4  France       70              4  Financials          55
5  Germany      50              5  Health Care         13
6  Greece        6              6  Industrials         44
7  Ireland       7              7  Oil & Gas            8


B. Descriptive statistics

Table IV shows the distribution of the sample firms by country and industry membership. There are firms from 12 countries in the sample, and a large part of the sample consists of firms from Germany and France. When looking at the distribution over industries, financials, industrials and consumer goods are the most heavily represented in the sample. There are 10 industries compared to 12 countries, so the National Mean Model is the more disaggregated of the two.

Table I of the Appendix shows the distribution of the supersector and sector membership. There are firms from 19 different supersectors included in the sample. The ICB uses a system of 10 industries, partitioned into 19 supersectors, which are further divided into 41 sectors.11 The sample includes firms from all industries and supersectors, and from 38 out of 41 sectors. It can therefore be considered a fairly good representation of securities within the Eurozone. In addition, it is noteworthy that the sample under consideration is fairly large. For example, it is larger than the samples used in the studies of Elton and Gruber (1973) and Eun and Resnick (1984), which used 75 and 160 securities respectively.

11 Source: ICB website.

C. The Models

In addition to evaluating the group membership of the various securities, the main focus in this section is the distribution of the correlation coefficients produced. Only the National Mean Model and the Industry Mean Model are evaluated here; the other models are too disaggregated to evaluate in this section. Table V shows the correlation coefficients of the National Mean Model. The intra-country coefficients range between 0.21534 and 1. It is important to note that the maximum correlation, i.e. one, is caused by the fact that there is only one entity in the sample that is located in Luxembourg. Therefore its correlation coefficient is by definition one. The inter-country coefficients range between 0.25215 and 0.46566. The lowest correlation coefficient is between Ireland and Belgium and the highest is between Austria and Luxembourg. Again we have to be careful when interpreting these results since, as was mentioned before, there is only one firm located in Luxembourg within the sample. In addition, it is worth mentioning that the inter-country mean correlations are on average slightly lower than the intra-country mean correlations. Table VI shows the correlation coefficients produced by the Industry Mean Model. The intra-industry coefficients range between 0.21126 and 0.57455, where the industry Health Care has the lowest correlation coefficient and the industry Oil & Gas the highest. The inter-industry coefficients range between 0.18906 and 0.49396. The lowest correlation coefficient is between Health Care and Telecommunications and the highest is between the industries Oil & Gas and Basic Materials.

Table VI
Industry Mean Model: Correlation Coefficients

                          BM       CG       CS       F        HC       I        OG       T        TC       U
Basic Materials     BM    0.51926
Consumer Goods      CG    0.42023  0.39269
Consumer Services   CS    0.38156  0.34500  0.34199
Financials          F     0.42544  0.36700  0.36315  0.50178
Health Care         HC    0.27712  0.25481  0.22881  0.24002  0.21126
Industrials         I     0.46172  0.39879  0.37934  0.42826  0.26753  0.44929
Oil & Gas           OG    0.49396  0.37363  0.33377  0.38955  0.25258  0.43730  0.57455
Technology          T     0.41207  0.36358  0.33051  0.38060  0.23547  0.35006  0.38716  0.41090
Telecommunications  TC    0.25654  0.24234  0.24782  0.28852  0.18906  0.26004  0.25473  0.24327  0.33935


V. Empirical Results

A. Statistical Significance

The most significant result under the MSE criterion is that the Supersector Mean Model dominates all other models in terms of forecasting accuracy. As shown in Table VII, the Supersector Mean Model has a lower MSE than all other models at the 1 per cent significance level. The MSE is evaluated using the 56,169 entries of the correlation matrix produced by each model. Surprisingly, both the National Mean Model and the Overall Mean Model perform worse than the Full Historical Model under the MSE criterion. This can be seen in Table VII by looking at both the MSE and the TIC: the MSE and TIC of both models are larger than those of the Full Historical Model. As was mentioned by Elton, Gruber and Spitzer (2006), the Overall Mean Model has the same overall correlation coefficient as the Full Historical Model, but their difference in performance is due to the ability to estimate the deviation of the pairwise correlations from this average. In this study the Full Historical Model clearly performs better than the Overall Mean Model, implying that assuming that pairwise correlation deviations are equal to their historical level is better than assuming that their deviation is zero.

In order to evaluate the performance of the various forecast models relative to one another under the MSE criterion, the models are paired, and the difference between each model is measured by evaluating the squared forecast errors between each pair of forecasting models for each entry in the correlation matrix as displayed in Table VIII.

The model displayed in the second column is tested against the model in the first row. When the error is positive, the model in the second column performs worse than the model in the first row.

Table VIII
Performance of the Models under the MSE domination criteria

Forecasting Model           Supersector   Sector       Industry     Full Historical  National     Overall
                            Mean Model    Mean Model   Mean Model   Model            Mean Model   Mean Model
1 Supersector Mean Model    0
2 Sector Mean Model         0.0004*       0
3 Industry Mean Model       0.0018*       0.0130*      0
4 Full Historical Model     0.0034*       0.0300*      0.0017*      0
5 National Mean Model       0.0054*       0.0049*      0.0036*      0.0019*          0
6 Overall Mean Model        0.0070*       0.0065*      0.0052*      0.0035*          0.0016*      0

*statistically significant at the 1 per cent level

Table V
National Mean Model: Correlation Coefficients

                 AT       BE       DE       ES       FI       FR       GR       IE       IT       LU       NL       PT
Austria     AT   0.48350
Belgium     BE   0.37408  0.30862
Germany     DE   0.39031  0.31271  0.39587
Spain       ES   0.35048  0.30215  0.33279  0.41542
Finland     FI   0.39466  0.31080  0.35418  0.34355  0.41058
France      FR   0.40462  0.33686  0.38702  0.35904  0.38039  0.40457
Greece      GR   0.42583  0.32848  0.36486  0.41338  0.34924  0.37968  0.59591
Ireland     IE   0.30891  0.25215  0.26594  0.27050  0.28533  0.28483  0.25581  0.21534
Italy       IT   0.40440  0.34395  0.37383  0.39985  0.38563  0.40027  0.43401  0.30672  0.45416
Luxembourg  LU   0.46566  0.31935  0.36707  0.29631  0.36423  0.34930  0.39033  0.29100  0.34681  1
Netherlands NL   0.43045  0.35111  0.38534  0.35454  0.38167  0.40237  0.37756  0.30173  0.39729  0.37661  0.42668
Portugal    PT   0.37883  0.34822  0.34871  0.39635  0.35595  0.38168  0.45194  0.29348  0.41024  0.31417  0.37427  0.40894

Table VII
Performance of the Models under the MSE criteria

Forecasting Model    MSE*    RMSE    TIC

Based on these observations the following deductions can be made. First, the industry factor seems to have replaced the country factor, since all industry-based models outperform the National Mean Model in terms of forecasting accuracy under the MSE criterion. Second, more disaggregated industry-based models are more accurate in predicting future correlation matrices than the regular Industry Mean Model. Table II of the Appendix shows the results emerging from the cumulative distribution function. As can be seen in that table, for every model at least 95 per cent of all frequencies fall within the 0.0000 - 0.2200 interval. The Overall Mean Model has the fewest frequencies within this interval, implying that this model is the most likely to make an error of any given size. This is also confirmed by the results in Table IX, which displays the results of the domination criterion under the cumulative distribution function. As was mentioned before, 15 intervals are taken to test the domination criterion. A model is considered "dominant" if its cumulative frequency function is larger than that of the other model in a majority (in this case eight) of the intervals. The numbers shown in the table are the number of intervals in which the model displayed in the second column dominates the model in the first row. As can be seen, the Overall Mean Model is dominated by all other five models and therefore performs clearly worse than them. Surprisingly, the most disaggregated industry-based model, i.e. the Sector Mean Model, performs best. Where under the MSE criterion the Supersector Mean Model dominated the Sector Mean Model, here the reverse holds. The main question that arises from these results is: which model performs better, i.e. which criterion is the more stringent test of dominance? As was stated by Elton and Gruber (1973), this can be interpreted in the following way: "Techniques could have significantly different mean errors without having the odds of any particular size error being less. Thus, the cumulative distribution function test is a more stringent test of dominance."

In addition, they argue that it is a particularly interesting test because if one technique dominates the other techniques 15 out of 15 times, it will be preferred by all forecasters regardless of their loss functions. Since the Sector Mean Model dominates the Supersector Mean Model in 11 out of the 15 intervals, it can be stated that the Sector Mean Model performs best in terms of forecasting accuracy. The models are ranked on the basis of their forecasting accuracy in Table X.

The results under the CDF criteria are in line with the MSE criteria, except for the rank of the Supersector Mean Model and Sector Mean Model as mentioned before. Again, as can be seen, the industry-based models outperform the other models in terms of forecasting accuracy. Within the industry-based models the most disaggregated model performs best.

B. Economic Significance

When looking at the economic significance, the results (surprisingly) differ from the statistical significance. As can be seen in Table III of the Appendix, the Full Historical Model produces the best portfolios in terms of having the highest Sharpe Ratio on average. In addition, it has on average the lowest added volatility. The added volatility is significant at the 5 per cent level. On average the Full Historical Model adds an additional 0.083 per cent volatility per portfolio over a 3-year period.

Table IX
Domination under the CDF criteria

Forecasting Model           Full Historical  Overall      National     Industry     Supersector  Sector
                            Model            Mean Model   Mean Model   Mean Model   Mean Model   Mean Model
1 Full Historical Model     15
2 Overall Mean Model        4                15
3 National Mean Model       6                15           15
4 Industry Mean Model       15               15           15           15
5 Supersector Mean Model    15               15           15           15           15
6 Sector Mean Model         15               15           15           15           11           15

Table X
Rank in terms of forecasting accuracy under the CDF criteria

Rank  Model
1.    Sector Mean Model
2.    Supersector Mean Model
3.    Industry Mean Model
4.    Full Historical Model
5.    National Mean Model
6.    Overall Mean Model


When looking at the Sharpe Ratio of the benchmark relative to the Sharpe Ratio of the Full Historical Model, we can see that on average 74.8 per cent of the portfolio's potential is utilized by the model. In other words, on average 25.2 per cent of the portfolio's efficiency is lost due to forecasting errors in the Full Historical Model.

When comparing the various models using the Volatility Domination Criteria, as shown in Table XI, it can be seen that the difference in added volatility between the best model (the Full Historical Model) and the worst model (the Overall Mean Model) is 0.18 per cent on average over the 3-year forecasting period (significant at the 5 per cent level). Annualized, this is 0.06 per cent. Although the impact seems modest, when comparing the average annualized Sharpe Ratio of the Full Historical Model with that of the Overall Mean Model, it can be stated that the annualized average Sharpe Ratio of the best model is 2.41 per cent higher than the Sharpe Ratio of the worst model. An overview of the increases in the Sharpe Ratio is displayed in Table XII.

The differences in the increase in Sharpe Ratio are relatively small between the Sector Mean, Industry Mean and Supersector Mean Models. The models are ranked in terms of economic significance in Table XIII.

C. Discussion of the Results

The main objective of this study is to evaluate different correlation forecasting models in terms of forecast accuracy and their impact on constructing minimum variance portfolios for securities within the Eurozone. This research objective is formalized in two hypotheses.

The first hypothesis tested during this study is: "There is a significant difference, at common/traditional levels, between the models under consideration in terms of forecasting accuracy". On the basis of the statistical tests it can be concluded that there is strong evidence in favour of this hypothesis.

This study shows that a meaningful difference between the various forecasting models exists. Statistically significant differences exist between all six models. Under the statistical significance criteria the Sector Mean Model performed best. Furthermore, the study showed that the industry-based models outperform the other models in terms of forecasting accuracy. Within the industry-based models the most disaggregated model performs best. The findings are in line with the theory of, among others, Brooks and Catão (2000), who argued that industry-based mean models might be the most accurate in forecasting future correlation matrices.

Table XIII
Rank in terms of Economic Significance

Rank  Model
1.    Full Historical Model
2.    Sector Mean Model
3.    Industry Mean Model
4.    Supersector Mean Model
5.    National Mean Model
6.    Overall Mean Model

Table XII
Rank in terms of Mean Sharpe Ratio

Rank  Model                    Mean Sharpe Ratio   Increase in Sharpe Ratio*
1.    Full Historical Model    19.77               2.41
2.    Sector Mean Model        19.26               1.58
3.    Industry Mean Model      19.17               1.42
4.    Supersector Mean Model   19.13               1.36
5.    National Mean Model      18.88               0.75
6.    Overall Mean Model       18.35               0

*Annualized increase in % compared to rank 6

Table XI
Performance of the Models under the Volatility Domination Criteria

Forecasting Model           Supersector   Sector       Industry     Full Historical  National     Overall
                            Mean Model    Mean Model   Mean Model   Model            Mean Model   Mean Model
1 Supersector Mean Model    0
2 Sector Mean Model         -0.00050*     0
3 Industry Mean Model       -0.00034      0.00016      0
4 Full Historical Model     -0.00110*     -0.00061**   -0.00076**   0
5 National Mean Model       -0.00034      0.00016      -0.00003     0.00075          0
6 Overall Mean Model        0.00069*      0.00119*     0.00101*     0.00180*         0.00105      0


However, statistical significance does not imply economic significance. Therefore a second hypothesis is tested: "There is a significant difference between the volatility added by the models under consideration when constructing minimum variance portfolios". The results emerging from the tests of economic significance are less robust than those under the statistical significance criteria. When testing for economic significance using the volatility domination criteria, a significant difference exists in eight out of the fifteen pairwise comparisons. The Full Historical Model produces the best portfolios in terms of having the highest Sharpe Ratio on average. In addition, it has on average the lowest added volatility. The results on economic significance deviate from the results found in the study of Elton and Gruber (1973), which showed that the Overall Mean Model produced the best portfolios. For securities within the Eurozone, the Overall Mean Model produces the worst portfolios.

Comparing the statistical significance with the economic significance shows that the rankings of the models are not identical. This was also the case in the study of Elton and Gruber (1973). Their explanation for this phenomenon was: "the fact that techniques perform worse in selecting efficient portfolios than they do in forecasting the correlation matrix would indicate that they do a relative poorer job in estimating the correlation between securities which are most promising for portfolios". This seems to be a plausible explanation, but the reason why particular models do a relatively poorer job in estimating the correlations between the securities that are most promising for portfolios is not clear. Explanations of this phenomenon are not provided by earlier studies, and it is beyond the scope of this study. It might be a good starting point for further research, as will be elaborated on in the next section.

VI. Conclusion

A. Summary and key findings

The main goal of this master thesis was to evaluate different correlation forecasting models in terms of forecasting accuracy and their impact on constructing minimum variance portfolios for securities within the Eurozone. The study used 237 securities that were continuously listed on the EURO STOXX Index in the period from January 2004 through December 2014. The index is a broad subset of the STOXX Europe 600 Index and represents large, mid and small capitalisation companies of 12 Eurozone countries: Austria, Belgium, Finland, France, Germany, Greece, Ireland, Italy, Luxembourg, the Netherlands, Portugal and Spain.

The literature study evaluated studies on the performance of the various models and showed that Mean Models outperform Single Index Models and general Multi Index Models in terms of forecasting accuracy. However, within these Mean Models the findings are much more ambiguous. Basically, three different views can be derived from the literature overview. The first view is that the Overall Mean Model is the most accurate in forecasting future correlations. The second view is that, because of a strong country factor driving the returns of international securities, the National Mean Model is the most accurate in forecasting future correlation matrices. The third view is that industry-based models are the most accurate in forecasting future correlation matrices.

Based on the available literature, six models were tested in terms of forecasting accuracy and their impact on constructing minimum variance portfolios. The Full Historical Model is used as a benchmark since it is the most disaggregate model. Furthermore, the Overall Mean, National Mean and Industry Mean Model were tested. In addition, two further disaggregated variants of the Industry Mean Model were tested: the Supersector Mean Model and the Sector Mean Model. These models are used to test to what extent disaggregated industry-based models improve on the more aggregate industry model. Based on the literature, the following hypotheses were formulated:

“There is a significant difference, at common/traditional levels, between the models under consideration in terms of forecasting accuracy”.

“There is a significant difference between the volatility added by the models under consideration when constructing minimum variance portfolios”.

To test the second hypothesis, minimum variance portfolios were constructed from the volatility per security and each of the estimates of the future correlation matrix as constructed under the various models. The actual portfolio volatility in the forecasting period is used to examine how efficient these portfolios really are.
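The sketch below illustrates this step (a minimal sketch, not the thesis code; variable names are illustrative, and monthly data and unconstrained short sales are assumptions of the sketch): the forecast correlation matrix is combined with the per-security volatilities into a covariance matrix C, the minimum variance weights follow from w = C^-1 1 / (1' C^-1 1), and the realized volatility in the forecasting period is then computed with those weights held fixed.

```python
import numpy as np

def min_variance_portfolio(vols, corr_forecast):
    """Fully invested minimum variance weights (short sales allowed).

    vols          : (n,) volatility estimate per security.
    corr_forecast : (n, n) forecast correlation matrix from one of the models.
    """
    cov = np.outer(vols, vols) * corr_forecast     # sigma_i * sigma_j * rho_ij
    ones = np.ones(len(vols))
    c_inv_ones = np.linalg.solve(cov, ones)        # C^{-1} 1
    return c_inv_ones / (ones @ c_inv_ones)        # w = C^{-1} 1 / (1' C^{-1} 1)

def realized_volatility(weights, returns, periods_per_year=12):
    """Realized annualized volatility of the portfolio in the forecasting period.

    returns : (T, n) matrix of periodic security returns; weights are held fixed,
              a constant-weight approximation of the buy-and-hold portfolio.
    """
    portfolio_returns = returns @ weights
    return portfolio_returns.std(ddof=1) * np.sqrt(periods_per_year)
```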

The results emerging from the tests on statistical significance show that a significant difference exists between the models. Under the statistical significance test, the Sector Mean Model performs best and the Overall Mean Model performs worst. The results emerging from the economic significance test show that the Full Historical Model produces the most efficient portfolios, while choosing the Overall Mean Model results in the biggest loss in efficiency. The difference in annualized added volatility between the best and the worst model is 0.06 per cent and is significant at the 5 per cent level. Although this impact seems modest, the annualized average Sharpe Ratio of the Full Historical Model is 2.41 per cent higher than that of the Overall Mean Model.

B. Limitations of the study and directions for future research

The main limitation of this study is that a relatively small sample of portfolios was created to test for economic significance: ten subsamples were used to create 60 portfolios in total. In addition, the firms within the sample are not equally divided over the various groups (e.g. industry, supersector, sector). This decreases the robustness of the test results and might have led to biases: a specific group may have carried too much (or too little) relative weight, which would have influenced the forecasted correlation coefficients.

This study is a good starting point for further research, which should focus on the economic significance of various forecasting techniques. A suggestion would be to test the economic implications over various time spans to see how the correlation structure differs across periods. In addition, the economic significance could be tested using different investment strategies; in this study a passive strategy, i.e. a buy-and-hold strategy, was used, so further research could evaluate the models under more sophisticated active investment strategies. Finally, further research could examine the difference between the results emerging from the statistical and the economic tests, and why some models do a relatively poorer job in estimating the correlations between the securities that are most promising for portfolios.

References

Baca, S., Garbe, B., Weiss, R., 2000. The Rise of Sector Effects in Major Equity Markets. Financial Analysts Journal, 56 (5), 34-40

Blume, M., 1970. Portfolio Theory: A Step Towards Its Practical Application. Journal of Business 43 (2), 152-174

Blume, M., 1975. Betas and Their Regression Tendencies. Journal of Finance 30 (3), 785-795

Brooks, R., Catão, L., 2000. The New Economy and Global Stock Returns. IMF Working Paper 216, 1-37

Brooks, R., Del Negro, M., 2002. The Rise in Comovement across National Stock Markets: Market Integration or IT Bubble? Federal Reserve Bank of Atlanta Working Paper, 2002-17a

Burmeister, E., McElroy, M., 1992. APT and Multifactor Asset Pricing Models with Measured and Unobserved Factors: Theoretical and Econometric Issues. Indian Economic Review 27, 135-154

Chen, N., Roll, R., Ross, S., 1986. Economic Forces and the Stock Market. Journal of Business 59, 386-403

Elton, E.J., Gruber, M.J., 1973. Estimating the Dependence Structure of Share Prices - Implications for Portfolio Selection. Journal of Finance 28 (5), 1203-1232

Elton, E.J., Gruber, M.J., Brown, S.J., Goetzmann, W.N., 2009. Modern Portfolio Theory and Investment Analysis. John Wiley & Sons, Inc, Hoboken

Eun, C.S., Resnick, B.G., 1984. Estimating the Correlation Structure of International Share Prices. Journal of Finance 39 (5), 1311-1324

Fama, E., French, K., 1993. Common Risk Factors in the Returns on Stocks and Bonds. Journal of Financial Economics 33, 3-56

Levy, R., 1971. On the Short-Term Stationarity of Beta Coefficients. Financial Analysts Journal 27 (5), 55-62

L’Her, J.F., Sy, O., Tnani, M., 2002. Country, Industry, and Risk Factor Loadings in Portfolio Management. Journal of Portfolio Management 28 (4), 70-79

Markowitz, H.M., 1952. Portfolio Selection. Journal of Finance 7, 77-91

Sharpe, W., 1963. A Simplified Model for Portfolio Analysis. Management Science 9 (2), 277-293

Statman, M., 1987. How Many Securities Make a Diversified Portfolio? Journal of Financial and Quantitative Analysis 22 (3), 353-363

Vasicek, O., 1973. A Note on Using Cross-Sectional Information in Bayesian Estimation of Security Betas. Journal of Finance 28 (5), 1233-1239


Appendix

Table I

Sample firms classified by supersector and sector membership

Supersector No. of Firms Sector No. of Firms

1 Automobiles & Parts 13 1 Automobiles & Parts 13

2 Banks 29 2 Banks 29

3 Basic Resources 7 3 Forestry & Paper 2

4 Chemicals 10 4 Industrial Metals & Mining 4

5 Construction & Materials 11 5 Mining 1

6 Financial Services 4 6 Chemicals 10

7 Food & Beverages 10 7 Construction & Materials 11

8 Healthcare 13 8 Financial Services 4

9 Industrial Goods & Services 33 9 Beverages 6

10 Insurance 13 10 Food Producers 4

11 Media 11 11 Health Care Equipment & Services 6

12 Oil & Gas 8 12 Pharmaceuticals & Biotechnology 7

13 Personal & Household Goods 13 13 Aerospace & Defence 5

14 Real Estate 9 14 Electronic & Electrical Equipment 2

15 Retail 10 15 General Industrials 5

16 Technology 14 16 Industrial Engineering 10

17 Telecommunications 9 17 Industrial Transportation 7

18 Travel & Leisure 7 18 Support Services 4

19 Utilities 13 19 Life Insurance 3

Total 237 20 Nonlife Insurance 10

21 Media 11

22 Alternative Energy 1

23 Oil & Gas Producers 4

24 Oil Equipment & Services 5

25 Household Goods 3

26 Leisure Goods 1

27 Personal Goods 9

28 Real Estate Investment & Services 2

29 Real Estate Investment Trusts 7

30 Food & Drug Retailers 8

31 General Retailers 2

32 Software & Computer Services 5

33 Technology Hardware & Equipment 9

34 Fixed Line Telecommunications 7

35 Mobile Telecommunications 2

36 Travel & Leisure 7

37 Electricity 6

38 Gas, Water & Multiutilities 7

