
Master’s Thesis

Factor Analysis and Test of Risk Premia in

the Three Factor Model of Fama and French

Matias Nicolas Sacoto Molina

Student number: 10827188
Date of final version: July 3, 2015
Master's programme: Econometrics
Specialisation: Financial Econometrics
Supervisor: Prof. dr. F.R. Kleibergen
Second reader: Prof. dr. H.P. Boswijk


Contents

1 Introduction
2 Principal Component Analysis
3 FM two pass procedure and R²
  3.1 Results of the Simulation
4 Tests of risk premia
  4.1 Results of the Simulation
  4.2 Application of the tests
5 Conclusion
A Simulation experiment details
Bibliography


Chapter 1

Introduction

In recent decades asset pricing models have been studied in depth, leaving plenty of room for discussion from different approaches. For instance, the Capital Asset Pricing Model (CAPM) has been widely criticized for the lack of explanatory power of its estimators and their unrealistic interpretation. The model is estimated with the two pass regression of Fama and MacBeth (1973): in the first step the β's of every asset are estimated from a time series regression, and in the second step these estimates are used as regressors in a cross-sectional regression from which the risk premia are obtained.

Motivated by empirical contradictions of the CAPM, Fama and French (1992) propose a Three Factor Model, which, besides the individual asset β's, introduces control variables for the Size and Value effects. The results obtained from this model are well accepted. In addition to financial factors, other linear models also include macro factors; for example, Lettau and Ludvigson (2001) add consumption growth and labor income, among others.

Beyond the possible arguments related to the interpretation of the coefficients and their accuracy, there is an issue that should be taken into account due to Kleibergen (2009), who shows that the two pass regression estimates are sensitive to collinearity of the β's, which occurs when the β's are close or equal to zero and/or when the β matrix is of reduced rank. As a consequence, the risk premia estimates from a two pass procedure can in many cases be misleading, which in turn also puts into question the correctness of different measures used for statistical inference.

In the present work, based on Kleibergen (2009) and Kleibergen and Zhan (2015), it is discussed how the OLS R² is not a reliable measure, considering that its value can be large even when the factor structure is not explained by the coefficients, in other words when the β's are close to zero. Furthermore, different robust tests, such as the FM-LM, GLS-LM, JGLS and CLR, are analyzed as alternatives to the Wald tests, which are sensitive to small β's and large sample sizes.


The paper is organized as follows. In Chapter 2 a Principal Components Analysis is undertaken in order to lay out the factor structure of the portfolio returns. In Chapter 3 it is explained how the weakness of the factors is reflected in the strong unexplained factor structure remaining in the residuals of the second regression. Chapter 4 presents the different robust tests and a discussion of their application. Finally, conclusions are presented in Chapter 5.


Chapter 2

Principal Component Analysis.

A linear factor model intended to capture the factor structure of the portfolio returns can be written in matrix notation as

R = Fθ + e    (2.1)

where R is the T × N matrix of portfolio returns, with T the number of time periods and N the number of portfolios, F is the T × K matrix of the K factors, θ is the K × N matrix of coefficients of each portfolio i = 1, …, N, and e is the T × N matrix of error terms. According to Kleibergen and Zhan (2015), if the factors are i.i.d. with finite variance and cov(F, e) = 0, and the errors are i.i.d. with finite variance as well, the covariance matrix of R can be expressed as a function of the covariance matrices of the factors and of the errors, that is:

Cov(R, R) = θ′Cov(F, F)θ + Cov(e, e)    (2.2)

where the dimensions of Cov(R, R), Cov(F, F) and Cov(e, e) are N × N, K × K and N × N respectively.

In the present work, 25 portfolio returns (N = 25), sorted by size and value and obtained from Kenneth French's website, are used. The sample runs from the first quarter of 1959 until the third quarter of 2014, so T = 224. A Principal Components Analysis (PCA) is performed on these portfolio returns, as they are explained simultaneously by the same factors. An extensive literature explains how to perform this procedure, e.g. Jolliffe (2002). Based on the decomposition in (2.2), the N × N matrix of eigenvectors E and the N × N diagonal matrix of eigenvalues Λ are estimated.
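As an illustration, the eigen-decomposition behind Table 2.1 can be sketched in a few lines of Python. Since the French data files are not reproduced here, the sketch uses simulated returns with a three-factor structure; all parameter values are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, K = 224, 25, 3  # quarters, portfolios, factors, as in the text

# Hypothetical returns with a K-factor structure, R = F theta + e,
# standing in for the 25 size/value-sorted portfolios
F = rng.normal(0.0, 4.0, size=(T, K))
theta = rng.normal(1.0, 0.5, size=(K, N))
R = F @ theta + rng.normal(0.0, 1.0, size=(T, N))

# PCA = eigen-decomposition of the N x N sample covariance matrix of returns
S = np.cov(R, rowvar=False)            # Cov(R, R), N x N
eigvals = np.linalg.eigvalsh(S)[::-1]  # eigvalsh returns ascending; sort descending

proportion = eigvals / eigvals.sum()
cumulative = np.cumsum(proportion)
for i in range(5):  # first five rows, in the layout of Table 2.1
    print(f"{i + 1}: value={eigvals[i]:8.2f}  prop={proportion[i]:6.2%}  cum={cumulative[i]:6.2%}")
```

With a few strong simulated factors, the first eigenvalues dominate and the cumulative proportion flattens quickly, mirroring the pattern reported for the actual portfolio data.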

From the PCA on the covariance matrix of the observed returns of the 25 portfolios, it can be seen that the eigenvalues increase as the size and value of the portfolios increase. Furthermore, the largest eigenvalue represents 79.99% of the total variance of the sample. Additionally, the eigenvalues decrease roughly exponentially up to the fourth largest one, after which the decline becomes much more gradual. It is therefore possible that the number of factors explaining the structure of the portfolio returns equals four. Moreover, considering the cumulative variation, the four largest eigenvalues explain 94.03% of the variation of the returns. All the mentioned results are presented in Table 2.1.

Table 2.1: Results of PCA on the covariance matrix of the 25 portfolio returns

Eigenvalue    Value    Proportion    Cumulative
 1           262.82      79.99%        79.99%
 2            20.22       6.16%        86.15%
 3            14.55       4.43%        90.57%
 4            11.36       3.46%        94.03%
 5             3.59       1.09%        95.13%
 6             2.92       0.89%        96.01%
 7             2.15       0.66%        96.67%
 8             1.54       0.47%        97.14%
 9             1.19       0.36%        97.50%
10             1.04       0.32%        97.82%
11             0.84       0.26%        98.07%
12             0.79       0.24%        98.31%
13             0.72       0.22%        98.53%
14             0.65       0.20%        98.73%
15             0.55       0.17%        98.90%
16             0.52       0.16%        99.06%
17             0.47       0.14%        99.20%
18             0.46       0.14%        99.34%
19             0.42       0.13%        99.47%
20             0.36       0.11%        99.58%
21             0.32       0.10%        99.68%
22             0.30       0.09%        99.77%
23             0.28       0.09%        99.85%
24             0.24       0.07%        99.93%
25             0.24       0.07%       100.00%

Now different variables are used as proxies to model the factor structure of the 25 portfolio returns. First, I focus on the Fama and French factors. The model is represented in (2.3):

r_it = β_i1(R_mt − R_ft) + β_i2 SMB_t + β_i3 HML_t + e_it    (2.3)

where r_it represents the return of portfolio i in period t; R_mt the market return; R_ft the risk-free rate; SMB_t the difference between the average return on a set of portfolios formed from small firms, with size measured as market equity, and the average return on a set of portfolios formed from big firms; HML_t the average return on a set of portfolios with high value, i.e. high book-to-market ratio, minus the average return on a set of portfolios with low value; β_ik the coefficient of portfolio i for factor k; and e_it the corresponding error term.

An important fact to mention is that, although in financial series the constant term is expected to be zero or is assumed to be part of the error term, a constant term is included in the present work because the macro factors do not have a zero mean.

Following Kleibergen and Zhan (2015), I estimate the eigenvalues of the covariance matrix of the error terms obtained from the regression of (2.3), i.e. the Three Factor Model, in order to measure the amount of unexplained factor structure left in the residuals. The proportion of the variation explained up to the fourth largest eigenvalue of the errors is 69.64%, which leads us to conclude that the proxies (observed factors) do not model the factor structure of the portfolio returns appropriately.

To further analyse the robustness of model (2.3), an LR test is performed on the significance of the coefficient estimates. The null hypothesis is H0: θ = 0, against the alternative H1: θ ≠ 0. The following specification of the LR test is applied:

LR = T × Σ_{i=1}^{N} (log λ_{i,Rp} − log λ_{i,e})    (2.4)

where λ_{i,Rp} are the eigenvalues of the covariance matrix of the portfolio returns and λ_{i,e} the eigenvalues of the covariance matrix of the estimated error terms. From this test, I can reject the null hypothesis with 95% confidence. Furthermore, applying the same procedure, but using the estimated residuals of regressing the portfolio returns on the value-weighted excess return and a constant instead of the residuals of (2.3), it can be seen that SMB and HML are significant.
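The eigenvalue-based LR statistic of (2.4), together with the Pseudo-R² of (2.6) discussed below, can be sketched on simulated data; the data-generating values here are hypothetical stand-ins for the actual estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, K = 224, 25, 3

# Hypothetical data: portfolio returns driven by observed factors plus noise
F = rng.normal(size=(T, K))
theta = rng.normal(size=(K, N))
R = F @ theta + rng.normal(size=(T, N))

# First-pass OLS of returns on the factors (with a constant term)
X = np.column_stack([np.ones(T), F])
resid = R - X @ np.linalg.lstsq(X, R, rcond=None)[0]

# Eigenvalues of the covariance matrices of the returns and of the residuals
lam_R = np.linalg.eigvalsh(np.cov(R, rowvar=False))
lam_e = np.linalg.eigvalsh(np.cov(resid, rowvar=False))

# LR statistic of (2.4) and Pseudo-R2 of (2.6)
LR = T * np.sum(np.log(lam_R) - np.log(lam_e))
pseudo_r2 = 1.0 - lam_e.sum() / lam_R.sum()
print(f"LR = {LR:.2f}, Pseudo-R2 = {pseudo_r2:.2%}")
```

Because the residual covariance matrix is the return covariance matrix minus the explained (positive semidefinite) part, the LR statistic is positive and the Pseudo-R² lies between 0 and 1 when the factors have explanatory power.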

Another test of global significance is the F-test, specified as follows:

F-test = tr(Σ̂⁻¹ θ̂′(Σ_{t=1}^{T} f_t f_t′)θ̂)    (2.5)

where tr stands for the trace of the matrix in parentheses, f_t is the vector of factors in period t, and Σ̂ is the covariance matrix of the residuals of the estimated Three Factor Model. The result obtained from (2.5) enables us to conclude that the estimates are jointly significant.

Finally, to measure the fit of the model, a Pseudo-R² is computed, specified as in Kleibergen and Zhan (2015):

Pseudo-R² = 1 − (Σ_{i=1}^{N} λ_{i,e}) / (Σ_{i=1}^{N} λ_{i,Rp})    (2.6)

The value obtained for this statistic is 84.19%, which represents the total variation explained by the observed factors, i.e. the market excess return, SMB and HML.

The variables in (2.6) are the same as in (2.4).

All the results of the mentioned tests are summarized in Table 2.2.

Table 2.2: Results of tests undertaken on specifications (2.3), (2.7) and (2.8)

Eigenvalue    Portfolios   FF Model    LL1        LL2
1             262.82       20.69       253.31     40.60
2             20.22        7.67        20.07      19.99
3             14.55        4.40        14.02      12.36
4             11.36        3.43        11.17      4.15
5             3.59         3.04        3.57       3.40
Cumulative    95.13%       69.64%      93.95%     81.92%
LR_FF                      1996.61     790.03     285.32
LR_CAPM                    1096.52
LR_L                                   110.06     1185.41
Pseudo-R²                  84.18%      3.27%      71.33%

In order to analyze different proxies for the unobserved factors, the variables used in the two specifications that give the best results in Lettau and Ludvigson (2001) are included. Specifically, the following two specifications:

r_it = β_i1 ĉay_t + β_i2 Δc_t + β_i3 (ĉay_t · Δc_t) + e_it    (2.7)

r_it = β_i1 ĉay_t + β_i2 (R_mt − R_ft) + β_i3 Δy_t + β_i4 (ĉay_t · (R_mt − R_ft)) + β_i5 (ĉay_t · Δy_t) + e_it    (2.8)

In the models above, (2.7) LL1 and (2.8) LL2, ĉay_t is the consumption-wealth ratio, Δc_t is consumption growth and Δy_t is labor income growth.

Table 2.2 shows the results obtained after performing the same tests as for the Three Factor Model (FF). Precisely, in both cases the PCA shows that the variation kept in the five largest eigenvalues of the residuals (presented as Cumulative in Table 2.2) is higher than for the residuals of the FF model. This may be due to the fact that the proxies do not explain the factor structure well, which is also supported by the Pseudo-R². It has to be noted that the specifications that include the market excess return proxy are better at mimicking the behavior of the portfolio returns. Although the LR tests suggest that the estimated coefficients differ from zero, the statistics are not large enough to rule out that the estimates are close to zero. The same conclusion stands for the results of the F-tests.


Chapter 3

FM two pass procedure and R².

In order to estimate the risk premia of the returns, the most commonly applied procedure is the one proposed in Fama and MacBeth (1973), which consists first in undertaking a time series regression to estimate the β's of the assets, and next a cross-sectional regression on the estimated β's obtained from the first one. Specifically, we start by calculating the parameters of (2.1) using proxies Z for the (unobserved) factors F:

θ̂ = (Z′Z)⁻¹Z′R    (3.1)

Then we regress the average returns R̄ on θ̂; in concrete terms, the model that we estimate is

R̄ = (ι ⋮ θ̂′)λ + u    (3.2)

with R̄ = (1/T)Σ_{t=1}^{T} r_t the vector of average returns and λ a (K + 1)-dimensional vector containing a constant term and the risk premia of the K factors. Subsequently, we find the OLS estimator of λ:

λ̂ = [(ι ⋮ θ̂′)′(ι ⋮ θ̂′)]⁻¹(ι ⋮ θ̂′)′R̄    (3.3)
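A minimal sketch of the two pass procedure, with simulated data in place of the actual portfolio returns and proxies (all names and parameter values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
T, N, K = 224, 25, 3

# Hypothetical data for the Fama-MacBeth two pass procedure
Z = rng.normal(size=(T, K))              # observed factor proxies
theta_true = rng.normal(size=(K, N))
R = Z @ theta_true + rng.normal(size=(T, N))

# First pass (3.1): time-series regression of returns on the proxies
theta_hat = np.linalg.solve(Z.T @ Z, Z.T @ R)        # K x N

# Second pass (3.3): cross-sectional OLS of average returns on the betas
R_bar = R.mean(axis=0)                               # N-vector of average returns
X = np.column_stack([np.ones(N), theta_hat.T])       # (iota : theta_hat'), N x (K+1)
lam_hat = np.linalg.lstsq(X, R_bar, rcond=None)[0]   # constant plus K risk premia
print("constant and risk premia estimates:", lam_hat)
```

Note that this DGP builds in no risk premia, so the second-pass estimates only reflect sampling noise; the sketch is meant to show the mechanics of the two regressions, not to reproduce the empirical results.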

Kleibergen and Zhan (2015) show that the quality of the results of the two pass procedure relies on the capability of the observed factors to capture the factor structure of the portfolio returns. More precisely, the authors show that when the model does not explain the factor structure of the portfolio returns, the unexplained structure ends up in the residuals of the first step, affecting the estimates in the second step. A consequence of the lack of explanatory power of the proxies, for instance, is that θ would be small. In the previous section it was shown that θ0, the true value of the β's of the observed factors, is non-zero; however, some of the estimates of the analyzed models, i.e. specifications (2.3), (2.7) and (2.8), appeared to be close to zero. The LR tests and the F-test showed that the fit of the models is better when the market excess return is included.


Kleibergen and Zhan (2015) use the assumption that, as the sample size T increases, the parameter δ, the coefficient matrix of the observed factors in the (infeasible) model (3.4) for the true factors, drifts to zero: δ = d/√T, where d is a fixed M × K full rank matrix and M is the number of observed factors. Additionally, the number of portfolios N remains fixed. This assumption is referred to as the weak correlation assumption.

F0 = μf + Xδ + V    (3.4)

where F0 is the T × K matrix of unobserved factors, μf the matrix of constant terms, X the T × M matrix of observed factors, δ the M × K matrix of coefficients and V the T × K matrix of error terms.

Assuming weak correlation enables us to relax the common assumptions of fixed full rank of the matrices δ and θ0, and of normally distributed errors, in both finite and large sample inference. The consequence of adopting the weak correlation assumption is that the properties of the test statistics change. For example, it leads to undersized F-statistics. Furthermore, under this assumption another commonly used statistic, the R², also changes its properties.

In Kleibergen and Zhan (2015) the formula for the R² of the OLS estimation is given by

R² = (R̄′Mι θ̂ (θ̂′Mι θ̂)⁻¹ θ̂′Mι R̄) / (R̄′Mι R̄)    (3.5)

with Mι = I_N − ι_N(ι_N′ι_N)⁻¹ι_N′, an N × N matrix.
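Formula (3.5) can be evaluated directly once the average returns and first-pass betas are available; a sketch with hypothetical inputs:

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 25, 3

# Hypothetical inputs: average returns R_bar and first-pass betas theta_hat
R_bar = rng.normal(size=N)
theta_hat = rng.normal(size=(N, K))

# M_iota = I_N - iota (iota' iota)^{-1} iota' demeans over the N portfolios
iota = np.ones((N, 1))
M = np.eye(N) - iota @ iota.T / N

num = R_bar @ M @ theta_hat @ np.linalg.inv(theta_hat.T @ M @ theta_hat) @ theta_hat.T @ M @ R_bar
R2 = num / (R_bar @ M @ R_bar)
print(f"second-pass OLS R2 = {R2:.3f}")
```

This is numerically the same R² one obtains from regressing R̄ on a constant and the betas; the demeaning matrix Mι partials out the constant term.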

From Kleibergen and Zhan (2015) it is known that this R² converges to a random variable. This can occur either because the correlation between the observed factors X and the unobserved factors F0 is weak, which affects the order of magnitude of d, and/or because, when some of the observed factors are uncorrelated with the true factors and their number is smaller than the number of unobserved factors, i.e. M < K, the part of the R² attributable to the uncorrelated factors converges to a random variable.

The results presented in Tables 2.1 and 2.2 suggest that the observed factors used in (2.3), (2.7) and (2.8) are weak, so that the value of the R² is not a reliable measure for inference, given the previously discussed properties of this statistic. For instance, we can see that the shares of variance remaining in the residuals of (2.7) and (2.8) are very high, leading to convergence of the R² to a random variable.

In order to examine the reliability of the R², I perform a simulation experiment in the same spirit as Kleibergen and Zhan (2015). Details of how the simulation is performed are presented in Appendix A.


3.1 Results of the Simulation

Figure 3.1: Density functions of the R2 using true Factors.

Figure 3.1 shows the density curves of the R² values for different specifications. Curve RS1 represents a model where the three true factors are used; RS11 a model with only one weak true factor; and RS12 a model with two true factors, one weak and one strong.

Highly notable from the simulation outcomes, and confirming the results of Kleibergen and Zhan (2015), is that the value of the R² is higher when more factors are used, even when only one of the factors is strong.

To further check the consistency of the R², a PCA on the residuals of each model is performed in order to measure the percentage of the variance captured by the factors in each model. We can see in Figure 3.2 that when all the factors are used, less variance is left in the residuals, whereas the model that uses only one true factor is less capable of capturing the information in the data, so that its residuals retain more of the variance than the estimated coefficients explain.

In Figure 3.2, AS1, AS11 and AS12 are the density functions of the sum of the three largest eigenvalues of RS1, RS11 and RS12 respectively.


Figure 3.2: Density functions of the sum of the cumulative proportion of the three largest eigenvalues.

To expose the sensitivity of the R² to the number of factors used, different models were simulated using only useless factors. Figure 3.3 depicts the density functions of the R² values of RS2, a model where only one useless (weak) factor is used; RS3, where two useless factors are used; and RS4, a model with three useless factors.

The R² of RS2 has a distribution closer to the right, confirming that the R² is sensitive to the number of factors regardless of their strength. This means that we can obtain a sizeable R² in the second pass regression even though the β's from the first pass regression are not strong enough to model the risk premia of the portfolio returns. This is also supported by the results presented in Figure 3.4, where the density function of the third largest eigenvalue of the residuals of RS2, RS3 and RS4 is plotted, showing that the unexplained structure in the residuals is roughly equally high for the three models, regardless of the value of their R².
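The sensitivity of the second-pass R² to the number of included factors, even when they are useless, can be illustrated with a small simulation; the DGP below is a hypothetical stand-in for the experiment described in Appendix A.

```python
import numpy as np

rng = np.random.default_rng(4)
T, N, S = 224, 25, 300            # periods, portfolios, simulation draws
theta0 = rng.normal(size=(3, N))  # hypothetical loadings on three true factors

def second_pass_r2(R, Z):
    """OLS R2 of regressing demeaned average returns on the first-pass betas."""
    B = np.linalg.lstsq(Z, R, rcond=None)[0].T        # first pass: N x m betas
    R_bar = R.mean(axis=0)
    M = np.eye(N) - np.ones((N, N)) / N               # demeaning matrix M_iota
    num = R_bar @ M @ B @ np.linalg.inv(B.T @ M @ B) @ B.T @ M @ R_bar
    return num / (R_bar @ M @ R_bar)

mean_r2 = {}
for m in (1, 2, 3):                                   # number of useless factors
    draws = []
    for _ in range(S):
        F = rng.normal(size=(T, 3))                   # true (unobserved) factors
        R = F @ theta0 + rng.normal(size=(T, N))      # returns with factor structure
        Z = rng.normal(size=(T, m))                   # useless proxies, independent of F
        draws.append(second_pass_r2(R, Z))
    mean_r2[m] = float(np.mean(draws))
print(mean_r2)
```

Even though every proxy is pure noise, the average second-pass R² rises mechanically with the number of included factors, which is the point the density plots make.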

In Figure 3.4, AS2, AS3 and AS4 correspond to the density functions of the sum of the three largest eigenvalues of RS2, RS3 and RS4 respectively.


Figure 3.3: Density functions of the R2 using useless factors.

Models RS5, RS6 and RS7 all include one true factor, but for RS6 and RS7 one and two useless factors respectively are added. Figure 3.5 displays results similar to those presented in Figure 3.3; the difference lies in the PCA on the residuals, where, as we can see in Figure 3.6, the unexplained structure is less pronounced than in models where only useless factors were used, owing to the inclusion of one true factor. Specifically, the remaining variation in the residuals of the models that include one true factor is around 79%, whereas the residuals from the models with only useless factors retain approximately 91% of the factor structure.

In Figure 3.6, AS5, AS6 and AS7 correspond to the density functions of the sum of the three largest eigenvalues of RS5, RS6 and RS7 respectively.

In order to extend the analysis to the models studied in the previous section, specifically the Three Factor Model (2.3) and the two models from Lettau and Ludvigson (2001), (2.7) and (2.8), Table 3.1 presents the R² results for those models.

These results show that the value of the R² of the second pass is high; however, there is a large unexplained factor structure in the residuals carried over from the first pass regression. Thus, it can be concluded that the R² statistic must be contrasted with an analysis of the factor structure of the residuals in order to get a more accurate idea of its reliability.


Figure 3.4: Density functions of the sum of the cumulative proportion of the three largest eigenvalues.

Table 3.1: Second pass results for specifications (2.3), (2.7) and (2.8)

Statistic     FF Model   LL1       LL2
Cumulative    63.06%     90.41%    77.50%
Pseudo-R²     0.8418     0.0327    0.7133


Figure 3.5: Density functions of the R2 using one true factor and one/two useless factors.

Figure 3.6: Density functions of the sum of the cumulative proportion of the three largest eigenvalues.


Chapter 4

Tests of risk premia.

As mentioned before, risk premia estimates from a two pass procedure can be distorted as a consequence of weak factors, which produce small β coefficients. The estimation problem can be further aggravated when the number of assets is large. These inconsistent estimates may, in turn, affect statistical inference based on tests whose asymptotic distribution is derived under normality. The problem is that the empirical distribution of these tests does not necessarily converge to a normal one, because some tests, such as the Wald t-statistic, are sensitive to small β's and to the sample size.

In this section, different tests that are robust to weak correlation issues are discussed. For this purpose, I consider the tests proposed by Kleibergen (2009).

It is important to note that when the observed factors used as proxies are weak, the unexplained structure is carried into the portfolio returns equation, which in turn affects the distribution of the residuals of the second regression. The system formed by equations (4.1) and (4.2) shows how the disturbances are correlated when using the FM two pass procedure:

R = λ1 + β(F̈ + λF) + ε    (4.1)

F = μF + v    (4.2)

with F̈ = F − F̄, F̄ = (1/T)Σ_{t=1}^{T} F_t, and ε = u + βv̄, where v̄ = (1/T)Σ_{t=1}^{T} v_t.

Kleibergen (2009) shows that the large sample distribution of the risk premia estimator from (3.3) differs greatly from normality in three scenarios for β: when the β's are zero, when they are weak, and/or when many β's are used. Hence, tests for inference on the risk premia estimator are unreliable unless they are robust to these β issues. Specifically, tests whose limiting distribution is not affected by the number or the values of the β's are necessary in order to make accurate inference.


Working in line with Kleibergen (2009), if H0: λF = λF,0 is the null hypothesis that we want to test, we need to get rid of the constant estimate λ1. For that purpose we subtract the 25th portfolio return from each of the other 24 portfolio returns, so that the new estimator comes from

β̃ = (F̃′F̃)⁻¹F̃′R̃    (4.3)

where F̃ = F̈ + λF,0 and R̃ is the T × (N − 1) matrix of differenced portfolio returns. Under the null hypothesis, the average returns and β̃ converge to normally distributed random vectors with mean 0 and covariance matrices as specified in Lemma 2 of Kleibergen (2009).

The tests applied in this work correspond to those proposed in Kleibergen (2009); proofs of their robustness are given in that paper.

The first test is the FM-LM statistic. Its robustness relies on the independence of R̄ − β̃λF,0 (the disturbances) and β̃, which allows convenient properties of their product to be exploited. Furthermore, this statistic converges to a χ²(k) distributed random variable when the sample size becomes large. It is specified as follows:

FM-LM = T(1 + λF,0′ Q̂(λF)FF⁻¹ λF,0)⁻¹ (R̄ − β̃λF,0)′ β̃(β̃′Σ̃β̃)⁻¹ β̃′(R̄ − β̃λF,0)    (4.4)

where Q̂(λF)FF = (1/T)(F̈ + λF)′(F̈ + λF) and Σ̃ = (1/(T − k))(R̃ − F̃β̃)′(R̃ − F̃β̃).

The second test is the GLS-LM, which also has a χ²(k) limiting distribution and is invariant to transformations of the asset returns. Equation (4.5) shows the specification of this test:

GLS-LM = T(1 + λF,0′ Q̂(λF)FF⁻¹ λF,0)⁻¹ (R̄ − β̃λF,0)′ Σ̃⁻¹β̃(β̃′Σ̃⁻¹β̃)⁻¹ β̃′Σ̃⁻¹(R̄ − β̃λF,0)    (4.5)

After normalizing the returns, and therefore the β's, the constant term is no longer needed, so that the difference between the restricted and unrestricted likelihood functions under H0: λF = λF,0, which has a χ²(N − 1) distribution, can be expressed as equation (4.6), see Kleibergen (2009):

FAR = T(1 + λF,0′ Q̂(λF)FF⁻¹ λF,0)⁻¹ (R̄ − β̃λF,0)′ Σ̃⁻¹(R̄ − β̃λF,0)    (4.6)
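The FM-LM, GLS-LM and FAR statistics of (4.4)-(4.6) can be sketched on data simulated under the null. The DGP and all parameter values are hypothetical, the normalizing factor T(1 + λF,0′Q̂⁻¹λF,0)⁻¹ follows the reconstructed formulas above, and Σ̃ is taken as the covariance matrix of the first-stage residuals.

```python
import numpy as np

rng = np.random.default_rng(5)
T, N, K = 224, 25, 1
lam_F0 = np.array([1.5])                  # risk premium under the null (hypothetical)

# Simulate returns consistent with (4.1): R = lam1 + (F_dd + lam_F) beta + eps
F = rng.normal(size=(T, K))
beta = rng.normal(1.0, 0.3, size=(K, N))
R = 0.5 + (F - F.mean(axis=0) + lam_F0) @ beta + rng.normal(size=(T, N))

# Drop the constant by subtracting the last portfolio, then estimate (4.3)
R_t = R[:, :-1] - R[:, [-1]]              # T x (N - 1)
F_t = (F - F.mean(axis=0)) + lam_F0       # F_tilde
B = np.linalg.solve(F_t.T @ F_t, F_t.T @ R_t).T       # beta_tilde, (N-1) x K
Sig = (R_t - F_t @ B.T).T @ (R_t - F_t @ B.T) / (T - K)
Si = np.linalg.inv(Sig)

u = R_t.mean(axis=0) - B @ lam_F0         # R_bar - beta_tilde lam_F0
Q = F_t.T @ F_t / T
c = T / (1.0 + lam_F0 @ np.linalg.solve(Q, lam_F0))

FM_LM = c * u @ B @ np.linalg.inv(B.T @ Sig @ B) @ B.T @ u
GLS_LM = c * u @ Si @ B @ np.linalg.inv(B.T @ Si @ B) @ B.T @ Si @ u
FAR = c * u @ Si @ u
JGLS = FAR - GLS_LM
print(f"FM-LM={FM_LM:.2f}  GLS-LM={GLS_LM:.2f}  FAR={FAR:.2f}  JGLS={JGLS:.2f}")
```

Because GLS-LM is the squared length of the projection of Σ̃^(-1/2)(R̄ − β̃λF,0) onto the space spanned by Σ̃^(-1/2)β̃, it can never exceed FAR, which is why JGLS = FAR − GLS-LM is non-negative by construction.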

Finally, the Conditional Likelihood Ratio statistic is presented in this work, specified in Kleibergen (2009) as follows:

CLR = ½[FAR − r(λF,0) + √((FAR + r(λF,0))² − 4 r(λF,0)(FAR − GLS-LM))]    (4.7)

where r(λF,0) is a rank statistic, see Kleibergen (2009),

with JGLS = FAR − GLS-LM.

In order to check the robustness of the different tests mentioned, a simulation experiment is performed. The experimental setup is explained in Appendix A.

4.1 Results of the Simulation

The experiment consists of finding the empirical size and the power of the previously mentioned tests under the null hypothesis H0: λF = λF,0 at the 5% significance level.

To find the asymptotic size of the tests, Figures 4.1 and 4.2 plot the rejection frequencies of the different tests when the value of β is altered by δ. Figure 4.1 confirms that the FM-Wald statistic¹, which is included in the experiment to show its sensitivity to weak correlation, is sensitive to small values of β, giving a rejection frequency higher than 5% that converges to approximately 5% as β grows. The asymptotic rejection frequencies of the FM-LM, GLS-LM, FAR and CLR tests, on the other hand, converge to approximately 5% independently of the value of β.

Figure 4.1: Sensitivity to different values of β.

Now, in order to analyze the power of the tests, different values of λF are used while keeping the value of β constant.

Figure 4.2: Sensitivity to different values of β.

Figure 4.3 plots the power curves of the FM-W, FM-LM and GLS-LM statistics. Supporting the results of Figure 4.1, we can see that the size of the FM-LM and of the GLS-LM is 5%, which enables us to say that neither is size distorted.

¹ Kleibergen (2009) defines this test as FM-Wald = T(R̄ − ιλ̂1 − β̂λF,0)′(ι ⋮ β̂)[(ι ⋮ β̂)′Θ̂(ι ⋮ β̂)]⁻¹(ι ⋮ β̂)′(R̄ − ιλ̂1 − β̂λF,0), where Θ̂ = Ω̂(1 + λ̂F′((1/T)F̈′F̈)⁻¹λ̂F) and Ω̂ = (1/(T − k − 1))(R − F̈β̂)′(R − F̈β̂).

Figure 4.4 shows that the FAR, JGLS and CLR tests are also correctly sized as the true value of λF approaches λF,0.

4.2 Application of the tests.

The test statistics can also be applied to hypotheses specified for a subset of the parameters, i.e. H0: ωF = ωF,0, where λF = (νF ⋮ ωF). The partialled out parameters νF are estimated by maximum likelihood; this method is chosen because the estimated coefficients then do not alter the distribution of the tests.

In order to construct confidence sets for the risk premia estimates we have to specify a range of values of ωF,0. The (1 − α) confidence set contains all values of ωF,0 for which the test does not reject at the α significance level.


Figure 4.3: Power curves.


Chapter 5

Conclusion

When we estimate the risk premia of portfolio returns using a two pass procedure, the consistency of the estimators relies on the strength of the factors being used. Specifically, the risk premia estimator depends on the β's of the assets, so it may be sensitive to the values of the β's, especially small ones, and/or to their number. Furthermore, the inconsistency of the estimator can be aggravated when the portfolio returns are sorted, as in the Fama and French (1992) sorted portfolios, because sorting may induce a certain level of correlation between the average returns and the β's.

Statistical inference on such estimators may be unreliable because of this inconsistency. For instance, the OLS R² may indicate a better fit than the model actually has. Furthermore, tests that depend on the β's estimated in the first pass regression can be size distorted, which leads to wrong inference.

Even when the factors used to model the factor structure of the sorted portfolio returns are weak, one can obtain a large OLS R². What causes this inaccurate measure is that the factor structure remaining in the residuals of the time series regression, i.e. the first pass where the β's of the assets are estimated, is carried into the cross-sectional residuals, affecting their empirical distribution. To judge the robustness of the OLS R², a Principal Component Analysis must therefore be performed to assess the amount of unexplained factor structure left in the residuals, and only then can one consider whether the OLS R² measure is reliable.

Moreover, many test statistics are also sensitive to small β's and/or to the number of β's, because the residuals may fail to converge to normality empirically. The robust tests proposed by Kleibergen (2009) are therefore adequate for statistical inference on estimators from a two pass procedure. The main reason why these tests, i.e. the FM-LM, GLS-LM, JGLS and CLR, are robust is that they use β estimators that do not violate the zero-correlation condition with the average portfolio returns. Thus, these tests do not suffer from size distortion, even when working with sorted portfolios.


Appendix A

Simulation experiment details.

The simulation experiment consists of generating N = 25 portfolio returns, R_p^s, for a period of length T, specified as follows:

R_p^s = μ + F^s θ̈ + e^s    (A.1)

with μ = θ̂λ̂, where θ̂ and λ̂ are the estimates from the FM estimation using the Fama and French factors, i.e. equation (2.3). F^s is a T × K matrix of generated factors with mean zero and covariance matrix equal to the covariance matrix of the Fama and French factors. Furthermore, θ̈ is the matrix with the elements of θ̂ excluding the estimate for the constant term. Finally, e^s stands for the T × N matrix of errors, which are i.i.d. with mean zero and covariance matrix equal to the covariance matrix of the residuals from the FM estimation using the Fama and French factors.

With the generated data, different regressions specified as in (A.2) are run. Various factors Z, either strong or weak (useless), are included in the different specifications, and the number of factors per specification also varies. Afterwards, the R² specified in (3.5) is computed for each specification.

θ̂^s = (Z′Z)⁻¹Z′R_p^s    (A.2)

Furthermore, a PCA is performed on the residuals resulting from

ê^s = R_p^s − Zθ̂^s    (A.3)
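The steps (A.1)-(A.3) can be sketched as follows; since the FM estimates from the actual Fama-French data are not reproduced here, all parameter values below are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(6)
T, N, K = 224, 25, 3

# Stand-ins for the FM estimates of (2.3); in the thesis these come from the data
theta_hat = rng.normal(1.0, 0.3, size=(K, N))     # betas, constant excluded
mu = rng.normal(1.0, 0.2, size=N)                 # stand-in for mu = theta_hat lambda_hat
Sigma_F = np.diag([4.0, 1.0, 1.0])                # stand-in factor covariance matrix
Sigma_e = np.eye(N)                               # stand-in residual covariance matrix

# (A.1): generate returns with zero-mean factors and i.i.d. errors
Fs = rng.multivariate_normal(np.zeros(K), Sigma_F, size=T)
es = rng.multivariate_normal(np.zeros(N), Sigma_e, size=T)
Rs = mu + Fs @ theta_hat + es

# (A.2)-(A.3): regress the simulated returns on candidate factors Z, keep residuals
Z = np.column_stack([Fs[:, 0], rng.normal(size=T)])   # one true + one useless factor
theta_s = np.linalg.solve(Z.T @ Z, Z.T @ Rs)
e_s = Rs - Z @ theta_s

# PCA on the residuals measures the unexplained factor structure
lam = np.sort(np.linalg.eigvalsh(np.cov(e_s, rowvar=False)))[::-1]
print("share of residual variance in 3 largest eigenvalues:", lam[:3].sum() / lam.sum())
```

Varying the composition of Z (all true factors, a mix, or only useless ones) reproduces the qualitative pattern of Chapter 3: the weaker the proxies, the more of the factor structure remains in the residual eigenvalues.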

For Chapter 4 the same data generating process is used, but only one factor of F^s, F^{s,1}, is chosen to generate a new sample of R_p^s. As θ̂, the market excess return β from (2.3), β̂_rw, is taken. Equation (A.4) shows how the simulated portfolios are formed:

R_p^s = (ιλ̂1 + β̂_rw λF,0) + F^{s,1}β̂_rw + e^s    (A.4)

Then the different tests explained in Chapter 4, i.e. (4.4), (4.5), (4.6) and (4.7), are applied to the simulated model under the null hypothesis H0: λF = λF,0 at the 5% significance level.

In order to find the asymptotic size of the tests, the value of β̂_rw is varied by δ, after which the rejection frequencies are obtained. To find the power of the tests, a range of values of λF is used and the rejection frequency is computed as well.


Bibliography

Fama, E. and French, K. (1992). The cross-section of expected stock returns. Journal of Finance.

Fama, E. and MacBeth, J. (1973). Risk, return, and equilibrium: Empirical tests. Journal of Political Economy.

Jolliffe, I. (2002). Principal Component Analysis. Springer, 2nd edition.

Kleibergen, F. (2009). Tests of risk premia in linear factor models. Journal of Econometrics.

Kleibergen, F. and Zhan, Z. (2015). Unexplained factors and their effects on second pass R-squared's. Journal of Econometrics.

Lettau, M. and Ludvigson, S. (2001). Resurrecting the (C)CAPM: A cross-sectional test when risk premia are time-varying. Journal of Political Economy.
