Estimating BEKK models with different software packages

Thomas van Wilsum

Student ID: 6338070

Supervisor: Prof. C.G.H. Diks

Bachelor’s thesis Econometrics

Faculty of Economics and Business, University of Amsterdam

June 28, 2013


1. Introduction

Understanding and modelling volatility is one of the most important aims in financial time series analysis. To model univariate time series, Generalized Autoregressive Conditional Heteroskedasticity (GARCH) models are widely used (Bollerslev, 1986). The development of multivariate GARCH (MGARCH) models made it possible to model covariances between financial returns as well. This has led to important applications such as finding optimal hedge ratios; see Brooks et al. (2002). Much research has been done on the theoretical aspects of these multivariate models (see, e.g., Bauwens et al., 2006), but very little is known about practical aspects such as the estimation procedures used by software packages.

At the moment several software packages are available that are capable of estimating an important MGARCH model, the BEKK model; the acronym comes from work on MGARCH models by Baba, Engle, Kraft and Kroner. Although these software packages all estimate the same model, different packages can produce different parameter estimates; see Brooks et al. (2003). In that article the differences in estimates are only pointed out; they are not examined further, nor is the question answered which package is most accurate. The intention of this thesis is to assess the computational accuracy and performance of two packages for the estimation of bivariate BEKK models. Diagonal as well as full BEKK models will be considered.

In this thesis the software packages EViews and R are examined. Similar research has been done by Brooks et al. (2003), who examined the differences in parameter estimates of multivariate GARCH models for five different software packages. They estimated a diagonal VECH model, using GAUSS-FANPAC, RATS, SAS and S-PLUS, among others, and obtained large differences in parameter estimates and standard errors.

Several aspects of the algorithms contained in the pre-programmed routines of the software packages are evaluated to see how they influence the parameter estimates. Different algorithms lead to different estimates, and these differences are compared. To choose the best performing package the likelihood achieved by each package is computed.

Empirical data as well as data from a Monte Carlo simulation are used to estimate the BEKK models. The empirical data contains the daily returns of the NYSE and AEX over the last five years. For the Monte Carlo simulation different data generating processes are used.

The remainder of this thesis is organized as follows. In Section 2, univariate and multivariate GARCH models are explained, as well as methods to estimate them; several optimization routines used by the packages are also briefly discussed there. Section 3 describes the software packages used, the data employed and the criteria used to compare the performance of the two packages. Section 4 presents the results and their analysis. Section 5 concludes.

2. Models and estimation

2.1. Univariate GARCH model

For a good understanding of the multivariate GARCH model it is essential to understand the univariate version of the model. Firstly, the simple univariate ARCH(1) model is illustrated. Heij et al. (2004) propose the following model for the volatility of daily Dow-Jones returns, where $y_t$ denotes the daily return at time $t$:

$$y_t = \mu + \epsilon_t, \qquad \epsilon_t \mid Y_{t-1} \sim N(0, \sigma_t^2), \qquad \sigma_t^2 = \alpha_0 + \alpha_1 \epsilon_{t-1}^2.$$

Here $\operatorname{var}(y_t \mid Y_{t-1}) = \sigma_t^2$ represents the conditional volatility of the daily returns and $Y_{t-1} = \{y_{t-1}, y_{t-2}, \ldots\}$ denotes the information set available at time $t-1$. The mean $\mu$ is an unknown constant that can be replaced by the sample mean $\bar{y}$; see Heij et al. (2004).

This model can be extended to an ARCH(p) model by adding more lagged squared innovations in the conditional variance equation; see Engle (1982).

Bollerslev (1986) generalized the ARCH(p) model by adding lagged values of the conditional variance to the conditional variance equation. In this way the GARCH(p,q) model becomes

$$\sigma_t^2 = \alpha_0 + \alpha_1 \epsilon_{t-1}^2 + \cdots + \alpha_q \epsilon_{t-q}^2 + \beta_1 \sigma_{t-1}^2 + \cdots + \beta_p \sigma_{t-p}^2.$$
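To make the recursion concrete, the sketch below simulates a GARCH(1,1) path in base R. The parameter values are illustrative choices satisfying $\alpha_1 + \beta_1 < 1$, not estimates from this thesis.

```r
# Simulate a GARCH(1,1) path (illustrative parameter values)
set.seed(1)
n.obs  <- 1000
mu     <- 0.05                               # hypothetical constant mean
alpha0 <- 0.1; alpha1 <- 0.1; beta1 <- 0.8   # alpha1 + beta1 < 1
sigma2 <- numeric(n.obs); eps <- numeric(n.obs)
sigma2[1] <- alpha0 / (1 - alpha1 - beta1)   # start at the unconditional variance
eps[1]    <- sqrt(sigma2[1]) * rnorm(1)
for (t in 2:n.obs) {
  sigma2[t] <- alpha0 + alpha1 * eps[t - 1]^2 + beta1 * sigma2[t - 1]
  eps[t]    <- sqrt(sigma2[t]) * rnorm(1)    # eps_t | Y_{t-1} ~ N(0, sigma2_t)
}
y <- mu + eps                                # simulated daily returns
```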

2.2 Multivariate GARCH model

The univariate GARCH model can be extended to a multivariate model. See Engle and Kroner (1995) for a more detailed description of multivariate GARCH models. In this thesis bivariate GARCH models are estimated.


In the bivariate case the time series $y_t$ from the previous section becomes a 2x1 vector $\boldsymbol{y}_t$ containing two different time series. The conditional variance of $\boldsymbol{y}_t$ and $\boldsymbol{\epsilon}_t$ is no longer a scalar but a 2x2 covariance matrix $H_t$, which depends on the information set $Y_{t-1}$. In the bivariate case the model becomes

$$\boldsymbol{\epsilon}_t \mid Y_{t-1} \sim N(0, H_t) \tag{1}$$

where

$$\boldsymbol{\epsilon}_t = \begin{pmatrix} \epsilon_{1,t} \\ \epsilon_{2,t} \end{pmatrix}, \qquad Y_{t-1} = \left\{ \begin{pmatrix} y_{1,t-1} \\ y_{2,t-1} \end{pmatrix}, \begin{pmatrix} y_{1,t-2} \\ y_{2,t-2} \end{pmatrix}, \ldots \right\}, \qquad H_t = \begin{pmatrix} h_{11,t} & h_{12,t} \\ h_{21,t} & h_{22,t} \end{pmatrix}.$$

Several specifications of the covariance matrix $H_t$ exist, resulting in different MGARCH models. See Bauwens et al. (2006) for an overview of different models.

Here the BEKK model of Engle and Kroner (1995) is used, because under this specification the covariance matrix is always positive definite. For simplicity K, p and q are set equal to one and exogenous variables are omitted. This gives the following model:

$$H_t = C_0' C_0 + A' \boldsymbol{\epsilon}_{t-1} \boldsymbol{\epsilon}_{t-1}' A + G' H_{t-1} G \tag{2}$$

with $C_0$, $A$ and $G$ 2x2 parameter matrices, where $C_0$ is upper triangular. For identification of the model the parameters $c_{11}$, $c_{22}$, $a_{11}$ and $g_{11}$ are all restricted to be positive. For covariance stationarity of the process the eigenvalues of $A+G$ need to be less than one in modulus; see Engle and Kroner (1995). The diagonal BEKK model will also be considered; it differs from the full BEKK model only in that $A$ and $G$ are diagonal matrices, which reduces the total number of parameters to estimate by four.
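As a small illustration, the base R snippet below evaluates one step of the recursion (2) and applies the stationarity check stated above; the parameter matrices are hypothetical, chosen only for illustration.

```r
# One step of the BEKK recursion (2) and the stationarity check on A + G
# (hypothetical parameter matrices)
C0 <- matrix(c(0.10, 0.05,
               0.00, 0.08), 2, 2, byrow = TRUE)   # upper triangular
A  <- matrix(c(0.30, 0.05,
              -0.10, 0.25), 2, 2, byrow = TRUE)
G  <- matrix(c(0.60, 0.02,
               0.03, 0.55), 2, 2, byrow = TRUE)
eps.prev <- c(0.01, -0.02)                        # epsilon_{t-1}
H.prev   <- diag(2) * 0.001                       # H_{t-1}
H <- crossprod(C0) + t(A) %*% tcrossprod(eps.prev) %*% A + t(G) %*% H.prev %*% G
all(Mod(eigen(A + G)$values) < 1)                 # stationarity condition used here
```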

2.3 Maximum likelihood estimation

For estimation of the parameters of the BEKK model the method of maximum likelihood is used. The conditional distribution of $\boldsymbol{\epsilon}_t$ is normal (see (1)), so the likelihood contribution of observation $t$ is

$$L_t(\theta) = \frac{1}{(2\pi)^{n/2} \left(\det H_t\right)^{1/2}} \, e^{-\frac{1}{2} \boldsymbol{\epsilon}_t' H_t^{-1} \boldsymbol{\epsilon}_t}.$$

Assuming the observations are independent random variables, the likelihood for the sample becomes

$$L(\theta) = \prod_{t=1}^{T} \frac{1}{(2\pi)^{n/2} \left(\det H_t\right)^{1/2}} \, e^{-\frac{1}{2} \boldsymbol{\epsilon}_t' H_t^{-1} \boldsymbol{\epsilon}_t}$$

where $T$ denotes the number of observations in the sample and $n$ denotes the dimension of the MGARCH model. For $H_1$ the unconditional variance-covariance matrix of the data is used. Setting $n=2$ (the bivariate case) and rearranging gives the log-likelihood function

$$\ell(\theta) = \log L(\theta) = \sum_{t=1}^{T} \left[ -\log(2\pi) - \tfrac{1}{2} \log \det H_t - \tfrac{1}{2} \boldsymbol{\epsilon}_t' H_t^{-1} \boldsymbol{\epsilon}_t \right] \tag{3}$$

The function $\ell(\theta)$ now has to be maximized with respect to the 11 parameters $a_{11}$, $a_{12}$, $a_{21}$, $a_{22}$, $g_{11}$, $g_{12}$, $g_{21}$, $g_{22}$, $c_{11}$, $c_{12}$ and $c_{22}$.
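A minimal base R sketch of how (3) can be evaluated, assuming eps is a T x 2 matrix of residuals; this illustrates the formula and is not the internal code of either package, and the parameter ordering in theta is a choice made here.

```r
# Log-likelihood (3) for the full bivariate BEKK(1,1) -- sketch.
# eps: T x 2 matrix of residuals.
# theta: (c11, c12, c22, a11, a12, a21, a22, g11, g12, g21, g22).
bekk_loglik <- function(theta, eps) {
  C0 <- matrix(c(theta[1], theta[2],
                 0,        theta[3]), 2, 2, byrow = TRUE)  # upper triangular
  A  <- matrix(theta[4:7],  2, 2, byrow = TRUE)
  G  <- matrix(theta[8:11], 2, 2, byrow = TRUE)
  H  <- cov(eps)                       # H_1: unconditional covariance matrix
  ll <- 0
  for (t in 1:nrow(eps)) {
    if (t > 1)                         # BEKK recursion, equation (2)
      H <- crossprod(C0) + t(A) %*% tcrossprod(eps[t - 1, ]) %*% A +
           t(G) %*% H %*% G
    ll <- ll - log(2 * pi) - 0.5 * log(det(H)) -
               0.5 * drop(eps[t, ] %*% solve(H) %*% eps[t, ])
  }
  ll
}
```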

3. Packages, data and comparison criteria

The BEKK model and the method of estimation were explained in the previous section. For the estimation two software packages are used. The first is the mgarchBEKK package, which can only be used in R version 2.12.0. This package is not included in the standard installation of R 2.12.0 and has to be downloaded and installed before it can be used (see Note 1). The second package is the bv_garch package in EViews version 6, which is already included in EViews (see Note 2). Both packages contain pre-programmed routines to estimate BEKK models.
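For reference, a typical estimation call in R looks roughly as follows. The BEKK() interface shown here is based on the mgarchBEKK documentation and may differ between versions, so treat the signature and field names as assumptions and check ?BEKK in the installed package.

```r
# Assumed mgarchBEKK usage -- verify against ?BEKK in the installed version.
library(mgarchBEKK)
# eps: a T x 2 matrix of (demeaned) returns
fit <- BEKK(eps, order = c(1, 1), method = "BFGS")
fit$est.params   # estimated C, A and G matrices (assumed field name)
```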

To estimate bivariate BEKK models with these packages, two simultaneously observed time series are needed. For that purpose the daily index levels of the AEX and NYSE over the last five years are used. The dataset contains 1305 daily observations spanning the period April 24, 2008 - April 24, 2013. The data is transformed from daily index levels to daily returns: first the logarithm of each index is taken, and then differences of these logarithms give the daily returns. For example, let $\text{AEX}_t$ denote the AEX index at time $t$; the daily return of the AEX at time $t$ is then $\log(\text{AEX}_t) - \log(\text{AEX}_{t-1})$. Once the data is transformed in this way and read into the programs, the BEKK models can be estimated.
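In R this transformation is one line per series; the vector names below (index.aex, index.nyse) are placeholders for the imported index levels.

```r
# Daily log returns from index levels: r_t = log(P_t) - log(P_{t-1})
ret.aex  <- diff(log(index.aex))          # index.aex: AEX closing levels
ret.nyse <- diff(log(index.nyse))         # index.nyse: NYSE closing levels
eps <- cbind(ret.aex  - mean(ret.aex),    # demean with the sample mean,
             ret.nyse - mean(ret.nyse))   # as in Section 2.1
```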

The models are estimated in two ways. Firstly, the empirical dataset described above is used to estimate the model; in this case the data generating process is unknown. Secondly, data from a Monte Carlo simulation is used, in which case the data generating process is known. The data from this simulation is used to (re)estimate the model. To obtain more reliable results multiple datasets are generated and estimated.
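A hand-rolled sketch of such a simulation, drawing $\boldsymbol{\epsilon}_t \sim N(0, H_t)$ via a Cholesky factor of $H_t$; the diagonal specification shown reads the parameter vector used in Section 4 in the ordering of Table 3, which is an assumption of this sketch.

```r
# Simulate one dataset from a bivariate BEKK(1,1) DGP -- sketch.
simulate_bekk <- function(n, C0, A, G) {
  eps <- matrix(0, n, 2)
  H   <- crossprod(C0)                            # start value for H_1
  for (t in 1:n) {
    if (t > 1)                                    # recursion (2)
      H <- crossprod(C0) + t(A) %*% tcrossprod(eps[t - 1, ]) %*% A +
           t(G) %*% H %*% G
    eps[t, ] <- drop(t(chol(H)) %*% rnorm(2))     # eps_t ~ N(0, H_t)
  }
  eps
}
# Diagonal DGP read from the vector (1.0, 0.8, 1.0, 0.5, 0.3, 0.4, 0.8)
C0 <- matrix(c(1.0, 0.8,
               0.0, 1.0), 2, 2, byrow = TRUE)
A  <- diag(c(0.5, 0.3))
G  <- diag(c(0.4, 0.8))
set.seed(42)
eps.sim <- simulate_bekk(1000, C0, A, G)
```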


The parameter estimates given by EViews and R are compared. If there are differences between them, the preferred estimates are identified as those achieving the highest likelihood or the lowest AIC value.

Differences in the estimates result from differences in the algorithms used by the packages. The likelihood function of Section 2.3 is rather complex, so problems can arise during its optimization: for example, the routine may find only a local maximum, or convergence may not occur at all.

There are many points on which the algorithms can differ, but the packages do not give sufficient information to compare them all. Two important aspects are considered: the numerical optimization technique used and the starting values of the parameters in both packages. Examining these aspects does not fully explain possible differences in estimated parameters, but it does give an indication of the impact of these two aspects on the estimates.

4. Results and analysis

Before estimating the empirical dataset the optimization routines and starting values of each package are examined. Both packages offer several optimization routines: in R, Nelder-Mead, BFGS, CG, L-BFGS-B and SANN are available, and in EViews either Marquardt or BHHH can be chosen. The default settings of EViews and R are Marquardt and BFGS, respectively; it is plausible that these routines are the most robust, as they would otherwise probably not have been chosen as defaults. To check the impact of different optimization routines on the likelihood of the estimated parameters, multiple datasets are simulated and then estimated using the different routines. Five simulations (all with different DGPs) of 1000 observations each are made and estimated using all available optimization methods in the packages. The mean log-likelihood over these five datasets per optimization routine is shown in Table 1 below.


Table 1. Mean log-likelihood for different optimization routines

Panel A: Methods in R

  Method        Log-likelihood
  Nelder-Mead   -3163.178
  BFGS          -3079.444
  CG            -3246.349
  L-BFGS-B      -3123.221
  SANN          -3123.303

Panel B: Methods in EViews

  Method        Log-likelihood
  Marquardt     -3147.025
  BHHH          -3316.168

For these datasets BFGS is the best method in R, and in EViews Marquardt has to be chosen for optimal estimates. Both the Marquardt and BFGS routines use numerical approximations, so neither has the advantage of analytical derivatives and Hessians.
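In R these routines are the method choices of optim(); a comparison like the one in Table 1 can be scripted along the following lines, maximizing the log-likelihood sketched in Section 2.3 (bekk_loglik, eps.sim and the starting vector theta0 are the placeholder names from the earlier sketches).

```r
# Compare optim() methods on the negative BEKK log-likelihood -- sketch.
theta0  <- c(1, 0.8, 1, 0.5, 0, 0, 0.3, 0.4, 0, 0, 0.8)   # start near the DGP
methods <- c("Nelder-Mead", "BFGS", "CG", "L-BFGS-B", "SANN")
fits <- lapply(methods, function(m)
  optim(theta0, function(th) -bekk_loglik(th, eps.sim), method = m))
setNames(sapply(fits, function(f) -f$value), methods)     # log-likelihood per method
```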

In R as well as in EViews it is possible to change the initial parameter vector. The default initial parameter vector in R for the diagonal BEKK model is (1, 0, 1, 0, 0.1, 0.1, 0.1). In EViews initial values are obtained by first estimating two separate univariate GARCH models and then using the square roots of the estimated coefficients as starting values. A dataset of 5000 observations is now simulated according to a diagonal BEKK model with parameter vector (1.0, 0.8, 1.0, 0.5, 0.3, 0.4, 0.8). This dataset is used to estimate diagonal BEKK models with different initial values. The results are shown in Table 2.

Table 2. Log-likelihood for different initial parameter values

  Initial values             Log-likelihood EViews                           Log-likelihood R
  Default values of R        -3887.770                                       -3755.064
  Default values of EViews   -3815.29                                        -3750.150
  (1,1,1,1,1,1,1)            Convergence not achieved within 100 iterations  -3772.154
  (1,1,1,5,1,5,1)            Convergence not achieved within 100 iterations  Convergence not achieved within 100 iterations
As can be seen from Table 2, the estimates in both packages are very sensitive to the initial parameter values. If the starting values differ too much from the parameter values of the data generating process, convergence fails altogether in both packages. When this occurred, convergence was still attempted by raising the maximum number of iterations, but this did not improve the solution: raising the maximum number of iterations in both packages to 500 and re-estimating results in error warnings about failure to improve the likelihood in both packages.

Both packages achieve a higher likelihood with the EViews default initial values than with the R defaults. Although this indicates that the default settings of EViews are better than those of R, each package is nevertheless run with its own default initial parameter values. This is done for simplicity of estimation and because most researchers simply employ the default settings when using a package.

It is important to use the same convergence criterion and the same maximum number of iterations in both packages; if not, one package may appear more accurate purely because of these settings. In all further estimations the default settings of EViews are employed, meaning that both packages use a maximum of 100 iterations and a convergence criterion of 1e-5.
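In optim() terms these settings correspond roughly to the control list below; the mapping of EViews' convergence criterion onto optim's reltol is an assumption, since the two programs define convergence differently.

```r
# Align the iteration cap and tolerance across runs -- assumed mapping.
fit <- optim(theta0, function(th) -bekk_loglik(th, eps.sim),
             method  = "BFGS",
             control = list(maxit = 100, reltol = 1e-5))
fit$convergence   # 0: converged; 1: the iteration limit was reached
```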

The empirical dataset can now be estimated with all package settings at the defaults described above. Both the diagonal and the full BEKK model are estimated. Using the AEX and NYSE dataset to estimate the diagonal BEKK model gives the results shown in Table 3.

Table 3. Parameter estimates (standard errors) and AIC values for the empirical dataset, diagonal BEKK model

  Parameter   EViews                 R
  c11         0.001760 (0.000179)    0.009459127 (0.0005844674)
  c12         0.001370 (0.000201)    0.004489163 (0.0003744272)
  c22         0.001257 (0.000114)    -0.00001358288 (0.0003395681)
  a11         0.956290 (0.004159)    0.5004086 (0.03011511)
  a22         0.939117 (0.005656)    0.4710771 (0.02229981)
  g11         0.268825 (0.013319)    -0.6377485 (0.03364818)
  g22         0.325069 (0.015851)    -0.8491155 (0.01256190)
  AIC         -7719.279              -7796.206

The differences between these parameter estimates are remarkably large. For example, the c11 coefficient is around 0.0018 in EViews and 0.0095 in R, so the R estimate of c11 is more than five times the EViews estimate. The values of all estimated parameters are larger in EViews than in R except for the c11 and c22 coefficients. There are also big differences in the standard errors between the packages: the standard errors in R are larger than those in EViews for all coefficients except g22. For both packages the standard errors of the coefficients of the C matrix are smaller than those of the coefficients of the A and G matrices. To check covariance stationarity the eigenvalues of A+G are computed; these eigenvalues are 0.972541 and -0.0925411, so the necessary and sufficient conditions for covariance stationarity are satisfied.

To decide which package performs best in this case, the Akaike information criterion (AIC) is used. The AIC values are presented in the bottom row of Table 3. The best model according to Akaike's information criterion is the one with the lowest AIC value, so for this dataset R is the best performing package.
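With k estimated parameters and maximized log-likelihood l, the AIC used here is 2k - 2l; a quick check in R, where the log-likelihood value is back-computed from Table 3 for illustration:

```r
aic <- function(loglik, k) 2 * k - 2 * loglik
# The diagonal BEKK has k = 7 parameters; a log-likelihood of about 3866.64
# (implied by Table 3) reproduces the EViews AIC of -7719.28.
aic(3866.64, 7)
```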

Estimating a full BEKK model with the two software packages gives the results shown in Table 4. Parameter estimates are still very different between the packages, as are the standard errors. Both AIC values in Table 4 are smaller than those in Table 3, so the contribution of the improved likelihood to the AIC value outweighs the penalty for adding four more parameters to the model. Also for the full BEKK model, R is again the best performing package, as can be seen from its lower AIC value in Table 4.


Table 4. Parameter estimates (standard errors) and AIC values for the empirical dataset, full BEKK model

  Parameter   EViews                  R
  c11         0.001337 (0.001688)     0.006350106 (0.0004555903)
  c12         0.001135 (0.009412)     0.0026576033 (0.0008373669)
  c22         0.001881 (0.015296)     0.0009762108 (0.0006997202)
  a11         0.283663 (0.012586)     0.2671173 (0.06073940)
  a12         0.285079 (0.005335)     -0.1696093 (0.07426896)
  a21         -0.201540 (0.000892)    -0.4612097 (0.06297176)
  a22         0.042235 (0.001376)     0.3060752 (0.06771544)
  g11         0.916709 (0.005306)     -0.7579141 (0.05087004)
  g12         0.938531 (0.017703)     -1.290825 (0.04180496)
  g21         0.080600 (0.004728)     1.2047835 (0.04678578)
  g22         -0.002133 (0.013172)    1.265455 (0.04537794)
  AIC         -7814.301               -7931.009


Now some estimation results are presented for data from a known data generating process. The parameter vector used to simulate the data is (1.0, 0.8, 1.0, 0.5, 0.3, 0.4, 0.8); the order of its elements follows the order of the parameters in Table 3. 1000 simulations were made, each containing 1000 observations, so the BEKK model is estimated 1000 times in EViews and in R. For all estimations the log-likelihood is saved and summarized in a histogram; a graph of these log-likelihoods is also made. See Table 5 and Table 6, respectively.

Table 5. Histograms of log-likelihood values for the simulated data (Panel A: EViews; Panel B: R). [Histograms omitted; summary statistics below.]

  Statistic      EViews       R
  Observations   1000         1000
  Mean           -3819.539    -3805.705
  Median         -3814.203    -3804.214
  Maximum        -3685.903    -3651.615
  Minimum        -5365.572    -3939.167
  Std. Dev.      82.71251     42.45158
  Skewness       -10.45665    0.036251
  Kurtosis       166.1414     3.180328
  Jarque-Bera    1127186      1.573941
  Probability    0.000000     0.455222


Table 6. Graphs of log-likelihood values for the simulated data (Panel A: EViews; Panel B: R). [Graphs omitted.]

From the histograms it can be seen that the mean log-likelihood in R is larger than in EViews, and that the standard deviation in R is about half that in EViews. From the graphs it can be seen that although both packages fluctuate around nearly the same values, EViews has more outliers.

Table 7 shows a histogram of the differences in log-likelihood between the two packages for each estimation above. These differences, $\Delta_i$, are defined by

$$\Delta_i = \ell_{R,i} - \ell_{E,i}$$

where $\ell_{R,i}$ is the log-likelihood value in R for estimation $i$ and $\ell_{E,i}$ is the log-likelihood value in EViews. As can be seen from the histogram, the mean of this difference is about 13.8.

To point out a better performing package, the differences in log-likelihood have to be significantly different from zero. Therefore the null hypothesis $H_0: E(\Delta_i) = 0$ for all $i$ is tested against the alternative hypothesis $H_a: E(\Delta_i) \neq 0$. To test this hypothesis the differences are regressed on a constant; see Table 8. If this estimated constant is significantly different from zero the null hypothesis can be rejected.
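In R the same test is an intercept-only regression, equivalently a one-sample t-test; ll.r and ll.eviews below are placeholder vectors holding the 1000 log-likelihood values per package.

```r
# Test H0: E(Delta_i) = 0 by regressing the differences on a constant
delta <- ll.r - ll.eviews   # ll.r, ll.eviews: hypothetical result vectors
summary(lm(delta ~ 1))      # the intercept estimate equals mean(delta)
t.test(delta)               # equivalent one-sample t-test of mean zero
```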

Table 7. Histogram of the differences in log-likelihood values between R and EViews. [Histogram omitted; summary statistics below.]

  Observations   1000
  Mean           13.83417
  Median         8.307500
  Maximum        1626.352
  Minimum        -177.0040
  Std. Dev.      94.82753
  Skewness       7.591522
  Kurtosis       110.2643
  Jarque-Bera    489006.3
  Probability    0.000000


Table 8. Regression of the differences in log-likelihoods on a constant

  Method: Least Squares
  Sample: 1 1000 (included observations: 1000)

  Variable   Coefficient   Std. Error   t-Statistic   Prob.
  C          13.83417      2.998710     4.613374      0.0000

  R-squared            0.000000    Mean dependent var      13.83417
  Adjusted R-squared   0.000000    S.D. dependent var      94.82753
  S.E. of regression   94.82753    Akaike info criterion   11.94300
  Sum squared resid    8983267     Schwarz criterion       11.94790
  Log likelihood       -5970.498   Hannan-Quinn criter.    11.94486
  Durbin-Watson stat   2.017943

The test is performed at the 5% significance level. As can be seen from Table 8, the coefficient C is significantly different from zero, so the null hypothesis is rejected. Because the C coefficient is significantly positive, it can be concluded that for this data generating process R achieves significantly higher log-likelihood values and can again be considered the best performing package.

5. Conclusion

The aim of this thesis was to compare the performance of two packages for the estimation of diagonal and full BEKK models. Firstly, the effect of different optimization routines and starting values within the packages was examined. These two aspects turned out to have a great influence on the value of the likelihood, and the default optimization routines were optimal in both packages. It can therefore be concluded that the optimization routine and the initial parameter values have a substantial effect on the estimated parameters. Because of the complexity of the likelihood function, the packages can probably sometimes converge to a local optimum.

When the empirical dataset was employed, considerable differences were found in the parameter estimates. Based on the AIC, the parameter estimates in R are better than those in EViews for both the full and the diagonal BEKK model.


Also in the case where the data generating process is known, R appears to be the best performing package. Over 1000 estimated simulations the mean log-likelihood was significantly larger in R than in EViews, and graphical inspection showed that EViews produced many more outliers with small log-likelihood values.

So R is the best performing package for all datasets employed in this thesis. However, many more datasets and estimations would be needed to definitively identify a best performing package. Only one empirical dataset is used in this thesis; with a different empirical dataset EViews might have come out as the best performing package. And although 1000 simulations are used in the non-empirical case, they are all generated by the same data generating process; different data generating processes might give different results. So it cannot be stated in general that R is the better performing package, but the results of this thesis indicate that R is slightly better, and further examination might validate this result.


Notes

1. R version 2.12.0 can be downloaded from http://cran.r-project.org/bin/windows/base/old/2.12.0/ and the mgarchBEKK package from http://www.quantmod.com/download/mgarchBEKK/.

2. The bv_garch package of EViews 6 can be found in the folder Sample programs under logl.

References

Bauwens, L., Laurent, S. and Rombouts, J.V.K. (2006), Multivariate GARCH models: A survey. Journal of Applied Econometrics, 21, 79-109.

Bollerslev, T. (1986), Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics, 31, 307-328.

Brooks, C., Henry, O.T. and Persand, G. (2002), The effect of asymmetries on optimal hedge ratios. Journal of Business, 75(2), 333-352.

Brooks, C., Burke, S.P. and Persand, G. (2003), Multivariate GARCH models: software choice and estimation issues. Journal of Applied Econometrics, 18, 725-734.

Engle, R.F. (1982), Autoregressive conditional heteroskedasticity with estimates of the variance of United Kingdom inflation. Econometrica, 50, 987-1007.

Engle, R.F. and Kroner, K. (1995), Multivariate simultaneous generalised ARCH. Econometric Theory, 11, 122-150.

Heij, C., de Boer, P., Franses, P.H., Kloek, T. and van Dijk, H.K. (2004), Econometric Methods with Applications in Business and Economics. Oxford University Press, 620-626.
