
Faculty of Economics and Business, Amsterdam School of Economics
Bachelor Thesis

Predictive quality density forecast

Density forecast evaluation for misspecification of the kurtosis

Nathan Bijleveld (10590943)

Supervisor: prof. dr. Cees Diks

December 2015

Abstract

In this paper a modified scoring rule is proposed for misspecification of the kurtosis. The log-likelihood scoring rule is modified such that it becomes more sensitive to the kurtosis; this modification is carried out by adding a penalty term to the logarithmic scoring rule. By analyzing the size and power of the original and the modified test statistic, a conclusion is drawn about the sensitivity with respect to the kurtosis.


Contents

1 Introduction
2 Scoring rules for competing density forecasts
  2.1 Data environment
  2.2 Loss function and scoring rules
  2.3 Logarithmic scoring rule
  2.4 Weighted logarithmic scoring rule
3 Modified logarithmic scoring rule
  3.1 Penalty term
4 Monte Carlo simulation
  4.1 Properties of the test statistic
  4.2 Power of the test statistic
5 Conclusions
References


1 Introduction

In econometrics forecasting is of great importance, since the discipline is concerned with making predictions from existing data. Point forecasts are used very often, but nowadays density forecasts are considered as well. This type of forecasting provides more information about the estimation, because it yields an entire density instead of a single value. Interest in this way of predicting is increasing, as a density forecast conveys more information about the uncertainty of the estimate. This uncertainty is particularly relevant for disciplines such as finance and macroeconomics. Financial risk management is an example where this information is clearly useful, since the left tail of the forecast density contains the losses that an investment may incur. Diks, Panchenko, and Van Dijk (2011) discuss this example and develop a scoring rule to judge density forecasts of the left tail of the distribution of asset returns.

In realistic situations, such as financial risk management, the forecast model might be misspecified. This should be taken into account, since important decisions are based on forecast models; the decisions of governmental institutions, for instance, often rely on them. Giacomini and White (2006) therefore developed a framework for forecast evaluation that remains valid when the model may be misspecified.

Another practical problem is that one often has to choose between two or more (possibly misspecified) models. Such a problem is described by Diebold and Mariano (2012), who compare two models by considering their predictive accuracy, using the conditional mean of each model as the basis for the comparison.

Given these practical problems, it seems plausible that one sometimes has to choose between a model with a correctly specified mean but an incorrectly specified distribution and a model with an incorrectly specified mean but an otherwise correctly specified distribution. In practice many variations are conceivable in which one model has a correctly specified mean but some other characteristic misspecified, while the competing model has an incorrectly specified mean but that characteristic correctly specified. An interesting choice of characteristic is the kurtosis, since it provides useful information about the shape of the distribution.


In this thesis a scoring rule based on the kurtosis is developed, and the question is whether the power of the modified scoring rule indeed increases. Starting from the log-likelihood scoring rule, as described by Diks et al. (2011), the rule is modified such that it becomes more sensitive to the kurtosis. This adaptation involves adding a penalty to the log-likelihood scoring rule, such that it prefers a model with an incorrectly specified mean but correctly specified kurtosis over a model with a correctly specified mean but incorrectly specified kurtosis. First the size of the test statistic of the modified scoring rule is considered, to determine whether the test statistic is well-sized. The statistic also needs to be approximately standard normal, so its mean, variance and a histogram are examined. Finally the power is examined by Monte Carlo simulation, to conclude whether or not the modified scoring rule discredits the model with the incorrectly specified mean.

2 Scoring rules for competing density forecasts

2.1 Data environment

Before the Monte Carlo simulation can be performed, the data environment needs to be specified; only then can a scoring rule be developed. It seems reasonable to use the same data environment as Giacomini and White (2006), since they develop a framework for the case in which the model may be misspecified. That corresponds to the setting of this thesis, in which models with either a misspecified mean or a misspecified kurtosis are considered. Giacomini and White (2006) use the stochastic process $W \equiv \{W_t : \Omega \to \mathbb{R}^{s+1},\ s \in \mathbb{N},\ t = 1, 2, \ldots\}$, which is therefore also used here. This stochastic process can be written as $W = \{W_t : \Omega \to \mathbb{R}^{s+1}\}_{t=1}^{T}$, defined on a complete probability space $(\Omega, \mathcal{F}, P)$, with $W_t = (Y_t, X_t')'$, where $Y_t : \Omega \to \mathbb{R}$ is the random variable of interest and $X_t : \Omega \to \mathbb{R}^{s}$ is a vector of predictor variables. The information set at time $t$ is defined as $\mathcal{F}_t = \sigma(W_1', \ldots, W_t')$.

Subsequently, two model-based density forecasts for $Y_{t+1}$ are considered, each based on $\mathcal{F}_t$. These two predictive densities are denoted by $\hat f_{m,t} \equiv f(W_t, W_{t-1}, \ldots, W_{t-m+1}; \hat\beta_{m,t})$ and $\hat g_{m,t} \equiv g(W_t, W_{t-1}, \ldots, W_{t-m+1}; \hat\beta_{m,t})$ respectively, with $f$ and $g$ measurable probability density functions and $m$ the estimation window size, as proposed by Amisano and Giacomini (2007). They use a rolling window for the out-of-sample evaluation, which proceeds as follows. Starting with a one-step-ahead forecast at time $m$, using the data indexed $1, \ldots, m$, $\hat f(Y_{m+1})$ can be computed. Repeating this step with the data indexed $2, \ldots, m+1$, $\hat f(Y_{m+2})$ can be computed. This process is iterated until a density forecast for $Y_T$ is obtained, using the data indexed $T-m, \ldots, T-1$. Once the density forecasts are obtained, their predictive quality can be studied.
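To make the rolling-window scheme concrete, a minimal Python sketch is given below. It assumes, purely for illustration, a Gaussian predictive density whose mean and variance are re-estimated on each window of length $m$; the thesis does not prescribe a particular model, so the function name and the normal specification are assumptions.

```python
# A minimal sketch of the rolling-window evaluation described above, assuming
# a Gaussian predictive density re-estimated on each window of length m.
import numpy as np
from scipy import stats

def rolling_density_forecasts(y, m):
    """One-step-ahead predictive density values f_hat_t(y_{t+1}) for t = m, ..., T-1."""
    T = len(y)
    densities = np.empty(T - m)
    for i, t in enumerate(range(m, T)):
        window = y[t - m:t]                                   # the m most recent observations
        mu_hat, sigma_hat = window.mean(), window.std(ddof=1)
        densities[i] = stats.norm.pdf(y[t], loc=mu_hat, scale=sigma_hat)
    return densities

rng = np.random.default_rng(0)
y = rng.standard_normal(1200)
f_hat_values = rolling_density_forecasts(y, m=200)            # n = T - m = 1000 values
```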

2.2 Loss function and scoring rules

The out-of-sample forecasts can be evaluated by making use of loss functions, as discussed by Diebold and Lopez (1996). Such a function measures the quality of a prediction: a larger value of the loss function indicates a less accurate forecast, while a value close to zero indicates a very accurate forecast. The loss function is important here because, in the current context, the scoring rule is a loss function, according to Diks et al. (2011). The scoring rule is denoted by $S^{*}(\hat f_t; y_{t+1})$ and depends on the density forecast and on the realized value $y_{t+1}$. The scoring rule has to behave such that, if possible, incorrectly specified density forecasts $\hat f_t$ do not receive higher average scores than the true conditional density. This can be written as follows, where $p_t$ is the true conditional density function,
\[
  E_t\!\left[ S^{*}(\hat f_t; Y_{t+1}) \right] \le E_t\!\left[ S^{*}(p_t; Y_{t+1}) \right] \quad \forall\, t. \tag{1}
\]

If a scoring rule satisfies condition (1) it is called proper, following Gneiting and Raftery (2007). This implies that when a density forecast is closer to the true conditional density function, it receives a higher score.

When two density forecasts need to be compared, a proper scoring rule alone is not enough; a test statistic is needed to test for equal predictive ability. To perform such a test, a score difference has to be defined, as described in Diks et al. (2011),
\[
  d^{*}_{t+1} \equiv S^{*}(\hat f_t; y_{t+1}) - S^{*}(\hat g_t; y_{t+1}). \tag{2}
\]

Using this score difference, the following null hypothesis of equal predictive ability can be formulated,
\[
  H_0 : E[d^{*}_{t+1}] = 0, \quad \forall\, t \in \{m, m+1, \ldots, T-1\}. \tag{3}
\]

Define $\bar d^{*}_{m,n}$ as the sample average of $d^{*}_{t+1}$ for $t \in \{m, \ldots, T-1\}$, thus $\bar d^{*}_{m,n} = n^{-1}\sum_{t=m}^{T-1} d^{*}_{t+1}$ with $n = T - m$, where $m$ denotes the maximum size of the estimation window, since these observations are used to predict $y_{m+1}, \ldots, y_{T}$. According to Diebold and Mariano (2012) the following test statistic can be computed to test $H_0$ against $H_a : E[d^{*}_{t+1}] \neq 0$,
\[
  t_{m,n} = \frac{\bar d^{*}_{m,n}}{\sqrt{\hat\sigma^2_{m,n}/n}}, \tag{4}
\]
where $\hat\sigma^2_{m,n}$ is a heteroskedasticity and autocorrelation consistent (HAC) variance estimator of $\sigma^2_{m,n} = \mathrm{Var}(\sqrt{n}\,\bar d^{*}_{m,n})$, with $\operatorname{plim}(\hat\sigma^2_{m,n} - \sigma^2_{m,n}) = 0$. Referring to Theorem 4 of Giacomini and White (2006), $t_{m,n}$ is asymptotically standard normally distributed as $n \to \infty$. Hence for a two-sided test the null hypothesis is rejected if $\Phi(|t_{m,n}|) > 1 - \frac{\alpha}{2}$, with significance level $\alpha$ and $\Phi$ the standard normal CDF.
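As an illustration of how the statistic in equation (4) is computed from a series of score differences, a Python sketch is given below. The Newey-West kernel and the bandwidth rule are assumptions; the text only requires some HAC-consistent variance estimator.

```python
# A sketch of the test statistic in equation (4); the Newey-West estimator is
# one common HAC choice, not necessarily the one intended in the thesis.
import numpy as np
from scipy import stats

def hac_variance(d, lags=None):
    """Newey-West estimate of Var(sqrt(n) * d_bar) from score differences d."""
    n = len(d)
    lags = int(np.floor(n ** (1 / 3))) if lags is None else lags
    d_c = d - d.mean()
    var = d_c @ d_c / n                                   # lag-0 autocovariance
    for k in range(1, lags + 1):
        gamma_k = d_c[k:] @ d_c[:-k] / n
        var += 2 * (1 - k / (lags + 1)) * gamma_k         # Bartlett weights
    return var

def equal_predictive_ability_test(d, alpha=0.05):
    """Two-sided test of H0: E[d*_{t+1}] = 0, rejecting if Phi(|t|) > 1 - alpha/2."""
    n = len(d)
    t_stat = d.mean() / np.sqrt(hac_variance(d) / n)
    reject = stats.norm.cdf(abs(t_stat)) > 1 - alpha / 2
    return t_stat, reject
```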

2.3 Logarithmic scoring rule

In the previous subsection a test statistic was derived using a generic scoring rule; the next step is to determine an appropriate scoring rule. Following Mitchell and Hall (2005), a logarithmic scoring rule is appropriate, since it assigns a high score to a density forecast if $y_{t+1}$ falls within a region with high predictive density $\hat f_t$, and a low score if $y_{t+1}$ falls within a region with low predictive density $\hat f_t$. They define the logarithmic scoring rule as
\[
  S^{l}(\hat f_t; y_{t+1}) = \log \hat f_t(y_{t+1}). \tag{5}
\]

As described in Diks et al. (2011), two density forecasts $\hat f_t$ and $\hat g_t$ can be compared by ranking them according to their average scores based on the $n$ available observations $y_{m+1}, \ldots, y_{T}$. These average scores for $\hat f_t$ and $\hat g_t$ are given by $n^{-1}\sum_{t=m}^{T-1} \log \hat f_t(y_{t+1})$ and $n^{-1}\sum_{t=m}^{T-1} \log \hat g_t(y_{t+1})$ respectively. Clearly the forecast with the highest average score is preferred.

Intuitively the sample average of the log score differences can be used to perform a test as in equation (3), but now testing whether the log score differences differ significantly from zero, with the log score differences defined as $d^{l}_{t+1} = \log \hat f_t(y_{t+1}) - \log \hat g_t(y_{t+1})$. Following Diks et al. (2011), the logarithmic scoring rule is closely related to the Kullback-Leibler Information Criterion (KLIC), defined as
\[
  \mathrm{KLIC}(\hat f_t) = E_t\!\left[ \log p_t(Y_{t+1}) - \log \hat f_t(Y_{t+1}) \right]
  = \int_{-\infty}^{\infty} p_t(y_{t+1}) \log\!\left( \frac{p_t(y_{t+1})}{\hat f_t(y_{t+1})} \right) dy_{t+1}, \tag{6}
\]
with $p_t$ the true conditional density. They show that the KLIC is bounded from below by zero; since the expected logarithmic score (5) of a forecast is inversely related to its KLIC, the true conditional density attains the highest expected logarithmic score. The KLIC seems useless because $p_t$ is unknown, but Mitchell and Hall (2005) show that it can nevertheless be used to compare the predictive accuracy of $\hat f_t$ and $\hat g_t$. The comparison is made by taking the difference between $\mathrm{KLIC}(\hat g_t)$ and $\mathrm{KLIC}(\hat f_t)$, so that the logarithmic score difference is obtained again, as shown below
\[
  \mathrm{KLIC}(\hat g_t) - \mathrm{KLIC}(\hat f_t)
  = E_t\!\left[ \log p_t(Y_{t+1}) - \log \hat g_t(Y_{t+1}) \right] - E_t\!\left[ \log p_t(Y_{t+1}) - \log \hat f_t(Y_{t+1}) \right]
  = E_t\!\left[ \log \hat f_t(Y_{t+1}) - \log \hat g_t(Y_{t+1}) \right]
  = E_t\!\left[ d^{l}_{t+1} \right]. \tag{7}
\]

Following Diks et al. (2011), equation (4) becomes the test statistic
\[
  t^{l}_{m,n} = \frac{\bar d^{l}_{m,n}}{\sqrt{\hat\sigma^2_{m,n}/n}}, \tag{8}
\]
with $\bar d^{l}_{m,n}$ the sample average of $d^{l}_{t+1}$ and $H_0 : E[d^{l}_{t+1}] = 0$ for all $t = m, \ldots, T-1$.
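For concreteness, the log score differences $d^{l}_{t+1}$ can be computed as follows for two fixed Gaussian forecasts; the forecast means (0 and 0.2) and the standard normal draws are illustrative assumptions, and with independent draws the sample variance can stand in for the HAC estimator.

```python
# Illustrative log score differences d^l_{t+1} for two fixed Gaussian forecasts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y = rng.standard_normal(1000)                          # realisations y_{t+1}
log_f = stats.norm.logpdf(y, loc=0.0, scale=1.0)       # S^l(f_hat; y_{t+1})
log_g = stats.norm.logpdf(y, loc=0.2, scale=1.0)       # S^l(g_hat; y_{t+1})
d_l = log_f - log_g                                    # log score differences

# with independent draws the sample variance is a valid HAC estimator
t_stat = d_l.mean() / np.sqrt(d_l.var(ddof=1) / len(d_l))
```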

2.4 Weighted logarithmic scoring rule

As discussed in the introduction, density forecasts are often used in disciplines such as finance and macroeconomics. In these disciplines one may be interested mainly in specific regions of the distribution, such as the tails. Therefore a weighted scoring rule can be used. Amisano and Giacomini (2007) propose the weighted logarithmic scoring rule
\[
  S^{wl}(\hat f_t; y_{t+1}) = w_t(y_{t+1}) \log \hat f_t(y_{t+1}). \tag{9}
\]
Again a test statistic, sample average and null hypothesis can be derived as in the previous subsection. Following the same approach, the sample average with respect to the weighted logarithmic scoring rule is $\bar d^{wl}_{m,n} = n^{-1}\sum_{t=m}^{T-1} d^{wl}_{t+1}$, with
\[
  d^{wl}_{t+1} = S^{wl}(\hat f_t; y_{t+1}) - S^{wl}(\hat g_t; y_{t+1})
  = w_t(y_{t+1}) \left( \log \hat f_t(y_{t+1}) - \log \hat g_t(y_{t+1}) \right). \tag{10}
\]

The test statistic is given by
\[
  t^{wl}_{m,n} = \frac{\bar d^{wl}_{m,n}}{\sqrt{\hat\sigma^2_{m,n}/n}}, \tag{11}
\]
with $H_0 : E[d^{wl}_{t+1}] = 0$ for all $t \in \{m, \ldots, T-1\}$, and $t^{wl}_{m,n}$ is still asymptotically standard normally distributed as $n \to \infty$ with $m$ fixed, just as in equation (4).
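The weight function $w_t$ is left unspecified here. The sketch below uses a left-tail indicator weight, in the spirit of the threshold weights of Diks et al. (2011); this particular weight and the two Gaussian forecasts are illustrative assumptions (the Monte Carlo experiments in Section 4 use $w_t(y) = |y|$ instead).

```python
# One possible weighted log score difference per equation (10); the left-tail
# indicator weight and the Gaussian forecasts are illustrative assumptions.
import numpy as np
from scipy import stats

def weighted_logscore_diff(y, logpdf_f, logpdf_g, weight):
    """d^{wl}_{t+1} = w(y) * (log f_hat(y) - log g_hat(y))."""
    return weight(y) * (logpdf_f(y) - logpdf_g(y))

rng = np.random.default_rng(2)
y = rng.standard_normal(1000)
d_wl = weighted_logscore_diff(
    y,
    lambda v: stats.norm.logpdf(v, loc=0.0, scale=1.0),
    lambda v: stats.norm.logpdf(v, loc=0.2, scale=1.0),
    lambda v: (v <= -1.0).astype(float),               # weight only the left tail
)
```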

3 Modified logarithmic scoring rule

In the previous section the logarithmic scoring rule was described. This scoring rule can be modified so that it becomes more sensitive to the kurtosis. As described in the introduction, the kurtosis is considered because it provides useful information about the shape of the distribution. The logarithmic scoring rule is modified by adding a penalty to $S^{wl}(\hat f_t; y_{t+1})$. This penalty needs to be a term that depends on the kurtosis of $\hat f_t(y_{t+1})$. Hence the penalty has to behave such that the modified logarithmic scoring rule gives a lower score to a forecast with a correctly specified mean and an incorrectly specified kurtosis, while a forecast with an incorrectly specified mean and a correctly specified kurtosis receives a higher score, due to the added penalty.


3.1 Penalty term

When a penalty term based on the kurtosis has to be constructed, it is helpful to start from the definition of the kurtosis of a random variable $X$,
\[
  \kappa \equiv \mathrm{Kurt}[X] = \frac{E\!\left[ (X - E[X])^4 \right]}{\sigma^4}, \tag{12}
\]
with $\sigma^2$ the variance of $X$. Note that in this thesis $\kappa$ denotes the kurtosis and in no way refers to a cumulant. The kurtosis is the fourth standardized central moment, that is, the fourth central moment divided by the squared variance. An expression involving the fourth central moment can therefore be used as penalty term, since it determines the value of the kurtosis. As described previously, the modified logarithmic scoring rule needs to be sensitive to an incorrectly specified kurtosis; hence the modified logarithmic scoring rule can now be constructed. Define

\[
  S^{\kappa l}(\hat f_t; y_{t+1}) \equiv S^{wl}(\hat f_t; y_{t+1})
  - \left| \left( y_{t+1} - E_{\hat f_t}(y_{t+1}) \right)^4
  - E_{\hat f_t}\!\left[ \left( y_{t+1} - E_{\hat f_t}(y_{t+1}) \right)^4 \right] \right|^{1/4} \tag{13}
\]

as the modified logarithmic scoring rule. The penalty term consists of the absolute value of the difference between the realized and the theoretical fourth central moment, raised to the power $1/4$; for the unit-variance forecasts considered in this thesis this fourth central moment equals the kurtosis. Since both the realized and the theoretical fourth moment are included in the penalty term, the modified scoring rule is more sensitive to differences between them, which means that it is more sensitive to an incorrectly specified kurtosis. Following equation (10), score differences $d^{\kappa l}_{t+1}$ can be computed for the new scoring rule,
\[
  d^{\kappa l}_{t+1} = S^{\kappa l}(\hat f_t; y_{t+1}) - S^{\kappa l}(\hat g_t; y_{t+1})
  = d^{wl}_{t+1}
  + \left| \left( y_{t+1} - E_{\hat g_t}(y_{t+1}) \right)^4 - \mu_{4,\hat g_t}(t+1) \right|^{1/4}
  - \left| \left( y_{t+1} - E_{\hat f_t}(y_{t+1}) \right)^4 - \mu_{4,\hat f_t}(t+1) \right|^{1/4}, \tag{14}
\]
with $\mu_{4,\hat f_t}(t+1)$ and $\mu_{4,\hat g_t}(t+1)$ equal to the fourth central moment of $\hat f_t$ and $\hat g_t$ at time $t+1$ respectively.
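Equation (14) translates directly into code once the predictive mean and fourth central moment of each forecast are known in closed form, as they are for the parametric forecasts of Section 4. The sketch below is such a transcription; the function names are of course not taken from the thesis.

```python
# Score differences of the modified rule, equation (14); mean_f, mu4_f, mean_g,
# mu4_g are the predictive means and fourth central moments of f_hat and g_hat.
import numpy as np

def kurtosis_penalty(y, mean, mu4):
    """Penalty term of equation (13): |(y - E[Y])^4 - mu_4|^(1/4)."""
    return np.abs((y - mean) ** 4 - mu4) ** 0.25

def d_kappa_l(y, d_wl, mean_f, mu4_f, mean_g, mu4_g):
    """d^{kappa l}_{t+1} = d^{wl}_{t+1} + penalty(g_hat) - penalty(f_hat)."""
    return d_wl + kurtosis_penalty(y, mean_g, mu4_g) - kurtosis_penalty(y, mean_f, mu4_f)
```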

Before the modified scoring rule can be used to test for misspecification of the kurtosis, it has to be shown that an expression as in equation (11) can still be used. Therefore $E[d^{wl}_{t+1}] = E[d^{\kappa l}_{t+1}]$ has to hold, so that the null hypothesis of equal predictive quality carries over. Using the condition that
\[
  \left( y_{t+1} - E_{\hat g}(y_{t+1}) \right)^4 \sim \left( y_{t+1} - E_{\hat f}(y_{t+1}) \right)^4 \tag{15}
\]
the two expectations in the following equation cancel out,
\[
  E\!\left[ d^{\kappa l}_{t+1} \right]
  = E\!\left[ d^{wl}_{t+1} \right]
  + E\!\left[ \left| \left( y_{t+1} - E_{\hat g}(y_{t+1}) \right)^4 - \mu_{4,\hat g}(t+1) \right|^{1/4} \right]
  - E\!\left[ \left| \left( y_{t+1} - E_{\hat f}(y_{t+1}) \right)^4 - \mu_{4,\hat f}(t+1) \right|^{1/4} \right]
  = E\!\left[ d^{wl}_{t+1} \right]. \tag{16}
\]

Hence, under $H_0 : E[d^{\kappa l}_{t+1}] = 0$ for all $t \in \{m, \ldots, T-1\}$, it follows from equation (11) that the test statistic for equal predictive quality of the adapted scoring rule is given by
\[
  t^{\kappa l}_{m,n} = \frac{\bar d^{\kappa l}_{m,n}}{\sqrt{\hat\sigma^2_{m,n}/n}}, \tag{17}
\]
with $\bar d^{\kappa l}_{m,n}$ the sample average of $d^{\kappa l}_{t+1}$ for all $t = m, \ldots, T-1$ and $\hat\sigma^2_{m,n}$ a HAC variance estimator of $\sigma^2_{m,n} = \mathrm{Var}(\sqrt{n}\,\bar d^{\kappa l}_{m,n})$.

4 Monte Carlo simulation

Equation (17) provides a test for the modified logarithmic scoring rule. To analyze whether the test statistic is useful, a Monte Carlo simulation is conducted with a standard normal distribution as data generating process. According to Davidson and MacKinnon (1994), Monte Carlo analysis is useful to obtain graphical information about a test statistic; size-size plots and size-power plots are used to provide such graphical information. By constructing and analyzing such plots, a conclusion is drawn about the performance of the test.


4.1 Properties of the test statistic

The test statistic has to behave such that the scoring rule without the penalty term prefers $S(\hat g_t)$ over $S(\hat f_t)$, while the scoring rule including the penalty term prefers $S(\hat f_t)$ over $S(\hat g_t)$, where $\hat f_t$ is a distribution with correctly specified kurtosis but incorrectly specified mean and $\hat g_t$ a distribution with correctly specified mean but incorrectly specified kurtosis. This leads to the following hypotheses,
\[
  H_0 : E[S(\hat f_t)] = E[S(\hat g_t)]
\]
\[
  H_a : S(\hat f_t) > S(\hat g_t) \quad \text{with penalty term}
\]
\[
  H_a : S(\hat f_t) < S(\hat g_t) \quad \text{without penalty term}. \tag{18}
\]

To test whether the test statistic performs as desired, some of its properties have to be examined first; otherwise conclusions about its performance are hard to draw. Properties such as the mean, variance and distribution are considered, because standard normality needs to hold. Finally the size is studied, since the type I error of a test is important to know: it equals the rejection probability under the null hypothesis and has to be approximately equal to the significance level $\alpha$. By analyzing these properties, a conclusion about the standard normality of the test statistic is drawn. This has to hold for the scoring rule both with and without the penalty term.

To perform this analysis two known probability density functions are used as density forecasts: $\hat f_t$ is a normal distribution with bias $b$ and variance 1, and $\hat g_t$ a normal distribution with bias $-b$ and variance 1. Using these forecasts, a vector of test statistic values, one for each replication of the Monte Carlo simulation, is obtained to check the normality condition: at the $100\alpha\%$ significance level about $100\alpha\%$ of the test statistics should be rejected, which implies that the size equals the nominal size $\alpha$.
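A minimal Monte Carlo sketch of this size experiment is given below: the DGP is standard normal, $\hat f_t = N(b, 1)$, $\hat g_t = N(-b, 1)$, and the rejection frequency of the two-sided 5% test is recorded. The weight $w(y) = |y|$ from Section 4.2 and the number of replications are assumptions; by the symmetry of the two forecasts around zero, $E[d^{\kappa l}_{t+1}] = 0$, so the reported frequency should be close to $\alpha$.

```python
# Monte Carlo size check under H0: standard normal DGP, f_hat = N(b, 1) and
# g_hat = N(-b, 1); the weight |y| and the replication count are assumptions.
import numpy as np
from scipy import stats

def size_experiment(b=0.2, n=1000, reps=1000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        y = rng.standard_normal(n)
        d_wl = np.abs(y) * (stats.norm.logpdf(y, b, 1) - stats.norm.logpdf(y, -b, 1))
        # penalties per equation (14); both forecasts have fourth central moment 3
        penalty = np.abs((y + b) ** 4 - 3) ** 0.25 - np.abs((y - b) ** 4 - 3) ** 0.25
        d = d_wl + penalty
        t_stat = d.mean() / np.sqrt(d.var(ddof=1) / n)        # equation (17), i.i.d. case
        rejections += stats.norm.cdf(abs(t_stat)) > 1 - alpha / 2
    return rejections / reps                                  # should be close to alpha

print(size_experiment())
```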


Figure 1: Histogram of a particular simulation of the test statistic: (a) without penalty term, (b) with penalty term.

Considering histogram plots from several runs of the Monte Carlo simulation, standard normality of the test statistic appears to hold. The simulation uses 1000 replications, a bias of 0.2, sample size $n = 1000$ and degrees of freedom $\nu = 6$. This is also checked by returning the mean and variance of the test statistics during the simulations. Finally, a size plot as a function of the bias $b$ and a size-size plot are made for both test statistics, with and without penalty term, as shown below.

Figure 2: (a) size as a function of the bias b at significance level 5%; (b) size-size plot.


The plots show that the size of both statistics approximately equals the significance level. As a result of the finite sample size, the line in Fig. 2a fluctuates around the chosen significance level.

Summarizing, standard normality seems to hold and the size of the test behaves as desired. As the properties of the test statistic under $H_0$ appear to be correct, the performance of the statistic in terms of power can be examined. To analyze this performance, two distributions need to be specified, as described in the following subsection.

4.2 Power of the test statistic

Since the size behaves well, the next step is to consider the power. For this purpose two distributions are constructed. The first pdf has to have a correct kurtosis but a wrong mean, while the other has to have a correct mean but an incorrect kurtosis. For the first pdf it therefore seems reasonable to use a normal distribution with $\mu = b$ and $\sigma^2 = 1$, so that it deviates from a standard normal distribution only by the bias $b$. Thus
\[
  \hat f_t(y_{t+1}) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}(y_{t+1}-b)^2}. \tag{19}
\]

For $\hat g_t$ a t-distribution with $\nu = 6$ degrees of freedom is used, since it has mean zero but a kurtosis different from that of a standard normal distribution. The pdf of a t-distribution can be written as
\[
  \frac{\Gamma\!\left(\frac{\nu+1}{2}\right)}{\sqrt{\nu\pi}\,\Gamma\!\left(\frac{\nu}{2}\right)}
  \left( 1 + \frac{y_{t+1}^2}{\nu} \right)^{-\frac{\nu+1}{2}}. \tag{20}
\]

Since this distribution has variance $\frac{\nu}{\nu-2}$, which is in general not equal to 1, the pdf has to be transformed. Let $X \sim t(\nu)$; then $Y \equiv \sqrt{\frac{\nu-2}{\nu}}\,X$ has variance 1. Using the change-of-variables formula, the pdf of $Y$ can be computed,
\[
  f_Y(y) = \left| \frac{dx}{dy} \right| f_X(x)
  = \sqrt{\frac{\nu}{\nu-2}}\, f_X\!\left( \sqrt{\frac{\nu}{\nu-2}}\, y \right). \tag{21}
\]


Therefore $\hat g_t$ can be written as
\[
  \hat g_t(y_{t+1}) = \sqrt{\frac{\nu}{\nu-2}}\,
  \frac{\Gamma\!\left(\frac{\nu+1}{2}\right)}{\sqrt{\nu\pi}\,\Gamma\!\left(\frac{\nu}{2}\right)}
  \left( 1 + \frac{y_{t+1}^2}{\nu-2} \right)^{-\frac{\nu+1}{2}}. \tag{22}
\]
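The rescaled density of equation (22) can also be obtained directly from scipy's t distribution by setting its scale parameter to $\sqrt{(\nu-2)/\nu}$; the short check below confirms numerically that the resulting distribution has unit variance and fourth central moment $3 + 6/(\nu-4)$, the value used in equation (23).

```python
# Numerical check of equation (22): a t(nu) variable rescaled by sqrt((nu-2)/nu)
# has unit variance and fourth central moment 3 + 6/(nu - 4) (= 6 for nu = 6).
import numpy as np
from scipy import stats

nu = 6
g_hat = stats.t(df=nu, loc=0.0, scale=np.sqrt((nu - 2) / nu))

print(g_hat.var())        # 1.0
print(g_hat.moment(4))    # equals the central moment here, since the mean is zero: approx. 6.0
```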

Now that the distributions are specified, only the weight of the weighted logarithmic scoring rule in equation (10) needs to be defined. It is desirable that observations for which the difference $\log \hat f_t(y_{t+1}) - \log \hat g_t(y_{t+1})$ is large receive a higher weight than observations with smaller differences, so that the test statistic gains power relative to the unweighted logarithmic scoring rule. Therefore the weight $w_t(y_{t+1}) = |y_{t+1}|$ is used. Thus $d^{\kappa l}_{t+1}$ is computed as follows,
\[
  d^{\kappa l}_{t+1} = |y_{t+1}| \left( \log \hat f_t(y_{t+1}) - \log \hat g_t(y_{t+1}) \right)
  + \left| y_{t+1}^4 - \left( 3 + \frac{6}{\nu-4} \right) \right|^{1/4}
  - \left| y_{t+1}^4 - 3 \right|^{1/4}, \tag{23}
\]
since the kurtosis of a standardized t distribution equals $3 + \frac{6}{\nu-4}$ for $\nu > 4$ and the kurtosis of a normal distribution equals 3. Note that this weighted logarithmic scoring rule is not proper, according to Diks et al. (2011).

To compute $t^{\kappa l}_{m,n}$, the HAC estimator needs to be specified. In the context of this paper I use an estimator equal to the sample variance, since this estimator is already heteroskedasticity and autocorrelation consistent: the simulated observations are independent draws from normal and t distributions rather than a time series with dependence. Therefore
\[
  \hat\sigma^2_{m,n} = \mathrm{Var}\!\left( \sqrt{n}\,\bar d^{\kappa l}_{m,n} \right)
  = n\,\mathrm{Var}\!\left( n^{-1}\sum d^{\kappa l}_{t+1} \right)
  = \frac{n}{n^2}\,\mathrm{Var}\!\left( \sum d^{\kappa l}_{t+1} \right)
  = \frac{1}{n}\sum \mathrm{Var}\!\left( d^{\kappa l}_{t+1} \right)
  = \mathrm{Var}\!\left( d^{\kappa l}_{t+1} \right), \tag{24}
\]
where independence of the $d^{\kappa l}_{t+1}$ over time is used, since in this simulation $d^{\kappa l}_{t+1}$ only depends on simulated observations generated from the same DGP. Thus the test statistic of equation (17) can be written as
\[
  t^{\kappa l}_{m,n} = \frac{\bar d^{\kappa l}_{m,n}}{\sqrt{\mathrm{Var}(d^{\kappa l}_{t+1})/n}}, \tag{25}
\]
with $d^{\kappa l}_{t+1}$ as stated in equation (23).

To evaluate the performance of the test statistic under the alternative, the power is defined as
\[
  \text{power} = P[\text{reject } H_0 \mid H_a \text{ is true}], \tag{26}
\]

where a higher power corresponds to a better performance of the test statistic. The power of the test statistic is shown in the plots below for several sample sizes. In these plots the same alternative hypothesis is used for both scoring rules, so that the increase in power due to the added penalty term can be compared. Therefore $H_a : S(\hat f_t) > S(\hat g_t)$ is used in the following plots, since this corresponds to the alternative hypothesis of the scoring rule with penalty term.
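The power experiment can be sketched as a loop over bias values and replications, following equation (23) and the test statistic (25). The standard normal DGP is as in Section 4; the grid of biases, the number of replications and the one-sided rejection rule against $H_a : E[d^{\kappa l}_{t+1}] > 0$ are assumptions made for this sketch.

```python
# Monte Carlo power sketch: standard normal DGP, f_hat = N(b, 1), g_hat the
# standardized t(6) density of equation (22), score differences per equation (23).
import numpy as np
from scipy import stats

def power_curve(biases, n=1000, nu=6, reps=500, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    scale = np.sqrt((nu - 2) / nu)                    # rescales t(nu) to unit variance
    mu4_g = 3 + 6 / (nu - 4)                          # fourth central moment of g_hat
    power = []
    for b in biases:
        rejections = 0
        for _ in range(reps):
            y = rng.standard_normal(n)
            d_wl = np.abs(y) * (stats.norm.logpdf(y, b, 1) - stats.t.logpdf(y, df=nu, scale=scale))
            d = d_wl + np.abs(y ** 4 - mu4_g) ** 0.25 - np.abs(y ** 4 - 3) ** 0.25
            t_stat = d.mean() / np.sqrt(d.var(ddof=1) / n)
            rejections += t_stat > stats.norm.ppf(1 - alpha)   # one-sided: Ha: E[d] > 0
        power.append(rejections / reps)
    return np.array(power)

print(power_curve([0.0, 0.2, 0.4, 0.8]))
```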

Figure 3: Power plots for different sample sizes n: (a) n = 500, (b) n = 1000, (c) n = 2000.

Considering these plots, an increase in power is visible. A bias larger than 0.8, however, leads to a power of zero, so the test performs better for biases closer to zero.

Since the test statistic seems to perform as desired, the question arises whether it also performs well under a different alternative hypothesis, in which a right-sided test becomes a left-sided test and vice versa. Hence the following alternative hypotheses are obtained,

\[
  H_a : S(\hat f_t) < S(\hat g_t) \quad \text{with penalty term}
\]
\[
  H_a : S(\hat f_t) > S(\hat g_t) \quad \text{without penalty term}. \tag{27}
\]

Using these alternative hypotheses, new power plots can be obtained, which provide extra information about the test statistic. It is desirable that they show low power where the power plots of Figure 3 show high power. The plots are obtained using $H_a : S(\hat f_t) < S(\hat g_t)$, since this corresponds to the alternative hypothesis of the scoring rule with penalty term as stated in equation (27).


Figure 4: Power plots for different sample sizes n: (a) n = 500, (b) n = 1000, (c) n = 2000.

These plots show that the power functions indeed behave in the opposite direction compared to Fig. 3. Moreover, the power function of the scoring rule without penalty term is higher than that of the scoring rule with penalty term. This behavior corresponds to the alternative stated in equation (18) for the scoring rule without penalty term.


5 Conclusions

In this paper a new scoring rule is developed by modifying an existing one: the logarithmic scoring rule is used as the starting point and modified such that it becomes more sensitive to the kurtosis of the density forecast. This adaptation is attractive, since not only the mean but also the kurtosis provides useful information about a distribution. One can imagine that in applications such as financial risk management it is useful to include a kurtosis penalty term in the original scoring rule, as the kurtosis provides information about the tailedness of a distribution and therefore, in this case, about the uncertainty of the financial risk decision.

Using Monte Carlo simulation, size and power plots for different sample sizes are obtained for both test statistics, with and without penalty term. First a size plot is constructed, to determine whether the test is well sized; a well-sized, approximately standard normally distributed test statistic is required before the power of the test can be interpreted. Subsequently the power plots are obtained, from which several properties of the test statistic become visible.

Fig. 3 shows the increase in power obtained by adding a penalty term to the original scoring rule. On the domain where the modified test statistic has high power, the original test statistic has less power under the same alternative hypothesis. This implies that the modification has the desired effect: the modified test statistic has more power and is therefore more sensitive to the kurtosis.

According to the analysis performed, the modification results in a test statistic that is more sensitive to the kurtosis. Note, however, that the modified test statistic only attains this power for a bias between 0 and 0.8. In addition, the weighted logarithmic scoring rule used here is not proper, and therefore the modified scoring rule is not proper either. These issues can be investigated in further research, so that the modification also holds for a larger bias and a proper scoring rule is found.


References

Amisano, G., Giacomini, R., 2007. Comparing density forecasts via weighted likelihood ratio tests. Journal of Business & Economic Statistics 25 (2), 177–190.

Davidson, R., MacKinnon, J. G., 1994. Graphical methods for investigating the size and power of hypothesis tests. Institute for Economic Research, Queen’s University.

Diebold, F. X., Lopez, J. A., 1996. Forecast evaluation and combination. In: Maddala, G. S., Rao, C. R. (Eds.), Handbook of Statistics. North-Holland, Amsterdam.

Diebold, F. X., Mariano, R. S., 2012. Comparing predictive accuracy. Journal of Business & Economic Statistics.

Diks, C., Panchenko, V., Van Dijk, D., 2011. Likelihood-based scoring rules for comparing density forecasts in tails. Journal of Econometrics 163 (2), 215–230.

Giacomini, R., White, H., 2006. Tests of conditional predictive ability. Econometrica 74 (6), 1545–1578.

Gneiting, T., Raftery, A. E., 2007. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association 102 (477), 359–378.

Mitchell, J., Hall, S. G., 2005. Evaluating, comparing and combining density forecasts using the KLIC with an application to the Bank of England and NIESR 'fan' charts of inflation. Oxford Bulletin of Economics and Statistics 67 (s1), 995–1033.
