Dynamic Hierarchical Factor Models (DHFM): Application to Sales Data

University of Groningen

Faculty of Economics and Business

MSc Marketing MI/MM

Dynamic Hierarchical Factor Models (DHFM):

Application to Sales Data

Applying a Dynamic Hierarchical Factor Model to Retail Sales

Supervisor: Dr. Keyvan Dehmamy

Second supervisor: Dr. Felix Eggers

Student: Nicolò Ventura, s3045536

Abstract:

The goal is not to explain the distinct factors and market dynamics that drive sales, but rather to model them and to investigate at which level the idiosyncratic dynamics operate: the global, chain-specific or brand-specific component. We do this by applying a dynamic hierarchical factor model (DHFM), recently introduced by Moench et al. (2008), which is a multilevel generalization of the two-level dynamic factor model of Geweke (1976) and Sargent and Sims (1977). In other words, the objective is to obtain latent factors that represent the general dynamics of sales. In a second step, using an impulse response analysis, we examine whether a negative market shock affects chains, and in particular brands, differently.

Finally, we explore the forecasting capabilities of the model, with the goal of understanding whether it can produce accurate sales forecasts.


Table of contents:

1.0 Introduction

1.1 Literature review

2.0 Methodology

2.1 Data Collection

2.2 Research Design

2.3 Structure of the model

2.4 Gibbs sampling procedure

3.0 Results

3.1 Variance decomposition

3.2 Impulse response (IR) analysis

3.3 Forecast

4.0 Conclusion

4.1 Discussion

4.2 For further research

5.0 References

6.0 Appendixes

Appendix A – Data descriptive statistics

Appendix B – Mean coefficients

Appendix C – Factor graphs

Appendix D – Matrix restrictions

Appendix E – Focused forecasts

1.0 Introduction:

Forecasting sales is an important step in developing strategies for the future. The growing use of computers for inventory control and production planning has driven the need for explicit forecasts of sales and usage of individual products. These forecasts should be produced on a routine basis, in order to set strategies and tactics properly for both short- and long-term planning, and they should be able to account for changing conditions. Companies currently apply different methods, such as moving averages, exponential smoothing or seasonal-trend models.

In contrast, we apply a dynamic hierarchical factor (DHF) model (Moench et al., 2013), in which latent factors represent the general dynamics of sales.

Because it is difficult to include in a single model all relevant factors that may drive sales, we aim to estimate a small number of latent factors that summarize the relevant dynamics. This is one of the main advantages of dynamic factor models: a few factors can explain most of the variance of many time series, as shown empirically by, for example, Sargent and Sims (1977) and as applied in other research (Förster et al., 2014). In addition, this matches one of the most important criteria in model building, parsimony of parametrization.

We use a three-level model. The first level represents the macroeconomic dynamics, or common factors, that influence all the time series. The second level consists of chain-specific factors, which summarize dynamics specific to each retailer, such as promotional pressure, store format and price level. The third level contains factors linked to the brands themselves, such as brand image, brand familiarity and loyalty.

Another aspect we wanted to test is the use of the model in forecasting. We aim to show that the hierarchical subdivision in our model produces precise estimates: by exploiting this subdivision, the model can combine the information from the variance decomposition to reduce and precisely estimate the forecast error.


Using the impulse response analysis, we also want to examine how different entities, such as chains and brands, respond to shocks in the Dutch deodorant industry, and whether chains and brands differ in this respect with regard to sales. Additionally, we exploit the dynamic hierarchical factor model's ability to distinguish movements in the data that are common to the entire economy or market from those that are chain- or brand-specific.

Our model could represent a good framework for performance assessment and forecasting. We analyse a model in which sales are clustered into blocks according to the supermarket chain, with the individual brands as series within each block. For example, we expect that strong brands will be able to withstand a negative shock to the market.

1.1 Literature review:

Our dataset covers the price war in Dutch supermarkets that started in 2003. According to the findings of Van Heerde, Gijsbrechts and Pauwels (2008), the price war enhanced consumers' sensitivity to both weekly store prices and chain price image, confirming predictions in the literature (Heil and Helsen, 2001). Hence, this can affect the position of retailers and brands that are perceived as relatively expensive.

During the impulse response analysis, we add a negative shock, which we may take to represent a recession period. Johansson, Dimofte and Mazvancheryl (2012) showed that the returns of the 50 brands in their sample did not outperform the market but were instead slightly worse than the market average; we therefore expect that strong brands will not perform better than the market during a crisis.

Furthermore, following Hampson and McGoldrick (2013) and McKenzie and Schargrodsky (2005), we may expect a smaller negative effect on weaker and less expensive brands, even though deodorant is a purchase in which the customer is fairly involved and often has a brand preference. As these authors showed, customers' shopping behaviour changes during a recession; in particular, both impulse buying and planned purchases are reduced in such periods. Ang et al. (2000) confirm this view: they showed that customers cut back on wasteful consumption and switch to cheaper and more local brands during a financial crisis.


From the retailers' perspective, following Srinivasan and Sivakumar (2011), we expect that, due to the change in shopping behaviour, big supermarkets will suffer proportionally more than smaller stores.

2.0 Methodology:

2.1 Data Collection:

The dataset used for the analysis, provided by Nielsen, contains sales data from the competitive environment of the deodorant market in the Netherlands. The data are clustered according to eight deodorant brands, in alphabetical order: 8x4, Axe, Dove, Fa, Nivea, Rexona, Sanex and Vogue. Sales of these brands are recorded across five Dutch supermarket retailers: Albert Heijn, Edah, Super de Boer, Jumbo and C1000. The data file contains weekly observations on regular prices (without promotions, in euros per 100 ml) and current prices (with promotions, for the same size). Moreover, it includes weekly information on each supermarket chain's use of promotion and advertising, divided into display promotion, feature promotion or a combination of both.

The dataset covers a period of 124 weeks, from the 46th week of 2003 until the 12th week of 2006. During this period an intense price war took place, started by Albert Heijn, in which chains competed heavily on price and raised their promotional pressure.


Focusing instead on brands, we can see that sales varied considerably across them. As shown in Appendix A, the market leader at the time was Rexona, with a market share of 22%, followed by Axe with 21%. Vogue struggled to compete, displaying a market share of only 6.9%, partly because it recorded zero sales during several weeks at Edah. The other brands took a relatively small part of the market, with shares between 8% and 11%.

Before running the analysis, the dataset was checked for missing values, outliers and inconsistencies. Regarding the sales values, in several weeks the recorded sales of Vogue at Edah were zero. This could have been caused by a contractual dispute, leading one of the parties to refuse to supply or sell the product. Furthermore, sales figures vary heavily between weeks in general. This is not illogical, since promotions can lead to promotional peaks and dips.

Due to the problem with Vogue explained above, we decided to leave this brand out of the analysis. An alternative would have been to exclude the brand only from the estimation of the Edah chain factor. However, in that case the idiosyncratic component of the brand could have been biased, due to the lack of data. Hence, eliminating the brand altogether appeared to be the more sensible solution.

2.2 Research Design:

Firstly, in order to apply the DHFM, the time series are assumed to be stationary. In other words, their mean, variance and autocorrelation structure must not change over time. The series were standardized using a log-linearization procedure around the mean. This was done because all time series show positive sales values, as expected, but also high variation with several peaks (see the graphs in Appendix A).
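The thesis does not report code, but a minimal sketch of this standardization step, assuming the weekly sales are held in a pandas DataFrame with one column per brand-chain series (the file, column and function names below are hypothetical), could look as follows:

```python
import numpy as np
import pandas as pd

def log_linearize(sales: pd.DataFrame) -> pd.DataFrame:
    """Log-linearization around the mean: x_t = log(S_t) - mean(log(S)),
    i.e. (approximate) percentage deviations of sales from their average level."""
    logged = np.log(sales)            # sales are strictly positive, so the log is well defined
    centred = logged - logged.mean()  # zero mean per series
    # The model additionally assumes unit variance, so the series may also be rescaled:
    return centred / centred.std()

# Illustrative usage (hypothetical file and index names):
# sales = pd.read_csv("deodorant_sales.csv", index_col="week")
# X = log_linearize(sales)
```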


Subsequently, we applied the estimation procedure introduced by Moench et al., based on MCMC with Kalman filtering and backward smoothing. To approximate the posterior distribution, we needed to select a number of draws large enough to let the sampler converge. We therefore generated 20,000 draws, of which the first half was used as a training (burn-in) phase. Furthermore, we stored only every second draw, in order to reduce the correlation between successive draws.

In a third step, we had to define how many factors to include at each level of the hierarchy. Due to the lack of prior research applying this technique to sales data, we followed Moench's paper, in which the factor-loading matrix is assumed constant, and decided to estimate one factor per level and per block. In other words, we have one common factor that accounts for the dynamics of the whole market, one factor per chain, and an additional idiosyncratic component for each brand in each chain.

Finally, we needed to select the autoregressive order. Since we want to preserve degrees of freedom and keep the model parsimonious, we decided to include only one lagged effect.

2.3 Structure of the model:

We applied a dynamic hierarchical factor model (DHFM) (Moench et al., 2013). As previously mentioned, a Gibbs sampling procedure was performed in order to iteratively obtain new draws of the factors and coefficients.


Figure 1 – Schematization of a 3-level DHF model with five blocks

We built a three-level dynamic factor model with 𝐵 blocks, corresponding to the different supermarket chains, and 𝑁𝑏 time series in each block (𝑏 = 1, .., 𝐵) representing the sales of the different brands. In total there are 𝑁 = 𝑁1 + .. + 𝑁𝐵 time series with 𝑇 observations each (the weekly sales of a given brand). In other words, every time series 𝑖 in our dataset, which refers to the sales of one brand within block 𝑏 (a supermarket chain), is decomposed into an idiosyncratic component, a block-specific component and a common component. The series are assumed stationary with mean zero and unit variance. Two further assumptions are needed to apply the model: 𝑁 and 𝑇 are both sufficiently large, and 𝐵 ≪ 𝑁. We grouped all time series in a matrix 𝑋 of dimension 𝑇 × 𝑁. The first level comprises the 𝑁 brand-level time series, which are grouped into blocks; the second level contains the chain-specific factors; and the third level consists of the common factors, which describe the common dynamics of the whole market.

The three-level hierarchical structure is composed as follows:
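The measurement equations themselves did not survive the extraction of the original file. Based on the description below and on Moench, Ng and Potter (2008), they plausibly take the following form (a reconstruction, not a verbatim copy of the original equations):

𝑋𝑏𝑛𝑡 = 𝛬𝐺𝑏𝑛(𝐿) 𝐺𝑏𝑡 + 𝑒𝑋𝑏𝑛𝑡
𝐺𝑏𝑡 = 𝛬𝐹𝑏(𝐿) 𝐹𝑡 + 𝑒𝐺𝑏𝑡

with the common factor 𝐹𝑡 evolving according to the autoregressive transition in equation (6) below.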


where 𝑋𝑏𝑛𝑡 is the standardized sales value of the n-th brand of block (chain) 𝑏 at time 𝑡; 𝐺𝑏𝑡 is a vector containing the block factors, or chain-related components, of block 𝑏 at time 𝑡; and 𝐹𝑡 is a vector of common factors for the whole market at time 𝑡. Moreover, 𝛬𝐺𝑏𝑛 and 𝛬𝐹𝑏 are matrix lag polynomials in 𝐿. The components 𝑒𝑋𝑏𝑛𝑡, 𝑒𝐺𝑏𝑡 and 𝑒𝐹𝑡 are respectively the idiosyncratic, block-specific and common residuals; they capture the variance of the data that the factors 𝐹 and 𝐺 are not able to explain. These error terms are assumed to be stationary, normally distributed autoregressive processes, given by:

𝑒𝑋𝑏𝑛𝑡 = 𝛹𝑋𝑏𝑛(𝐿) 𝑒𝑋𝑏𝑛,𝑡−1 + 𝜀𝑋𝑏𝑛𝑡,   𝜀𝑋𝑏𝑛𝑡 ~ 𝑁(0, 𝜎²𝑋𝑏𝑛)   (4)
𝑒𝐺𝑏𝑡 = 𝛹𝐺𝑏(𝐿) 𝑒𝐺𝑏,𝑡−1 + 𝜀𝐺𝑏𝑡,   𝜀𝐺𝑏𝑡 ~ 𝑁(0, 𝜎²𝐺𝑏)   (5)
𝑒𝐹𝑡 = 𝛹𝐹(𝐿) 𝑒𝐹,𝑡−1 + 𝜀𝐹𝑡,   𝜀𝐹𝑡 ~ 𝑁(0, 𝜎²𝐹)   (6)

Furthermore, the covariance matrices are assumed to be diagonal, with the variance parameters 𝜎²𝐹 and 𝜎²𝐺 fixed at 0.1. The 𝛬 matrices are restricted such that the first series loads only on the first element of 𝐺, the second series loads on both the first and the second element, and so on; in other words, each series is influenced by its own 𝐺 coefficient and those of lower order.

An example of the 𝛬 (lambda) and 𝛹 (psi) matrices can be found in Appendix D. For further details, please refer to Moench et al. (2013).
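Appendix D is not reproduced here. Purely as an illustration of the kind of restriction described above, for a hypothetical block with three factors and five series the loading matrix would be lower triangular with ones on the diagonal:

𝛬 = [ 1    0    0
      λ21  1    0
      λ31  λ32  1
      λ41  λ42  λ43
      λ51  λ52  λ53 ]

so that the first series loads only on the first factor, the second on the first two, and so on; the unit diagonal pins down the scale of each factor.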

The model was estimated using an iterative Markov chain Monte Carlo (MCMC) method known as Gibbs sampling.

2.4 Gibbs sampling procedure:


Furthermore, in order to run the MCMC algorithm, we had to set some parameters. We decided to generate only one common factor (𝐾𝐹 = 1) and one factor for each chain (𝐾𝐺,𝑏 = 1). Moreover, we opted for a lag order of one, both for the transition equation of 𝑒𝐹𝑡 and for the idiosyncratic components 𝑒𝑋𝑏𝑛𝑡.

We followed Moench et al. in using non-informative prior distributions for the factor loadings 𝛬 and the autocorrelation coefficients 𝛹, adopting standard Gaussian priors.

The Gibbs sampler, through Monte Carlo simulation, creates a Markov chain of samples. Because we chose non-informative priors, the samples at the beginning of the chain will probably not yet describe the joint posterior distribution well. The sampling process generated 20,000 draws, of which the first half was used to train (burn in) the model and the second half was used for estimation. Furthermore, we stored only every second draw, in order to avoid correlation between subsequent samples. The procedure left us with 10,000 draws that can be used for further analysis, such as the impulse response (IR) analysis and the variance decomposition.
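The sampler itself follows Moench, Ng and Potter and is not reproduced in the thesis; the sketch below only illustrates the draw bookkeeping described above (burn-in and thinning). The function and variable names are hypothetical placeholders, not the actual implementation:

```python
import numpy as np

N_DRAWS = 20_000          # total Gibbs iterations
BURN_IN = N_DRAWS // 2    # first half discarded as the training (burn-in) phase
THIN = 2                  # keep every 2nd draw to limit autocorrelation between samples

rng = np.random.default_rng(0)
retained = []             # retained draws of factors and coefficients

def gibbs_step(state, rng):
    """Placeholder for one full conditional sweep: draw the factors given the loadings
    (e.g. via a Kalman filter / backward-smoothing step) and the loadings and AR
    coefficients given the factors (Bayesian regressions)."""
    # ... conditional draws omitted in this sketch ...
    return state

state = {"F": np.zeros(124), "G": np.zeros((5, 124))}   # T = 124 weeks, 5 chain blocks
for it in range(N_DRAWS):
    state = gibbs_step(state, rng)
    if it >= BURN_IN and (it - BURN_IN) % THIN == 0:
        retained.append({k: v.copy() for k, v in state.items()})
```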

3.0 Results:

3.1 Variance decomposition:

It is interesting to estimate the extent of the variation captured at each level of the hierarchy. In this way we can understand which of the components, common, chain-specific or brand-specific, mainly drives sales. Accordingly, we performed a variance decomposition analysis.

Indeed, the total variance of sales, 𝑉𝑎𝑟(𝑋𝑏𝑛), can be split into a weighted sum of the variances of the different levels.
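The decomposition itself (presumably equation (7) in the original, which did not survive extraction) can plausibly be written, consistently with the shares (8)-(10) below, as:

𝑉𝑎𝑟(𝑋𝑏𝑛) = 𝛾𝐹 𝑉𝑎𝑟(𝐹) + 𝛾𝐺 𝑉𝑎𝑟(𝑒𝐺𝑏) + 𝛾𝑋 𝑉𝑎𝑟(𝑒𝑋𝑏𝑛)   (7)

where 𝛾𝐹, 𝛾𝐺 and 𝛾𝑋 are compounds of the loading parameters.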


Taking into account that 𝛾𝐹, 𝛾𝐺 and 𝛾𝑋 are compounds of parameters, the shares of the common factor F, the block factor G and the idiosyncratic component were computed as follows:

Share_F = 𝛾𝐹 𝑉𝑎𝑟(𝐹) / 𝑉𝑎𝑟(𝑋𝑏𝑛)   (8)
Share_G = 𝛾𝐺 𝑉𝑎𝑟(𝑒𝐺𝑏) / 𝑉𝑎𝑟(𝑋𝑏𝑛)   (9)
Share_X = 𝛾𝑋 𝑉𝑎𝑟(𝑒𝑋𝑏𝑛) / 𝑉𝑎𝑟(𝑋𝑏𝑛)   (10)

As we can see in Figure 2 and Table 1, on average the majority of the variance is explained by the brand-specific component, which generally accounts for more than 78%. It is followed by the chain-specific factor, which explains between 3% and 13%, and by the common factor, which accounts for between 3% and 19%.
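As an illustration, given the three additive components of one standardized sales series (variable names are hypothetical, e.g. posterior-mean paths extracted from the retained draws), the shares could be computed as follows:

```python
import numpy as np

def variance_shares(common_part, chain_part, idio_part):
    """Variance shares of one standardized sales series, in the spirit of (8)-(10).
    The three inputs are the common, chain-specific and idiosyncratic components
    of the series (arrays of length T); the shares need not sum exactly to one if
    the components are not perfectly orthogonal in sample."""
    total = np.var(common_part + chain_part + idio_part)
    return (np.var(common_part) / total,
            np.var(chain_part) / total,
            np.var(idio_part) / total)
```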

It is interesting to see that Edah and C1000 display a greater importance of the chain-specific factor, 13% and 10% respectively, while Albert Heijn and Super De Boer are more influenced by the common factor, which accounts for 15% and 19%.

Jumbo, instead, stands out completely in the variance decomposition. Here sales are mainly driven by the common factor, which accounts for 53% of the variation. The brand-specific factor explains only 41% of the fluctuations, roughly half of its impact in the other chains, and the chain component explains only 6% of the variation. However, the results for the common factor and for the idiosyncratic component are not significant at the 5% level. This could mainly be due to the EDLP strategy applied by this retailer (see Appendix A for the price boxplot), which makes it more sensitive to market-wide variations.

Table 1 – Variance decomposition with relative standard errors

Block           Share_F            Share_G            Share_X
Albert Heijn    15.13% (.031)*     2.65% (.0076)**    82.22% (.032)*
Edah            2.88% (.016)*      12.63% (.032)*     84.49% (.031)*
Super De Boer   18.86% (.030)*     3.49% (.0075)**    77.65% (.028)*
Jumbo           53.45% (.056)      5.82% (.010)*      40.73% (.051)
C1000           7.40% (.035)*      9.79% (.028)*      82.81% (.024)*

Standard errors in parentheses


Figure 2 – Variance decomposition of sales

In Appendix C, we can see the evolution of the common factor (𝐹) and the chain-specific factors (𝐺), while Appendix B shows the mean coefficient values for each component.

3.2 Impulse response (IR) analysis:

From a forecasting perspective, it is interesting to see how sales are affected by changes in the deodorant industry. Negative shocks to the market sometimes occur due to unforeseen and unpredictable events; they are caused by factors outside the control of the management board. To simulate such a situation, we applied an IR analysis to discover how chain- and brand-specific factors react to a negative shock to the common factor. We limited the estimation to 100 periods, because sales are influenced by many external factors, which makes predictions over longer horizons difficult.
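A stylized version of this exercise, under the simplifying assumptions of an AR(1) common factor and purely contemporaneous loadings (the thesis instead computes the responses from the posterior draws, and the parameter values below are only illustrative), is sketched here:

```python
import numpy as np

def irf_common_shock(psi_F, lam_F_b, lam_G_bn, horizon=100, shock=-1.0):
    """Impulse response of the common factor F, a chain factor G_b and a brand series X_bn
    to a one-standard-deviation negative shock to F at t = 0."""
    irf_F = shock * np.power(psi_F, np.arange(horizon))  # F decays autoregressively after the shock
    irf_G = lam_F_b * irf_F                               # chain factor: response scaled by its loading
    irf_X = lam_G_bn * irf_G                              # brand series: response through its chain loading
    return irf_F, irf_G, irf_X

# e.g. irf_common_shock(psi_F=0.5, lam_F_b=1.0, lam_G_bn=0.6)  # illustrative parameter values
```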

Figure 3 displays the forecasted posterior means of the impact of a one-standard-deviation shock on each block, together with a 90% confidence interval. As we can see, the effect is negative for all the supermarket retailers. This is supported by the fact that almost every correlation between the supermarkets' sales is positive (see Appendix A). The moment at which the maximum negative impact is reached varies by retailer, but on average lies between 2 and 4 weeks. Jumbo appears to be the first supermarket chain to start recovering from the negative shock, reaching its trough after only 2 weeks.


C1000, by contrast, is affected by the shock for a longer time: it reaches its maximum negative impact after 9 weeks, and the effect falls below 5% only after 48 weeks.

Looking at the magnitude of the effect, C1000 would be the most damaged chain, with a negative effect of 0.60 standard deviations on the log scale of its sales series. On average, the negative shock is close to 0.35 standard deviations of the log scale of the relevant sales. Edah is a different case: it would barely be influenced by the shock, with an effect close to 0.06.

Figure 3 – Impulse response analysis. The first graph refers to the common factor; the following five refer to the chain-specific factors.

Figure 4 analyses how the negative shock affects the sales of every brand in every retailer. Not all deodorant brands show a significant negative effect on sales. Dove is the only brand with a significant effect in all chains, while Axe shows significant effects in every chain except Albert Heijn. In Edah, only a few brands show a significant negative effect: Dove, 8x4 and Axe.

8x4 appears to be the brand that would be damaged most by a negative shock to the market. It shows a remarkably large negative impact at C1000, where a shock of one standard deviation has an effect of more than double that size (2.6). In addition, the impact on this brand follows a less steady curve: the maximum is reached between 18 and 28 weeks, and it often takes more than a year to die out. Furthermore, if we look at Jumbo, the only chain where all brands show a significant effect, Axe, Dove and Rexona seem to be more resistant: their responses have a lower magnitude, with smaller troughs and a shorter duration. Given that these are the best-selling brands, this is in line with our expectation that stronger brands recover their position faster (Dobre, 2013). Finally, Nivea and Sanex generally fail to show significant effects.


Figure 4 - Impulse response analysis for every brand in every chain

3.3 Forecast:


The last step was to forecast the idiosyncratic component of each brand, since these values depend on the values of the chain-specific factors.

In Figure 5, we find the forecast based on the common component F. It simply shows the expected development of sales based on the data we have. The five graphs below it refer to the individual supermarket chains. For all these graphs, the data of the first 100 weeks are used to forecast the subsequent period. The lines at the beginning appear rough and show sudden spikes; from t = 100 onwards they smooth out and become more stable, because from that point on they represent a prediction.

The negative trend in the first graph, that of the common factor F, suggests that the sales of the whole market would decline over time; in detail, we expect them to be roughly halved about 10 weeks after the start of the forecast. The chain-specific graphs confirm the same trend: the five lower charts show that sales should drop again after the significant growth experienced during the price war.
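A stylized sketch of this hierarchical forecasting step, with hypothetical parameter names, zero future innovations and the chain-level error omitted for brevity (which is why the forecast paths flatten out), could look as follows:

```python
import numpy as np

def forecast_brand(psi_F, lam_F_b, lam_G_bn, psi_X, F_last, eX_last, steps=24):
    """Iterate the hierarchy forward from the last in-sample states: first the common
    factor, then the chain factor, then the brand's idiosyncratic AR(1) component."""
    x_hat = np.zeros(steps)
    F, eX = F_last, eX_last
    for h in range(steps):
        F = psi_F * F                 # common-factor forecast (innovation set to zero)
        G = lam_F_b * F               # chain-factor forecast (chain error omitted)
        eX = psi_X * eX               # idiosyncratic forecast for the brand
        x_hat[h] = lam_G_bn * G + eX  # forecast of the standardized (log, demeaned) sales
    return x_hat
```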

Figure 5 – The upper chart shows the forecast of the sales of the whole market; the five charts below show the sales of each supermarket chain. The first part of each of the five lower graphs consists of real data, which were used in the forecasting procedure.


would move, despite the standardization applied before the analysis. In appendix E, we can find focused predictions of each brand in every chain.

Figure 6 – Sales forecast for each brand in each supermarket retailer. The charts compare the real data (dark line) with a 90% confidence interval of the prediction.

4.0 Conclusion:

4.1 Discussion:


In the variance decomposition, we split the sales series into three components: a common factor, a chain-specific factor and a brand-specific factor. We found that, in general, the most important factor is the brand-specific component, owing to its greater variability. This result is not surprising: we could expect the most important driver of sales to be the brand itself, because through the brand image and loyalty it has built up it is better able to influence customer preferences. Moreover, the brand has more freedom in setting promotional activities, which can strongly influence sales performance as well.

Despite this, we also showed that in Edah and C1000 the chain-specific component carries a greater weight, which is surprising considering that these chains followed discount/low-service strategies. Albert Heijn and Super De Boer, whose strategies were more focused on high-quality service, showed a smaller chain-specific influence on sales. Finally, in Jumbo, which follows an EDLP strategy, the common factor explains more than half of the variation. We can therefore conclude that the different strategies adopted by the retailers influence the shares of the components; surprisingly, the more a chain focuses on quality and service, the smaller the share of its sales variation that is driven by chain-specific dynamics.


by Srinivasan (2011). Unfortunately, we could not provide many insights at the brand level, because many of the brand-level effects were not significant.

When it comes to forecasting, we should stress that the model did not produce reliable results, due to the high variability of sales. Further and more complex models could overcome this limitation; for example, the model could include more than one factor per level to capture more variance (more suggestions are provided in the following chapter). Nevertheless, since we demonstrated that there is a common dynamic driving sales in the market, the model may represent a starting point for estimating possible future competitive scenarios. For instance, it could be used in combination with a scenario analysis to forecast the future situation when strategic competitor actions are not taken into account.

4.2 For further research:

The dataset contains information up to 2006, but the market changed after the analysed period; as Luijten and Reijnders (2009) showed, many retailers tried to reposition themselves by changing their retail format. Further studies could therefore examine whether these changes modified the market, both in terms of the variance decomposition of sales and in terms of the market's reaction to a negative shock. Moreover, the DHFM lacks flexibility: if managers are interested in analysing the impact of known drivers, a causal model would be more appropriate.

The sales could also have been grouped according to location or to supermarket characteristics. Future research could additionally examine how a different standardization technique would perform, such as a z-score transformation of the log values.

5.0 References:

Ailawadi, K. L., Pauwels, K., & Steenkamp, J. B. E. (2008), “Private-label use and store loyalty”, Journal of Marketing, Vol. 72, No. 6, pp. 19-30.

Ang, S. H., Leong, S. M., & Kotler, P. (2000), “The Asian apocalypse: crisis marketing for consumers and businesses”, Long Range Planning, Vol. 33, No. 1, pp. 97-119.

Ang, A., & Piazzesi, M. (2003), “A no-arbitrage vector autoregression of term structure dynamics with macroeconomic and latent variables”, Journal of Monetary Economics, Vol. 50, No. 4, pp. 745-787.

Distrifood (2016), “Formules - Jumbo”, available at http://www.distrifood.nl/formules/jumbo.

Dobre, A. M. (2013), “Intangible assets as a source of competitiveness in the post-crisis economy. The role of brands”, Theoretical and Applied Economics, Vol. 18, No. 12 (589), pp. 127-138.

Bernanke, B. S., Boivin, J., & Eliasz, P. (2005), “Measuring the effects of monetary policy: a factor-augmented vector autoregressive (FAVAR) approach”, The Quarterly Journal of Economics, Vol. 120, No. 1, pp. 387-422.

Farris, P. W., & Moore, M. J. (Eds.). (2004), The profit impact of marketing strategy project: retrospect and prospects, Cambridge University Press.

Förster, M., Jorra, M., & Tillmann, P. (2014), “The dynamics of international capital flows: Results from a dynamic hierarchical factor model”, Journal of International Money and Finance, Vol. 48, pp. 101-124.

Geweke, J. (1978), The dynamic factor analysis of economic time series model, Social Systems Research Institute, University of Wisconsin-Madison.

Gupta, R., Jurgilas, M., & Kabundi, A. (2010), “The effect of monetary policy on real house price growth in South Africa: A factor-augmented vector autoregression (FAVAR) approach”, Economic modelling, Vol. 27, No. 1, pp. 315-323.

Hampson, D. P., & McGoldrick, P. J. (2013), “A typology of adaptive shopping patterns in recession” Journal of Business Research, Vol. 66, No. 7, pp. 831-838.

Johansson, J. K., Dimofte, C. V., & Mazvancheryl, S. K. (2012), “The performance of global brands in the 2008 financial crisis: A test of two brand value measures”, International journal of research in marketing, Vol. 29, No. 3, pp. 235-245.

Luijten, T., & Reijnders, W. (2009), “The development of store brands and the store as a brand in supermarkets in the Netherlands”, The International Review of Retail, Distribution and Consumer Research, Vol. 19, No. 1, pp. 45-58.

McKenzie, D., & Schargrodsky, E. (2004), “Buying less, but shopping more: Changes in consumption patterns during a crisis”, Manuscript, Universidad Torcuato di Tella.


Ng, S., Moench, E., & Potter, S. (2008), Dynamic hierarchical factor models, manuscript available at http://www.columbia.edu/~sn2294/papers/dhfm_short.pdf.

Roberts, K. (2003), “What strategic investments should you make during a recession to gain competitive advantage in the recovery?”, Strategy & Leadership, Vol. 31, No. 4, pp. 31-39.

Sargent, T. J., & Sims, C. A. (1977), “Business cycle modeling without pretending to have too much a priori economic theory”, New methods in business cycle research, Vol. 1, 145-168.

Semeijn, J., Van Riel, A. C., & Ambrosini, A. B. (2004), “Consumer evaluations of store brands: effects of store image and product attributes”, Journal of Retailing and Consumer Services, Vol. 11, No. 4, pp. 247-258.

Srinivasan, S. R., & Sivakumar, S. N. V. (2011), “Strategies for retailers during recession”, Journal of Business and Retail Management Research, Vol. 5, No. 2.

Van Dun, J. (2013), “Design of the Periodic Weekly Delivery Schedule at Jumbo Supermarkten: a store-oriented approach”.

Van Heerde, H. J., Gijsbrechts, E., & Pauwels, K. (2008), “Winners and losers in a major price war”, Journal of Marketing Research, Vol. 45, No. 5, pp. 499-518.

Zietz, J. (2006), Log-linearizing around the steady state: A guide with examples.

6.0 Appendixes:

Appendix A – Data descriptive statistics:


Figure 6 – Price boxplot for every brand in every chain


Figure 8 – Sales time series for every chain, in order: Albert Heijn, Edah, Super de Boer, Jumbo and C1000

Table 2 - Correlations matrix between chain’s sales

Block           Albert Heijn   Edah     Super De Boer   Jumbo   C1000
Albert Heijn    1.000
Edah            -0.052         1.000
Super De Boer   0.147          0.193    1.000
Jumbo           0.481          -0.124   0.355           1.000
C1000           0.042          0.169    0.146           0.158   1.000

Table 3 - Correlations matrix between brand’s sales.

Sub-block Dove Fa Nivea Rexona Sanex 8x4 Axe


Note: *Significant at p-value<.05 **Significant at p-value<.01

Figure 9 – Correlation graph between chain’s sales

Appendix B – Mean coefficients:

Table 4 - 𝛬𝐹 parameter matrix

Chain Mean coefficient Std.

Albert Heijn 1.00 .00

Edah 0.19 .070

Super de Boer 0.97 0.12

Jumbo 1.19 0.12


Table 5 - 𝛬𝐺 parameter matrix

Chain/Brand   Albert Heijn      Edah              Super de Boer     Jumbo             C1000
              Mean     Std.     Mean     Std.     Mean     Std.     Mean     Std.     Mean     Std.
Dove          1.00     0.00     1.00     0.00     1.00     0.00     1.00     0.00     1.00     0.00
Fa            0.60     0.15     0.27     0.31     0.42     0.13     0.67     0.11     0.50     0.17
Nivea         .045     0.15     .084     0.29     .087     0.12     0.77     .091     -.027    0.16
Rexona        0.39     0.13     0.37     0.34     0.50     0.15     0.67     .094     0.36     0.14
Sanex         .032     0.16     0.55     0.34     -.059    0.12     0.69     .085     .064     0.14
8x4           -0.30    0.19     0.19     .080     0.37     .093     0.81     0.12     0.27     .091
Axe           0.25     0.16     0.25     0.72     0.70     0.11     0.80     .099     0.37     0.17

Table 6 - 𝛹𝐹 parameter matrix

Mean coefficient Std.

9.26E-05 .0093

Table 7 - 𝛹𝐺 parameter matrix

Chain Mean coefficient Std.

Albert Heijn 0.27 0.25

Edah 0.23 0.10

Super de Boer 0.28 0.21

Jumbo .068 0.14

C1000 0.83 .072

Table 8 - 𝛹𝑍 parameter matrix

Chain/Brand Albert Heijn Edah Super de Boer Jumbo C1000

Appendix C – Factor graphs:

Figure 9 – F common factor with confidence interval

Appendix D – Matrix restrictions:

Appendix E – Focused forecasts:

Figure 11 – Forecast for all brands in Albert Heijn, in order: Dove, Fa, Nivea, Rexona, Sanex, 8x4 and Axe. The graphs compare the real data (dark line) with a 90% confidence interval that is the result of the forecast procedure.

Figure 12 – Forecast for all brands in Edah, in order: Dove, Fa, Nivea, Rexona, Sanex, 8x4 and Axe. The graphs compare the real data (dark line) with a 90% confidence interval that is the result of the forecast procedure.


Figure 13 – Forecast for all brands in Super De Boer, in order: Dove, Fa, Nivea, Rexona, Sanex, 8x4 and Axe. The graphs compare the real data (dark line) with a 90% confidence interval that is the result of the forecast procedure.

Figure 14 – Forecast for all brands in Jumbo, in order: Dove, Fa, Nivea, Rexona, Sanex, 8x4 and Axe. The graphs compare the real data (dark line) with a 90% confidence interval that is the result of the forecast procedure.


Figure 15 – Forecast for all brands in C1000, in order: Dove, Fa, Nivea, Rexona, Sanex, 8x4 and Axe. The graphs compare the real data (dark line) with a 90% confidence interval that is the result of the forecast procedure.
