

University of Groningen | Faculty of Economics and Business | Department of Operations

Master Thesis

MSc Technology & Operations Management

Sinking in a sea of spare parts:

A practical model for optimizing inventory levels

Aron den Uijl

Student No: 1614312

First supervisor: Dr. Ir. Daniele Catanzaro
Second supervisor: Prof. Dr. Jacob Wijngaard

Word count: 9780


Abstract

Spare part management has received increasing attention in academic research because of the costs associated with excessive inventory levels and with outages. In this study we categorize spare parts according to mean time to failure data. We compare well-known methods for forecasting intermittent demand with a novel forecasting method. We use generated data to compare the considered methods within each categorized group of spare parts. The generated data is based on real data provided by FrieslandCampina Bedum. Further, we carry out a simulation project to determine optimal inventory levels per category, given a target service level. Surprisingly, we find that exponential smoothing consistently and clearly outperforms all other well-known forecasting methods and that the novel forecasting method outperforms exponential smoothing for 2 out of the 7 considered data distributions. Moreover, we determine optimal inventory levels for each category of spare parts and find that the optimal inventory level depends on the reorder period and the spare part categorization.

Keywords: forecasting; intermittent demand; inventory levels; time to failure; spare part categorization

__________________________________________________________________________

Acknowledgements


Table of Contents

1. Introduction ... 4

2. Theoretical framework ... 6

2.1. Classifying spare parts ... 6

2.2. Forecasting spare parts demand ... 7

3. Methodology ... 9

3.1 Classifying spare parts ... 10

3.2 Forecasting models ... 10

3.3 Validation of proposed forecasting models ... 10

3.4 Determining inventory levels ... 11

4. Deliverables ... 11

5. Data ... 11

5.1 Acquired data ... 12

5.2 Categorizing data ... 13

5.3 Generating data ... 13

6. Forecasting methods ... 15

6.1 Forecasting set-up ... 15

6.2 Forecasting models ... 16

6.2.1 Simple moving average ... 16

6.2.2 Weighted moving average ... 16

6.2.3 Single exponential smoothing ... 17

6.2.4 Holt-Winters method ... 17

6.2.5 den Uijl method ... 18

6.3 Accuracy measures ... 19

6.3.1 Mean absolute percentage error ... 19

6.3.2 Root mean square error ... 20

6.4 Forecasting results ... 21

7. Simulation set-up ... 23

7.1 Simulation logic ... 23

7.1.1 Simulation logic two identical spare parts ... 25

7.1.2 Simulation logic three identical spare parts ... 26

7.2 Simulation results ... 29

8. Discussion ... 37

9. Conclusions, limitations and recommendations ... 37

10. References ... 39

1. Introduction

Spare parts inventories assist maintenance staff in keeping equipment in operating condition (Kennedy et al., 2001). A myriad of industries rely on the effective management of spare parts (Lengu et al., 2014) and US Bancorp estimates that spare parts account for a $700 billion annual expenditure, which constitutes about 8% of the US gross domestic product (Jasper, 2006). The difficulties with spare parts management lie in two areas. The first concerns the nature of demand for spare parts. Spare parts are typically slow moving items with an intermittent demand pattern (Teunter and Duncan, 2009). The demand can further be classified as lumpy, indicating demand that is highly variable and sporadic with low auto-correlation (Boylan and Syntetos, 2008; Kalchschmidt et al., 2006). The second difficulty lies in the trade-off between holding costs and outages (Vrat, 1984). To cope with these issues and to provide new insights into this field of research, we conduct a case study at FrieslandCampina N.V., in which we create a simulation model that fits their demand patterns for spare parts, with the goal of minimizing inventory levels while avoiding lengthy outages. Outages can be especially costly for the firm in question, because the studied factory produces perishable goods (Chen et al., 2014). Therefore, we will use forecasting to estimate future failure events to reduce lengthy and unexpected outages.

The field of research on spare parts has been very broad in scope over the last decades. Papers vary from non-technical works with a systemic view of spare parts inventory management, like Moore (1996), to papers with a more technical and narrow scope (Moinzadah, 1986; Kennedy et al., 2001). The technical papers exhibit a clear focus on (s,S) systems, like those presented in Cohen et al. (1992) and Kabir and Al-Olayan (1996). The field has subsequently been expanded by a multitude of variations on such systems, like the (s – 1, s) inventory policies (Dhakar et al., 1994; Alfredsson, 1997) and (r,Q) system policies (Federgruen and Zheng, 1992). Furthermore, the intermittent demand pattern of spare parts has often been described by statistical distributions such as the Poisson distribution or compound distributions (Kennedy et al., 2002; Diaz and Fu, 1997). Another, more recent trend in the literature is the classification of spare parts to allow for more accurate and less cumbersome demand forecasts, wherein the stock keeping units are classified on the basis of several item characteristics (van Kampen et al., 2011). However, this field of research has not received as much academic attention as it deserves (Boylan et al., 2008).

decisions seemingly only allows for small incremental improvements in this research field. However, the comparison of simpler with more complex forecast methods as presented by Strijbosch et al. (2000), combined with spare part stock keeping unit classification with the goal of forecasting part failures and thus avoiding outages, is still lacking. This thesis provides a complete system that compares simple and more complex forecasting methods for each of the considered spare part categories. Furthermore, data is generated according to statistical distributions and we simulate long periods of failure events to arrive at the optimal inventory levels for a (s,S) reorder policy under continuous review, which is the policy used by FrieslandCampina.

The aim of this thesis is, firstly, to classify spare parts according to the inter-demand interval. The classification scheme makes it possible to use simpler models for demand forecasting per category of spare parts. The simpler models are then compared with more advanced models, to see if the added complexity allows for significantly more accurate forecasts. The resulting classifications and models aim at forecasting failure events, which in turn will help in avoiding unexpected long outages. The categorized product groups also serve as input for the simulation, wherein we determine the optimal inventory levels for each group, thus providing a complete system for the inventory management of spare parts and making both a theoretical and a practical contribution. The research questions that naturally arise from these goals are:

1. What are useful classification schemes for the categorization of spare parts?

2. Which inventory levels are best suited to each category and distribution of data for a (s,S) reorder policy under continuous review?

3. What forecasting models can be used to accurately forecast demand for spare parts per category?

In order to answer these questions, we draw upon existing research in the spare parts field, which provides possible classification schemes and possible forecast models to use. Data is provided by FrieslandCampina N.V. Bedum. The theoretical contribution of this paper lies in the combination of different methodologies applied in the research field, leading to a more complete, simpler system. Furthermore, a new forecasting method is presented and a novel simulation method is used to determine inventory levels. The practical contribution lies in the provision of a practical model that is easier to understand and apply in a general business environment.

the described data in both the forecasting and simulation sections that follow respectively. The last section consists of conclusions, limitations and recommendations for future research.

2. Theoretical framework

We apply the simulation to the (s,S) inventory policy under continuous review, which is currently used by FrieslandCampina Bedum. This type of inventory policy relies on failure events to immediately instigate a reorder of spare parts, where s is the reorder point and S the order-up-to amount (Axsäter, 2006). We also assess multiple forecasting methods and use the most accurate method per group to forecast failure events. The advantage of firstly classifying spare parts is that it allows for simpler forecasting models, increases the accuracy of the demand forecasts and simplifies the process of determining the optimal inventory levels (Boylan et al., 2008). Furthermore, the use of classification schemes removes the need to engage in the cumbersome process of making stock decisions for each individual SKU (van Kampen et al., 2011). The use of simpler models could increase practical applicability. More accurate demand forecasts enable better prediction of failure events, thereby decreasing downtime and unexpected outages, which is essential in a facility which produces perishable products (Chen et al., 2014).

2.1. Classifying spare parts

validity of their results on 3000 intermittent demand data series from the automotive industry and found that their categorization could lead to superior forecasting performance. Kostenko and Hyndman (2006) provided an alternative view on SKU classification based on the assumption that demand values are not independent, as is assumed by Syntetos et al. (2005). Therefore, they proposed a simpler method, whereby the SKU classification was based on the historical mean of demand. However, they did not empirically validate the claim that their simpler method was more accurate. To empirically assess the work of Kostenko and Hyndman (2006) and Syntetos et al. (2005) and to compare the superiority of either of these methods, Heinecke et al. (2011) used data on more than 10000 SKUs from three different industries. The authors found that the linear function proposed by Kostenko and Hyndman (2006) led to superior forecasting performance and they concluded that the simple nature of this method is another advantage. Heinecke et al. (2011) stated that the discussed methods could be improved further by adding the concept of item criticality into the classification scheme, because this could increase the attractiveness of classification methods to practitioners. On a different note, Blanco and Gureckis (2013) investigated one of the potential pitfalls of item classification called the “representational shift hypothesis”. This hypothesis argued that the categorization of an object impairs recognition memory by altering the trace of the encoded memory to be more similar to the specific category prototype. The authors conducted experiments to test if the hypothesis is valid and they found no evidence to support it, meaning that labelling an object by the characteristics of its category does not impair item memory. A more general view on SKU classification can be found in van Kampen et al. (2011), who provided an analysis of current research in this field. They concluded that classification characteristics are influenced by industry characteristics and this makes it difficult to assess the performance of classification methods.

2.2. Forecasting spare parts demand

protection against being out of stock, which would lead to outages. Syntetos and Boylan (2005) presented a model based on Croston (1972). Based on their sample, they found that their model exhibits superior performance compared to Croston’s method and exponential smoothing. Furthermore, Syntetos and Boylan (2010) argued that the method they developed in 2005 leads to significantly more accurate demand forecasts than exponential smoothing and Croston’s method. Contrastingly, Moon et al. (2012) found by analysing the lumpy demand for spare parts in the South Korean Navy that exponential smoothing led to the most accurate demand forecasts. Further, Teunter and Duncan (2009) discovered that Croston’s method and Bootstrapping clearly outperformed the use of moving averages and exponential smoothing. They also concluded that the performance of Croston’s method and Bootstrapping can be further improved by taking into account that an order in a period is triggered by a demand in that period.

Academic research also uses statistical distributions to determine spare parts demand. Syntetos et al. (2012) used data on 13000 SKUs and carried out goodness of fit tests to assess the performance of the Normal, Gamma, Poisson, stuttering Poisson and negative binomial distributions. They found that both the Gamma and Normal distributions performed the worst and that the stuttering Poisson distribution was the most accurate. Contrastingly, Porras and Dekker (2008) used data on 43000 SKUs at an oil refinery and found that the model with the Normal distribution outperformed the Poisson model, because the latter model caused overstocking in several cases. A novel approach was suggested by Teunter et al. (2010), where the lead-time demand was modelled as a compound binomial distribution. They tested this new model against several other distributions and found that the new model performed significantly better. Another popular choice for the time to failure distribution is the Weibull distribution, which can be used to accurately estimate failure events of spare parts (Campbell et al., 2011). Furthermore, Allen and Morzuch (1995) employed the Normal, Gamma, Cauchy and Weibull distributions to make probabilistic forecasts and concluded that the Weibull distribution showed the most promise.

second stage covers the period from the moment that the defect has arisen to failure. The results they presented indicate that their model outperformed both the Croston model and the Syntetos and Boylan (2005) model. Wallstrom and Segerstedt (2010) provided a very interesting contribution to this field of research. They concluded that the forecasting method to be used is dependent on situational factors and that a method with a low forecast error is not always equal to better customer service. These claims were supported by Syntetos and Boylan (2005) and Levén and Segerstedt (2004).

We can conclude that the research field of forecasting spare parts demand is very fragmented and that there is no single optimal solution for every situation. The field is awash with complex methods, most of which find support in different papers, where most authors conclude that a certain distribution is better at forecasting than others. A single complex solution that performs best in every situation may therefore still be found, or there may be room for a simpler approach with better prospects for generalization. Therefore, we will consider several statistical distributions and forecasting methods, which allows us to determine the optimal solution for each category of spare parts.

3. Methodology

3.1 Classifying spare parts

In recent literature on the classification of spare parts there are two variables that are widely used, namely, inter-demand interval and variation of the demand size (van Kampen et al., 2011; Boylan et al., 2008). We employ the widely used inter-demand interval as input for the classification scheme in this research, which is based on the mean time to failure ranges provided by the company. We determine the cut-off points between categories on the basis of the characteristics of the particular demand dataset (Heinecke et al., 2013).

3.2 Forecasting models

The research field of forecasting spare parts demand is fairly fragmented, hence it is difficult to determine a method that will lead to the most accurate forecasts. In order to cope with this problem, we compare four different and often used forecasting models. The simple methods we use are the moving average and the weighted moving average. These methods allow for a very simple forecast, because they assume that demand is dependent (Teunter and Duncan, 2009; Holt, 2004). The third method we consider is exponential smoothing, which has been shown to generate accurate forecasts (Moon et al., 2012; Gelper et al., 2010). In contrast to the simpler methods, exponential smoothing uses a smoothing factor to assign exponentially decreasing weights to the data over time. The fourth model we investigate, the Holt-Winters model, expands on exponential smoothing by identifying trend and seasonality in the data and incorporating them in the forecast (Faustino et al., 2011; Gelper et al., 2010). Lastly, we use a new method that relies on the mean of the mean time to failure range and the deviation from this mean over a set amount of periods. We apply the proposed models to each of the previously determined categories.

3.3 Validation of proposed forecasting models

3.4 Determining inventory levels

In a (s,S) policy, s denotes the reorder point and S denotes the order-up-to level. When such a policy is combined with continuous review of stock, an order will be placed immediately to refill the inventory up to level S if inventory falls to, or below, s (Campbell et al., 2011). The company in question uses a (s,S) inventory policy with continuous reviews, thus the ordering of spare parts is always instigated by a part failure (Campbell et al., 2011). Therefore, we will use simulation to test all possible inventory levels for each category of spare parts. Because specific data on spare part failure behaviour is not available, we will employ several statistical distributions to generate data that adheres to the mean time to failure ranges provided. We use a multitude of different distributions to provide a wide range of data that could describe reality. We employ the often used Normal, Poisson, Weibull and Gamma distributions (Syntetos et al., 2012; Porras and Dekker, 2008; Teunter et al., 2010; Campbell et al., 2011). Further, we use two versions of a Beta distribution and randomized data in order to describe a large amount of possible spare part failure behaviour in reality.

4. Deliverables

This thesis will provide the following deliverables:

• Exhaustive categorized spare part groups

• Insight into the performance of different forecasting methods

• The best performing forecasting method per categorized group and distribution

• A new forecasting method

• Optimal inventory levels per category and distribution for the (s,S) reorder policy

5. Data

5.1 Acquired data

As described in Section 5, general time to failure data is available at the company. Unfortunately, specific time to failure data, or historical time to failure periods, are not available per spare part. Instead, the acquired data comes in the form of mean time to failure (MTTF) ranges. The company has specified three MTTF ranges for the three considered identical machines. These ranges are considered to be very accurate: virtually all part failures fall within their specified range and, otherwise, the deviation from the specified range is very small. The specified ranges are:

• MTTF between 5 and 10 years

• MTTF between 1 and 5 years

• MTTF between 0.25 and 1 year

The acquired data on reorder periods consists of order lead-time and order delivery time. We combine these two types of data into the total ordering time, and all parts of the considered machines have a total ordering time of either:

• 2 days

• 14 days

• 21 days, or

• 44 days

Finally, we acquired data on the number of identical spare parts in use. Because we consider three identical machines, each of which uses one identical part, the number of identical spare parts in use is 3 for the majority of parts. We also analyse equipment that feeds input to and equipment that receives output from the three identical Conomatics. Therefore, we also consider spare parts with 1 or 2 identical copies in use.

5.2 Categorizing data

The MTTF ranges dictate the categorization scheme, leading to three main groups. The three ranges are exhaustive: all spare parts have inter-demand intervals that fall within one of the specified ranges. We subsequently categorize the spare parts according to the reorder periods and lastly according to the number of identical spare parts in use. The resulting categorization scheme consists of 3 main MTTF groups, which are divided further into 4 reorder period groups and separated further into 3 groups for the number of identical spare parts in use. The classification scheme is shown in table form in tables 5.1, 5.2 and 5.3.

The simulation model uses all categorized groups as input, because all three categorization criteria act as important variables in this model, while the forecasting methods only use the three main MTTF groups as input.

GROUP 1: MTTF 5 – 10 year
Reorder period (days):                     2       | 14      | 21      | 44
Number of identical spare parts in use:    1 2 3   | 1 2 3   | 1 2 3   | 1 2 3
Table 5.1: Group 1, MTTF 5 – 10 year, part categorization

GROUP 2: MTTF 1 – 5 year
Reorder period (days):                     2       | 14      | 21      | 44
Number of identical spare parts in use:    1 2 3   | 1 2 3   | 1 2 3   | 1 2 3
Table 5.2: Group 2, MTTF 1 – 5 year, part categorization

GROUP 3: MTTF 0.25 – 1 year
Reorder period (days):                     2       | 14      | 21      | 44
Number of identical spare parts in use:    1 2 3   | 1 2 3   | 1 2 3   | 1 2 3
Table 5.3: Group 3, MTTF 0.25 – 1 year, part categorization

5.3 Generating data

input. This data has the form of inter-demand intervals in days. The generated input per MTTF group functions as possible failure behaviour of the spare parts per group in reality. To cover as many possible real scenarios as possible, we employ multiple statistical distributions to fill the ranges. We determine the parameters of the used distributions on the basis of the MTTF range to be filled, such that between 95% and 100% of all generated values fall within the MTTF range. A 5% deviation from the range is accepted, because the ranges are not absolute, as FrieslandCampina has specified that the ranges are accurate for almost all of the failure events.

We use several statistical distributions that are often employed to describe time to failure behaviour of spare parts. These are the Normal, Weibull, Gamma and Poisson distributions (Syntetos et al., 2012; Porras and Dekker, 2008; Teunter et al., 2010; Campbell et al., 2011; Allen and Morzuch, 1995). However, as there is no way to test whether actual spare part behaviour follows one of these well-known distributions, we also use several other distributions to achieve more coverage of possible real scenarios. The other distributions are a fully randomized distribution and two versions of an up-scaled Beta distribution; samples of generated data for Group 3 can be found in the appendix. The first, Beta1, distribution has shape parameters of 1 and 3, while the second, Beta2, distribution has shape parameters of 3 and 1. Therefore, Beta1 has a downward slope within the MTTF range and Beta2 has the opposite, upward slope within the MTTF range. Even though up-scaling a statistical distribution is a delicate procedure that could lead to biased results, the goal is to achieve broad coverage of possible real scenarios. Therefore, it is not essential for the purposes of this study that the up-scaled data is precisely Beta distributed. If the up-scaled data deviates slightly from the original Beta distribution, it can still capture a possible real scenario. The up-scaling of the Beta distribution is achieved as follows. Let:

Variables Definition

B Generated Beta data with a value of (0 ≤ B ≤ 1)

MTTFmin Minimum of the MTTF range

MTTFmax Maximum of the MTTF range

D Up-scaled inter-demand interval

Table 5.4: Upscaling variables

We obtain:

D = MTTFmin + B · (MTTFmax − MTTFmin)    (1)

6. Forecasting methods

In this section we use forecasting to determine when the next spare part failures might occur. Because the company uses a (s,S) reorder policy with continuous reviews, the forecasts can be used to reduce downtime of the production process and to avoid outages. Downtime can be reduced because preventive maintenance takes less time than corrective maintenance (Muchiri et al., 2014). Further, multiple parts that might fail in the near future can be opportunistically replaced in order to reduce the frequency of preventive maintenance for continuously operating units (Laggoune et al., 2009). This will allow FrieslandCampina to achieve a smoother production process while reducing unexpected and total downtime. This is especially important, because the company produces perishable products (Chen et al., 2014).

6.1 Forecasting set-up

6.2 Forecasting models

6.2.1 Simple moving average

Simple moving average works by aggregating inter-demand intervals over the most recent N periods and dividing the sum of this demand by N. This method can work particularly well if the demand is subject to seasonality. Let:

Variables Definition

N Number of observed periods over which the moving average is calculated

D t Inter-demand interval at time t

D’ t + 1 Forecasted inter-demand interval at time t + 1, after observing t

Table 6.1: Simple moving average variables

We obtain:

D'_{t+1} = (1/N) · Σ_{i=t−N+1}^{t} D_i    (2)

6.2.2 Weighted moving average

Weighted moving average works similarly to the simple moving average, as demand is summed over N periods and divided by N. However, weighted moving average does not weight each period equally. Instead, it assigns a higher weight to the most recent period, and the weight decreases as the considered period approaches t − N. The weighting factors, W t, are normalized such that they sum up to 1. Let:

Variables Definition

N Number of observed periods over which the moving average is calculated

N t Indicator of the number of periods until D t, with a value of (0 ≤ N t ≤ N)

D t Inter-demand interval at time t

W t Weighting factor of inter-demand interval at time t

D’ t + 1 Forecasted inter-demand interval at time t + 1, after observing t

Table 6.2: Weighted moving average variables

We obtain:

D'_{t+1} = Σ_{i=t−N+1}^{t} W_i · D_i    (4)

W_i = N_i / Σ_{j=t−N+1}^{t} N_j    (5)

6.2.3 Single exponential smoothing

To initiate the forecast, we use the average of the first N = 3 periods. To update the forecast, exponential smoothing uses a linear combination of the previous forecast and the most recent demand. This forecasting model employs a smoothing constant α, which can be optimized to achieve the most accurate forecasts. α determines the weighting of the previous forecast and the most recent demand; the forecasted value D'(t+1) is therefore largely dependent on α. α is optimized using the MS Excel evolutionary solver algorithm, as described in Section 6.1. Let:

Variables Definition

α Smoothing constant with a value of (0 ≤ α ≤ 1)

D t Inter-demand interval at time t

D’ t + 1 Forecasted inter-demand interval at time t + 1, after observing t

Table 6.3: Single exponential smoothing variables

We obtain:

D'_{t+1} = α · D_t + (1 − α) · D'_t    (6)

6.2.4 Holt-Winters method

The Holt-Winters model works similarly to the exponential smoothing model, but it also considers a periodic trend and a seasonality factor. The model consists of four equations: equation (7) smooths the average, equation (8) smooths the trend information, equation (9) smooths the seasonal component and, lastly, equation (10) carries out the actual forecasting. Let:

Variables Definition

α Smoothing factor average

δ Smoothing factor trend

β Smoothing factor seasonality

s Seasonality duration


D t Inter-demand interval at time t

m Periods to forecast

A t Smoothed average

T t Trend

S t Seasonality

D’ t + 1 Forecasted inter-demand interval at time t + 1, after observing t

Table 6.4: Holt-Winters variables

We obtain:

A_t = α · (D_t − S_{t−s}) + (1 − α) · (A_{t−1} + T_{t−1})    (7)

T_t = δ · (A_t − A_{t−1}) + (1 − δ) · T_{t−1}    (8)

S_t = β · (D_t − A_t) + (1 − β) · S_{t−s}    (9)

D'_{t+m} = A_t + m · T_t + S_{t−s+m}    (10)

6.2.5 den Uijl method

In order to further develop the forecasting research field, we have attempted to create a new forecasting method. The purpose of this method is to put emphasis on commonly observed inter-demand interval values, while considering the deviation from these common values over past periods as well as outliers. The model should be suitable for a large range of different data, because the option to optimize a weighting factor can put more emphasis on either common or uncommon values.

allow for more accurate forecasting. Furthermore, the deviation from the mean could capture either trends or seasonality over the past N periods and therefore produce a forecast that is up-to-date with current information, while also considering the fact that outliers should not be weighted too heavily. Let:

Variables Definition

ε Deviation weighting factor with a value of (0 ≤ ε ≤ 2)

D t Inter-demand interval at time t

MTTFmin Lower bound of the mean time to failure range

MTTFmax Upper bound of the mean time to failure range

N Number of periods over which deviation from the MTTF mean is calculated

D’ t + 1 Forecasted inter-demand interval at time t + 1, after observing t

Table 6.5: den Uijl variables

We obtain:

D'_{t+1} = (MTTFmin + MTTFmax) / 2 + ε · (1/N) · Σ_{i=t−N+1}^{t} ( D_i − (MTTFmin + MTTFmax) / 2 )    (11)

6.3 Accuracy measures

We use the accuracy measures to determine which forecasting method performs best given a certain MTTF range and distribution within this range. The measures work by comparing the forecasted inter-demand interval with the actual inter-demand interval for the same period. The accuracy measures used are the mean absolute percentage error (MAPE) and root mean square error (RMSE).

6.3.1 Mean absolute percentage error


Variables Definition

D t Inter-demand interval at time t

D’ t Forecasted inter-demand at time t

X Total number of periods considered

t0 First considered period

tx Last period considered

MAPE Mean absolute percentage error

Table 6.6: Mean absolute percentage error variables

We obtain:

MAPE = ( Σ_{t=t0}^{tx} | (D_t − D'_t) / D_t | ) / X · 100%    (12)

6.3.2 Root mean square error

The RMSE works by squaring the difference between the forecasted value and the actual value for all forecasted periods. Subsequently, we compute the mean of these values, and the root of this mean is the RMSE outcome. RMSE places more weight on larger errors than on smaller errors; its performance and robustness have often been established in previous research and it allows for easy comparison between the different forecasting methods (Teunter and Duncan, 2009; Clements, 2012; Eaves and Kingsman, 2004; Moon et al., 2012). It has to be noted that the RMSE is scale dependent; however, this will not affect the interpretation of our results, as all input data is in days. Let:

Variables Definition

D t Inter-demand interval at time t

D’ t Forecasted inter-demand at time t

X Total number of periods considered

t0 First considered period

tx Last period considered

RMSE Root mean square error

Table 6.7: Root mean square error variables

We obtain:

RMSE = √( Σ_{t=t0}^{tx} (D'_t − D_t)² / X )    (13)

6.4 Forecasting results

We present the results in tables 6.8 up to 6.13. Tables 6.8 and 6.9 show the outcomes for MTTF group 1 (5-10 year), tables 6.10 and 6.11 show the outcomes for MTTF group 2 (1-5 year) and tables 6.12 and 6.13 show the outcomes for MTTF group 3 (0.25-1 year). We further divide the results within each MTTF group according to the type of data distribution, where one forecasting method performs best for each type of distribution. Best performance is indicated by a bold font.

The most noticeable result is the very good performance of the exponential smoothing forecasting method. This method, on average, performs best for each of the MTTF groups on both accuracy measures. This contrasts strongly with expectations, as one could expect the Holt-Winters method to outperform exponential smoothing, because Holt-Winters is a very similar but more sophisticated method. The Holt-Winters method only outperforms exponential smoothing when data follows the Beta1 distribution in group 1. Another noticeable result is the performance of the newly developed method. This method consistently produces the most accurate results for both random and Normal distributed data for each group. This can be attributed to the fact that outliers are weighted heavily when data is randomly distributed and weighted lightly when data is normally distributed, because the mean is the most observed value under a normal distribution.

The practical usability of these results should be interpreted as follows: if a spare part with a MTTF between 5 and 10 years exhibits inter-demand intervals that are Poisson distributed, then exponential smoothing should be used as the forecasting method. On average, this method yields an error of 8.30% for this type of spare part with Poisson distributed inter-demand intervals. The forecast can therefore be used to pre-empt the actual part failure with preventive maintenance, thereby reducing downtime and unexpected outages.

Table 6.8: Group 1, MTTF 5-10 year, mean absolute percentage error
MEAN ABSOLUTE % ERROR     RANDOM   NORMAL   GAMMA    BETA1    BETA2    POISSON  WEIBULL  Average

Table 6.9: Group 1, MTTF 5-10 year, root mean square error
ROOT MEAN SQUARE ERROR    RANDOM   NORMAL   GAMMA    BETA1    BETA2    POISSON  WEIBULL  Average
Moving average            622.00   296.55   533.42   417.97   409.43   324.64   362.51   343.58
Weighted moving average   599.26   285.21   521.66   402.15   393.04   313.85   351.04   332.45
Exponential smoothing     533.72   266.29   469.20   379.40   357.51   280.80   312.34   296.57
Holt-Winters              548.20   267.19   495.11   375.99   450.84   285.83   348.14   316.99
den Uijl                  532.71   254.59   473.04   475.27   449.73   283.36   355.61   319.49
Average                   562.40   270.71   493.65   402.08   402.95   294.86   340.13   317.50

Table 6.10: Group 2, MTTF 1-5 year, mean absolute percentage error
MEAN ABSOLUTE % ERROR     RANDOM   NORMAL   GAMMA    BETA1    BETA2    POISSON  WEIBULL  Average
Moving average            48.72%   23.28%   19.16%   37.18%   22.01%   15.63%   27.87%   30.07%
Weighted moving average   47.47%   22.51%   18.64%   36.29%   21.65%   15.13%   27.19%   29.31%
Exponential smoothing     45.28%   20.27%   16.36%   33.18%   19.85%   13.72%   24.56%   26.99%
Holt-Winters              45.99%   21.12%   16.98%   35.01%   20.08%   14.45%   25.55%   27.84%
den Uijl                  44.12%   20.11%   20.85%   42.59%   23.02%   13.64%   29.93%   30.14%
Average                   45.98%   21.24%   18.66%   37.19%   21.46%   14.42%   27.32%   28.91%

Table 6.11: Group 2, MTTF 1-5 year, root mean square error
ROOT MEAN SQUARE ERROR    RANDOM   NORMAL   GAMMA    BETA1    BETA2    POISSON  WEIBULL  Average
Moving average            488.73   291.66   216.59   311.78   338.84   202.64   302.49   252.57
Weighted moving average   472.20   280.35   210.41   301.13   329.59   196.61   292.57   244.59
Exponential smoothing     433.17   249.15   187.28   270.49   294.41   176.88   261.82   219.35
Holt-Winters              442.86   258.92   193.15   283.42   301.86   186.89   270.26   228.58
den Uijl                  422.04   249.01   225.30   363.91   374.97   176.56   297.66   237.11
Average                   446.84   263.24   208.58   310.97   332.01   185.97   286.34   236.15

Table 6.12: Group 3, MTTF 0.25-1 year, mean absolute percentage error
MEAN ABSOLUTE % ERROR     RANDOM   NORMAL   GAMMA    BETA1    BETA2    POISSON  WEIBULL  Average

Table 6.13: Group 3, MTTF 0.25-1 year, root mean square error
ROOT MEAN SQUARE ERROR    RANDOM   NORMAL   GAMMA    BETA1    BETA2    POISSON  WEIBULL  Average

7. Simulation set-up

To determine the optimal inventory levels under a (s,S) reorder policy with continuous reviews, we carry out a simulation project. The (s,S) reorder policy and the acquired data determine most of the simulation set-up. Because continuous review is combined with a (s,S) policy, the ordering of a part is always immediately triggered by an actual spare part failure, as only a part failure can cause the current inventory level to fall below the reorder point s. Therefore, we generate spare part failure events and consider all possible values for s and S. This approach is feasible because the number of identical spare parts in use is fairly low, at most 3. This method of simulation has the advantage that a service level can be computed for all combinations of s and S. The simulation results allow for a comparison between the achieved service levels for all (s,S) combinations and the target service level of 95%. The (s,S) combination that leads to the lowest inventory levels while achieving the target service level is therefore always the optimal inventory policy, given the target service level.

We have categorized the spare parts in exhaustive groups in Section 5.2. The groups consist of three MTTF ranges, which can be divided further according to the used reorder periods. Within each of these groups, all possible (s,S) inventory policies will be considered for 1, 2 or 3 spare parts in use.

7.1 Simulation logic

The simulation works along a timeline, generated inter-demand intervals serve as input for the simulation and these intervals determine when a part failure occurs. Therefore, the generated inter-demand intervals can be viewed as the instigators for all simulation events. If a failure event occurs, this has implications for the inventory level, spare part reordering, spare part arrivals, out of stock events and service level. Other important input is the reorder period,


the number of spare parts in use and the considered (s,S) values. Each simulation considers 300 generated inter-demand intervals and is repeated 5 times with other generated intervals. This implies that each simulation considers a period of at least 100 years, depending on the distribution of data and the MTTF group. The considered period is relatively long, to ensure that the resulting service level is accurate across repetitions.

The MTTF ranges on which the categorization is based have important implications for the simulation logic. The ranges provided are considered to be very accurate, meaning that virtually all of the inter-demand intervals for spare parts will fall within the range. Consider the following: a spare part falls within Group 1, MTTF 5-10 years, with a reorder period of the maximum 44 days. If this part fails, the (s,S) policy with continuous reviews ensures that a new part is ordered immediately, instigating the reorder period of 44 days. However, the minimum inter-demand interval for this spare part is 5 years, or 1825 days, which means that it is impossible for this part to fail within the reorder period in our simulation. Therefore, the service level will always be 100% if 1 identical spare part is in use with a (0,1) (s,S) reorder policy under continuous review. The same logic can be extended to a scenario with 2 spare parts and a (1,2) reorder policy and to a scenario with 3 spare parts and a (2,3) reorder policy. We therefore omit these inventory policies from the simulation, as the service level will always be 100%. We simulate all other values for s and S in order to determine if they can achieve the required service level of 95%. We simulate the following inventory policies for all 3 MTTF groups, all 7 distributions of data and all 4 reorder periods:

When 2 identical spare parts are in use, we consider the following (s,S) levels:

• (0,1)

• (0,2)

When 3 identical spare parts are in use, we consider the following (s,S) levels:

• (0,1)

• (0,2)

• (1,2)

• (0,3)


7.1.1 Simulation logic two identical spare parts

Let:

Variables Definition

s Reorder point

S Order-up-to point

RP Reorder period

T Time indicator

TA Failure event of part A at time T

TB Failure event of part B at time T

Tpr Place reorder at time T

Toa Order arrives at time T

ILt Inventory level at time T

Iat Inter-demand interval of part A at time T

Ibt Inter-demand interval of part B at time T

N Amount of failure events

AO Amount ordered

OOS Amount of out of stock events

SL Achieved service level

Table 7.1: Simulation logic variables

We obtain:

Initiation
• Generate random Iat0 & Ibt0
• ILt0 = S

Determine T of failure events
• TA = Iat + TA-1
• TB = Ibt + TB-1

Determine inventory level at T


• If T = Toa
  o Then ILt = ILt + AOa,b

Determine reorder time and size
• If T = TA
  • And ILt <= s
    o Then Tpr = T
    o And Toa = Tpr + RP
    o And AOa = S – ILt – AOb
• If T = TB
  • And ILt <= s
    o Then Tpr = T
    o And Toa = Tpr + RP
    o And AOb = S – ILt – AOa

Determine out of stock

• If T = TA
  • And ILt < 1
    o Then OOS = OOS + 1
• If T = TB
  • And ILt < 1
    o Then OOS = OOS + 1
• If T = TA = TB
  • And ILt < 2
    o Then OOS = OOS + 2

Determine service level

• SL = 1 – OOS / N

7.1.2 Simulation logic three identical spare parts


TA Failure event of part A at time T

TB Failure event of part B at time T

TC Failure event of part C at time T

Tpr Place reorder at time T

Toa Order arrives at time T

ILt Inventory level at time T

Iat Inter-demand interval of part A at time T

Ibt Inter-demand interval of part B at time T

Ict Inter-demand interval of part C at time T

N Amount of failure events

AO Amount ordered

OOS Amount of out of stock events

SL Achieved service level

Table 7.2: Simulation logic variables 2

We obtain:

Initiation
• Generate random Iat0, Ibt0 & Ict0
• ILt0 = S

Determine T of failure events

• TA = Iat + TA-1
• TB = Ibt + TB-1
• TC = Ict + TC-1

Determine inventory level at T

• If T = Toa
  o Then ILt = ILt + AOa,b,c

Determine reorder time and size

• If T = TA
  • And ILt <= s
    o Then Tpr = T
    o And Toa = Tpr + RP
    o And AOa = S – ILt – AOb – AOc
• If T = TB
  • And ILt <= s
    o Then Tpr = T
    o And Toa = Tpr + RP
    o And AOb = S – ILt – AOa – AOc
• If T = TC
  • And ILt <= s
    o Then Tpr = T
    o And Toa = Tpr + RP
    o And AOc = S – ILt – AOa – AOb

Determine out of stock

• If T = TA
  • And ILt < 1
    o Then OOS = OOS + 1
• If T = TB
  • And ILt < 1
    o Then OOS = OOS + 1
• If T = TC
  • And ILt < 1
    o Then OOS = OOS + 1
• If T = TA = TB
  • And ILt < 2
    o Then OOS = OOS + 2
• If T = TA = TC
  • And ILt < 2

    o Then OOS = OOS + 2

• If T = TC = TB
  • And ILt < 2
    o Then OOS = OOS + 2
• If T = TA = TB = TC
  • And ILt < 3

    o Then OOS = OOS + 3

Determine service level
• SL = 1 – OOS / N

7.2 Simulation results

Table 7.3 shows the aggregated results. The results presented in this table are the optimal values for s and S for each MTTF range, reorder period, number of identical spare parts in use and type of distribution, for a target service level of 95%. The results are as expected and, moreover, logical. As the reorder period increases, the optimal (s,S) policy also increases. This was expected because a longer reorder period leads to a higher chance of out of stock events. Furthermore, when we compare group 1 and group 3, with MTTF ranges of 5 to 10 years and 0.25 to 1 year respectively, an obvious trend can be spotted. Group 3 has a significantly lower MTTF range, which implies that spare parts fail more frequently; therefore, higher optimal (s,S) values were expected. The results are clearly aligned with this expectation throughout all 3 groups. The effect of the type of data distribution is also present, although this effect is not very pronounced. Beta1 distributed data consistently leads to the highest optimal (s,S) levels, while Beta2 and Normal distributed data lead to the lowest optimal (s,S) levels.

period is usually instigated while there is still 1 spare part available. Further, regardless of performance, the (1,2) policy can be considered superior to the (0,3) policy because it will lead, on average, to lower inventory levels. Overall, we can conclude that the results are indeed sensible and that they could cause FrieslandCampina to lower their current inventory levels, which vary from 0 to 10.

The practical implication of the results should be interpreted as follows: first, consider in which of the presented groups a real spare part falls. This can easily be achieved by checking the applicable reorder period, the number of this spare part in use and its mean time to failure. These three steps will always place the spare part in one of the presented categories, because the categories are exhaustive. When this has been done, the distribution of the inter-demand intervals should be considered; this completes the categorization and leads to the optimal inventory level for this particular spare part. If the means are not available to check the distribution of the inter-demand intervals of the real spare part, then one could always consider the Beta1 scenarios, which serve as a worst-case benchmark. Beta1 data consistently leads to the highest inventory levels and could therefore be safely applied regardless of the actual type of data distribution, while maintaining the 95% target service level.

Table 7.3: Aggregate results

Group 1: MTTF 5 - 10 year, aggregated results, target service level: 95%
Optimal (s,S) per type of data:
Reorder period   # identical parts   Random   Normal   Gamma    Poisson   Weibull   Beta1    Beta2
2 days           2                   (0,1)    (0,1)    (0,1)    (0,1)     (0,1)     (0,1)    (0,1)
2 days           3                   (0,1)    (0,1)    (0,1)    (0,1)     (0,1)     (0,1)    (0,1)
14 days          2                   (0,1)    (0,1)    (0,1)    (0,1)     (0,1)     (0,1)    (0,1)
14 days          3                   (0,1)    (0,1)    (0,1)    (0,1)     (0,1)     (0,1)    (0,1)
21 days          2                   (0,1)    (0,1)    (0,1)    (0,1)     (0,1)     (0,1)    (0,1)
21 days          3                   (0,2)    (0,1)    (0,1)    (0,1)     (0,1)     (0,1)    (0,1)
44 days          2                   (0,1)    (0,1)    (0,1)    (0,1)     (0,1)     (0,1)    (0,1)
44 days          3                   (0,2)    (0,2)    (0,2)    (0,2)     (0,2)     (0,2)    (0,2)

Group 2: MTTF 1 - 5 year, aggregated results, target service level: 95%
Optimal (s,S) per type of data:
Reorder period   # identical parts   Random   Normal   Gamma    Poisson   Weibull   Beta1    Beta2
2 days           2                   (0,1)    (0,1)    (0,1)    (0,1)     (0,1)     (0,1)    (0,1)
2 days           3                   (0,1)    (0,1)    (0,1)    (0,1)     (0,1)     (0,1)    (0,1)
14 days          2                   (0,1)    (0,1)    (0,1)    (0,1)     (0,1)     (0,1)    (0,1)
14 days          3                   (0,1)    (0,2)    (0,2)    (0,2)     (0,1)     (0,2)    (0,2)
21 days          2                   (0,1)    (0,1)    (0,1)    (0,1)     (0,1)     (0,1)    (0,1)
21 days          3                   (0,2)    (0,2)    (0,2)    (0,2)     (0,2)     (1,2)    (0,2)
44 days          2                   (0,2)    (0,2)    (1,2)    (1,2)     (0,2)     (1,2)    (0,1)
44 days          3                   (1,2)    (1,2)    (1,2)    (1,2)     (1,2)     (1,2)    (1,2)

Group 3: MTTF 0.25 - 1 year, aggregated results, target service level: 95%
Optimal (s,S) per type of data:
Reorder period   # identical parts   Random   Normal   Gamma    Poisson   Weibull   Beta1    Beta2


8. Discussion

As previous research has already suggested, it remains difficult to reach definite answers with regard to the research questions raised in the introduction. However, we demonstrate that time to failure, or time to failure ranges, can serve as a good basis for spare part categorization. Moreover, the categorization of spare parts makes the process of forecasting and determining inventory levels less cumbersome. The second research question has been extensively answered in Section 7.2; therefore, we will not expand on each possible inventory level in this section. However, it has to be noted that these results are based on generated data. Attempts have been made to reduce this problem by generating large amounts of differently distributed data, but reality offers an endless variety of spare part failure behaviour. Therefore, these results should be interpreted cautiously. Identical reasoning can be applied to the third research question. However, the high performance of the exponential smoothing method remains unaltered, and this method seems to be applicable to a wide variety of data and a wide range of different times to failure.

9. Conclusions, limitations and recommendations

We compared multiple popular forecasting methods and generated some unexpected and interesting results. The most notable result is the consistently high performance of the exponential smoothing method, which outperformed the moving average, weighted moving average and Holt-Winters methods in all tests except one. The performance of the new forecasting method is another noticeable result. The new method outperforms all well-established forecasting methods for all MTTF groups with randomized and Normal distributed data, which is a surprising and satisfying result considering the simplicity and novelty of this forecasting method.

Further, the lack of part specific time to failure data is an obvious limitation of this study, which we will come back to later in this section; however, it also allowed for the use of an exhaustive categorization scheme. The part categorization made both the forecasting and simulation models less cumbersome. Moreover, it made it possible to apply a novel simulation model, wherein all possible inventory levels within a (s,S) policy with continuous reviews were simulated. The achieved inventory levels were sensible and logical, indicating that the model is functioning well.

Therefore, the models can be used by a wide variety of firms that do not have the means to collect historical data. Furthermore, the approach presented in this thesis for situations in which real data is lacking is, to our knowledge, not yet present in academic literature. Herein lie both a valuable practical and a valuable theoretical contribution of this thesis.

However, this thesis also has some obvious limitations. The most notable limitation is the lack of part specific historical failure data. Even though this limitation has been overcome, a problem still remains, namely the lack of recommendations for inventory levels and forecasted failures for specific parts. Instead, the recommendations are based on the categorized groups, filled with generated data. It should be clear that both the faith in and the accuracy of the considered models would be significantly higher if real historical data had been available. A second limitation is connected to the new forecasting method, namely the threat of overfitting, because we have not used extensive back-testing to ensure that this problem is not present. A third limitation can be found in the number of identical spare parts considered, which is three at most. The simulation model can be extended to more than three identical spare parts; however, this will have a major impact on computation time, because the number of (s,S) inventory levels to consider will increase rapidly. Lastly, the number of repetitions used in the simulation could be increased to make the findings more robust.

10. References

Alfredsson, P. 1997. Optimization of multi-echelon repairable item inventory systems with simultaneous location of repair facilities. European Journal of Operational Research, 99(3): 584–595.

Allen, P. G. & Morzuch, B. J. 1995. Comparing probability forecasts derived from theoretical distributions. International Journal of Forecasting, 11(1): 147-157.

Axsäter, S. 2006. Inventory control. Second edition. International Series in Operations Research & Management Science, Springer, London, UK.

Blanco, N. & Gureckis, T. 2013. Does category labelling lead to forgetting? Cognitive Processing, 14(1): 73-79.

Boylan, J. E. & Syntetos, A. A. 2008. Forecasting for inventory management of service parts. In: K. A. H. Kobbacy & D. N. P. Murthy, Complex system maintenance handbook, pp. 479–506. Springer London, UK.

Boylan, J. E., Syntetos, A. A. & Karakostas, G. C. 2008. Classification for forecasting and stock control: A case study. Journal of the Operational Research Society, 59(4): 473-481.

Campbell, J. D., Jardine, A. K. S. & McGlynn, J. 2011. Optimizing equipment life-cycle decisions. Asset Management Excellence, Taylor & Francis Group, London, UK.

Chakravarty, A. K. 1981. Multi-item inventory aggregation into groups. Journal of the Operational Research Society, 32(1): 19-26.

Chen, X., Pang, Z. & Pan, L. N. 2014. Coordinating inventory control and pricing strategies for perishable products. Operations Research, 62(2): 284-300.

Clements, M. P. 2012. Forecast uncertainty – ex ante and ex post: U.S. inflation and output growth. Journal of Business and Economic Statistics, 32(2): 206-216.

Cohen, M., Kleindorfer, P., Lee, H. & Pyke, D. 1992. Multi-item service constrained (s, S) policies for spare parts logistics systems. Naval Research Logistics, 39(4): 561–577.

Croston, J. D. 1972. Forecasting and stock control for intermittent demands. Operational Research Quarterly, 23(3): 289-303.

Diaz, A. & Fu, M. 1997. Models for multi-echelon repairable item inventory systems with limited repair capacity. European Journal of Operational Research, 97(1): 480–492.

Eaves, A. H. C. & Kingsman, B. G. 2004. Forecasting for the ordering and stock-holding of spare parts. The Journal of the Operational Research Society, 55(4): 431-437.

Faustino, C. P., Novaes, C. P., Pinheiro, C. A. M. & Carpinteiro, O. A. 2011. Improving the performance of fuzzy rules-based forecasters through application of FCM algorithm. Artificial Intelligence Review, 41(2): 287-300.

Federgruen, A. & Zheng, Y. 1992. An efficient algorithm for computing an optimal (r,Q) policy in continuous review stochastic inventory systems. Operations Research, 40(4): 808-813.

Gelper, S., Fried, R. & Croux, C. 2010. Robust forecasting with exponential and Holt-Winters smoothing. Journal of Forecasting, 29(3): 285-300.

Ghobbar, A. A. & Friend, C. H. 2003. Evaluation of forecasting methods for intermittent parts demand in the field of aviation: A predictive model. Computers and Operational Research, 30(14): 2097-2114.

Heinecke, G., Syntetos, A. A. & Wang, W. 2011. Forecasting-based SKU classification. International Journal of Production Economics, 143(2): 455-462.

Holt, C. C. 2004. Forecasting seasonals and trends by exponentially weighted moving averages. International Journal of Forecasting, 20(1): 5-10.

Huiskonen, J. 2001. Maintenance spare parts logistics: Special characteristics and strategic choices. International Journal of Production Economics, 71(1–3): 125–133.

Jasper, J. B. 2006. Quick response solutions. FedEx Critical Inventory Logistics Revitalized, Memphis FedEx, USA.

Kabir, A. B. M. Z. & Al-Olayan, A. S. 1996. A stocking policy for spare part provisioning under age-based preventive replacement. European Journal of Operational Research, 90(1): 171–181.

Kalchschmidt, M., Verganti, R. & Zotteri, G. 2006. Forecasting demand from heterogeneous customers. International Journal of Operations & Production Management, 26(6): 619– 638.

Kourentzes, N., Petropoulos, F. & Trapero, J. R. 2014. Improving forecasting by estimating time series structural components across multiple frequencies. International Journal of

Levén, E. & Segerstedt, A. 2004. Inventory control with a modified Croston procedure and Erlang distribution. International Journal of Production Economics, 90(3): 361-367.

Van Kampen, T. J., Akkerman, R. & van Donk, D. P. 2011. SKU classification: A literature review and conceptual framework. International Journal of Operations & Production Management, 32(7): 850-876.

Kennedy, W. J., Wayne Patterson, J. & Fredenhall, L. D. 2001. An overview of recent literature on spare parts inventories. International Journal of Production Economics, 76(2): 201-215.

Kostenko, A. V. & Hyndman, R. J. 2006. A note on the categorization of demand patterns. Journal of the Operational Research Society, 57(10): 1256–1257.

Laggoune, R., Chateauneuf, A. & Aissani, D. 2009. Opportunistic policy for optimal preventive maintenance of a multi-component system in continuous operating units. Computers and Chemical Engineering, 33(9): 1499-1510.

Lengu, D., Syntetos, A. A. & Babai, M. Z. 2014. Spare parts management: linking distributional assumptions to demand classification. European Journal of Operational Research, 235(3): 624-635.

Moinzadah, K. & Lee, H. L. 1986. Batch size and stocking levels in multi-echelon repairable systems. Management Science, 32(12): 1567–1581.

Molenaers, A., Baets, H., Pintelon, L. & Waeyenbergh, G. 2012. Criticality classification of spare parts: A case study. International Journal of Production Economics, 140(2): 570-578.

Moon, S., Hicks, C. & Simpson, A. 2012. The development of a hierarchical forecasting method for predicting spare parts demand in the South Korean navy: A case study. International Journal of Production Economics, 140(2): 794-802.

Moore, R. 1996. Establishing an inventory management program. Plant Engineering, 50(3): 113–116.

Muchiri, P. N., Pintelon, L., Martin, H. & Chemweno, P. 2014. Modelling maintenance effects on manufacturing equipment performance: results from simulation analysis. International Journal of Production Research, 52(11): 3287-3302.

Porras, E. & Dekker, R. 2008. An inventory control system for spare parts at a refinery: An empirical comparison of different re-order point methods. European Journal of Operational Research.

Romeijnders, W., Teunter, R. & van Jaarsveld, W. 2012. A two-step method for forecasting spare parts demand using information on component repairs. European Journal of Operational Research, 220(2): 386-393.

Strijbosch, L. W. G., Heuts, R. M. J. & van der Schoot, E. H. M. 2000. A combined forecast – inventory control procedure for spare parts. Journal of the Operational Research Society, 51(10): 1184-1192.

Syntetos, A. A., Babai, M. Z. & Altay, N. 2012. On the demand distributions of spare parts. International Journal of Production Research, 50(8): 2101-2117.

Syntetos, A. A., Boylan, J. E. & Croston, J. D. 2005. On the categorization of demand patterns. Journal of the Operational Research Society, 56(5): 495–503.

Syntetos, A. A. & Boylan, J. E. 2005. The accuracy of intermittent demand estimates. International Journal of Forecasting, 21(2): 303-314.

Syntetos, A. A. & Boylan, J. E. 2010. On the variance of intermittent demand estimates. International Journal of Production Economics, 128(2): 546-555.

Teunter, R. H. & Duncan, L. 2009. Forecasting intermittent demand: A comparative study. Journal of the Operational Research Society, 60(3): 321-329.

Teunter, R. H., Syntetos, A. A. & Babai, M. Z. 2010. Determining order-up-to levels under periodic review for compound binomial (intermittent) demand. European Journal of Operational Research, 203(3): 619-624.

Vrat, S. P. 1984. A location–allocation model for a two–echelon repair inventory system. IIE Transactions, 16(3): 222–228.

Wang, W. & Syntetos, A. A. 2011. Spare parts demand: Linking forecasting to equipment. Logistics and Transportation Review, 47(6): 1194-1209.

Willemain, R. T., Smart, C. N., Shocker, J. H. & DeSautels, P. A. 1994. Forecasting intermittent demand in manufacturing: a comparative evaluation of Croston’s method.

11. Appendix

11.1 Graphs of generated data Group 3, MTTF 0.25 – 1 year

Graph 11.1: Randomly distributed data group 3, histogram

Graph 11.3: Gamma distributed data group 3

Graph 11.5: Weibull distributed data group 3

Graph 11.7: Beta2 distributed data group 3 (un-scaled)

Graph 11.2: Normal distributed data group 3

Graph 11.4: Poisson distributed data
