
University of Groningen

Master Thesis

Condition-Based Maintenance for Multi-Component Systems with Lead Time and Capacity Planning

Author: Morris Beek

Supervisor: Dr. B. de Jonge

A thesis submitted in partial fulfillment of the requirements for the MSc degree in the Faculty of Economics and Business

Master's Thesis Econometrics, Operations Research and Actuarial Studies

Supervisor: Dr. B. de Jonge

University of Groningen

Abstract

Condition-Based Maintenance for Multi-Component Systems with Lead Time and Capacity Planning

by Morris Beek


Contents

1 Introduction
2 Literature review
3 Problem definition
4 Markov decision process
    4.1 State Space
    4.2 The roadmap of the MDP
        4.2.1 Preventive maintenance
        4.2.2 Preparation decision
        4.2.3 Deterioration and corrective maintenance
            4.2.3.1 Deterioration
            4.2.3.2 Corrective maintenance
5 Solution methodology
    5.1 CBM policy with lead time
    5.2 TBM policy
6 Results
    6.1 Analysis of the base case
        6.1.1 Comparison of a CBM and a BBM policy
    6.2 Analysis of the lead time
    6.3 Analysis of the number of components
    6.4 Varying the cost components
        6.4.1 Analysis of the set-up cost
        6.4.2 Analysis of corrective maintenance cost
7 Conclusion
References


List of Figures

1 Illustration of the dynamic process at decision epochs t and t+1.
2 The three transitions from x = (2, 1) to x̃ = (0, 2, 1).
3 Illustration of the value iteration algorithm.
4 Cost per unit of time under BBM policy as a function of T.
5 Comparison of cost per unit of time under CBM and TBM policy for varying L.
6 Comparison of cost per unit of time under CBM and TBM policy for varying n.
7 Comparison of cost per unit of time under CBM and TBM policy for varying c_s.
8 Comparison of cost per unit of time under CBM and TBM policy for varying c_cm.

List of Tables

1 Total number of deterioration states of the system for m = 4, L = 3 and different values of n.
2 Base case parameter selection.
3 Optimal action d(x) for different x, where l = c = 0, for the base case.
4 Optimal action d(x) for different x, where l = c = 0, for different values of L.
5 Optimal action distribution for different values of L.
6 Optimal action d(x) for different x, where l = c = 0, for varying n.
7 Optimal action d(x) for different x, where l = c = 0, for different values of c_s.
8 Optimal actions for different states for different values of c_cm.


1 Introduction

Maintenance planning and performance have become increasingly important due to the growing amount of machinery in production processes. Costs corresponding to maintenance are often a significant part of total costs. The yearly maintenance expenditure in the railway sector, for example, amounts to about 15-20% of total costs (Dutch Railway Company [8] and German Railway Company [10]). Since the exact condition of a machine is often unknown, companies such as the Dutch and German railway companies have used static, time-dependent maintenance planning schedules, i.e. time-based maintenance (TBM) policies. A TBM policy is defined as performing maintenance based on fixed time intervals or on the age of a machine. The only input that is required for this is calendar or usage time, which makes implementation relatively easy and cheap.

The amount of condition data that can be collected from different machines has increased over the years (McKinsey & Company [14]). It has become possible to determine the quality and the remaining useful lifetime of a component more accurately and to use the degradation information to plan maintenance activities. This is defined as condition-based maintenance (CBM), which has been shown to have a greater impact on maintenance costs than TBM. De Jonge et al. [7] study the difference in cost savings between TBM and CBM and show that CBM is more cost efficient, although this difference depends on a number of factors, such as a required planning time, imperfect condition information and an uncertain level at which failure occurs. With the increasing availability of data, the effects of imperfect condition feedback and uncertain failure levels are reduced and CBM can become even more cost efficient. However, a planning time still needs to be considered.


Research on both CBM and TBM policies initially focused on single-component systems. Nowadays, due to increased automation, manufacturing processes are more complex and the focus has switched to multi-component systems (Olde Keizer et al. [15]). Thomas [18] analyses systems with multiple components and concludes that maintenance policies for single-component systems rarely carry over to larger multi-component systems. Additional difficulties that arise are, for example, deciding which components to maintain and identifying the components that have failed.

In this paper, we consider a condition-based maintenance policy for a multi-component system where a fixed planning or lead time is required between initiating and performing preventive maintenance. Furthermore, at the start of the lead time, the number of components that will be maintained needs to be specified as well. We assume that a set-up cost has to be paid for preventive maintenance, which results in economic dependence between components. We analyse this problem by formulating it as a Markov decision process. We modify the notation of the state space, thereby allowing us to solve the problem exactly for relatively large systems. The resulting policy is then compared to a TBM strategy to elaborate on the difference between CBM and TBM.


2 Literature review

We start by discussing existing research that is relevant and related to the problem. A lead time between initiating and performing preventive maintenance is considered in different forms. Firstly, it might be required to make sure that repairmen or the correct spare parts are available. The scarce availability of repairmen is, for example, considered by Marseguerra et al. [13] and Tan et al. [17]. Joint optimisation of inventory and maintenance is studied by multiple authors; we refer to Van Horenbeek et al. [20] for a review. Both are a form of resource dependency, as introduced by Olde Keizer et al. [15], where several components are connected through a limited availability of resources, such as repairmen or spare parts. However, these studies do not consider the effects of a fixed time between initiating and performing preventive maintenance, which may for example be necessary to reserve a workplace, and this reduces their practical applicability.

A second reason for postponing preventive maintenance is to extend the lifetime of a component or to prevent disruption of the production plan due to ineffectively planned downtime. Berrade et al. [2] and Van Oosterom et al. [22] consider a delay time model, where the delay time is defined as the time lapse from the occurrence of a defect up until failure. They analyse whether it is cost effective to postpone preventive maintenance after detection of a defect. In these studies, the parameter that is optimised is how long preventive maintenance is postponed. Van Oosterom et al. [22] model the preventive maintenance cost as a non-increasing function of the postponement duration and find that if the delay is deterministic, significant cost savings can be attained by planning in advance. They extend the analysis to random delay times and conclude that benefits of postponement still exist. Berrade et al. [2] explore conditions that make postponement in the delay time model cost effective. They find that postponement is cost effective if an inspection is reliable, sufficient maintenance opportunities exist and the corrective maintenance cost is not too high.


Bérenguer et al. [1] study how the maintenance policy influences the asymptotic availability of the system. They compare two values for the delay time between the threshold and the execution of preventive maintenance and find that longer delay times increase unavailability. Grall et al. [11] use a similar setting and analyse the asymptotic behaviour of the reliability function, but do not analyse the effect of the required delayed preventive maintenance.

De Jonge et al. [7] study, among other things, how a required planning time for a single-component system that deteriorates according to a gamma process affects the cost benefits of CBM over TBM. They conclude that the preventive maintenance threshold decreases and that the cost per unit of time increases for longer required planning times. Bouvard et al. [4] consider a multi-component system where a minimal time between two preventive maintenance events is required, but do not study the effect of this minimal time.

The contribution of the research in this paper is twofold. Firstly, we consider a multi-component system with economic dependence and with a required preparation time between initiating and performing preventive maintenance. To the best of our knowledge, no existing papers study the effects of a lead time in multi-component systems. The second contribution is our suggestion for the representation of the state space of the Markov decision process. This representation decreases the size of the state space, which allows us to find exact solutions for relatively large systems.

3 Problem definition

to the probability of a component transitioning from state i to state j. Without maintenance, each component can only deteriorate, and hence the matrix P is upper triangular. To restore the quality of a component, each of the n components can be subject to maintenance. This can be either preventive maintenance (PM) or corrective maintenance (CM). Both PM and CM restore a component to the ‘as-good-as-new’ state and take a negligible amount of time. Failures of components are self-announcing, and when a failure occurs, CM is performed immediately at high cost. This is due to the unexpected nature of the failure: it results in higher costs corresponding to an emergency delivery of parts and potential unforeseen downtime. PM is planned maintenance and is therefore less expensive. It is performed if a component is not in the failed state. There is a fixed lead time, denoted by L, between initiating and carrying out a preventive maintenance action.

During each time period, it can be decided to schedule a preventive maintenance event, and it is then also determined how many components can be maintained. The number of components that will be maintained at the next preventive maintenance event is denoted by a. We assume that a preventive maintenance action can only be scheduled if no other preventive maintenance event is already planned. This is reasonable if the times between maintenance events are significantly larger than the lead time. During the lead time, the system will continue to be utilised and will thus be subject to further deterioration. This should be taken into consideration when deciding on the number of components to maintain. When more components are in deteriorated states than has been prepared for, we only carry out maintenance on the most deteriorated components. At an event only PM can be done, since CM is carried out immediately and thus happens between time periods.


cost c_cm. This cost is higher than the cost for PM. There is no set-up cost for CM, since failures occur randomly and will therefore never coincide; the preparation costs are included in c_cm. The objective is to solve the problem by finding a maintenance policy that minimizes the average cost per unit of time.

4 Markov decision process

We will formulate the dynamic problem described in Section 3 as a Markov decision process (MDP). The definition of the MDP is based on De Jonge [6], Puterman [16] and Tijms [19]. An MDP is a sequential stochastic decision process and can be applied to a wide variety of problems, including maintenance. An MDP is defined by five elements, namely the set of decision epochs, the state space, the action space, the transition probabilities and the cost function. Since we are dealing with a discrete-time setting, we consider a discrete-time MDP, in which decisions are made at each epoch; the epochs are fixed, equidistant points in time. At each epoch the process can be in one of a discrete number of states, in which a discrete number of actions can be chosen. The transition probability function gives the probability of moving from a state to a future state when an action is chosen. The reward or cost function gives the cost corresponding to a transition when taking an action. The objective, on which the decisions are based, is to minimize the average cost per unit of time. An MDP has two important properties. Firstly, the future development of the system only depends on the current state of the system and the current decision that is made; it is independent of the history of decisions and states. Furthermore, in the current state, all necessary information for making a decision is present.


4.1 State Space

We define S as the finite discrete set of possible states. A specific state of the system is denoted by s ∈ S. One state has to capture the degradation levels of all components, the remaining lead time until the next preventive maintenance event and the number of components that will be maintained at this maintenance event. Note that the state of a component will never be m+1, since CM is performed immediately and takes negligible time. Generally, the degradation level of all components in an MDP related to a maintenance problem is defined as a vector, where each element is the deterioration state of that specific component. We call this the flat representation. However, if the components are identical, this flat representation is inefficient for two reasons. Firstly, the number of different possible states is m^n, which increases exponentially in n; solving for large n would already consume a lot of time just to enumerate the state space. Secondly, it is inefficient because it contains redundant information. We are interested in the number of components per deterioration level and not in the deterioration state of each specific component, since all components are identical. Therefore, we define a compact approach, in which we record the number of components in each of the deterioration states. We can use combinatorial mathematics to find the total number of states: it amounts to computing in how many ways n identical objects (components) can be distributed among m different buckets (deterioration states). Feller [9] shows this to be equal to

\binom{n+m-1}{m-1} = \frac{(n+m-1)!}{n!\,(m-1)!}.   (1)

We define x = (x_1, x_2, \ldots, x_m) as the deterioration levels of the components in the compact representation, where x_i is the number of components in deterioration state i, with i = 1, 2, \ldots, m. The sum of the number of components over all states has to equal the total number of components. We define the set of all possible x as X, which is equal to

X = \left\{ x = (x_1, \ldots, x_m) : x_i \in \mathbb{N}_0, \; i = 1, \ldots, m, \; \sum_{i=1}^{m} x_i = n \right\}.   (2)

We let l denote the remaining time until the next preventive maintenance event, which can never be larger than the lead time L. The value l = 0 means that no PM event is scheduled. We let c denote the maximum number of components that can be restored to the ‘as-good-as-new’ state at the next maintenance event. It is never optimal to prepare for more than n components and therefore c will never be larger than n. Furthermore, during a maintenance event at least one component has to be maintained. An event will take place if l = 0 and c > 0.

The state space S can be defined as

S = \left\{ s = (x, l, c) : x \in X, \; l \in \{0, 1, \ldots, L\}, \; c \in \{0, 1, \ldots, n\} \right\}.   (3)

In Table 1 we compare the total number of states for m = 4 and L = 3 for a varying number of components. It is computed by multiplying ((L+1) × n) + 1 either by m^n or by Equation (1), for the flat and the compact representation, respectively. For each of the deterioration vectors, all combinations of l and c > 0 exist, hence the factor (L+1) × n; in addition, all states also exist for l = c = 0. The compact method reduces the number of states drastically, especially for larger n. We note that the drawback of using the compact representation is that it becomes more complicated to determine the transition probabilities, as we will see in Section 4.2.3. However, the transition probabilities have to be computed only once, and hence we will use the compact method to define the problem, as it allows us to solve the problem exactly for larger n.

Tab. 1: Total number of deterioration states of the system for m=4, L=3 and different values of n

n Flat Compact Reduction (%)

2 144 90 37.5

4 4352 595 86.3

6 102400 2100 98.0

8 2162688 5445 99.8
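To make the counting argument concrete, the following Python sketch (not part of the thesis; names are illustrative) enumerates the compact deterioration vectors for given n and m and reproduces the totals of Table 1 using Equation (1) and the factor ((L+1) × n) + 1.

```python
from itertools import combinations_with_replacement
from math import comb

def compact_states(n, m):
    """All vectors (x_1, ..., x_m) of non-negative integers summing to n."""
    states = set()
    # Assign each of the n identical components to one of the m levels,
    # then count how many components ended up in each level.
    for assignment in combinations_with_replacement(range(m), n):
        x = [0] * m
        for level in assignment:
            x[level] += 1
        states.add(tuple(x))
    return states

def total_states(n, m, L, flat):
    """Total number of MDP states: deterioration part times the (l, c) part."""
    deterioration = m ** n if flat else comb(n + m - 1, m - 1)   # Equation (1)
    return deterioration * ((L + 1) * n + 1)

n, m, L = 4, 4, 3
assert len(compact_states(n, m)) == comb(n + m - 1, m - 1)
print(total_states(n, m, L, flat=True), total_states(n, m, L, flat=False))
# Expected output for n = 4: 4352 595, matching Table 1.
```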

4.2 The roadmap of the MDP

Each decision epoch consists of three phases: preventive maintenance, the preparation decision, and deterioration with corrective maintenance, respectively denoted by I, II and III in Figure 1. Note that I and II take place within a decision epoch and III defines the transition to the subsequent decision epoch, where the same process starts over again.

Fig. 1: Illustration of the dynamic process at decision epochs t and t+1 (the state moves from s via s' and s'' to s''' through phases I, II and III).

Each of the phases is characterized by its own action space, its own cost function and its own transition probabilities. Furthermore, each phase has an individual output; these are denoted by s', s'' and s''', respectively. The interpretation of s''' is equal to that of s, aside from the fact that s''' belongs to the subsequent decision epoch, where it serves as s again in the next iteration. A more detailed description of these phases is given below.

I: Preventive maintenance. If a preventive maintenance event is due, it is executed in this phase. This results in a deterministic transition from s to s'.

II: Preparation decision. A decision whether to start preparing a new maintenance event, and for how many components, is made in phase II. It is only possible to prepare a new maintenance event if no event is currently being prepared. This results in a deterministic transition from s' to s''.

III: Deterioration and corrective maintenance. The system deteriorates and, if necessary, corrective maintenance is performed. The system transitions stochastically from s'' to s'''.


4.2.1 Preventive maintenance

Upon observing s, preventive maintenance is performed only if a maintenance event is due at that specific decision epoch. The action thus depends on l and c. A maintenance event is due if l = 0 and c > 0. PM is then performed on the c components that are most deteriorated; these components are restored to the ‘as-good-as-new’ state. Note that it can happen that components in the ‘as-good-as-new’ state are maintained; such maintenance actions have no effect. The cost of preventive maintenance is incurred when scheduling, not when performing, a preventive maintenance event. Therefore, no cost is incurred in this phase. We adapt x' ∈ X accordingly. We introduce the transformation function r(x, c), which specifies the number of components per deterioration level after carrying out preventive maintenance on c components, when the current deterioration levels are specified by x. It is equal to

r(x, c) = \big( r_1(x, c), r_2(x, c), \ldots, r_m(x, c) \big),   (4)

where a single element r_i(x, c) is defined, for i = 1, \ldots, m, by

r_i(x, c) =
\begin{cases}
\min\{x_1 + c, \, n\}, & \text{if } i = 1, \\
\max\big\{ x_i - \max\{ c - \sum_{j=i+1}^{m} x_j, \, 0 \}, \, 0 \big\}, & \text{if } i = 2, \ldots, m-1, \\
\max\{x_m - c, \, 0\}, & \text{if } i = m.
\end{cases}   (5)

To illustrate, consider a system with n = 6 and m = 3, where x = (2, 2, 2) and c = 3; then r(x, c) = (5, 1, 0). In this example, the number of components for which preventive maintenance was prepared is smaller than the number of deteriorated components. If instead x = (4, 2, 0), then r(x, c) = (6, 0, 0). In the case that either no preventive maintenance event is scheduled or an event is still in preparation, no PM is performed and no cost is incurred. If an event is still in preparation, the remaining lead time l is reduced by 1 and c remains the same. If no event is scheduled, nothing changes.
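As an illustration of Equation (5), the following Python sketch (not from the thesis; the function name r is reused only for readability) applies preventive maintenance to the c most deteriorated components and reproduces the two examples above.

```python
def r(x, c):
    """Deterioration vector after preventive maintenance on the c most
    deteriorated components, following Equation (5)."""
    m, n = len(x), sum(x)                     # sum(x) equals the number of components
    out = [0] * m
    out[m - 1] = max(x[m - 1] - c, 0)         # last case of Equation (5)
    for i in range(m - 2, 0, -1):             # middle cases, worst levels first
        worse = sum(x[i + 1:])                # components in more deteriorated levels
        out[i] = max(x[i] - max(c - worse, 0), 0)
    out[0] = min(x[0] + c, n)                 # first case: maintained components become new
    return tuple(out)

print(r((2, 2, 2), 3))   # (5, 1, 0)
print(r((4, 2, 0), 3))   # (6, 0, 0)
```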


No decision is made in this phase; the occurrence of an event is captured in s. The transition probability function can be defined by

P_I(s' \mid s) =
\begin{cases}
1, & \text{if } l = l' = 0, \; c = c' = 0, \; x' = x, \\
1, & \text{if } l > 1, \; l' = l - 1, \; c' = c > 0, \; x' = x, \\
1, & \text{if } l = 1, \; l' = 0, \; c > 0, \; c' = 0, \; x' = r(x, c), \\
0, & \text{otherwise}.
\end{cases}   (6)

4.2.2 Preparation decision

In this phase, the system moves from s' to s''. The decision is made whether preventive maintenance is scheduled and for how many components it will be prepared. The set of all possible decisions is called A and equals

A = \{0, 1, \ldots, n\}.   (7)

We let a ∈ A be a specific decision. It denotes the number of components for which maintenance will be planned. If a = 0, then no maintenance will be prepared. The costs of preventive maintenance are incurred when planning an event and are thus incurred in this phase. The cost function C_{II}(s', a) is equal to

C_{II}(s', a) = \mathbb{1}_{a>0} \cdot c_s + a \cdot c_{pm} + \mathbb{1}_{a>0} \cdot \mathbb{1}_{c'>0} \cdot M.   (8)

In this function, we use two indicator functions. The indicator \mathbb{1}_{a>0} indicates whether the set-up cost has to be incurred. Furthermore, because at most one preventive maintenance event can be scheduled at any time, a very high cost M is incurred if a new PM event is scheduled while another event is being prepared, which is indicated by \mathbb{1}_{c'>0}.


P_{II}(s'' \mid s', a) is, just as in phase I, a deterministic transition and is equal to

P_{II}(s'' \mid s', a) =
\begin{cases}
1, & \text{if } a = 0, \; l'' = l', \; c'' = c', \; x'' = x', \\
1, & \text{if } a = c'' > 0, \; l' = 0, \; c' = 0, \; l'' = L, \; x'' = x', \\
0, & \text{otherwise}.
\end{cases}   (9)

4.2.3 Deterioration and corrective maintenance

The last phase incorporates two transitions and is split up accordingly. It consists of a deterioration phase, in which components may fail or move to the same or a worse state, and a corrective maintenance phase, in which, if necessary, CM is performed. We stated that failure of a component can happen at any time and has to be repaired immediately. For mathematical convenience, however, we allow the possible failures to be aggregated until the corrective maintenance phase and thus repair all failed components at once. In practice, they will never coincide, which is also the reason that no economic dependence between failed components exists.

Because components can be in the failed state m+1 in the intermediate stage of this phase, we introduce the set X̃, in which failed components are allowed in the representation and which is used as an intermediate stage before corrective maintenance is performed. X̃ is defined by

\tilde{X} = \left\{ \tilde{x} = (\tilde{x}_1, \ldots, \tilde{x}_{m+1}) : \tilde{x}_i \in \mathbb{N}_0, \; i = 1, \ldots, m+1, \; \sum_{i=1}^{m+1} \tilde{x}_i = n \right\}.   (10)

Within the deterioration phase, the system moves from x'' ∈ X to x̃ ∈ X̃. The subsequent corrective maintenance phase then moves the system from x̃ ∈ X̃ to x''' ∈ X, both leaving l'' and c'' unaltered.

4.2.3.1 Deterioration

In the flat representation, it could for example be that component one is in the second deterioration state and component two is in the most worn state, or vice versa. Both would yield the exact same compact state. For the current state this is not a problem, since the order of the components is not important. For a future state, however, it can be that components make different transitions, yielding the same future compact state. The probability of a transition from a compact state to another compact state is the sum of the probabilities of all transitions that can reach the same future compact state. To find all possibilities, we introduce the function f(x), which specifies the deterioration level for each component separately, if the current deterioration levels are specified by x or x̃. It thus rewrites the compact representation into the flat representation and is equal to

f(x) = \big( f_1(x), f_2(x), \ldots, f_n(x) \big).   (11)

Each element f_i(x) is then, for i = 1, \ldots, n, defined by

f_i(x) = \min\left\{ v \in \{1, 2, \ldots, |x|\} : \sum_{j=1}^{v} x_j \geq i \right\},   (12)

where |x| is the number of elements in x, which is either m or m+1 depending on whether the input is x or x̃. To illustrate this, assume a system with n = 3, m = 2 and x̃ = (0, 2, 1). This means that zero components are new, two components are in deterioration state 2 and one component is in the failed state 3. In the flat representation it is then denoted as f(x̃) = (2, 2, 3), where components one and two are in deterioration state 2 and component three is in the failed state.
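The mapping of Equation (12) simply lists each deterioration level as often as the number of components it contains. A short Python sketch (illustrative, not the thesis's implementation) that is equivalent to Equation (12):

```python
def f(x):
    """Flat representation of a compact state x, cf. Equation (12):
    level i is repeated x_i times, so component i receives the smallest
    level whose cumulative count reaches i."""
    flat = []
    for level, count in enumerate(x, start=1):
        flat.extend([level] * count)     # 'count' components sit in this level
    return tuple(flat)

print(f((0, 2, 1)))   # (2, 2, 3)
```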

A multiset is a generalisation of a set in which identical elements can appear more than once. The number of identical instances of an element is called the multiplicity of that specific element. In the example, f(x̃) would be a multiset with two distinct values, namely two and three, where the multiplicity of two is two and the multiplicity of three is one, since they appear twice and once, respectively. The multiset would then be {2, 2, 3}. In general, for a multiset f(x̃), the distinct values are the deterioration levels that currently contain at least one component, and the multiplicity of each element i ∈ {1, \ldots, m+1} is equal to x̃_i. Brualdi [5] defines a multiset permutation as an ordered arrangement of the elements of a multiset, where each element appears exactly as often as its multiplicity. A simple example is an anagram of a word having some repeated letters: identical letters can be interchanged without changing the word. The set of all multiset permutations for f(x̃) = (2, 2, 3) is then

\big\{ (2, 2, 3), (2, 3, 2), (3, 2, 2) \big\}.   (13)

Brualdi [5] defines the method of finding the total number of multiset permutations. It is computed using the multinomial coefficient, which is a generalisation of the binomial coefficient. For a multiset with m distinct elements, where the multiplicity of element i is equal to k_i for i = 1, \ldots, m, the multinomial coefficient, and thus the total number of multiset permutations, is

\binom{\sum_{i=1}^{m} k_i}{k_1, k_2, \ldots, k_m} = \frac{\left( \sum_{i=1}^{m} k_i \right)!}{k_1! \, k_2! \cdots k_m!}.   (14)

Applying this to a multiset f(x̃), the deterioration levels 1 to m+1 are the distinct elements. The multiplicity of each distinct element is then given by x̃_i, for i = 1, \ldots, m+1. The total number of elements is equal to the number of components n. We let z(x̃) denote the function that computes the total number of multiset permutations for a specific x̃ ∈ X̃. It is defined by

z(\tilde{x}) = \frac{n!}{\prod_{i=1}^{m+1} \tilde{x}_i!}.   (15)

If again x̃ = (0, 2, 1) and f(x̃) = (2, 2, 3), then the number of multiset permutations is equal to

z\big( (0, 2, 1) \big) = \frac{3!}{0! \, 2! \, 1!} = 3.   (16)

Deterioration levels that contain no components are not a problem, since 0! = 1 by definition. Now suppose x'' = (2, 1) and x̃ = (0, 2, 1) in the compact representation, which corresponds to a system with n = 3 and m = 2; then f(x'') = (1, 1, 2) and f(x̃) = (2, 2, 3). The total probability is the sum of the probabilities of moving to each of the multiset permutations. Figure 2 shows the three different transitions for the given example.

Fig. 2: The three transitions from x = (2, 1) to x̃ = (0, 2, 1).

We define the set of all multiset permutations of f(x̃) as in Equation (13), where f^u(x̃) is a specific multiset permutation, with u = 1, \ldots, z(x̃). In addition, f_i^u(x̃) is the i-th element of the vector f^u(x̃). The transition probability of moving to a single multiset permutation u is the product of the probabilities of each of the transitions from f_i(x'') to f_i^u(x̃), which we obtain from the transition matrix P for one component; each factor is P[f_i(x''), f_i^u(x̃)] for i = 1, \ldots, n. The transition probability for one multiset permutation is then P_{IIIa}(f^u(x̃) | f(x''), u) and is equal to

P_{IIIa}\big( f^u(\tilde{x}) \mid f(x''), u \big) = \prod_{i=1}^{n} P\big[ f_i(x''), f_i^u(\tilde{x}) \big].   (17)

By summing this over all u, we obtain the transition probability P_{IIIb}(\tilde{x} \mid x''), which is equal to

P_{IIIb}(\tilde{x} \mid x'') = \sum_{u=1}^{z(\tilde{x})} \prod_{i=1}^{n} P\big[ f_i(x''), f_i^u(\tilde{x}) \big].   (18)

To illustrate, the example in Figure 2 gives the following. For Figure 2a we obtain P[1, 2]P[1, 2]P[2, 3], for Figure 2b we obtain P[1, 2]P[1, 3]P[2, 2], and for Figure 2c we obtain P[1, 3]P[1, 2]P[2, 2]. The total transition probability from x'' = (2, 1) to x̃ = (0, 2, 1) is thus equal to

P_{IIIb}\big( \tilde{x} = (0, 2, 1) \mid x'' = (2, 1) \big) = P[1, 2]P[1, 2]P[2, 3] + P[1, 2]P[1, 3]P[2, 2] + P[1, 3]P[1, 2]P[2, 2].   (19)


4.2.3.2 Corrective maintenance

After moving to x̃, corrective maintenance is performed, if necessary, and the system transitions from x̃ to x'''. This transition is deterministic, since upon observing x̃ it is known whether one or more components have failed. If this is the case, these components are repaired, since no components may remain in the failed state in x'''. We define the function g(x̃) to transform the state x̃ into x'''. It is equal to

g(\tilde{x}) = \big( g_1(\tilde{x}), g_2(\tilde{x}), \ldots, g_m(\tilde{x}) \big),   (20)

where an element g_i(\tilde{x}) is defined, for i = 1, \ldots, m, by

g_i(\tilde{x}) =
\begin{cases}
\tilde{x}_1 + \tilde{x}_{m+1}, & \text{if } i = 1, \\
\tilde{x}_i, & \text{if } i = 2, \ldots, m.
\end{cases}   (21)

Note that the input is x̃ ∈ X̃, while the output satisfies g(x̃) ∈ X. To find the total probability of moving from x'' to x''', we sum over all states x̃ ∈ X̃ for which g(x̃) equals x'''. The transition probability function P_{III}(s''' | s'') for the third phase of the roadmap of the MDP is thus

P_{III}(s''' \mid s'') =
\begin{cases}
\sum_{\{\tilde{x} \in \tilde{X} \,:\, g(\tilde{x}) = x'''\}} P_{IIIb}(\tilde{x} \mid x''), & \text{if } l''' = l'', \; c''' = c'', \\
0, & \text{otherwise}.
\end{cases}   (22)

The cost corresponding to corrective maintenance is equal to the sum of the expected corrective maintenance costs over all components, as in Puterman [16, Equation 2.1.1]. The expected corrective maintenance cost of a component is equal to the probability that the component moves to the failed state, multiplied by the cost corresponding to a breakdown, c_cm. To compute this, we let q(s'') denote the sum of the failure probabilities over all components. It is equal to

q(s'') = \sum_{i=1}^{m} x''_i \cdot P[i, m+1].   (23)

The expected cost of corrective maintenance in a state s'' is equal to the product of the expected number of failing components and the corrective maintenance cost. Hence,

C_{III}(s'') = c_{cm} \cdot q(s'').   (24)

5 Solution methodology

We will analyse the performance of the condition-based maintenance policy by comparing it to a time-based maintenance (TBM) policy. This section focuses on the approach used to optimize these policies, so that we are able to compare them.

5.1 CBM policy with lead time

To determine an optimal policy, the value iteration algorithm is used. The value iteration algorithm creates a sequence of values denoted by v^0, v^1, \ldots ∈ V, where V denotes the space of bounded real-valued functions on the state space S; that is, v : S → ℝ for all v ∈ V. The value v^N(s) can be interpreted as the minimum total expected cost as a function of the state s ∈ S when there are N time periods left. This is a process of backward induction: it considers the last time a decision has to be made, determines an optimal decision for each state s and computes the corresponding cost. It starts by setting v^0(s) for all s ∈ S, usually to zero. Using this information, it determines what is optimal in v^1. The minimum total expected cost is increasing, since in every time step costs are incurred for expected corrective maintenance or for scheduling preventive maintenance. The sequence {v^N} therefore does not converge in general. Instead, the span between two successive iterations is used as a convergence criterion. It is equal to

\| v^{N+1} - v^N \| = \max_{s \in S} \big[ v^{N+1}(s) - v^N(s) \big] - \min_{s \in S} \big[ v^{N+1}(s) - v^N(s) \big].   (25)

Since we are dealing with three phases in each decision epoch, we introduce w_1^N and w_2^N. Figure 3 shows an illustration of the backward induction used by the value iteration algorithm.


Fig. 3: Illustration of the value iteration algorithm (backward through v^{N+1}, w_2^N, w_1^N and v^N).

Step 1. Set v^0(s) = 0 for all s ∈ S, choose ε > 0 and set N = 0.

Step 2. Compute, for all s ∈ S,

(a) w_1^N(s) = C_{III}(s) + \sum_{j \in S} P_{III}(j \mid s) \cdot v^N(j),

(b) w_2^N(s) = \min_{a \in A} \Big\{ C_{II}(s, a) + \sum_{j \in S} P_{II}(j \mid s, a) \cdot w_1^N(j) \Big\},

(c) v^{N+1}(s) = \sum_{j \in S} P_I(j \mid s) \cdot w_2^N(j).

Step 3. If \| v^{N+1} - v^N \| < ε, go to Step 4; otherwise increment N by one and return to Step 2.

Step 4. Compute, for all s ∈ S,

d(s) = \arg\min_{a \in A} \Big\{ C_{II}(s, a) + \sum_{j \in S} P_{II}(j \mid s, a) \cdot w_1^N(j) \Big\}.   (26)

Step 5. Compute the optimal cost per unit of time g^*:

g^* = \frac{\min_{s \in S} \big[ v^{N+1}(s) - v^N(s) \big] + \max_{s \in S} \big[ v^{N+1}(s) - v^N(s) \big]}{2}.   (27)

5.2 TBM policy


The cost of each preventive maintenance event under the block-based maintenance (BBM) policy consists of the set-up cost and the cost of performing maintenance on all components, equal to

C_{BBM} = c_s + n \cdot c_{pm},   (28)

and is incurred at each preventive maintenance event. Since no decision exists in phase II, the transitions are fixed: every T time steps preventive maintenance is performed, and in between nothing is done.

Because all components are restored to the ‘as-good-as-new’ state at each preventive maintenance event, each preventive maintenance event is a renewal point. We can therefore obtain the expected cost per cycle by using the backward induction algorithm. This cycle has length T, and to find the cost rate per unit of time g_{BBM}(T), we divide the cost per cycle by T. We can optimise the BBM policy by running the backward induction algorithm for multiple values of T and selecting the interval length for which the cost per unit of time is minimised.
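Because each PM event is a renewal point, g_BBM(T) can also be sketched as a direct renewal computation instead of backward induction: track the per-component state distribution over a cycle of length T, accumulate the expected corrective maintenance cost, and divide the cycle cost by T. The Python below is illustrative; the transition matrix and the search range for T are assumptions, and the constraint T ≥ L from Section 6 is not imposed.

```python
def bbm_cost_rate(T, n, P, c_s, c_pm, c_cm):
    """Cost per unit of time of the block-based policy with interval T.
    Components are independent and identical; a failure is repaired
    immediately (corrective maintenance) and the component restarts as new."""
    m = len(P) - 1                       # deterioration states 1..m, failed = m+1
    dist = [1.0] + [0.0] * (m - 1)       # one component, fresh after the PM event
    expected_cm = 0.0
    for _ in range(T):
        new_dist = [0.0] * m
        fail_prob = 0.0
        for i, p_i in enumerate(dist):
            fail_prob += p_i * P[i][m]                 # component fails this period
            for j in range(m):
                new_dist[j] += p_i * P[i][j]           # survives in state j+1
        new_dist[0] += fail_prob                       # CM restores to as-good-as-new
        expected_cm += n * fail_prob * c_cm
        dist = new_dist
    cycle_cost = c_s + n * c_pm + expected_cm          # Equation (28) plus CM cost
    return cycle_cost / T

# Illustration with the hypothetical 3x3 matrix used earlier:
P = [[0.5, 0.3, 0.2], [0.0, 0.6, 0.4], [0.0, 0.0, 1.0]]
best_T = min(range(1, 31), key=lambda T: bbm_cost_rate(T, n=5, P=P,
                                                       c_s=3, c_pm=1, c_cm=6))
print(best_T, bbm_cost_rate(best_T, n=5, P=P, c_s=3, c_pm=1, c_cm=6))
```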

6 Results

The aim of the results section is to analyse the optimal policy for differing parameters and interpret the results clearly. Because of the large state space, we will only present the most interesting characteristics of optimal policies, rather than complete optimal policies.

6.1 Analysis of the base case

In the base case, we set the number of components to n = 5, the number of deterioration states to m = 4 and the lead time to L = 3. For the base case we consider an average scenario and therefore set the parameters of the gamma process to α = β = 1. We set the failure level to 20 and the length of the time periods to 1. The transition probability matrix P is then given by

P = \begin{pmatrix}
0.8013 & 0.1973 & 0.0013 & 0 & 0 \\
0 & 0.8013 & 0.1973 & 0.0013 & 0 \\
0 & 0 & 0.8013 & 0.1973 & 0.0013 \\
0 & 0 & 0 & 0.8013 & 0.1986 \\
0 & 0 & 0 & 0 & 1
\end{pmatrix}.   (29)

In addition, we set the set-up cost of preventive maintenance to c_s = 3 and the cost per component to c_pm = 1. Furthermore, the corrective maintenance cost is set to c_cm = 6. Table 2 summarizes the parameters of the base case.

Tab. 2: Base case parameter selection.

Parameter Value
n 5
m 4
L 3
c_s 3
c_pm 1
c_cm 6

Table 3 shows the optimal action for a number of states x, where l = c = 0. If at most one component has deteriorated beyond state 2, e.g. x = (0, 4, 1, 0), it is optimal to do nothing; preparing maintenance for this amount of deterioration would probably result in too early maintenance. In addition, if only one component is deteriorated, then it is also optimal to do nothing. For x = (4, 0, 1, 0) and x = (4, 0, 0, 1) the optimal action is thus zero. While it is still less expensive than corrective maintenance, it is, in general, never optimal to schedule preventive maintenance for a single component, due to the economic dependence between components. When preventive maintenance is necessary for one component, the threshold for including an extra component is low. Furthermore, during the lead time there is a probability that another component deteriorates, which contributes to the decision to perform joint preventive maintenance actions.

Tab. 3: Optimal action d(x) for different x, where l = c = 0, for the base case.

x d(x)
(0,4,1,0) 0
(4,0,1,0) 0
(4,0,0,1) 0
(3,0,2,0) 3
(0,3,0,2) 4

In the case of x = (3, 0, 2, 0), it is optimal to prepare preventive maintenance for three components. The probability that an extra component deteriorates from the ‘as-good-as-new’ state is high in this case, and therefore it is optimal to prepare for more components than are currently in a deteriorated state. On the other hand, if x = (0, 3, 0, 2), it is optimal to schedule maintenance for four components, while all five are in deteriorated states. The probability of failure during preparation is highest when components are in the most deteriorated state; when this is the case, it is optimal to use a ‘run-to-fail’ strategy. There is a trade-off between these two effects, depending on the probability that one of the deteriorated components fails and the probability that one of the new components moves to a deteriorated state. The optimal cost per unit of time for the base case is g^* = 1.037.


6.1.1 Comparison of a CBM and a BBM policy

For the BBM policy, we use the same parameters as in Table 2 and the same transition probability matrix P as in Equation (29). The lead time is also the same, and therefore we have T ≥ L. Figure 4 shows the cost per unit of time g_{BBM}(T) for different values of T.

Fig. 4: Cost per unit of time under BBM policy as a function of T.

The optimal maintenance interval equals T^* = 10, and the cost per unit of time is equal to g_{BBM}(T^*) = 1.168.

Keeping in mind that the cost per unit of time under the CBM policy was g^* = 1.037, it follows that using a CBM policy leads to an improvement of 11.2%. One should keep in mind that the investment required to make CBM possible, e.g. the installation of sensors, is not taken into account; this would reduce the difference between a CBM and a BBM policy and could potentially even be in favour of the BBM policy.

6.2 Analysis of the lead time

Tab. 4: Optimal action d(x) for different x, where l = c = 0, for different values of L.

x d_{L=1}(x) d_{L=3}(x) d_{L=6}(x)
(0,4,1,0) 0 0 4
(0,3,2,0) 0 5 4
(0,0,0,5) 5 4 3
(3,0,1,1) 2 2 3

For increasing L, maintenance has to be prepared earlier, due to the longer time between initiating and executing preventive maintenance. It therefore has to be planned further in advance, as is shown by x = (0, 4, 1, 0) for the different lead times. In addition, when the lead time increases, it is optimal to schedule maintenance for fewer components than are in a deteriorated state. The probability that one or more components fail during the preparation of preventive maintenance is larger when the lead time increases. This is visible for x = (0, 3, 2, 0) and x = (0, 0, 0, 5). Furthermore, the probability that one or more components that are in the ‘as-good-as-new’ state deteriorate during preparation also increases for larger L. For x = (3, 0, 1, 1), we see that when the lead time is L = 1 or L = 3 it is optimal to prepare preventive maintenance for two components, whereas for L = 6 it is optimal to prepare for three.


Tab. 5: Optimal action distribution (%) for different values of L.

a L=1 L=2 L=3 L=4 L=5 L=6 L=7
0 44.6 35.7 28.5 25.0 19.6 12.5 8.9
1 0 0 0 0 0 0 0
2 5.3 3.5 3.5 0 0 0 0
3 16.0 19.6 25.0 33.9 33.9 37.5 41.0
4 21.4 32.1 35.7 39.2 44.6 50.0 50.0
5 12.5 8.9 7.1 1.7 1.786 0 0

Columns might not add up to 100% due to rounding.

Lastly, Figure 5a shows the cost per unit of time under the CBM policy and the BBM policy for different values of L, and Figure 5b shows the percentage difference between the two policies. Varying L has no effect on BBM, as long as L ≤ T^*, and therefore this is a horizontal line. The CBM policy becomes more expensive for increasing L, since more uncertainty is present during preparation and uncertainty comes at a cost. This means that the difference between TBM and CBM becomes smaller, which is in line with De Jonge et al. [7]. Lastly, it shows that if the lead time that is accounted for is too short, the maintenance costs are underestimated.

Fig. 5: Comparison of cost per unit of time under CBM and TBM policy for varying L.

6.3 Analysis of the number of components

Table 6 shows the optimal action for different values of x, where l = c = 0, for varying n. We cannot compare exact policies, since the number of components differs and with that also the total number of states. We see, however, that the overall strategy is fairly similar for increasing n. If all components are in deterioration state 2 or better and at most one component is deteriorated further, e.g. x = (0, 1, 1, 0), x = (0, 4, 1, 0) and x = (0, 6, 1, 0), then it is optimal to do nothing. The same reasoning can be applied as in the base case.

For increasing n, the probability that a component fails during preparation increases, since there are more components that can fail. Therefore, when components are in the most deteriorated state, it is optimal to prepare for relatively fewer components than for smaller n, as is shown by x = (0, 0, 0, 2), x = (0, 0, 0, 5) and x = (0, 0, 0, 7). In addition, the probability that a new component deteriorates increases when more components are in the ‘as-good-as-new’ state. Therefore, for larger n, it is more often optimal to prepare for more components than are actually deteriorated.

Figure 6a shows the cost per unit of time under the CBM and BBM policy for varying n.

Tab. 6: Optimal action d(x) for different x, where l = c = 0, for varying n.

n x d(x)
2 (0,1,1,0) 0
5 (0,4,1,0) 0
7 (0,6,1,0) 0
2 (0,0,0,2) 2
5 (0,0,0,5) 4
7 (0,0,0,7) 5
3 (1,0,2,0) 2
5 (3,0,2,0) 3
7 (5,0,2,0) 3


Fig. 6: Comparison of cost per unit of time under CBM and TBM policy for varying n.

6.4 Varying the cost components

6.4.1 Analysis of the set-up cost

By varying c_s, we vary the amount of economic dependence. For c_s = 0 there is no economic dependence, and for increasing c_s the economic dependence between components increases as well. Table 7 shows the optimal actions for a few different states for different values of c_s.

When the set-up cost is high, it is optimal to perform joint preventive maintenance. Therefore, the number of components for which preventive maintenance is prepared is larger for a higher set-up cost; see for example x = (0, 0, 0, 5). However, when the set-up cost is high, preventive maintenance is also postponed further, as can be seen from the other states in Table 7.

Tab. 7: Optimal action d(x) for different x, where l = c = 0, for different values of c_s.

x d_{c_s=0}(x) d_{c_s=3}(x) d_{c_s=5}(x)
(0,4,1,0) 2 0 0
(0,3,2,0) 3 5 0
(3,0,2,0) 2 3 0
(0,0,0,5) 3 4 4


Figure 7a shows the cost per unit of time under CBM and BBM for different values of c_s, and Figure 7b shows the percentage difference. Under both the CBM and the BBM policy, the cost per unit of time is increasing. This is intuitive, since the cost per individual component remains the same. The percentage difference, however, is higher for smaller values of c_s, which could be due to the postponement of maintenance under the CBM policy.


Fig. 7: Comparison of cost per unit of time under CBM and TBM policy for varying c_s.

6.4.2 Analysis of corrective maintenance cost

Lastly, we analyse the effect of the corrective maintenance cost. Table 8 shows the optimal actions for different states for varying values of c_cm. The expected cost of failure during deterioration increases if the corrective maintenance cost is higher, and hence it is optimal to schedule a preventive maintenance event earlier. This is visible for x = (0, 5, 0, 0), x = (2, 2, 1, 0) and x = (2, 0, 3, 0). We also note that it is more often optimal to prepare for more components than are deteriorated at the time of the decision. Lastly, even when components are in the most deteriorated state, it is now optimal to prepare preventive maintenance. The ‘run-to-fail’ strategy is no longer optimal, since, because of the high corrective maintenance cost, it is less expensive to include components that are in the most deteriorated state in the preparation than to let them break down.


Tab. 8: Optimal actions for different states for different values of c_cm.

x d_{c_cm=6}(x) d_{c_cm=12}(x) d_{c_cm=18}(x)
(0,5,0,0) 0 0 5
(2,2,1,0) 0 3 4
(2,0,3,0) 3 4 4
(1,2,0,2) 3 3 4

Figure 8a shows that under both the CBM and the BBM policy the cost per unit of time increases. Since preventive maintenance is scheduled earlier for higher corrective maintenance cost, this makes sense. Furthermore, we note that the optimal BBM interval length T^* decreases for high CM cost, to take into account that failure between preventive maintenance events is more costly. However, under a BBM policy the condition of the system is not sufficiently taken into account, and therefore the difference between the cost per unit of time under the CBM policy and the BBM policy increases slightly.


Fig. 8: Comparison of cost per unit of time under CBM and TBM policy for varying c_cm.

7 Conclusion


We formulated the problem as a Markov decision process to analyse it. To be able to solve larger multi-component systems, we used a compact representation that greatly reduced the size of the state space. A drawback of this method is that the computational complexity of the transition probabilities is increased. However, since the transition probabilities have to be computed only once, the smaller state space allows us to solve large systems exactly.

By using the value iteration algorithm, we solved the MDP and found an optimal policy. The characteristics of this policy show two effects of the lead time. Firstly, due to the inclusion of a lead time, components continue to deteriorate during the preparation of a preventive maintenance event. This is taken into account when deciding on the number of components that can be maintained. Secondly, during the lead time it may happen that deteriorated components break down and are then restored to the ‘as-good-as-new’ state by corrective maintenance. This can be accounted for by preparing preventive maintenance for fewer components than are deteriorated at the time of the decision, which is optimal if components are in highly deteriorated states.

These two effects are magnified if the lead time increases relative to the mean time to failure of a component. The number of components does not change the characteristics of the optimal policy. Furthermore, if the set-up cost is high, preventive maintenance is further postponed. When the corrective maintenance cost is high, the effect of preparing preventive maintenance for more components than are deteriorated at the time of the decision becomes stronger. In conclusion, we find that the improvement of using a condition-based maintenance policy over a block-based maintenance policy decreases for longer lead times.


References

[1] C. Bérenguer, A. Grall, L. Dieulle, and M. Roussignol. Maintenance policy for a continuously monitored deteriorating system. Probability in the Engineering and Informational Sciences, 17(2):235–250, 2003.

[2] M. D. Berrade, P. A. Scarf, and C. A. V. Cavalcante. A study of postponed replacement in a delay time model. Reliability Engineering and System Safety, 168:70–79, 2017.

[3] W. D. Blizard. Multiset theory. Notre Dame Journal of Formal Logic, 30(1):36–66, 1989.

[4] K. Bouvard, S. Artus, C. Bérenguer, and V. Cocquempot. Condition-based dynamic maintenance operations planning & grouping. Application to commercial heavy vehicles. Reliability Engineering and System Safety, 96(6):601–610, 2011.

[5] R. Brualdi. Introductory Combinatorics. 5th edition, 2010.

[6] B. De Jonge. Lecture notes Maintenance Planning and Optimization. 2017-2018.

[7] B. De Jonge, R. H. Teunter, and T. Tinga. The influence of practical factors on the benefits of condition-based maintenance over time-based maintenance. Reliability Engineering and System Safety, 158:21–30, 2017.

[8] Dutch Railway Company. Nederlandse Spoorwegen (NS) - Annual Company Report. https://www.nsjaarverslag.nl/FbContent.ashx/pub_1000/Downloads/NS-jaarverslag-2016.pdf, 2016.

[9] W. Feller. An Introduction to Probability Theory and Its Applications, volume 1. Wiley.

[10] German Railway Company. Deutsche Bahn (DB) - Annual Company Report. https://www.deutschebahn.com/file/en/11887746/LeCLpXYqgYqxkace4W9dEq1oAEk/13620500/data/ib2016_dbgroup.pdf, 2016.


[12] A. Haurie and P. L’Ecuyer. A stochastic control approach to group preventive replacement in a multicomponent system. IEEE Transactions on Automatic Control, 27(2):387–393, 1982.

[13] M. Marseguerra, E. Zio, and L. Podofillini. Condition-based maintenance optimization by means of genetic algorithms and Monte Carlo simulation. Reliability Engineering and System Safety, 77:151–166, 2002.

[14] McKinsey & Company. Manufacturing's Next Act. https://www.mckinsey.com/business-functions/operations/our-insights/manufacturings-next-act, 2015.

[15] M. C. A. Olde Keizer, S. D. P. Flapper, and R. H. Teunter. Condition-based maintenance policies for systems with multiple dependent components: A review. European Journal of Operational Research, 261:405–420, 2017.

[16] M. L. Puterman. Markov Decision Processes. Wiley, 1994.

[17] L. Tan, Z. Cheng, B. Guo, and S. Gong. Condition-based maintenance policy for gamma deteriorating systems. Journal of Systems Engineering and Electronics, 21(1):57–61, 2010.

[18] L. C. Thomas. A survey of maintenance and replacement models for maintainability and reliability of multi-item systems. Reliability Engineering, 16:297–309, 1986.

[19] H. Tijms. A First Course in Stochastic Models. 1st edition, 2003.

[20] A. Van Horenbeek, J. Buré, D. Cattrysse, L. Pintelon, and P. Vansteenwegen. Joint maintenance and inventory optimization systems: A review. International Journal of Production Economics, 143(2):499–508, 2013.

[21] J. M. Van Noortwijk. A survey of the application of gamma processes in maintenance. Reliability Engineering and System Safety, 95(EM6), 2009.
