
MASTER'S THESIS

WORKLOAD CONTROL WITH SIMULTANEOUS BATCHING OF INCOMPATIBLE PRODUCT FAMILIES

Student name: Sibren Posthuma
Student number: 1953613
MSc Technology and Operations Management
Date: 23 June 2014
University of Groningen
Faculty of Economics and Business
Supervisor: dr. J.A.C. Bokhorst
Co-assessor: dr. E. Ursavas


Preface

Doing research for my master's thesis and writing this report were the most important steps in finalizing my MSc Technology and Operations Management. I have learned new skills and developed myself during this process. Unlike for many other students, this thesis will not be the end of my time as a student: I plan to do another MSc at the University of Groningen next year.

Although this report does not mark the end of my time as a student, I would like to thank a few people who have made it possible for me to stand where I stand. First, I would like to thank all the people at company X for helping me conduct this research; everyone was very helpful and the circumstances we worked in were very good. For confidentiality purposes, the case company and its employees are kept anonymous. In particular I would like to thank Bart for his guidance. Bart has been our guide from the beginning and I hope he can appreciate the work I have done. Second, I would like to thank Jos Alblas, my colleague doing his master's thesis at company X. We were the lucky ones to be assigned to the project at company X, and we have had interesting discussions about both study-related and unrelated topics, which I enjoyed very much. Third, I would like to thank dr. J.A.C. Bokhorst, my supervisor, for the interesting meetings we had. These meetings have always given me new insights and guidelines to continue my project.

The people I have to thank most are my parents. They have given me the chance to study and I am very grateful that they have always given me the support and freedom to develop myself.

I hope you will all enjoy reading this thesis!


Abstract

The concept of Workload Control (WLC) is primarily designed to improve the delivery performance of Make-to-Order (MTO) companies. Currently, it does not account for the simultaneous batching of incompatible job families. An explorative single case study is conducted to investigate to what extent the concept of WLC is applicable to simultaneous batching of incompatible job families. First, the consequences of choosing a small or a large batch size are translated into a continuum in order to understand the implications of the batch size decision. The release level is the most important level for accounting for the simultaneous batching and for releasing jobs in the appropriate batch sequence. Upstream priority dispatching rules should be simple First-Come-First-Serve (FCFS) rules; downstream, dispatching should follow the Least Slack rule in order to meet the due dates of jobs. When efficient batching is favorable, the early jobs in the buffer downstream of the simultaneous batching station should be decoupled in order to control workloads downstream. This decoupling results in a re-release level that should be added to the original WLC concept. In summary, the concept of WLC is applicable when dealing with simultaneous batching of incompatible job families, but some conflicts occur. In addition, a re-release level should be included in order to keep downstream workloads low. Further work, such as fieldwork, is recommended in order to generalize the results.


Table of contents

1. Introduction ... 6

2. Theoretical background ... 9

2.1 The concept of workload control ... 9

2.2 Simultaneous batch processing of incompatible job families ... 11

3. Methodology ... 14

3.1 Literature study ... 14

3.2 Case study ... 14

4. Analysis and Design ... 16

4.1 Case description ... 16

4.2 Planning process and delivery performance ... 19

4.3 Literature review on scheduling of simultaneous batching with incompatible job families ... 22

4.4. Continuum: (Cost) efficient batching - stable and low workloads at successor station ... 23

4.5 Job entry level ... 25

4.6 Job release level ... 26

4.7 Priority dispatching at upstream stations ... 28

4.8 Priority dispatching downstream ... 29

4.9 Decoupling and measuring workloads ... 30

4.10 Adaptations to WLC concept for simultaneous batching of incompatible job families ... 32

5. Discussion ... 33

5.1 Limitations and further work ... 33

5.2 Practical implications ... 33

6. Conclusion ... 35


1. Introduction

Research on pull policies in general and workload control (WLC) in particular has primarily focused on flow lines that produce one or more similar products (see, for example, Bard and Golany, 1991; Spearman and Zazanis, 1992; Spearman, 1992; Tayur, 1993; Gstettner and Kuhn, 1996; Dar-El et al., 1999). However, the benefits of a Work-in-Process (WIP) limit can also be expected to accrue in job shops, in which multiple products with distinct processing requirements compete for the same set of resources. The concept of workload control is a well-established production control concept for job shops that puts primary emphasis on load-based order release (Land, Stevenson, & Thürer, 2014). WLC aims to control throughput times by incorporating a restricted release of customer orders to the shop floor, while maintaining an order pool prior to release to buffer against the many uncertainties involved with Make-to-Order (MTO) companies. MTO companies increasingly have to cope with fierce competition in today's turbulent markets, so improving and controlling logistical performance is crucial for such companies. Previous research on WLC has shown improvements in delivery performance and reductions in shop floor workloads when applying WLC principles (Thürer et al., 2012).

In real-world MTO companies, simultaneous batching processes with incompatible job families are common. Examples of such production processes are wafer fabrication and heat treatment processes. This research is motivated by a company with a production process consisting of a simultaneous batching machine with incompatible job families, where all jobs of the same family have identical processing times and jobs of different families cannot be processed together. Simultaneous batch processing is the simultaneous processing of a batch of jobs where processing times are independent of the batch size. An incompatible job family is a group of jobs that have to undergo an identical process step and cannot be processed in combination with jobs of other job families.


In the case of company X there is a need for a deeper understanding of the applicability of WLC to simultaneous batching of incompatible job families. On the one hand, efficient batch production is favorable to reduce costs, particularly when the simultaneous batching process is very expensive. Since the costs of processing a batch are independent of the batch size, full batches are the most cost efficient. On the other hand, efficient batching of incompatible job families can cause fluctuating workloads at successor stations, since a batch consists of multiple jobs. This conflicts with the aim of WLC to stabilize workloads. Furthermore, efficient batching can have other consequences. When jobs with high slack are pulled forward in order to batch efficiently, these jobs are stored in the buffer downstream of the simultaneous batching station. As a consequence, the workload of these jobs remains in the system and can inhibit other, late jobs from being released when the workload of a station reaches its maximum limit.

So, on the one hand efficient batching can have negative consequences; on the other hand, stable throughput times are needed to improve delivery performance, given the ever increasing market demands that MTO companies face. More insight is needed into the possibilities for WLC when dealing with batch processing of incompatible product families.

To gain insight into the possibilities and difficulties of WLC within the context of simultaneous batch processing with incompatible job families in job shops, the following research questions are stated:

Main research Question:

- To what extent is the concept of Workload Control applicable to job shops processing simultaneous batches of incompatible job families?

To structure the research, the following sub questions are stated:

- What is available in literature on scheduling of simultaneous batching with incompatible job families?

- How is the conflict between cost efficient batching and low and stable workloads at the downstream station affected when controlling workloads?


- What adaptations can be made to the original concept of Workload Control to fit Make-To-Order companies processing simultaneous batches of incompatible job families?


2. Theoretical background

In this chapter, first the concept of Workload Control will be discussed in section 2.1. Then, the concept of simultaneous batching of incompatible job families will be discussed in section 2.2.

2.1 The concept of workload control

Workload control aims to control the logistical performance of companies. The philosophy of the WLC concept builds on the relationship between workload and throughput time. By imposing norms on the workloads, it tries to control the throughput times (Land, 2004). It is regarded as the most appropriate production planning and control (PPC) concept for MTO companies (Stevenson et al. 2005). WLC is based on the philosophy that controlling the logistical performance of MTO companies requires a controlled situation on the shop floor, i.e., the throughput times of orders on the shop floor are controlled.

Within the concept of WLC, three decision moments are present: job entry, job release and job dispatching. Each decision moment gives the possibility to influence the input to the load of a specific subsystem of the job shop. An input decision may be accompanied by a decision to adjust capacity, which affects the output of the job shop (Land, 2004). The WLC concept translates input/output control in different phases of the job flow into a decision framework with three hierarchical levels: the entry level, the release level, and the dispatching level, as shown in figure 2.1. Decisions at the entry level control the total amount of accepted work. The release level controls the amount of work on the shop floor. The dispatching level remains to influence the progress of individual jobs.
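The release-level decision described above can be sketched in code. The following is a minimal illustration, not the exact mechanism used in the WLC literature or at the case company; all names and figures are assumptions. A job is released from the pool only if its work content fits within the workload norm of every station in its routing.

```python
# Minimal sketch of load-based order release (illustrative): jobs wait
# in the pool and are released only if their work content fits within
# the workload norm of every station in their routing.

def release_jobs(pool, workloads, norms, work_content):
    """pool: job ids in priority order (e.g. by planned release date).
    workloads: station -> currently released workload (hours).
    norms: station -> workload norm (hours).
    work_content: job id -> {station: hours} for its routing."""
    released = []
    for job in list(pool):
        load = work_content[job]
        # The job fits if no station in its routing would exceed its norm.
        if all(workloads[s] + h <= norms[s] for s, h in load.items()):
            for s, h in load.items():
                workloads[s] += h
            released.append(job)
            pool.remove(job)
    return released

pool = ["j1", "j2", "j3"]
workloads = {"lay-up": 5.0, "autoclave": 2.0}
norms = {"lay-up": 10.0, "autoclave": 6.0}
work_content = {
    "j1": {"lay-up": 4.0, "autoclave": 2.0},
    "j2": {"lay-up": 3.0},
    "j3": {"autoclave": 2.0},
}
released = release_jobs(pool, workloads, norms, work_content)
# j1 and j3 fit within the norms; j2 would push lay-up over its norm
```

Jobs that do not fit remain in the pool and are reconsidered at the next release moment, which is how the release level keeps shop floor workloads bounded.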


The goal of WLC is to improve delivery performance. A commonly used method to measure delivery performance is lateness. The lateness L_j of a job j is defined as the difference between its actual and its planned delivery date: L_j = C_j − d_j, where C_j is the actual delivery date and d_j the planned delivery date.

The lateness can be either negative or positive. Zero lateness means that a job is delivered exactly on time as planned. A job with a negative lateness is delivered earlier than planned; the costs of negative lateness are inventory costs. When a job is delivered later than planned, the lateness is positive; the costs incurred from positive lateness are loss of prestige and possibly even discounted product prices. Land (2004) argues that two factors of the lateness are important: 1) the mean lateness and 2) the variance of the lateness. Figure 2.3 indicates to what extent the concept of WLC can influence the distribution of the lateness. The first option to influence the lateness distribution is to speed up the throughput. This shifts the mean of the whole distribution curve to the left; the shape of the curve does not change. Second, the dispersion of the lateness can be reduced. This reduces the standard deviation but not the mean.

Figure 2.3. Lateness concept (Land, 2004)
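The two levers on the lateness distribution can be made concrete with a small computation on hypothetical lateness values (days; negative means early, positive means late). The figures are illustrative, not data from the case company.

```python
import statistics

# Hypothetical lateness values in days; negative = early, positive = late.
lateness = [-3.0, -1.0, 0.0, 0.5, 2.0, 4.0, -2.5, 1.0]

mean_lateness = statistics.mean(lateness)                      # 0.125 days
sd_lateness = statistics.stdev(lateness)
pct_late = sum(1 for l in lateness if l > 0) / len(lateness)   # 0.5

# Lever 1: speeding up throughput shifts the whole curve to the left:
# the mean drops, the shape (standard deviation) is unchanged.
shifted = [l - 1.0 for l in lateness]

# Lever 2: reducing dispersion shrinks the spread around the mean:
# the standard deviation drops, the mean is unchanged.
damped = [mean_lateness + 0.5 * (l - mean_lateness) for l in lateness]
```

The first lever mainly helps when jobs are on average too late; the second mainly reduces the share of jobs in the late tail of the distribution.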


In section 4.9, a method to measure workloads when dealing with simultaneous batching of incompatible job families will be discussed.

2.2 Simultaneous batch processing of incompatible job families

Simultaneous batching is the simultaneous processing of a number of jobs by a machine, characterized by the processing time being independent of the batch size, as shown in figure 2.2.

Figure 2.2. Processing time of a simultaneous batch related to batch size

Once processing of a batch has begun, no job can be removed from or added to the batch. A batch-processing machine with incompatible job families is a machine where all jobs of the same family have identical processing times and jobs of different families cannot be processed together. The processing time is equal for any batch size k between a lower bound of 1 and an upper bound at the maximum batch size of the station (1 ≤ k ≤ maximum batch size). In simultaneous batch processes, batching is mainly done to reduce setup times, reduce costs and increase throughput. The throughput of the simultaneous batching process can be defined as the number of jobs completed per unit of time: throughput = k / processing time of a batch.


Since the processing time is independent of the batch size, throughput increases linearly with the batch size, with an upper bound at the maximum batch size. Furthermore, the costs per product can be reduced by spreading the process costs over the multiple jobs that are processed simultaneously. The costs per job in a batch are calculated as: costs per job = costs of processing a batch / k.
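The two relations above can be expressed directly in code; the processing time, batch cost and maximum batch size below are assumed figures for illustration only.

```python
# Throughput grows linearly with the batch size k, while the cost per
# job falls as 1/k, both bounded by the maximum batch size.

def throughput(k, t_p):
    """Jobs completed per hour: batch size k divided by processing time t_p."""
    return k / t_p

def cost_per_job(k, batch_cost):
    """Process cost per job when the cost of one run is spread over k jobs."""
    return batch_cost / k

t_p = 8.0           # hours per batch, independent of k (assumed figure)
batch_cost = 400.0  # cost of one run, independent of k (assumed figure)
max_batch_size = 10

rates = {k: throughput(k, t_p) for k in (1, 5, max_batch_size)}
costs = {k: cost_per_job(k, batch_cost) for k in (1, 5, max_batch_size)}
# a full batch is ten times as cost efficient per job as a batch of one
```

This is the economic pull toward full batches that the remainder of this chapter weighs against stable downstream workloads.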

In the context of company X, batch sizes are chosen large in order to reduce the process costs per product, because the costs of producing a single batch are high due to energy costs. In company X, the simultaneous batching process of incompatible job families is the autoclave station, which consists of three autoclaves. To give an impression, figure 2.3 shows an autoclave similar to the ones in the case company. In addition, a process flow chart of the production process of company X is given in section 4.1.

Figure 2.3. Example of an autoclave; similar as in company X


3. Methodology

The aim of this study is to investigate the applicability of WLC in job shops processing simultaneous batches of incompatible job families. The methodology used to guide this research is discussed in this chapter.

3.1 Literature study

First, literature will be collected and studied in order to gain insight into the common grounds and contradictions between the WLC concept and the processing of simultaneous batches of incompatible job families. As mentioned in the introduction, WLC includes decisions on the release and dispatching of orders. These decisions are also studied in batch processing contexts, and a deeper understanding of the combination of these two themes is needed. Therefore, available literature on simultaneous batching of incompatible job families will be studied, as well as literature on WLC. The following databases are used to search for articles: Business Source Premier, ScienceDirect, Taylor & Francis, IIE Transactions and Emerald Insight. The keywords used are: 'production planning and control', 'simultaneous batch', 'nesting', 'job families', 'parallel batching', 'workload control', 'pull production', 'dispatching', 'release', 'job shop' and 'MTO'. The results were sorted on relevance, from high to low. The first step was to read the titles. When a title pointed to interesting content, the abstract was read. In some cases the abstract did not yet reveal whether the content was relevant for this research; when in doubt after reading the abstract, the introduction and conclusion were read as well. An overview of the relevant articles on scheduling of simultaneous batching of incompatible job families is given in table 4.2.

3.2 Case study

Silverman (2001) argues that qualitative data has the ability to provide a deeper understanding of certain phenomena than quantitative data; therefore this approach is selected, by applying an explorative case study research methodology.

To gain deeper insight into the research question within the time bounds of this research project, a single-unit case study is conducted. Meredith (1998) cites three outstanding strengths of case research:


- The phenomenon can be studied in its natural setting, and meaningful, relevant theory can be generated from the understanding gained through observing actual practice

- The case method allows the questions of why, what and how to be answered with a relatively full understanding of the nature and complexity of the complete phenomenon

- The case method lends itself to early, exploratory investigations where the variables are still unknown and the phenomenon is not yet fully understood

The purpose of this case study is exploration: uncovering areas for research and theory development. A single-unit, explorative case study will be conducted. The advantage of a single case is that it provides greater depth than multiple cases; a disadvantage is that the results are less generalizable.

3.2.1 Data collection and analysis

The prime source of data will be structured interviews. The format used for the interviews is the funnel model: the interviews start with broad and open-ended questions, and become more specific and detailed towards the end. An outline of the interview protocol will be sent to the interviewees in advance so they can properly prepare. The interviews are recorded to provide an accurate rendition of what has been said. On the negative side, transcribing tapes is very time consuming; it often takes place some time after the interview, can be seen as a substitute for listening, and may inhibit interviewees. Other sources of data are unstructured interviews, interactions, attendance at meetings, collection of objective data and informal conversations. The main respondents are planning personnel, operators, managers and production staff. In order to ensure construct validity, multiple sources of evidence are used. Where possible, raw data will be recorded electronically. Furthermore, quantitative data regarding the production process is available from the company's ERP system. Data on batching decisions, lead times, processing times, set-up times and workloads is available.

3.2.2 Data limitations

A single investigator conducts the interviews. Using multiple interviews increases the reliability. To increase the reliability further, observations and insights from interviews are validated against other sources.


4. Analysis and Design

In this chapter, the analysis and design of this research will be discussed. Section 4.1 gives a description of the production process. Then, the current planning process and delivery performance will be discussed in section 4.2. In order to get an overview of the available literature on scheduling algorithms for simultaneous batching of incompatible job families, a brief literature review is given in section 4.3. Then, beginning in section 4.4, the sub questions will be worked out. Section 4.4 discusses the conflict between efficient batching and low and stable workloads at the downstream stations. Section 4.5 discusses the job entry level of the WLC concept. The job release level is discussed in section 4.6. Then, sections 4.7 and 4.8 discuss the dispatching rules for upstream and downstream stations, respectively. Section 4.9 discusses a method to measure workloads in the production process. Last in this chapter, the adaptations to the WLC concept will be discussed in section 4.10.

4.1 Case description

In this section, the case description of company X will be discussed. The process diagram of the production process of company X is included in figure 4.1.

The empirical part of this research is conducted at an aero-structures manufacturer in Europe. The company is an MTO manufacturer of composite parts. The end customers of the products are world leading airplane manufacturers. The variety of products is large and production volumes are low, a so-called Low Volume High Variety (LVHV) environment.

The production process, as drawn in figure 4.1, is as follows: the first step is the cutting of the composite materials, which arrive on large rolls. Since the products consist of multiple layers of composite, in the next station production staff builds up the layers onto a mould: the lay-up department. The lay-up department consists of 5 different stations that make different products.


Different job families cannot be processed together; in that sense the job families are to be considered incompatible. In the production process of company X, there is only one job family that consists of products processed by two lay-up stations; the other job families are each processed at a single lay-up station. So, for example, lay-up station 1 processes job families A and B, and lay-up station 2 processes job families C and D. In that way, a single lay-up station processes multiple job families. Only job family E is processed by lay-up stations 2 and 3.

After the products have been processed in the autoclave, the products are removed from the moulds; the debag station. Since the moulds are used from the lay-up department until the debag process, the moulds go back to the lay-up departments after the products have been removed in the debag station. Then, the products are sent to the chip up department.

As mentioned before, the simultaneous batch process of company X is a very expensive process. In the current situation, company X does not have sufficient key performance indicators (KPIs) to measure the efficiency of the autoclaves. Therefore, it is difficult to measure how they are performing in terms of efficient batching. Furthermore, as mentioned in the data limitations section, historical data on workloads and capacity are not available in the company. In addition, company X does not control workloads in the current situation. In order to make better choices in terms of efficiency in the future, there is a need for KPIs. Such KPIs should, for example, measure throughput times, cost efficiency and delivery performance.


[Figure 4.1. Process flow chart of company X: job pool → cutting layers → lay-up department (lay-up 1–5, with lay-up buffer) → autoclave department (autoclaves 1–3, with autoclave buffer; simultaneous batching of incompatible job families) → debag department (with debag buffer) → chip up department (chip up 1–5, with chip up buffer). Stations before the autoclave are upstream and operate in single-piece flow; stations after it are downstream and also operate in single-piece flow. Moulds return from the debag department to the lay-up department.]


4.2 Planning process and delivery performance

In this section, the current planning process within the case company and the delivery performance will be discussed.

The case company maintains high quality norms for its products, due to the safety standards in the aerospace industry. When a product appears not to conform to the specified norms, the product is taken out of production. When a non-conforming product is reported, the majority of products can be further processed after a couple of days; some products can only be further processed after a few weeks, and in the worst case a product has to be scrapped. These problems occur due to quality issues in the production process. Since the quality aspects of the production process are beyond the scope of this paper, products that have (temporarily) been taken out of production are neglected in the analysis of delivery performance. Consequently, the dataset used to analyze the delivery performance consists of all other jobs from 1 January 2013 to 1 April 2014.

The Latest Starting Time (LST) of job j is used as the planned starting date. The Enterprise Resource Planning (ERP) system determines the LST of job j by subtracting the planned processing and waiting times of the remaining operations from the due date of the job.

In order to determine how much time is left until a job will arrive too late at its customer, we will use the concept of slack. The slack of a job j is determined as the time remaining until its Latest Starting Time: slack_j = LST_j − current date.

Jobs with high slack will be referred to as early jobs; conversely, jobs with little slack will be referred to as late jobs. The consequences of combining early and late jobs in a batch will be discussed later in this chapter.
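The LST and slack computations can be sketched as follows. The exact ERP formula is not reproduced in this text, so the form below is an assumption for illustration; the dates, hours and the 5-day early/late threshold are hypothetical.

```python
from datetime import date, timedelta

# Assumed form of the LST: due date minus the remaining planned
# processing time and planned waiting time. All figures are hypothetical.

def latest_starting_time(due, remaining_hours, planned_wait_days, hours_per_day=8):
    processing_days = remaining_hours / hours_per_day
    return due - timedelta(days=processing_days + planned_wait_days)

def slack_days(lst, today):
    """Slack of a job: days left until its Latest Starting Time."""
    return (lst - today).days

today = date(2014, 3, 1)
lst = latest_starting_time(date(2014, 3, 20), remaining_hours=16, planned_wait_days=3)
slack = slack_days(lst, today)
# the 5-day threshold is purely illustrative for the early/late distinction
label = "early" if slack > 5 else "late"
```

A job whose slack is large can safely wait in the pool, whereas a job whose slack approaches zero must be started to still meet its due date.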


Figure 4.2. Workload of lay-up station 1 over time

The ERP system does not account for the capacity and workloads of stations; it plans with infinite capacity. The ERP system is highly push-production oriented. Since the case company faces fluctuating demand, the workloads also fluctuate, as can be seen in figure 4.2. This graph indicates the fluctuation in workload at the lay-up 1 station.

To account for queuing times in the production process, waiting times are taken into account in the planning. A problem in the current situation is that the planning software neglects the actual waiting times of a job at stations. In the ERP system, static waiting times, which are not based on the shop floor status, are used to plan jobs. Therefore, the workloads of stations in the routing of a job, and consequently the actual waiting times on the shop floor, are not taken into account. When the workloads at stations in the routing of a job are low, the actual waiting times are lower than accounted for in the planning; as a result, a job arrives earlier at the next station than planned, and the costs incurred from too early delivery are inventory costs. When in reality the workloads at stations along the routing of a job are high, actual waiting times are higher than accounted for in the planning; in this case, the job arrives later at the next stations than planned, and in the worst case the due date of the job will not be met. As a result of the fluctuating workloads at stations, the throughput time of a job fluctuates, so the actual delivery date varies from the planned delivery date. This results in a lateness distribution as displayed in figure 4.3. In the figure, blue bars represent jobs that have been delivered on time, which can also mean that these jobs were delivered earlier than planned. Red bars represent jobs that have been delivered too late.



Figure 4.3. Lateness histogram

Delivery performance information of company X, 1 January 2013 – 1 April 2014:

Average throughput time: 26 days
Average lateness: 0.6 days
Standard deviation of lateness: 16.7 days
Percentage of jobs with lateness > 0 days: 47.5%
Cumulative 90% fractile: 15 days
Cumulative 95% fractile: 23 days

Table 4.1. Delivery performance information

In summary, it can be concluded that the workloads at stations fluctuate. This is due to the current planning process, which does not account for capacity and workloads. Since the workloads at stations vary, waiting times also vary, and therefore throughput times vary considerably. Consequently, jobs are on average delivered 0.6 days too late, and 47.5% of the jobs are delivered too late. The variability in workloads, and thus in throughput times, is causing poor delivery performance as measured by lateness.

In order to control throughput times, company X is considering implementing workload control. In the current situation, no WLC principles are used.


4.3 Literature review on scheduling of simultaneous batching with incompatible job families

In this section, the available scheduling literature on simultaneous batch processing of incompatible job families will be discussed.

In the available literature, scheduling algorithms are developed. The goal of these algorithms is to minimize a target function. Minimizing total (weighted) tardiness as in Mehta & Uzsoy (1998) and Gokhale & Mathirajan (2013), the number of tardy jobs as in Jolai (2005), maximum lateness, flow time or makespan is central in most articles. The relevant articles on the subject are clustered in an overview based on the focus of the objective function. Table 4.2 gives an overview of the available literature.

Author(s) | Target | Developed method
Mehta & Uzsoy (1998) | Minimizing tardiness | Dynamic scheduling program
Gokhale & Mathirajan (2013) | Minimizing total weighted tardiness | Integer Linear Programming model
Jolai (2005) | Minimizing number of tardy jobs | Dynamic programming algorithm
Liu & Zhang (2008) | Minimizing number of tardy jobs | Dynamic programming algorithm
Erramilli & Mason (2006) | Minimizing total weighted tardiness | Mixed integer programming
Malve & Uzsoy (2007) | Minimizing maximum lateness | Iterative heuristics
Balasubramanian et al. (2004) | Minimizing total weighted tardiness | Genetic algorithm
Uzsoy (1995) | Minimizing makespan, lateness and total weighted completion time | Multiple algorithms

Table 4.2. Articles on scheduling of simultaneous batching of incompatible job families
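The target functions recurring in the table above can be made concrete with a small computation on one hypothetical schedule; the jobs, completion times, due dates and weights below are illustrative only.

```python
# Recurring objective functions from the batch scheduling literature,
# computed for one hypothetical schedule. Each job is given as
# (completion time C_j, due date d_j, weight w_j), all in days.

def tardiness(C, d):
    """Tardiness is lateness clipped at zero: early jobs contribute nothing."""
    return max(0.0, C - d)

def objectives(jobs):
    return {
        "total_weighted_tardiness": sum(w * tardiness(C, d) for C, d, w in jobs),
        "number_of_tardy_jobs": sum(1 for C, d, _ in jobs if C > d),
        "maximum_lateness": max(C - d for C, d, _ in jobs),
        "makespan": max(C for C, _, _ in jobs),
    }

jobs = [(10.0, 12.0, 1.0), (15.0, 11.0, 2.0), (20.0, 20.0, 1.0)]
result = objectives(jobs)
# only the second job is tardy: tardiness 4.0 at weight 2.0
```

The differences between these objectives matter: minimizing makespan favors full batches, whereas tardiness-based objectives can favor releasing a partial batch for a nearly-late job.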

The scheduling algorithms focus on the scheduling of a simultaneous batch processing machine(s). Therefore, the influence of the scheduling algorithms on the upstream and downstream stations is not included. Stations upstream of the batch-processing machine have to supply the work for the batch process. Stations downstream have to process the work that is finished by the batch process. Therefore, the consequences of simultaneous batching of incompatible job families on upstream and downstream stations have to be studied.


In some situations it might well be optimal to process partially full batches. This can have a negative impact on the costs per product in comparison with full batches, but it can stabilize the workloads at downstream stations. The latter advantage can lead to more stable throughput times and thus a better delivery performance.

As can be concluded from the available literature on simultaneous batching of incompatible job families, this literature does not account for WLC principles. Therefore, in the following sections of chapter 4 the results of the case study will be discussed in order to analyze to what extent the concept of WLC is applicable to a process with a simultaneous batch process with incompatible job families.

4.4. Continuum: (Cost) efficient batching - stable and low workloads at successor station

As is discussed in chapter 2, efficient batching can cause fluctuating direct workloads at successor stations, which contradicts the goal of WLC. Furthermore, processing batches of both early and late jobs can cause high workloads at downstream stations when early jobs are not being processed since they have high slack. Important issues regarding this conflict will be discussed in the following section.

The goal of WLC is to establish low and stable workloads on the shop floor in order to establish low and stable throughput times. The control of throughput times is needed in order to reach a good delivery performance, which is measured by the mean and variance of the lateness. First, we will look at the range at which the simultaneous batch machine can be filled, which will lead to a theoretical continuum.


Otherwise, the utilization of the autoclave station would become too high. Regarding the percentage of early jobs in the downstream buffer: a small batch will consist of only late jobs, since it makes no sense to process jobs too early. Therefore, a small batch size results in a low percentage of early jobs in the downstream buffer.

As can be concluded from the above, simultaneous batches consisting of multiple jobs are needed in order to keep the utilization of the autoclave station below 1. So, in the long term a minimum batch size is needed to keep the utilization below 1.
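The lower bound on the batch size can be checked with a back-of-the-envelope utilization computation. The arrival rate and processing time below are hypothetical figures, not case company data; only the number of machines (three autoclaves) follows the case description.

```python
import math

# With arrival rate lam (jobs/hour), batch processing time t_p (hours)
# and m parallel batching machines, the station utilization at batch
# size k is rho(k) = lam * t_p / (k * m): a minimum batch size is
# needed to keep rho below 1 in the long term.

def utilization(k, lam, t_p, m=1):
    return lam * t_p / (k * m)

def min_batch_size(lam, t_p, m=1):
    """Smallest integer batch size with utilization strictly below 1."""
    return math.floor(lam * t_p / m) + 1

lam, t_p, m = 1.5, 6.0, 3   # hypothetical figures; m = 3 autoclaves
k_min = min_batch_size(lam, t_p, m)
# lam * t_p / m = 3.0, so batches of at least 4 jobs keep rho < 1
```

This formalizes the infeasible left end of the continuum below: batch sizes under this minimum overload the batching station in the long run.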

Now that the lower limit of the batch size has been examined, we will focus on the upper limit. The upper limit of the batch size is determined by the capacity of the machine, in company X the autoclave. In the case company, the capacity of the machine depends on a couple of factors: the volume of the autoclave, the number of vacuum connections and the number of moulds available. It can happen, for example, that the autoclave is not fully filled in terms of volume while (almost) all vacuum connections are in use. In that way, the maximum batch size is constrained by multiple factors. In general, processing a batch of the maximum batch size will be defined as (cost) efficient batching. As a downside of efficient batching, a batch consisting of many jobs, and thus much workload for successor stations, can cause large fluctuations at those stations. In the case company, the debag station and the chip-up stations can experience large fluctuations in direct workload when efficient batching is chosen. Furthermore, the utilization of the autoclave station is as low as possible when it processes full batches, since the number of batches to process is lowest when the batch size is large.


As described above, efficient batching has negative consequences for keeping workloads at downstream stations low and stable. In addition, efficient batching can have other consequences. When determining which jobs to nest in a batch, the late jobs should be chosen first. Then, when efficient batching is preferred, early jobs can be processed in the same batch. But the autoclave cannot start processing a batch until all of its jobs have arrived. So, the larger the chosen batch size, the longer the first arrived job has to wait until processing starts. This waiting time is often referred to as Time-to-Batch (Hopp and Spearman, 2008). Even when the job with the least slack arrives first at the autoclave, the other jobs still have to be processed by the cutting and lay-up departments before the batch can be processed by the autoclave. Since the first job at the autoclave has to wait for the last one, this waiting time has to be accounted for so that the first job will still meet its due date. How to deal with this will be discussed in section 4.6.
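The Time-to-Batch effect can be made concrete with a minimal sketch (arrival times are hypothetical; the thesis gives no data): the batch starts at the latest arrival, so the first-arriving job waits for the gap between the earliest and the latest arrival, and that gap tends to grow with the batch size.

```python
def time_to_batch(arrival_times):
    """Waiting time of the first-arriving job before the batch can start:
    the autoclave only starts once the last job of the batch has arrived."""
    return max(arrival_times) - min(arrival_times)

# Adding a fourth, later-arriving job to the batch doubles the wait here.
print(time_to_batch([3.0, 7.5, 5.0]))        # 4.5
print(time_to_batch([3.0, 7.5, 5.0, 12.0]))  # 9.0
```

This is exactly the waiting time that due date setting has to absorb when large batches are preferred.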

The considerations presented above will now be represented in a continuum. The continuum describes the implications of the chosen batch size for 1) the costs per product, 2) the utilization of the simultaneous batch station, 3) the workload fluctuations at the downstream stations and 4) the % of early jobs in the downstream buffer.

Long-term batch size k                        k = 1                      1 < k < max. batch size    k = max. batch size
Costs per product                             high                       moderate                   low
Utilization simultaneous batch station        too high (on long term)    high                       low
Workload fluctuations downstream station      low                        moderate                   high
% of jobs with high slack in chip up buffer   low                        moderate                   high

A long-term batch size of k = 1 lies in the not-feasible area, since the utilization of the batching station would become too high in the long term; batch sizes from the minimum feasible size up to the maximum batch size keep the utilization of the batching station < 1.

Figure 4.4. Continuum: efficient batching – low and stable workloads at downstream station

4.5 Job entry level


For company X, job rejection is not relevant. As a consequence, company X controls the long-term utilization levels by adjusting capacity: output control. Controlling the workloads can influence the throughput times, since there is a relationship between workload and throughput time (Little's law: Work-In-Process = Cycle Time x Throughput).
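The relationship in Little's law can be illustrated with hypothetical numbers (the throughput of 4 jobs per day below is an assumption for illustration, not case data): with throughput fixed by demand, lowering the work-in-process on the floor directly lowers the cycle time.

```python
def cycle_time(wip, throughput):
    """Little's law rearranged: cycle time = WIP / throughput
    (jobs divided by jobs-per-day gives days)."""
    return wip / throughput

print(cycle_time(wip=40, throughput=4.0))  # 10.0 days on the floor
print(cycle_time(wip=20, throughput=4.0))  # 5.0 days: halved WIP, halved cycle time
```

This is the mechanism WLC exploits: limiting the released workload (WIP) is an indirect way of controlling throughput times.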

In the current situation of company X, due dates are not related to the workload status on the shop floor; instead, due dates are set as if capacity were infinite. When implementing WLC, realistic due dates can be determined based on the workloads in the job pool and on the shop floor. Since the products that company X produces differ in routing and throughput times, planned throughput times have to be determined from the shop floor status to minimize the variance of lateness.

In summary, the specific characteristic of the production process of company X, simultaneous batch processing of incompatible job families, does not have consequences for the job entry level of the WLC concept. The input control still consists of due date assignment and job acceptance, as described in chapter 2.1. For company X, job rejection is not relevant, since all jobs are accepted. The output control remains medium-term capacity adjustment, as in the original WLC concept.

4.6 Job release level

This section will focus on the job release level when dealing with simultaneous batching of incompatible job families.

Land et al. (2014) argue that the primary focus of WLC research is on job release. Controlled job release should keep the queues on the shop floor small and steady. The release level has a strong influence on the work-in-process on the shop floor and therefore on the related throughput times. Releasing a job affects the workloads of the stations in the routing of that job. For gateway stations, like the cutting department in company X, the workload consists of direct load only. For downstream stations, like the chip-up department in company X, the proportion of upstream load in the total workload is higher. Determining the direct workload is easy, as we assume that the routing of a job and the processing times are known. Determining the upstream load is similarly straightforward, as shown in chapter 2.4. For the cutting layers and lay-up departments, the workload of a job is equal to the processing time at those stations.
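The workload bookkeeping described here can be sketched as follows. The routings and processing times are hypothetical; the rule is the aggregate load idea used later in this thesis: a released job counts towards every station in its routing that it has not yet passed.

```python
def aggregate_loads(released_jobs):
    """Aggregate load per station: every released job contributes its
    processing time at station s to the load of s until it has been
    processed there (stations already passed no longer count)."""
    loads = {}
    for job in released_jobs:
        for station, p_time in job["routing"]:
            if station not in job["completed"]:
                loads[station] = loads.get(station, 0) + p_time
    return loads

shop = [
    {"routing": [("cutting", 2), ("lay-up", 3), ("autoclave", 1), ("debag", 1)],
     "completed": {"cutting"}},                  # already past the gateway station
    {"routing": [("cutting", 1), ("lay-up", 2)], "completed": set()},
]
print(aggregate_loads(shop))
```

Note how the gateway station (cutting) carries only direct load of newly released jobs, while lay-up accumulates direct plus upstream load, matching the distinction made above.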


Jobs in the pool have different due dates and therefore different slack. The jobs with the least slack have to be released first in order to meet their due dates. Then, since these jobs have to be processed by the autoclave, the release function should consider nesting possibilities in order to batch efficiently. Assume job 1 has the least slack and belongs to job family A. For efficiency purposes, other jobs of job family A can be released after job 1 in order to nest jobs in the autoclave. But this early release of jobs has the following consequences: 1) early jobs can demand capacity at the upstream stations and cause delay for late jobs of other families; 2) early jobs will arrive with a lot of slack at the debag buffer because they were nested in the autoclave. For the first consequence, early jobs can compete with late jobs for the same resource. The dispatching rules at upstream stations have to cope with this possible conflict; this will be discussed in section 4.7. For the second consequence of efficient batching, dispatching rules at downstream stations have to deal with the early jobs; this will be discussed in section 4.8.

A decision at the release function should weigh the possible conflict between meeting due dates of jobs and efficiency possibilities. In order to batch efficiently, the release of late jobs with negative slack could be postponed so that jobs of the same job family can be nested. Conversely, when meeting due dates has the higher priority, it can be necessary to release batches consisting of just one job. For example, assume two jobs of different job families both have zero slack and have to be processed by the same lay-up station. Zero slack means that the jobs have to be released immediately in order to meet their due dates. Therefore, these jobs should not be nested, because giving one of them priority would cause lateness for the job that is not released.
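One way this trade-off could be operationalised is sketched below. The rule, the slack values and the parameter names are assumptions for illustration (the thesis does not prescribe an algorithm): release the least-slack job, nest same-family jobs that are not too early, and stop nesting as soon as a job of another family has run out of slack.

```python
def select_release(pool, nest_slack_limit, batch_capacity):
    """Release the job with the least slack and nest same-family jobs behind it.

    Nesting stops as soon as a job of ANOTHER family has zero or negative
    slack, because that job must be released immediately rather than
    overtaken; same-family jobs are only nested while their slack is below
    the limit, so very early jobs stay in the pool.
    """
    pool = sorted(pool, key=lambda j: j["slack"])
    batch = [pool[0]]
    for job in pool[1:]:
        if len(batch) >= batch_capacity:
            break
        if job["family"] != batch[0]["family"]:
            if job["slack"] <= 0:
                break  # an urgent job of another family blocks further nesting
            continue
        if job["slack"] <= nest_slack_limit:
            batch.append(job)
    return [j["id"] for j in batch]

pool = [{"id": 1, "family": "A", "slack": -1},
        {"id": 2, "family": "A", "slack": 3},
        {"id": 3, "family": "B", "slack": 5},
        {"id": 4, "family": "A", "slack": 9}]
print(select_release(pool, nest_slack_limit=6, batch_capacity=3))  # [1, 2]
```

Job 4 stays in the pool because its slack exceeds the nesting limit: pulling it forward would only add an early job to the downstream buffers.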


In summary, the release function should first consider the workloads at the debag station. Second, it should consider the possible conflict between efficient batching and meeting due dates of late jobs. Last, the availability of moulds and materials is important to consider at the release level.

4.7 Priority dispatching at upstream stations

This section will discuss the dispatching rules that should be applied at stations upstream of the simultaneous batching process of incompatible job families. In the case of company X, these are the cutting layers and lay-up departments.

It is generally assumed in a wider operations management literature (i.e. beyond that specific to the WLC concept) that, if order release is controlled, only simple priority dispatching needs to be applied on the (less congested) shop floor (Land et al., 2014). Prior simulation research on the WLC concept has shown that the more sophisticated dispatching rules come into conflict with the functioning of order release (Land, 2004). In fact, early studies on load-based order release methods suggested that using a release method avoids the need to deviate from simple first-come-first-serve (FCFS) dispatching on the shop floor (Bechte, 1988).

As can be concluded from the above, the release function should control the workloads. In addition, as stated in section 4.6, the release function should consider nesting possibilities. Therefore, when jobs are released in the sequence of job families, dispatching at upstream stations should not deviate much from the incoming job sequence. FCFS dispatching is the most appropriate rule to keep the job sequence as determined at job release. As jobs are released in order of job families, it is not a problem to deviate from the job sequence as long as the deviation stays within a job family. For example, suppose jobs are released in the following sequence: job 1 (family A), job 2 (family A), job 3 (family A), job 4 (family B) and job 5 (family B). For nesting possibilities at the autoclaves, it is not a problem to process jobs 1, 2 and 3 in a different order at stations before the simultaneous batching process. A motive for this could be sequence-dependent set-ups, unavailability of moulds or unavailability of material.
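The "reorder only within a family" condition from the example above can be checked mechanically. This is a sketch under the simplifying assumption that each family is released as one contiguous block; it verifies that a station's processing sequence never lets a job overtake across a family boundary.

```python
import itertools

def family_order_preserved(released, processed):
    """True if `processed` contains the same jobs as `released` and only
    reorders them WITHIN job families, never across family boundaries."""
    collapse = lambda seq: [f for f, _ in itertools.groupby(j["family"] for j in seq)]
    same_jobs = sorted(j["id"] for j in released) == sorted(j["id"] for j in processed)
    return same_jobs and collapse(released) == collapse(processed)

released = [{"id": 1, "family": "A"}, {"id": 2, "family": "A"},
            {"id": 3, "family": "A"}, {"id": 4, "family": "B"},
            {"id": 5, "family": "B"}]
ok = [released[i] for i in (2, 0, 1, 3, 4)]   # 3, 1, 2, 4, 5: within-family swap
bad = [released[i] for i in (0, 3, 1, 2, 4)]  # job 4 overtakes family A jobs
print(family_order_preserved(released, ok))   # True
print(family_order_preserved(released, bad))  # False
```

Collapsing consecutive duplicates reduces each sequence to its family pattern (here A, B), so any cross-family overtaking shows up as a changed pattern.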


4.8 Priority dispatching downstream

In this section, priority dispatching at stations downstream of the simultaneous batching of incompatible job families is discussed. These are the stations downstream of the autoclaves.

After the jobs have been processed by the autoclave, they are stored in the debag buffer. Because a batch consists of jobs with a variety of slack values, due to the nesting, priority dispatching is needed in order to reduce the variance of lateness. Late jobs should be processed earlier than early jobs, since late jobs have little slack. Therefore, the debag station should dispatch according to the least slack rule. Furthermore, it could be argued that early jobs should not be processed by the debag station as long as they have high slack, because jobs become more valuable the further downstream they are in the production process; jobs are thus cheapest at the most upstream station. The most upstream station after the autoclave is the debag station. Therefore, to reduce inventory costs, early jobs should be kept in the most upstream buffer after the simultaneous batching station. In company X this is not favourable, since the jobs in the debag buffer still contain their moulds. Because the moulds have to be used again at the lay-up departments, the debag department processes all available jobs as soon as possible. The debag department therefore processes an entire batch, no matter whether jobs are early or late. The batch should be processed in order of least slack: first the latest jobs and last the earliest jobs.
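A minimal sketch of this least slack sequencing is given below. The slack definition (due date minus remaining processing time minus the current time) is one common operationalisation, assumed here because the thesis does not fix a formula; the job data are hypothetical.

```python
def least_slack_sequence(buffer_jobs, now):
    """Order a buffer by least slack first, with slack taken as
    due date minus remaining processing time minus the current time."""
    return [j["id"] for j in
            sorted(buffer_jobs, key=lambda j: j["due"] - j["remaining"] - now)]

batch = [{"id": "early", "due": 20, "remaining": 2},
         {"id": "late", "due": 5, "remaining": 4},
         {"id": "urgent", "due": 3, "remaining": 4}]
print(least_slack_sequence(batch, now=0))  # ['urgent', 'late', 'early']
```

The same rule serves both the debag and the chip-up station: the latest jobs in a nested batch are worked off first, the earliest last.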

Since the moulds have to be used again, and early jobs are thus also processed by the debag station, both late and early jobs enter the chip-up buffer. The chip-up department should dispatch the jobs according to the least slack rule. Since products in the chip-up buffer are cheaper than after the chip-up department has processed them, early jobs should be kept in the chip-up buffer to save inventory costs. The negative consequence is that the workload of the chip-up department becomes high. As discussed in section 4.4, this is a negative consequence of efficient batching. When small batch sizes are chosen, the number of early jobs in the chip-up buffer is smaller.


4.9 Decoupling and measuring workloads

In this section, a method for measuring the workload will be discussed. In this research we will not determine how the limit on the workloads should be set, but focus on the measurement of workloads. As explained before, the goal of WLC is to improve the delivery performance by limiting the workload on the shop floor. In chapter 2.4 the difference between direct and upstream load is explained. In order to establish a limit on the workload, Land (2004) gives two approaches to combine the direct load and upstream load: load conversion and aggregate load. Here we will use the aggregate load method to calculate workloads, since Oosterman, Land, & Gaalman (2000) argue that this method performs best in a general flow shop. Case company X can be considered a general flow shop, since it has a directed flow and variable routing lengths. The workload of a job j at station s is included in the total workload of station s from the release of the job until the job has been processed by station s. The concept of aggregate load is displayed in figure 4.5.

Figure 4.5. Concept of aggregate load

When processing simultaneous batches of incompatible job families, some problems arise when using the concept of aggregate load. Stations downstream of the autoclave can face high workloads when early jobs are buffered in front of them.


In WLC, these high workloads are not acceptable. Therefore, a solution has to be designed to cope with this problem.

In order to keep using the aggregate load method, since it performs well and is easy to use in practice, a solution is now suggested. The workload of the chip-up station consists of upstream and direct load, but both can contain early jobs. These early jobs do not have to be processed in the near future, yet they contribute to the workload of the chip-up station. To cope with this, the early jobs should be decoupled after the debag department. The workload of the chip-up department will then consist of late jobs only, so the workload measure gives a good indication of the work that actually needs to be processed in the near future by the chip-up department. The early jobs should be re-released to be processed by the chip-up department when the slack of a job has reached a certain limit. Figure 4.6 indicates where the decoupling should be placed in the production process: after the debag station, the moulds return to the lay-up departments, late jobs flow directly to the chip-up stations (chip up 1 to 5), and early jobs wait in a buffer of early and late jobs until the re-release level sends them on.

Figure 4.6 Decoupling of the production process


When the slack of a job in the buffer has reached the norm, the job should be re-released to the chip-up department. This re-release decision based on least slack is in accordance with the downstream dispatching rules explained in section 4.8.
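The re-release check can be sketched as below. The slack norm, the job data and the slack formula (due date minus remaining processing time minus the current time) are assumptions for illustration; the thesis only states that a job should return once its slack reaches a certain limit.

```python
def jobs_to_rerelease(decoupled_buffer, slack_norm, now):
    """Return the decoupled early jobs whose slack has dropped to the norm:
    these re-enter the chip-up workload, the rest keep waiting."""
    return [job["id"] for job in decoupled_buffer
            if job["due"] - job["remaining"] - now <= slack_norm]

buffer_jobs = [{"id": 7, "due": 12, "remaining": 2},   # slack 10: still early
               {"id": 8, "due": 6, "remaining": 3}]    # slack 3: re-release
print(jobs_to_rerelease(buffer_jobs, slack_norm=4, now=0))  # [8]
```

Only re-released jobs count towards the chip-up workload, so the workload measure keeps indicating work that is actually due soon.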

In summary, the workload measurement upstream of the autoclave station can follow the aggregate load method. Downstream of the autoclave, however, stations can face high workloads when early jobs are being buffered, and high workloads in the system contradict WLC theory. Therefore, early jobs should be decoupled after the debag station, so that efficient batching with early jobs does not inhibit the control of workloads at the chip-up department. As a consequence of the decoupling, the jobs in the buffer of early and late jobs have to be re-released.

4.10 Adaptations to the WLC concept for simultaneous batching of incompatible job families

The adaptations needed to the WLC concept are described in this section. This section therefore also covers the implications for WLC theory, which will not be repeated in the discussion chapter.


5. Discussion

This chapter discusses the research. Since this research consists of a case study, most of the discussion is contained in chapter 4; the theoretical implications for the WLC concept are discussed in section 4.10. To place this research in context, the limitations and practical implications are discussed in the following sections.

5.1 Limitations and further work

This section discusses the limitations of this research and recommends further work. This research was mainly conducted by means of an explorative single case study; the generalizability is therefore limited. The most important limitation is probably that the case company is not using WLC principles to any extent. As a consequence, the real-life problems that occur when WLC is implemented in a production process with simultaneous batching of incompatible job families have not been observed. In order to make the results more generalizable, more work is needed. First, fieldwork would be helpful to investigate the difficulties that arise when WLC is applied with simultaneous batching of incompatible job families. The work of Cransberg (2013) and Alblas (2014) gives deeper insight into the additional complexities that arise in real life, but more work is needed.

In addition to fieldwork, simulation studies can be helpful. Release and dispatching rules should be tested in order to analyze their influence on the costs and delivery performance of the production process. The proposed release and dispatching rules are probably the most appropriate for dealing with simultaneous batching while meeting due dates of jobs, but more research is needed. Furthermore, appropriate workload norms should be tested. Last, different arrival distributions of jobs and the size of the batching machine should be analyzed. It could be argued that the more job families there are, the more problems the processing of very early jobs will give. Similarly, the size of the batching machine influences the % of early jobs that will arrive in the chip-up buffer. More simultaneous batching machines with less capacity each will probably also lower the number of early jobs downstream and the fluctuations in the workload of the debag station, because the fewer jobs that can be nested in a batch, the fewer early jobs have to be pulled forward.

5.2 Practical implications


Company X is having problems with its delivery performance, as analyzed in section 4.2. In order to improve its delivery performance, the company is looking into possibilities to implement WLC. Since the market in which company X operates is very competitive, the costs of its products are important. In addition, the simultaneous batching process is very expensive, so efficiency in this step is needed.

This research describes the conflicts that arise when large batch sizes are chosen; these conflicts are described in section 4.4. The resulting continuum should be accounted for at the release level. Furthermore, the release level should incorporate the possibilities for nesting. The impact of releasing early jobs for efficiency purposes can be countered by decoupling jobs after the debag station. When early jobs are decoupled, the downstream stations will not face unnecessarily high workloads. When decoupling is implemented, the early jobs should be re-released once they become late jobs.


6. Conclusion

This research was initiated by a company exploring the possibilities for implementing WLC. The case company has a very expensive simultaneous batching process of incompatible job families: the autoclave process. This batch process can conflict with WLC principles, and the literature does not yet account for the combination of WLC and simultaneous batching of incompatible job families. Therefore, an explorative single case study was conducted. The goal of this research is to investigate the applicability of WLC to simultaneous batching of incompatible job families. First the sub-questions will be answered, then the main research question.

The conflict between efficient batching and low and stable workloads has led to the design of a continuum. In this continuum the batch size is the independent variable. The consequences of the batch size are analyzed for 1) the costs per product, 2) the utilization of the simultaneous batch station, 3) the workload fluctuations at the downstream stations and 4) the % of jobs with high slack in the chip-up buffer. Choosing small batches is not feasible in the long term, since the utilization of the autoclaves would then become too high. The described continuum should be accounted for when dealing with simultaneous batching of incompatible job families.

When looking at the control levels of the WLC concept, the job entry level does not need to be adapted to the simultaneous batching of incompatible job families. In contrast, the job release and dispatching levels should account for it. First, the release level should, in addition to controlling workloads, release jobs in the sequence in which they are to be processed by the autoclave. The dispatching rules at the upstream stations should be FCFS, where within a released batch another sequence can be established in order to gain advantages at station level. For the downstream stations, the debag and chip-up departments, the dispatching rules should be least slack in order to meet the due dates of jobs.


The adaptations to the WLC concept are that the release level is the most important control level to control workloads and to deal with simultaneous batching of incompatible job families. Furthermore, when efficient batching is favourable, decoupling should be applied in order to keep workloads at downstream stations low. This decoupling creates the need for a re-release level, which is an extension of the original WLC concept.


References

Alblas, J. (2014). Exploring arising complexities when implementing workload control in a real life Make-to-Order company.

Balasubramanian, H., Mönch, L., Fowler, J., & Pfund, M. (2004). Genetic algorithm based scheduling of parallel batch machines with incompatible job families to minimize total weighted tardiness. International Journal of Production Research, 42(8), 1621–1638. doi:10.1080/00207540310001636994

Bard, J. F., & Golany, B. (1991). Determining the number of kanbans in a multiproduct, multistage production system. International Journal of Production Research, 29(5), 881–895.

Bechte, W. (1988). Theory and practice of load-oriented manufacturing control. International Journal of Production Research. doi:10.1080/00207548808947871

Cransberg, V. (2013). Accommodating the workload control concept to the dynamics of real life job shops, 120505276.

Dar-El, E. M., Herer, Y. T., & Masin, M. (1999). CONWIP-based production lines with multiple bottlenecks: performance and design implications. IIE Transactions, 31, 99–111.

Erramilli, V., & Mason, S. J. (2006). Multiple Orders Per Job Compatible Batch Scheduling. IEEE Transactions on Electronics Packaging Manufacturing, 29(4), 285–296. doi:10.1109/TEPM.2006.887355

Gokhale, R., & Mathirajan, M. (2013). Minimizing total weighted tardiness on heterogeneous batch processors with incompatible job families. The International Journal of Advanced Manufacturing Technology, 70(9–12), 1563–1578. doi:10.1007/s00170-013-5324-z

Hendry, L., Land, M., Stevenson, M., & Gaalman, G. (2008). Investigating implementation issues for workload control (WLC): A comparative case study analysis. International Journal of Production Economics, 112(1), 452–469. doi:10.1016/j.ijpe.2007.05.012

Jolai, F. (2005). Minimizing number of tardy jobs on a batch processing machine with incompatible job families. European Journal of Operational Research, 162(1), 184–190. doi:10.1016/j.ejor.2003.10.011

Land, M. (2004). Workload control in job shops, grasping the tap.

Liu, L., & Zhang, F. (2008). Minimizing Number of Tardy Jobs on a Batch Processing Machine with Incompatible Job Families, 282–285. doi:10.1109/CCCM.2008.107

Malve, S., & Uzsoy, R. (2007). A genetic algorithm for minimizing maximum lateness on parallel identical batch processing machines with dynamic job arrivals and incompatible job families. Computers & Operations Research, 34(10), 3016–3028. doi:10.1016/j.cor.2005.11.011

Mehta, S. V., & Uzsoy, R. (1998). Minimizing total tardiness on a batch processing machine with incompatible job families. IIE Transactions, 30(2), 165–178. doi:10.1080/07408179808966448

Oosterman, B., Land, M., & Gaalman, G. (2000). Influence of shop characteristics on workload control. International Journal of Production Economics, 68, 107–119. doi:10.1016/S0925-5273(99)00141-3

Uzsoy, R. (1995). Scheduling batch processing machines with incompatible job families. International Journal of Production Research, 33(10), 2685–2708.
