
Combining Operation Due Date based Dispatching Rules and Job Release

Operations and Supply Chain management Master thesis

Shiping Xu S1939114

University of Groningen

Faculty of Economics and Business

August 2010

Supervisor: Dr. Martin J. Land

Co-Supervisor: Drs. G.D. (Erik) Soepenberg


Preface

How time flies; my one-year master study at the University of Groningen is almost coming to an end.

At the last stage of my study, I have been writing this master thesis since May 2010. During these days, many people have helped me a lot. Hereby, I would like to express my sincere gratitude to them.

First and foremost, I want to thank my first supervisor Dr. M.J. Land for his patience and critical suggestions throughout the process of the research. Moreover, I appreciate the time and the distinct comments of my second supervisor Drs. G.D. (Erik) Soepenberg. Next to that, I would like to thank Drs. R.A. Ittoo at the University of Groningen, who helped me a lot with my research project.

I would also like to express my appreciation to my parents and my dear friends Arvid, Xuan, Marie, Huajie and Chrisanti for their love and support.


Combining Operation Due Date based Dispatching Rules and Job Release

Abstract

Operation Due-dates (ODDs) can reflect the urgency of an order in a job shop and can be used as priorities for orders on the shop floor. However, the literature has already pointed out that using an ODD priority rule may interact negatively with Workload Control (WLC) release methods (Land, 2004). The planned release date of these methods is based on a certain ODD-based progress pattern. The ODD dispatching rule may lead to severe progress corrections at the workstations, for instance when the planned release date is not met and the first ODD has already passed at the time of release. The performance of two improved ODD dispatching rules is investigated in this study. One, called ODD-J, collects and uses the realized job release date to redistribute the remaining slack across the original ODDs, and the other, called ODD-JO, uses both realized job release and operation completion dates.

The performance of the new techniques is checked based on studies of a job shop with workload control, a specific form of job release control. Performance comparisons are made with the ODD rule without correcting for the realized job progress information. To investigate the performance, computer simulation is used as the research tool. Not only the mean and standard deviation of lateness are checked, but also the mean tardiness and percentage tardy are measured.

The results indicate that the use of realized job progress information can help to decrease the mean lateness and percentage tardy, as compared with the classical ODD priority rule. However, the magnitude of the effect depends on the workload norms selected. The results are dependent on the types of release situations. When the job release is not constrained by workload norms, traditional ODD performs with the least lateness variance and realizes a lower percentage tardy.

However, when the workload of the shop is under strict control, our proposed methods can improve due-date performance beyond the performance of ODD at infinite workload norms.

Keywords: Operation Due Date (ODD), Workload Control (WLC), realized job progress information, load balancing.


1. Introduction

A job shop can be defined as a manufacturing situation where many different products are manufactured to cater for specific customer needs. Selecting a proper job shop control mechanism is essential for the realized completion of jobs on the shop floor (Philipoom et al. 1993). However, achieving optimal efficiency in a job shop is challenging due to various uncertainties and disturbances, such as high variability of arrivals, routings and processing times (Land 2004).

Broadly speaking, a job shop control system consists of three stages (Ahmed and Fisher 1991, Ragatz and Mabert 1988, Land 2004), namely the entry level with due-date assignment, the release level for job release decisions, and the dispatching level based on dispatching priorities, as illustrated in Figure 1.1.

Figure 1.1 Decision moments translated into hierarchical WLC framework (Land 2004)

In most previous research, the due-date assignment (of the entry level) is always assumed to be predefined, and need not be specified. This results in a simpler two-stage model that is more suitable for research focusing on job releases (release level) and the dispatching level. It also avoids the difficulties in due-date selection. This paper uses this two-stage model with the pre-shop stage and the shop stage. The throughput times in this system are shown in Figure 1.2.


For a job j, the time of entry is denoted by t^E_j, the release time by t^R_j, the time of entering the queue of station s by t^Q_js, the time of its completion on station s by t^C_js, and finally the time it leaves the floor after the last operation is completed by t^z_j.

Figure 1.2 Overview of subsystems in terms of throughput times (Land 2004)

For each subsystem a throughput time per job can be defined. The most commonly used throughput times of a job j are the gross throughput time T^G_j, the pool time T^P_j, the floor time T^F_j, and the station throughput time T^D_js for station s. Each throughput time is determined as the difference between an input and an output time; for example, the time between job entry and job release is the pool time.
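As a minimal illustration of these definitions, the following Python sketch derives the four throughput times from the recorded time stamps of a job; the data structure and names are our own illustration, not code from the thesis.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class JobTimes:
    """Recorded time stamps of one job (all values in simulation time units)."""
    entry: float                                   # t^E_j: entry into the pool
    release: float                                 # t^R_j: release to the shop floor
    exit: float                                    # t^z_j: leaving the floor after the last operation
    queue_entry: Dict[str, float] = field(default_factory=dict)  # t^Q_js per station s
    completion: Dict[str, float] = field(default_factory=dict)   # t^C_js per station s

def throughput_times(j: JobTimes):
    """Derive pool, floor, gross and per-station throughput times from the time stamps."""
    pool = j.release - j.entry                     # T^P_j
    floor = j.exit - j.release                     # T^F_j
    gross = j.exit - j.entry                       # T^G_j
    station = {s: j.completion[s] - j.queue_entry[s] for s in j.completion}  # T^D_js
    return pool, floor, gross, station
```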

Jobs arriving at the job shop stay in the pool before release. The job pool can be seen as a release buffer and is very useful in the job shop. Land and Gaalman (1997) pointed out that the pool can absorb fluctuations of the incoming job flow, which also enables the delay of decisions, reduces the waste caused by cancelled jobs, and facilitates later ordering of raw materials. During the pool time, job release control is performed to calculate a planned release date by back scheduling from the due-date. This is done by using a planned throughput time for all stations in the routing of the job.

Workload control (WLC) is a Production Planning and Control (PPC) concept developed particularly for job shop control. The basic purpose of WLC is to provide a specific approach to control the job input by releasing the right job at the right time periodically (Land, 2004).

Several concepts and workload-controlling release methods (Baker 1984, Ragatz and Mabert 1988, Ragatz and Mabert 1984, Wisner 1995, Oosterman et al. 2000) that employ workload norms have been developed. Workload norms refer to the maximum or the minimum workload restrictions of workstations. Restrictive release keeps the job queue size on the floor small and stable. Job releases are based on the job urgency and on the workload norms. Only the jobs that fit in the gap between the present workload and the norm are released. If the load of a job does not fit within the workload norms, the job must wait in the pool until the next release time (Land 2004).

Once released, jobs' progress is controlled by dispatching rules. A job remains on the shop floor until all its operations have been completed (Oosterman et al. 2000). The time period between the release date and the completion date is the shop floor throughput time, and dispatching decisions are carried out in this period. Each station throughput time consists of the queue waiting time before processing and the processing time at the station. Priority dispatching rules are used to select the next job to be serviced from the queue buffer in front of a workstation (Kanet and Hayya 1982).

A large number of priority rules have been developed and their objectives are multidimensional.

The ability to conform to the (predefined) due-dates is essential for the effectiveness of dispatching rules and is the primary concern of shop managers (Melnyk et al. 1986). Also, it has been found that jobs failing to meet their due-dates lead to additional costs, such as contractual penalty clauses and rush shipping costs (Chang, 1997).

Many popular dispatching rules represent some function of a job's due-date (Kanet and Hayya 1982). The ODD priority rule takes more time factors into consideration, and it assigns each operation a due-date to enable better control of the shop. According to Land (2004), ODDs make it easy to locally compare the relative urgency of jobs, to determine the priority of the operations centrally, to reduce lateness dispersion by correcting progress disturbances, and to indicate the planned progress of the job.

However, the simple ODD rule is myopic, as it makes dispatching decisions without obtaining time-phased shop status information beyond the workstation (Ovacik and Uzsoy 1994). Excessive reliance on local, myopic rules has been cited as one of the reasons for the poor performance of industrial scheduling systems (Kempf et al. 1991). This difficulty can be overcome by incorporating job shop information into ODD dispatching. This shop status information is normally considered mainly in the release decision. The release decision can either speed up or postpone release based on the actual shop floor status, in order to balance the workload. But in that case, the originally planned ODDs may no longer be realistic, and enforcing them may disturb the progress pattern required to balance the workloads.


In this paper, we design a new method for improving the ability to meet due-date requirements.

This improved ability should result from better load balancing, which is one of the main effects of WLC. Our method is based on the effective coordination of job release control and ODD-based dispatching. After having investigated the composition of the job throughput time, the release methods, and the dispatching rules, this paper aims to develop and test the coordination mechanism between job release and ODD dispatching by means of a simulation study.

Specifically, the paper investigates the effectiveness of the proposed coordination mechanism to control lateness of jobs.

The paper is structured as follows: we start with a literature review in section 2. After that, the new mechanism is presented in section 3. The model designed for the simulation is discussed in section 4, after which the simulation results are analysed in section 5. Section 6 presents the conclusions.

2. Literature review

This section consists of five parts. It starts with an explanation of operation due-date rules (2.1) before describing workload control (2.2). Section 2.3 discusses the literature related to the combination of dispatching rules and workload control. A summary of the literature review is presented in 2.4. Finally, section 2.5 presents the contribution of this thesis.

2.1 Literature on operation due-dates

Operation due-date rules are those that require the establishment of due-dates for each operation of a job. Kanet and Hayya (1982) mentioned that introducing operation due-dates means the scheduling system must consider more time and process details than rules without operation due-dates, such as the job due-date, the job routing, and an estimate of the job's processing time at each workstation. Also, the estimation of job flow times has been an important issue in the scheduling literature since the late 1960s (Sabuncuoglu and Comlekci 2002). Although due-date assignment rules can predict the final job due-date quite accurately, forecast errors are unavoidable in a real dynamic job shop, where jobs of various types enter and leave the production system continually and randomly (Chang 1996).

The ODD dispatching rule gives the operation with the closest operation due-date the highest priority for processing at the workstation. Also, ODD priorities avoid the difficulty of obtaining realized status information on the job shop, which is likely to change during scheduling.

The progress of a job is influenced by many factors (Chang 1997), including the scheduling heuristic in effect, the number of jobs currently in the job shop, and the processing times of those jobs. A job shop control mechanism that uses realized shop floor information is therefore a productive direction for research.

Ragatz and Mabert (1988) investigated the application of information and computer technology to collect and use updated job shop information in production control. Chang (1997) developed a method to provide real-time estimates of queue times and to incorporate them into existing scheduling heuristics. His simulation results showed that constantly updating the schedule improves the due-date related performance in almost all shop conditions studied. Berry et al. (1975) also gathered and processed queue waiting time data in their research, but their mechanism for incorporating the updated information did not improve schedule performance much. Their analysis shows that the poor performance may be caused by the use of a simple exponential smoothing model to forecast the queue length. They suggest that more sophisticated forecasting models could lead to better results.

2.2 Literature on Work Load Control (WLC)

For years, researchers and practitioners have suggested that job release control policies have considerable potential for improving shop performance (Ashby and Uzsoy 1995). These control policies are regarded as useful tools to limit the number of jobs on the shop floor, to simplify the dispatching task and reduce congestion, and to improve the performance of job shop scheduling.

The mechanism developed to control job release is also called order review/release control (ORR).

Job release decisions typically make use of input control to smooth the flow of jobs through the shop (Wisner 1995). WLC is an important stream of research in ORR (Land and Gaalman 1996).

WLC focuses on developing measuring criteria for load levels on the shop floor. Such criteria are called load norms. The central theme is to use the norm levels in job release to ensure a controlled, low and stable load.

Essentially, three different norms were originally devised. Firstly, load conversion was developed to estimate the contribution of upstream work to the direct load of a station, i.e. the work in queue at the station or being processed there. For each workstation, the direct load after release plus this estimated input is subjected to norms (Bechte 1988, Bechte 1994, Wiendahl 1995). Secondly, the direct and upstream workload are added into the aggregate workload, which is subject to a norm (Bertrand and Wortmann 1981, Kingsman et al. 1989, Hendry and Kingsman 1991). Thirdly, the sum of the upstream, downstream and direct load related to the station is defined as the shop load, which is subjected to a norm. The shop load of a station also includes the work already completed at the station but still downstream on the shop floor (Tatsiopoulos 1983, Oosterman et al. 2000).

Oosterman et al. (2000) compare the advantages and disadvantages of these WLC norms. They also proposed modifications to make the mentioned approaches more robust, including a new technique for calculating the load called the "corrected aggregate load". It corrects the processing time to be included in the workload by multiplying it with the ratio of two throughput times: the time spent by a job at the particular workstation, and its aggregate time spent at the upstream stations and the considered station. The required station throughput time can be estimated with historical data. The corrected aggregate load contribution of each job is thus the product of its processing time and this ratio. When the station throughput times across all stations are equal, the ratio can be simplified further: the aggregate load contributions are obtained by dividing the processing time by the position number of the operation in the routing. According to the simulation results of Oosterman et al. (2000), this release method is effective in adjusting the aggregate load of stations, including work upstream, and in reducing the negative effects caused by the variation of station positions in the job routings.

2.3 Literature on the combination of WLC and dispatching rules

Previous research on job shop control often focuses on only one aspect, either job release or dispatching. However, in real-life applications, effective performance cannot be achieved by simply focusing on a single aspect as the dispatching rule may reduce the load balancing effects of job release control (Land 2004). A few studies have investigated the integration of job release and dispatching. According to Ahmed and Fisher (1994), the performance of the release rules depends on the choice of the due-date and sequencing procedures. Baker et al. (1981) suggested that the use of job review and release procedures can alter the performance of dispatching rules.

Ashby and Uzsoy (1995) showed that the combination of job release policies and dispatching rules has significant effects on system performance. Lu et al. (2010) also demonstrated that both ORR and dispatching rules are highly relevant with respect to due-date and flow-time related performance measures.

Despite the numerous efforts towards investigating job release, dispatching policies and their interactions in various manufacturing environments, desired results are still difficult to achieve, and are often inconsistent. For example, the difficulties to meet due-dates in job shop control contradict the general agreement on its benefits (Bertrand 1983, Ashby and Uzsoy 1995), such as less congestion on the shop floor and shorter, more predictable flow times through the shop.


Land (2004) researched the WLC release methods in combination with ODD dispatching, and the results showed that norm tightening with ODD dispatching causes both the gross throughput times and the standard deviation of lateness to increase. He stated the reason as follows: on the one hand, the dispatching rule loses effectiveness as the release method reduces the choice of jobs in the queue; on the other hand, the unplanned progress of jobs on the floor disturbs the effectiveness of balancing and timing within the release method. Thus, there seems to be a negative interaction between the release method and the ODD dispatching rule when the load norm is tightened.

In addition, the relative importance of job release versus dispatching has been the subject of debate. Some research performed in the field of semiconductor manufacturing (Glassey and Resende 1988, Wein 1988) suggested that the effect of dispatching is smaller than that of the job release policy.

Baker (1984) found that job release does not improve the performance of more sophisticated priority rules. Ragatz and Mabert (1988) concluded that it is unclear how much control of release could improve the effects of the simple dispatching rules, and it is unclear whether the performance of simple dispatching rules in combination with controlled release can be improved sufficiently to equal the performance of more complex rules. However, for control of an assembly job shop, Lu et al. (2010) suggested that the incorporation of ORR control into the existing control of dispatching rules is highly important for better performance.

2.4 Summary of the literature review

Table 2.1 lists the papers that have been reviewed, together with the applied release methods, dispatching rules and relevant conclusions.

Authors | Release methods | Dispatching rules | Conclusions
Kanet and Hayya (1982) | Earliest job due-date (DDATE); smallest job slack (SLACK); critical ratio | Earliest operation due-date (OPNDD); smallest operation slack (OPSLK); operation critical ratio (OPCR) | Setting and attempting to enforce operation due-dates improves every conventional measure of job shop performance. OPNDD performs better than OPSLK.
Baker (1984) | Shop-load-dependent due-dates (S-rule); FCFS | ERD (earliest release date); SPT; MST (minimum slack time); MCR (minimum critical ratio); MDD (modified due-date) | The use and refinement of a job releasing rule is far less important to system performance than the use of an effective priority scheme; the effects of input control on scheduling performance appear somewhat complicated, and input control can be counterproductive.
Melnyk and Ragatz (1989) | No job review (NORR); aggregate workload trigger with work-in-next-queue selection (AGGWNQ); work centre workload trigger with earliest due-date selection (WCEDD) | First come first served (FCFS); shortest processing time (SPT); earliest due-date (EDD); slack per operation (S/OPN) | Presents the importance of ORR. However, ORR cannot improve all aspects of system performance.
Melnyk, Ragatz and Fredendall (1991) | Immediate release (IMM); maximum load limit (MAX) | FCFS; SPT; MST | The combination of load smoothing with ORR results in a stable and predictable system, and their combined effects can improve the performance of simple shop floor dispatching rules like FCFS.
Ahmed and Fisher (1991) | Immediate (IMM); backward infinite loading (BIL); modified infinite loading (MIL); forward finite loading (FFL) | FCFS; SPT; EDD; critical ratio (CR) | There is a three-way interaction between the due-date, release, and sequencing procedures.
Ovacik and Uzsoy (1994) | Job-EDD (J-EDD) | Operation-EDD (O-EDD) | The proposed algorithms perform better than dispatching rules with modest increases in computation time. The amount of improvement depends on the shop configuration, with substantial improvements for the semiconductor testing facility and re-entrant flow shops.
Enns (1995) | Release the job when the first and second required machines have available buffer space | SCR (smallest critical ratio) | The approach, which dynamically adjusts machine buffers, is shown to work well. Input control increases the lead time.
Ashby and Uzsoy (1995) | Pull shortest group processing time (PSGPT) first; pull smallest setup cost job--SGPT (PSSC-S) first; pull smallest setup cost job--LGPT (PSSC-L) first | Earliest due-date (EDD); shortest processing time (SPT); longest processing time (LPT); minimum setup time (MST) | The proposed scheduling policies combining job release and group scheduling significantly outperform current practice for the number of tardy jobs; the job release rules developed significantly impact due-date performance.
Oosterman, Land and Gaalman (2000) | Converted load; aggregate load; shop load; improved aggregate load; improved shop load | FCFS | The relative performance of the release methods changes completely with the flow direction in the shop, and the adjustment makes the traditional release methods more robust.
Lu, Huang and Yang (2010) | All ORR methods including workload control (WLC) are mentioned, but only four are used in the simulation: backward infinite loading with workload control (BILWLC); backward infinite loading for assembly job shops (BILA); backward infinite loading (BIL); immediate release (IMM) | FCFS; FASFS (first arrival into system first serviced); SPT; NUP (number of unfinished parts); TWKR (total work remaining); JDD (product due-date) | Both ORR and dispatching rules are highly relevant for due-date and flow-time related performance. In particular, the incorporation of ORR control into the existing control of dispatching rules could improve assembly job shop control.

Table 2.1 Reviewed papers on the combination of job release and dispatching


2.5 Contributions of this paper

Scheduling is an integral part of production systems planning (Leon et al. 1994). By properly planning the timing of shop floor activities, performance criteria, such as the operation throughput time, mean lateness, and mean tardiness could be optimized. So it is necessary to compute and fix the schedule in advance to achieve optimal performance of the job shop.

However, a major drawback of a pre-computed schedule is the random nature of shop floor conditions, so continuous updating is necessary; otherwise, the performance of the schedule will degrade. One of the main moments at which an update might be useful is the time of release, which is particularly important in situations with controlled release.

Given the problems observed in the literature reviewed above and in Table 2.1, we develop in this paper a technique to better combine WLC and ODD dispatching rules using dynamic job shop information. We want to improve the combination of the ODD dispatching rule and job release by using order progress information and release times that do not comply with the planned ODD schedule. Our techniques are described in detail in the next section.

3. The Proposed Method

In this section, we will describe our proposed techniques to improve the simple ODD. This section starts by presenting how the ODD works before introducing the proposed mechanism.

We use the Operation Due-Date (ODD) dispatching rule to determine the next job in the queue to be processed when a workstation becomes available. The operation due-date of an operation is the job's due-date minus the estimated time required to complete the downstream operations, and the operation with the earliest operation due-date is given the highest priority. Formally, the operation due-date of the i-th operation of job j, ODD_ij, is defined in equation (1):

ODD_ij := δ_j − (n_j − i) · T*^D ,   (1)

where
n_j : the number of operations required by job j;
T*^D : the planned station throughput time, assumed to be the same for all operations;
δ_j : the due-date of job j.
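A minimal sketch of equation (1); the function name and defaults are illustrative assumptions, with the planned station throughput time set to the 5 time units used later in the simulation model.

```python
def operation_due_dates(due_date: float, n_ops: int, planned_stt: float = 5.0) -> list[float]:
    """Equation (1): ODD_ij = delta_j - (n_j - i) * T*D, for i = 1..n_j."""
    return [due_date - (n_ops - i) * planned_stt for i in range(1, n_ops + 1)]

# Example: a job with 4 operations and due-date 40 gets ODDs [25.0, 30.0, 35.0, 40.0],
# so the last operation is due exactly at the job due-date.
print(operation_due_dates(due_date=40.0, n_ops=4))
```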


If the operation is completed by its operation due-date, there will be sufficient time to process the remaining operations and for the job to be completed on time. As the total throughput time is the sum of the floor time and pool time, it is possible to shorten or increase the floor time to offset the negative effects caused by earlier or later job release by adjusting the planned waiting times. It is feasible to reduce the slack time of the unfinished operations to offset the potential lateness caused by upstream operations.

We develop two methods to integrate the release time information with ODD dispatching. Both methods aim at maintaining the load balancing effects of Workload Control (WLC). In the following analysis, we elaborate on our new methods for the case in which the job release date deviates from the planned one; the visualization focuses on the delayed release scenario.

When the realized job release date t^R_j is known, ODD-J reschedules the predetermined operation due-dates correspondingly. This method integrates the control of job release and dispatching.

Figure 3.1 The ODD-J dispatching priority rule (comparing the planned ODD pattern, the realized state and the ODD-J adjustment of a job with three operations after a release delay; the waiting time before processing and the processing time at each workstation are indicated, and it is assumed that the job can still be completed on time, so the situation in which the job cannot be finished on time is neglected)

Based on the actual job release date, the operations of the job are given updated due-dates. The priority of the operations queued before a workstation is then decided by the updated ODDs: the operation with the earliest updated operation due-date ODD'_ij is processed first. The slack remaining at release is redistributed by updating the planned station throughput time:

T'^D_j := (δ_j − t^R_j) / n_j .   (2)


The updated operation due-dates then become

ODD'_ij := δ_j − (n_j − i) · T'^D_j .   (3)

Although this technique of simply updating the ODDs with the realized job release information does not ensure the job's completion before its due-date, it can nonetheless improve the job shop's performance, for instance the mean tardiness and the lateness standard deviation.

Our proposed technique is compared with another new method, called ODD-JO. ODD-JO incorporates both the actual (realized) job release date and the actual (realized) operation completion times in the ODD dispatching rule. For operations that are not completed by their operation due-dates, we adjust the original ODDs of the remaining operations, similar to the release adjustment. The ODD-JO priority dispatching rule repeatedly adjusts the unfinished ODDs until the last operation of the job is completed: every time an operation is completed, all downstream ODDs are updated so as to still meet the planned job due-date.
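The two corrections could be implemented on top of equation (1) roughly as in the sketch below; the function names and the exact treatment of already completed operations are our own assumptions, not the thesis's code.

```python
def odd_j(due_date: float, n_ops: int, realized_release: float) -> list[float]:
    """ODD-J, equations (2)-(3): redistribute the slack remaining at the realized
    release date over all n_ops operations and recompute every ODD."""
    updated_stt = (due_date - realized_release) / n_ops          # equation (2)
    return [due_date - (n_ops - i) * updated_stt                 # equation (3)
            for i in range(1, n_ops + 1)]

def odd_jo_update(due_date: float, n_ops: int, completed_ops: int, now: float) -> list[float]:
    """ODD-JO: after operation number `completed_ops` finishes at time `now`,
    redistribute the remaining slack over the still unfinished operations.
    If the job is already tardy, the slack is negative and is still redistributed."""
    remaining = n_ops - completed_ops
    if remaining == 0:
        return []                                                # job has left the floor
    updated_stt = (due_date - now) / remaining
    return [due_date - (remaining - k) * updated_stt for k in range(1, remaining + 1)]
```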

To evaluate our model, we developed a simulation. The next section discusses the experimental design of the study, including the modeling choices and performance indicators.

4. The Model Description

In this section, we discuss the simulation model. The job and shop characteristics are described in subsection 4.1. The release control method is presented in section 4.2. We also elaborate on the performance measurement variables in section 4.3. The final part of this section is the design of the experiments (4.4).

4.1 The job and shop characteristics

The job shop model in which the mechanism is tested consists of six workstations, each with its own unique capacity. Each operation is processed by one specific workstation, and the routing and operation processing times are known once the job enters the job shop. The number of operations of a job (the routing length) varies between one and six, with all stations having an equal probability of being visited. A station is visited at most once in the routing of a job.

We used the general flow shop model of Enns (1995) because the general flow shop most accurately models real-life job shops. In the general flow shop, some workstations will be more likely to perform upstream operations, and the others will perform downstream operations.

Although a pre-smoothed workload could alleviate the task of job release, it makes little sense to diminish the requirements on controlled release in this paper. So a Poisson arrival process is used to model the arrival of jobs, and the amount of load imposed by arriving jobs is independent of the state of the shop. Oosterman et al. (2000) suggest that a 2-Erlang distribution may be a better choice than the exponential distribution for modeling the processing times found in real-life job shops. So the operation processing times at each station are identically and independently distributed according to a 2-Erlang distribution. Processing times are used as a reference measure, with the average operation processing time being 1 time unit. The arrival rate of jobs is such that the stations have an average utilization level of 90 percent.

Since we only want to investigate the adjustment of operation due-dates, we adopt the approach of Land (2004) to avoid interaction with due-date setting decisions. In this approach, the due-date is set by just adding a random allowance to the job entry time to create a variable level of urgency among jobs (Equation 4).

δ_j := t^E_j + a , with a uniformly distributed on [m, M].   (4)

A minimum allowance m of 35 time units and a maximum allowance M of 60 time units have been used. The 35 time units cover a station throughput time of 5 time units for each of the maximum of 6 operations, plus a waiting time of 5 time units before release.

The model characteristics can be summarized as follows (a small job-generation sketch under these assumptions is given after the list):

 Shop: 6 stations, each with unique capacity;

 Routing length: discrete uniformly distributed on [1, 6];

 Stations in routing: general flow shop, no re-entrant loops;

 Operation processing times: 2-Erlang distributed (mean: 1 time unit);

 Inter-arrival times: exponential (mean: 0.648 time unit).
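A minimal job-generation sketch under the assumptions listed above; the sorted-routing construction is the usual way a general flow shop is obtained in this stream of literature, but the details are our own assumptions rather than the thesis's code.

```python
import random

STATIONS = list(range(6))     # six stations, indexed in the dominant flow direction
ARRIVAL_MEAN = 0.648          # mean inter-arrival time in time units

def inter_arrival_time() -> float:
    """Poisson arrival process: exponentially distributed inter-arrival times."""
    return random.expovariate(1.0 / ARRIVAL_MEAN)

def processing_time() -> float:
    """2-Erlang processing time with mean 1: the sum of two exponentials with mean 0.5."""
    return random.expovariate(2.0) + random.expovariate(2.0)

def make_job(entry_time: float) -> dict:
    """Routing length uniform on [1, 6], each station visited at most once and
    visited in index order (general flow shop); due-date per equation (4)."""
    n_ops = random.randint(1, 6)
    routing = sorted(random.sample(STATIONS, n_ops))
    return {
        "entry": entry_time,
        "routing": routing,
        "p": {s: processing_time() for s in routing},
        "due_date": entry_time + random.uniform(35.0, 60.0),    # allowance a ~ U[35, 60]
    }
```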

4.2 The release method

First, the planned job release time t^R*_j is determined by back scheduling from the due-date, under the assumption that all jobs have an equal planned throughput time T*^D for every station in the routing of j (the set S_j), which results in equation (5):

t^R*_j := δ_j − Σ_{s∈S_j} T*^D .   (5)

This is the basis for the unadjusted calculation of the ODDs. When jobs are not released according to their planned release dates, the planned station throughput time T*^D should be adjusted to still respect the job due-date δ_j: an earlier release of the job leads to an increase of the planned station throughput time, while a delayed release reduces the planned station throughput time to ensure completion of the job before the due-date.

In the literature review (section 2), we mentioned several WLC methods. According to the simulation results of Oosterman et al. (2000), both the uncorrected aggregate load (method B) and the corrected aggregate load (method B') outperform the others in the context of the general flow shop. But method B has the problem that different norms must be set for different workstations: the norms of downstream workstations will be larger than those of the gateway stations. So we adopt method B'. An important advantage of the corrected aggregate load method relative to the uncorrected aggregate load is that norm determination is simplified: its norms can be regarded as norms for the average direct loads, which avoids the problems related to station position. The average direct load can be calculated as in equation (6).

L^D_s(t) = Σ_{j∈J} p_js · I_{[t^Q_js, t^C_js)}(t) = Σ_{j∈J} p_js · (T^D_js / (T^U_js + T^D_js)) · I_{[t^R_j, t^C_js)}(t) .   (6)

Here p_js is the processing time of job j at station s, I_{[a,b)}(t) is the indicator function that equals 1 if t lies in [a, b) and 0 otherwise, and T^U_js is the time job j spends upstream of station s, so that T^U_js + T^D_js = t^C_js − t^R_j. In the second form, each job contributes a fraction of its processing time over the whole interval from its release to the completion of its operation at s; averaged over time, this yields the same direct load. By replacing the actual station throughput times with estimated ones, we get the estimated average direct load, called the corrected aggregate load:

L̃^D_s(t) = Σ_{j∈J} p_js · (T*^D_s / (T^U*_js + T*^D_s)) · I_{[t^R_j, t^C_js)}(t) ,   (7)

where T*^D_s is the estimated (planned) throughput time of station s and T^U*_js is the estimated time job j spends upstream of station s.

In this equation, the time during which the job contributes to the direct load is divided by the time during which it contributes to the aggregate load. The corrected aggregate load can thus be regarded as a prediction of the average direct load.

According to the assumption above, all planned station throughput times are equal in our model. As all the planned station throughput times for one job are the same, the ratio used to multiply p_js reduces to 1/n_js, where n_js is the position of station s in the routing of job j. The corrected aggregate load can then simply be estimated by equation (8):

L̃^D_s(t) = Σ_{j∈J} (1/n_js) · p_js · I_{[t^R_j, t^C_js)}(t) .   (8)


Release takes place once every 5 time units (T = 5) with an infinite time limit, which means that all jobs in the pool are considered for release. Jobs are considered in order of their planned release dates; for each candidate job, the corrected aggregate load that would result from its release is calculated and checked against the norms. A job is released only if its release does not cause the workload norm of any station to be exceeded.
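A sketch of the periodic release check using equation (5) for the planned release dates and the corrected aggregate load contributions of equation (8); the data structures, the single common norm value and the bookkeeping are illustrative assumptions.

```python
def planned_release_date(job: dict, planned_stt: float = 5.0) -> float:
    """Equation (5): back schedule from the due-date over all operations in the routing."""
    return job["due_date"] - len(job["routing"]) * planned_stt

def periodic_release(pool: list, shop_load: dict, norm: float, now: float) -> list:
    """Run every 5 time units: consider all pooled jobs in order of planned release date
    and release a job only if no station's corrected aggregate load norm would be exceeded.
    shop_load[s] holds the current corrected aggregate load of station s; a job's
    contribution p_js / n_js is removed again once its operation at s is completed."""
    released = []
    for job in sorted(pool, key=planned_release_date):
        # corrected aggregate load contribution, equation (8): p_js / n_js,
        # with n_js the position of station s in the routing of the job
        contribution = {s: job["p"][s] / (pos + 1) for pos, s in enumerate(job["routing"])}
        if all(shop_load[s] + contribution[s] <= norm for s in job["routing"]):
            for s in job["routing"]:
                shop_load[s] += contribution[s]
            job["release"] = now
            released.append(job)
            pool.remove(job)
    return released
```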

4.3 Performance indicators

It is hard to generate the optimal schedule with the best performance in practice. As a result, our tests are dedicated to comparing and quantifying the improvement of our proposed techniques. Good performance indicators should be able to measure the performance differences for this comparison.

For the performance indicators, we measure the due-date performance by four aspects: the percentage of jobs tardy, the lateness standard deviation, the mean lateness, and the mean tardiness. For benchmarking, the mean shop floor throughput time is measured as an indicator of the workload reduction realized by the workload norms. Lateness is the difference between the job completion time and its due-date. It is different from tardiness since lateness can be either positive or negative. Tardiness is the positive difference between the completion time and due-date of a job.

Item | Explanation
Mean floor time (T^F_j) | The time from job release to the completion of the last operation
Percentage of jobs tardy (f_T) | The ratio of the number of tardy jobs to the total number of jobs
Mean tardiness (MT) | The average of the job tardiness, where tardiness = max(completion time − due-date, 0)
Mean lateness (ML) | The average of the job lateness, where lateness = completion time − due-date
Lateness standard deviation (LSD) | The standard deviation of the job lateness

Table 4.1 Performance criteria
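A minimal sketch of the due-date indicators of Table 4.1, computed from the completion times and due-dates of the finished jobs (illustrative code, not the thesis's simulation).

```python
import statistics

def due_date_performance(completion_times: list, due_dates: list) -> dict:
    """Mean lateness (ML), lateness standard deviation (LSD), mean tardiness (MT)
    and percentage tardy (fT) over a set of finished jobs."""
    lateness = [c - d for c, d in zip(completion_times, due_dates)]  # L = completion - due-date
    tardiness = [max(l, 0.0) for l in lateness]                      # T = max(L, 0)
    return {
        "ML": statistics.mean(lateness),
        "LSD": statistics.pstdev(lateness),
        "MT": statistics.mean(tardiness),
        "fT": sum(l > 0 for l in lateness) / len(lateness),
    }
```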

We are most interested in the performance differences between methods rather than in the absolute values of the performance measures. These differences can be measured most accurately by giving each method the same set of jobs to deal with. Thus, common random numbers are used to reduce the variance among experiments. Each experiment consists of 100 runs of 6000 time units, each of which includes a warm-up period of 3000 time units.


4.4 Design of the experiments

The preceding description left us with two experimental factors. The factors and their experimental levels are summarized in table 4.2.

Factor | Experimental levels
Norm level | Stepwise down from infinity (12 steps)
Priority rule | ODD, ODD-J, ODD-JO

Table 4.2 Experimental factors

In a given routing context and with a given job release method, we want to compare the dispatching methods at a certain level of norm tightness. Besides the infinite norm level, we choose 12 levels of norm tightness for each priority rule, each level being 85% of the previous one.
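The experimental norm levels can be generated as a simple geometric sequence, each level 85% of the previous one; the starting norm of 12 load units below is purely illustrative.

```python
def norm_levels(start: float, steps: int = 12, factor: float = 0.85) -> list[float]:
    """Norm level k is start * factor**k for k = 0..steps-1 (tightest level last)."""
    return [start * factor ** k for k in range(steps)]

# Example with an illustrative starting norm of 12 time units of corrected load per station:
print([round(n, 2) for n in norm_levels(12.0)])
```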

5. Analysis of Results

For each experiment, values for SFTT (the mean shop floor throughput time), f_T, MT, ML and LSD are determined. There are three priority rules with 12 norm levels each; therefore, there are 36 experiments in total. The mean shop floor throughput time T^F_j is reduced when the workload norms are tightened. T^F_j is set on the horizontal axis of the figures; each dot on a curve represents an experiment with a different norm level, and the end points at the right of each curve represent the experiments at infinite norms.

In this section, we evaluate the performance of the new ODD dispatching rules. Section 5.1 focuses on the mean lateness performance; section 5.2 examines the reasons behind the LSD differences; finally, the performance on the other indicators is presented in section 5.3.


5.1 Mean Lateness (ML)

We start our analysis with the ML (figure 5.1). When the norms are lowered, the ML of both ODD-J and ODD-JO decreases smoothly to a minimum, before starting to increase as shop floor throughput times are lowered further.

The general patterns of ODD-J and ODD-JO can be explained as follows. Tighter workload norms hinder the release of jobs, and consequently the mean job pool time increases. But as fewer jobs are released to the shop, the jobs that are released have shorter queue waiting times, so the mean shop floor throughput time T^F_j decreases. When the reduction of T^F_j is greater than the increase of the pool time, the gross throughput time T^G_j of the job is reduced. As due-dates are pre-assigned, ML decreases to exactly the same extent. When the norms fall below a certain level, the value of ML begins to rise again, as too many jobs are kept waiting in the pool. For ODD, ML already begins to rise when the norm is reduced below the infinite level. ODD-J and ODD-JO thus show the ability to reduce the ML when tightening the norms.

Figure 5.1 Mean Lateness (ML) versus shop floor throughput time for ODD, ODD-J and ODD-JO

Compared with ODD, ODD-J and ODD-JO improve ML considerably at tighter norms. The better ML performance is generally explained by improved load balancing, which should result in stable and short queues. However, load balancing at the time of order release may require speeding up some jobs while delaying others at the same time. ODD will try to correct the resulting deviations from the planned release date, which disturbs the planned load balancing. Therefore, ODD does not lead to a lower ML at tighter norms.


5.2 The lateness standard deviation (LSD)

For LSD, ODD performs better than ODD-J and ODD-JO at infinite norms. The possibilities of ODD seem to be exploited best with the longest queues. The worse performance of ODD-J and ODD-JO at infinite norms is due to the fact that, compared with ODD, they speed up the progress of jobs at the beginning of their routing. This will be explained in more detail in the remainder of this section.

Figure 5.2 Lateness Standard Deviation (LSD) versus shop floor throughput time for ODD, ODD-J and ODD-JO

However, the corrections of ODD-J already provide improvement at tight norms compared to ODD performance at these norms. Compared to ODD-J, the additional adjustments of ODD-JO do not provide a further improvement and even deteriorate the LSD a little.

We may assume that it is the planned station throughput time change which leads to the LSD differences between ODD and ODD-J, as the adjusted ODDs will typically update the planned station throughput time. When the norms are tightened, an increasing number of jobs will be released later than the planned release date. And ODD-J and ODD-JO will then reduce the planned station throughput time to meet the planned job due-dates. On the contrary, at infinite norms, the new methods will increase planned station throughput times, as an average job will have more slack per operation than the original planned station throughput time of 5 time units.

What happens at infinite norms can be explained as follows.

The planned job release time t^R*_j is determined by back scheduling from the job due-date with equal planned throughput times of 5 time units for all stations. The mean routing length of a job is 3.5, as the routing length is discrete uniformly distributed on [1, 6] in the model.


With equation (5), we can determine that on average t^R*_j must lie 30 time units after job entry:

t^R*_j − t^E_j = (δ_j − t^E_j) − n_j · T*^D = 47.5 − 3.5 · 5 = 30 (on average).

The mean due-date allowance in our model is 47.5 time units, the average of the uniform distribution on [35, 60]. In our model, the mean pool time T^P_j at the infinite norm level is 2.5 time units, which is half of the release period of 5 time units. So the mean available time on the shop floor is 45 time units at the infinite norm level. Using ODD-J, the updated station throughput time T'^D_j becomes 45/3.5 ≈ 13 time units, which is much higher than the planned station throughput time of 5 time units adopted by the ODD method. The consequence is shown in figure 5.3.

Figure 5.3 shows how the original ODD priority rule postpones particularly the first operation of the job in case the job due-date is still far away: it will only perform the first operation of such a job when the workstation would otherwise become idle. Particularly for the first operation, the traditional ODD priority rule seems wise, as the only trigger for early processing at the first station will be an idle machine. However, ODD-J may give the first operation of a job priority over other jobs which may be more urgent in reality, for instance jobs approaching their final job due-date at their last operations. ODD-J makes the first operation due-date earlier, which gives this operation an increased priority. For example, at infinite norms it will treat a job being 1 time unit before this first ODD as more urgent than a job being 2 time units before its final due-date.

Figure 5.3 The station throughput time distribution of ODD and ODD-J (with a mean pool time T^P_j = 2.5, planned station throughput times T*^D = 5 for ODD and updated station throughput times T'^D_j ≈ 13 for ODD-J; T*1 = 30 is the available time of the first operation of job j with the method ODD)

Also in Soepenberg et al. (2010), the influence of varying planned operation lead times (PLT) per operation is discussed, suggesting the use of longer planned operation lead times at the beginning of a routing and shorter planned operation throughput times at the end. In a practical case, accelerations in the last stages resulted in many orders to be delivered on time.

Land (2004) and Enns (1994) have studied a similar problem and noted that although the use of ODD with a lower planned station throughput time decreases the priority of a job at the beginning of its routing, it gives a relatively high priority to a job as it approaches the end of its routing. However, ODD-J and ODD-JO do not provide a mechanism to speed up jobs when they are very close to their due-date. Therefore, in terms of LSD, the methods do not show good results at high norm levels but improve at lower norm levels.

We designed an additional simulation for ODD with planned station throughput times of 3 and 7 time units. For each setting we perform 8 experiments, with the norm decreasing stepwise to 85% of the previous level.

Figure 5.4 Lateness Std. (LSD) of ODD with different Planned Station Throughput Times (PSTT) of 3, 5 and 7 time units, versus shop floor throughput time

As shown in figure 5.4, the ODD with a planned station throughput time of 5 performs best, with the lowest LSD across the full scale including the infinite norm level. This confirms our conjecture that it is the longer updated station throughput time T'^D_j which explains the worse performance of ODD-J and ODD-JO at infinite norms.

5.3 Other indicators

As can be observed in figure 5.5 and figure 5.6, the mean tardiness (MT) and percentage tardy (f_T) of ODD-J and ODD-JO reach their lowest levels at a finite norm level, contrary to ODD. Thus, adopting the realized job release time (t^R_j) and the realized operation completion dates (t^C_js) in the schedule improves the MT and f_T performance of the job shop when the norms are tighter.

Notice that f_T is the combined consequence of ML and LSD. The low ML of ODD-J and ODD-JO at tighter norm levels appears sufficient to compensate for their LSD, which is still slightly higher than that of ODD at infinite norms. Therefore, the lowest percentage tardy overall is realized by ODD-J at a tightened norm level, below what ODD achieves at infinite norms.

Figure 5.5 Mean Tardiness (MT) versus shop floor throughput time for ODD, ODD-J and ODD-JO

Figure 5.6 Percentage tardy (f_T) versus shop floor throughput time for ODD, ODD-J and ODD-JO

We can also see that, regarding the mean tardiness (MT), ODD-JO performs better than ODD-J, while the other measures are worse. This relates to the problem that the corrections become illogical for jobs that are already tardy at release or after an operation completion: in that case, a negative slack is redistributed.


6. Conclusion

In this paper, we propose corrections to Operation Due-Dates (ODDs), which are commonly used as priorities in dispatching rules. The original ODD rule did not perform well when combined with restricted order release based on workload norms. We tested two types of corrections.

Method ODD-J recalculates the ODDs by redistributing the remaining slack across all operations when the realized job release date deviates from the original one. ODD-JO makes a similar recalculation after each finished operation. With the tightening of the workload norms, our proposed corrected ODD dispatching rules perform better than the original ODD rule for all measured due-date performance variables. Nonetheless, when considering the percentage tardy and the lateness standard deviation at loose norm levels, ODD performs better. However, high workload norms are not the preferred situation when workload control is applied. So, our proposed methods will be effective in everyday job shop situations.

If we compare the mean tardiness (MT), lateness standard deviation (LSD) and percentage tardy (f_T) of ODD-J and ODD-JO, we can conclude that it is better to adjust the ODDs only after the release decision, rather than to adjust them again after each dispatching step. However, ODD-JO still improves the performance compared to the original ODD rule, which does not use the realized job progress information. Finally, the experimental results indicate that for both new methods all due-date objectives – a small lateness standard deviation, a reduced mean lateness, a low mean tardiness and a small percentage tardy – can be obtained simultaneously by controlling job release with appropriate workload norms. Furthermore, we have tried to point out the reasons for the performance differences. As the new methods gain most in terms of mean lateness, their main contribution must come from improved load balancing; the load balancing function of release was disturbed by the priorities resulting from traditional ODDs.

This research has managerial implications. The use of realized job shop information on deviations from planned progress to update original ODDs can improve the delivery reliability in practice.

However, updating too often might not produce the best schedule. Our simulations show that updating after each operation produces negative effects. Incorporating the proper amount of information will not only reduce the information processing costs, but also improve the due-date performance.


References:

Ahmed, I. and Fisher, W. W. 1992, “Due-date assignment, job release, and sequencing interaction in job shop scheduling”, Decision Sciences, 23(3), 633-647.

Ashby, J. R. and Uzsoy, R. 1995, “Scheduling and job release in a single-stage production system”, Journal of Manufacturing Systems, 14(4), 290-306.

Baker, K. R. and Bertrand, J. W. M. 1982, “A dynamic priority rule for sequencing against due- dates”, Journal of Operations Management, 3(1), 37-42.

Baker, K. R. 1984, “The effects of input control to a simple scheduling model”, Journal of Operation Management, 4(2), 99-112.

Bechte, W. 1988, “Theory and practice of load-oriented manufacturing control”, International Journal of Production Research, 26(3), 375-395.

Bechte, W. 1994, “Load-oriented manufacturing control just-in-time production for job shops”, Production Planning and Control, 35(2), 329-420.

Berry, W. L. Penlesky, R. J. and Vollmann, T.E. 1975, “Critical Ratio Scheduling: Dynamic Due- Date Procedures under Demand Uncertainty”, Management science, 22 (2), 81-89.

Bertrand, J. W. M. and Wortmann, J. C. 1981, “Production control and information systems for component-manufacturing shops”, Elsevier Science Publishers B.V., North-Holland.

Bertrand, J. W. M. 1983, “The use of workload information to control job lateness in controlled and uncontrolled release production systems”, Journal of Operations Management. 3(2), 67-78.

Blackstone, J. H., Phillips, D. T. and Hogg, G. L. 1982, “A state-of-the-art survey of dispatching rules for manufacturing job shop operations”, International Journal of Production Research, 20(1), 27-45.

Chang, F. C. R. 1996, “A study of due-date assignment rules with constrained tightness in a dynamic job shop”, Computers and Industrial Engineering, 31 (1-2), 205-208.

Chang, F. C. R. 1997, “Heuristics for dynamic job shop scheduling with real-time updated queueing time estimates”, International Journal of Production Research, 35(3), 651 – 665.

Cowling, P. I., Ouelhadj, D. and Petrovic, S. 2000, “Multi-agent systems for dynamic scheduling”,

Proceedings of the Nineteenth Workshop of Planning and Scheduling of the UK, pp. 45-54, Ed.


Enns, S. T. 1994, “Job shop lead time requirements under conditions of controlled delivery performance”, European Journal of Operational Research, 77, 429-439.

Enns, S. T. 1995, “An integrated system for controlling shop loading and work flow”, International Journal of Production Research, 33(10), 2801-2820.

Glassey, C. R. and Resende, M. G. C. 1988, “Closed-loop job release control for VLSI circuit manufacturing”, IEEE Transactions on Semiconductor Manufacturing, 1, 36-46.

Hendry, L. C. and Kingsman, B. G. 1991, “A decision support system for job release in make-to- job companies”, International Journal of Operations and Production Management, 11(6), 6-16.

Hendry, L. C., Kingsman, B. G and Cheng P. 1998, “The effect of workload control (WLC) on performance in make-to-job companies”, International Journal of Operations and Production Management 16(1), 63-75.

Kanet, J. J. and Hayya, J. C. 1982, “Priority Dispatching with Operation Due-dates in a Job Shop”, Journal of operations management, 2(3), 167-175.

Kingsman, B. G., Tatsiopoulos, I. P. and Hendry, L. C. 1989, “A structural methodology for managing manufacturing lead times in make-to-job companies”, European Journal of Operational Research, 40, 196-209.

Kempf, K., Russell, B., Sidhu, S. and Barrett, S. 1991, “Artificially Intelligent Schedulers in Manufacturing Practice: Report of a panel discussion”, AI Magazine, 1, 46-56.

Land, M. J. 2004, “Workload control in job shops, grasping the tap”, (dissertation) Labyrint Publications, Ridderkerk, The Netherlands.

Leon, V. J., Wu, D. S. and Storer, R. H. 1994, “Robustness measures and robust scheduling for job shops”, IIE Transactions, 26(5), 32-43.

Leus, R. and Herroelen, W. 2005, “The complexity of machine scheduling for stability with a single disrupted job”, Operations Research Letters, 33 (2), 151-156.

Lu, H. L., Huang, G. Q. and Yang, H. D. 2010, “Integrating order review/release and dispatching rules for assembly job shop scheduling using a simulation approach”, International Journal of Production Research, 1-23

Maxwell, W. L. 1969, “Priority dispatching and assembly operations via job shop”, Report RM-5370-PR, Rand Corporation, CA.


Melnyk, S. A., Vickery, S. K. and Carter, P. L. 1986, “Scheduling, sequencing, and dispatching: alternative perspectives”, Production and Inventory Management Journal, 27, 58-68.

Melnyk, S. A. and Ragatz, G. L. 1989, “Order review/release: research issues and perspectives”, International Journal of Production Research, 27 (7), 1081–1096.

Melnyk, S. A., Ragatz, G. L and Fredendall, L. 1991, “Load smoothing by the planning and order review/release systems: A simulation experiment”, Journal of Operations Management, 10(4), 512-523

Oosterman, B. J., Land, M. J. and Gaalman, G. J. C. 2000, “The influence of shop characteristics on workload control”, International Journal of Production Economics, 68, 107-119.

Ovacik, I. M. and Uzsoy, R. 1994, “Exploiting Shop Floor Status Information to Schedule Complex Job Shops”, Journal of Manufacturing Systems, 13(2), 73-84.

Ouelhadj, D. and Petrovic, S. 2009, “A survey of dynamic scheduling in manufacturing systems”, Journal of Scheduling, 12(4), 417-431.

Panwalker, S. S. and Iskander, W. 1977, “A survey of scheduling rules”, Operations Research, 25(1), 45-61.

Ragatz, G. L. and Mabert, V. A. 1984, “A simulation analysis of due-date assignment rules”, Journal of Operations Management, 5(1), 27-39.

Ragatz, G. L., Mabert, V. A. 1988, “An evaluation of order release mechanisms in a job-shop environment”, Decision Sciences, 19, 167-189.

Sabuncuoglu, I. and Comlekci, A. 2002 “Operation based flow time estimation in a dynamic shop”, Omega, 30 (6), 423–442.

Soepenberg, G. D., Land, M. J. and Gaalman, G. J. C. 2010, “Adapting workload control for complex job shops”, Department of Operations, Faculty of Economics and Business, University of Groningen. Paper under submission.

Tatsiopoulos, I. P. 1983, “A microcomputer-based interactive system for managing production and marketing in small component manufacturing firms, using an integrated backlog control and lead-time management methodology”. In: Research Report, Department of Operations Research, University of Lancaster.

Wein, L. M. 1988, “Scheduling semiconductor wafer fabrication”, IEEE Transactions on Semiconductor Manufacturing, 1, 115-130.


Wisner, J. D. 1995, “A review of the job release policy research”, International Journal of Operations and Production Management, 15(6), 25 – 40.

Wu, D.S., Storer, R. H. and Chang, P. C. 1993, “One-machine rescheduling heuristics with efficiency and stability as criteria”, Computers and Operations Research archive, 20 (1), 1 – 14.
