
Improving the performance of workload control in job shops with long routings

By:

Arvid Hornyák

Supervisors:

dr. Martin Land

drs. Remco Germs

Faculty of Economics and Business

MSc BA Operations & Supply Chains

August 2010

Abstract


1. Introduction

Increased demand for specialized products has led to a large growth in the number of Make-To-Order (MTO) companies, resulting in greater competition amongst these companies and an increasing strategic importance of lead times and delivery reliability. In MTO companies, products are made to customer specifications. Many MTO companies are Small and Medium sized Enterprises (SMEs). Small companies, with limited financial resources, are particularly at risk of suffering the consequences of implementing an inappropriate, and hence unsuccessful, Production Planning & Control (PPC) approach. To date, the most successfully adopted PPC concept among MTO companies is Workload Control (Stevenson et al., 2005). Workload Control (WLC) conceptualizes the job shop as a queuing system: in front of each workstation, an arriving job finds a queue of jobs waiting to be processed (Land & Gaalman, 1996). The approach of WLC is to control these queues so that throughput times become predictable for each workstation. Balancing and restricting the load on the shop floor leads to a transparent shop floor with predictable shop floor throughput times (STT). In this way, due dates can be set accurately, enhancing delivery performance.

The restriction of orders to the shop floor creates a pool of unreleased jobs, making the release decision the focal point in WLC. This undisputed focus on the release decision has led to criticism from both practitioners and scientists, particularly concerning shops with high routing length complexity (Henrich et al. 2004a, Dogger et al. 2010, Thürer et al. 2010). Routing length complexity can be characterized by two indicators: the number of workstations visited in sequence within a job shop (i.e., the routing length) and the routing length variability amongst jobs.

One concern in job shops with long routings is that the input-estimating quality of the release decision worsens with the number of workstations visited in a routing. This may lead to undesired (cyclic) effects in the workloads of downstream workstations (Breithaupt et al., 2002). This is corroborated by Soepenberg et al. (2010), who note that decisions at the time of release are subject to high uncertainty about the future progress of orders.

Another issue is the variability in routing length amongst jobs. Land (2004) argues that jobs with long routings waiting in the job pool may have insufficient chances of being released in time. Moreover, these jobs require a slacker bounding of the released workloads (Thürer et al., 2010a), as jobs with long routings have a smaller chance of fitting all the norms. Existing theory tends to treat all jobs equally, even though variability in routing length is present. Disregarding this variability in routing length does not result in effective solutions for all jobs (Thürer et al., 2010a).

Many MTO companies need to cope with the routing length complexity just described. Nevertheless, it has received only little attention in previous research on WLC. Conventional research on WLC has mainly focused on job shop models with six workstations (Melnyk & Ragatz, 1989; Oosterman et al., 2000; Land, 2004), with the notable exception of Thürer et al. (2010b), which makes the routing length aspects of WLC a matter of importance. Based on earlier findings and the urging of industrial practitioners, the viability of WLC in job shops with long routings will be explored and suggestions for redesign will be provided. To explore the applicability of WLC in job shops with high routing complexity, a simulation study is performed. The main interest is the balancing capability of WLC in job shops with high routing length complexity.

The structure of this article is as follows. Section 2 analyses the basic principles of WLC and their implications for job shops with high routing complexity, based on a literature review; improvement possibilities for the existing release methods are also described. Section 3 details the experimental design of the simulation study, which is used to analyze the WLC methods in job shops with high routing length complexity. Sections 4 and 5 offer an analysis and discussion of the results. Conclusions of the research are presented in section 6.

2. Workload Control and routing length complexity


Figure 1. TTT components in job shop manufacturing (from Oosterman et al., 2000)

The job pool is a metaphor for unreleased jobs waiting in front of the shop floor, resulting in an average pool time (PT). Within the job pool, jobs compete with each other for release to the shop floor. Whether or not a job is released depends on the release procedure, which consists of two components: a sequencing and a selection decision (Land & Gaalman, 1996). The sequencing decision serves as a timing function, where jobs are sequenced based on their relative urgency. The selection decision serves as a load balancing function, striving for stable loads on the shop floor. Depending on the available capacity on the shop floor, selected jobs in the job pool can be released.

Release procedures in WLC allow for the release of any job from the pool. The jobs j are sequenced in order of a planned release date t*Rj and are then considered for release in this sequence. If a job fits all workload norms it will be selected for release; otherwise the job remains in the job pool, waiting for the next release period. All selected jobs are released at the end of the release period. The rejection of jobs that would exceed workload norms serves as the load balancing function. By balancing the loads on the shop floor, variation in throughput times is reduced. In this way, it supports the timing function, as accurate release dates can be calculated (Land, 2004).

Once released, jobs proceed on the shop floor according to their predetermined routing. For simulation modeling purposes, different routing characteristics are distinguished. The routing of jobs can be completely undirected, as in a Pure Job Shop (PJS) model: in a PJS the same workstation might perform the first operation in the routing of one job and the final operation of another job. Enns (1995) argues that most real-life job shops are characterized by a dominant flow direction, which is the case in a General Flow Shop (GFS) model. In a GFS the routings of orders have the same direction, but jobs may visit only a subset of workstations; any workstation may be skipped in a routing.

In both shop types, routing length may vary. As mentioned in Thürer et al. (2010a), disregarding the routing length variability amongst jobs leads to unequal performance between jobs with short and long routings, and thus to a high standard deviation of lateness. The probability of a job with a long routing fitting the norms is lower than that of a job requiring only a few operations (Land, 2004). Moreover, jobs with long routings may have to wait until their urgency brings them to the front of the pool sequence before they have a reasonable probability of being selected. In contrast, small jobs can often fill a load gap before they become urgent. Thürer et al. (2010a) have shown that prioritizing jobs according to their routing length leads to the best overall performance within a 6-station PJS, with only little deterioration for jobs with shorter routings.

The aforementioned routing issues have been extensively researched (Land & Gaalman, 1998, Oosterman et al. 2000, Breithaupt et al. 2002). Different WLC methods have been developed, each coping with different routing characteristics. The next section explains the most recognized WLC methods.

2.1 Workload control methods

WLC release methods differ mainly in their workload calculation methods. Acknowledged methods are load conversion (A), aggregate load (B) and modified aggregate load (B′).

Figure 2. The contribution of job j across time to the workload (from Oosterman et al., 2000)

Figure 2 illustrates the difference between the three methods with respect to the timing of input to the shop floor. Once a job is released at tRj, method A contributes an estimation of the upstream load to the direct load of a workstation, by means of load conversion. All upstream load portions are depreciated, depending on the current distance from the downstream workstation (Bechte, 1988). The load contribution of a job j to a workstation s is a fraction djst of the operation processing time pjs. This fraction depends on the position of workstation s in the routing of job j at time t and is indicated as the depreciation factor djst (for a detailed explanation of this factor see Land (2004), eq. [3.9]). Once a job enters the queue of a workstation, at tQjs, the full direct load is contributed to workstation s. At the time of completion at a workstation, tCjs, the workload is fully depreciated.

Method B aggregates the direct load and the indirect load of a workstation (Bertrand & Wortmann, 1981). From the release moment tRj, the complete workload for each workstation in a job's routing is contributed. Instead of estimating inputs to the direct loads of a workstation, norms are used to keep the aggregate loads at a stable level.

Method B′ corrects the aggregate load by dividing the load pjs by the position njs of workstation s in the routing of job j; thus the job contributes pjs/njs from the moment tRj (Oosterman et al., 2000). Method B′ can be seen as an estimation of the average direct load resulting from the actual mix of jobs on the shop floor.
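To make the three accounting rules concrete, the following minimal Python sketch (not the authors' simulation code) shows how a single job's release would update the load accounts Ls. The depreciation factor of method A (Land 2004, eq. [3.9]) is not reproduced here; the `depreciation` function below is a simplified, hypothetical stand-in that only illustrates the idea of a contribution decaying with upstream distance.

```python
from typing import Dict, List

def depreciation(position: int) -> float:
    """Hypothetical stand-in for method A's depreciation factor (not eq. [3.9]):
    the further downstream the workstation lies in the routing, the smaller the
    contribution at release."""
    return 1.0 / position

def contribute_at_release(load: Dict[int, float],
                          routing: List[int],       # workstation ids in visit order
                          p: Dict[int, float],      # processing time p_js per workstation
                          method: str) -> None:
    """Add job j's contribution to each workstation's load account at release tRj."""
    for n, s in enumerate(routing, start=1):        # n = position n_js of workstation s
        if method == "B":                           # aggregate load: full p_js everywhere
            load[s] += p[s]
        elif method == "B'":                        # modified aggregate load: p_js / n_js
            load[s] += p[s] / n
        elif method == "A":                         # converted load: depreciated estimate
            load[s] += depreciation(n) * p[s]

# usage: a 3-operation job released into an empty shop under method B'
load = {1: 0.0, 2: 0.0, 3: 0.0}
contribute_at_release(load, routing=[2, 1, 3], p={1: 0.8, 2: 1.2, 3: 1.0}, method="B'")
print(load)   # {1: 0.4, 2: 1.2, 3: 0.333...}, i.e. p_js / n_js per workstation
```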

Land (2004) has already shown that, when routings become completely undirected, method A performs better than method B with respect to TTT. It thus appears to be effective to estimate the influence of job release on the direct load of each workstation in a PJS. As aggregate loads include the indirect workload of a workstation, a momentarily increased workstation position within a job's routing requires an increased aggregate load (Oosterman et al., 2000). In a PJS, workstation positions change continuously, hence the worse performance of method B in a PJS. Conversely, method A performs worse in a GFS by mainly focusing on the direct loads. Oosterman et al. (2000) have shown that the performance problems of method A in a GFS can be attributed to this focus on direct loads at downstream workstations. It might create undesirable (cyclic) effects at these workstations, since the release decision might react to events that take place in the far future. When a downstream workload exceeds a norm, release is completely blocked, causing starvation in later release periods at these downstream workstations; the release decision may then react by releasing more work to the shop floor. These cyclic effects in workload might thus lead to the erroneous restriction or approval of job release, conflicting with the aim of WLC. Evidently, when routings become longer in a GFS, it becomes more difficult to estimate the future input to downstream workstations. Breithaupt et al. (2002) have suggested simply excluding downstream workstations of a job from contribution to the load accounts, so that the release decision is not influenced by events taking place in the far future.


2.2. Identified research gaps

Routing length complexity has received only little attention in the WLC literature, and dealing with high routing length complexity is deemed one of the weaknesses of WLC. Thürer et al. (2010a) have extensively investigated the routing length variability issues. Therefore, the main interest of this paper is the routing length issue as indicated by Breithaupt et al. (2002). Excluding downstream workstations is suggested in the research of Breithaupt et al. (2002); however, simulation results have never been presented. The suggestion was made in order to eliminate the shortcomings of method A in a GFS. On the other hand, excluding downstream workstations may be an effective solution for dealing with long routings in general.

An additional benefit of excluding downstream workstations is that jobs with longer routings have a higher chance of being selected, as the norms of downstream workstations within a job's routing no longer restrict the release of these jobs. This is expected to improve the timing qualities of the release method, thus reducing the standard deviation of lateness among jobs.

Furthermore, the majority of the WLC literature has investigated the applicability of WLC in job shops comprising six workstations (Melnyk et al. 1991, Oosterman et al. 2000 and Land 2004). With routing length being the main interest of this research, the performance of the current workload calculation methods and the suggestions of Breithaupt et al. (2002) will therefore also be explored for shop configurations with twelve workstations, which creates longer routings and higher routing length variability. Recent research of Thürer et al. (2010b) already showed results for a 12-station GFS and PJS; once again, the superior performance of method B′ over method B is emphasized. However, results of method A have not been presented within the research of Thürer et al. (2010b).

Based on the identified research gaps, the scope of this research can be defined. The main research topic of this article is how the different workload calculation methods perform in job shops with long routings and how the exclusion of downstream workstations from workload calculation influences the performance of the workload calculation methods.

2.3. WLC methods for job shops with long routings

Excluding workstations from the workload calculation methods means that the workload calculation method is only applied to a fixed number of consecutive workstations within the routing of a job. This number of workstations in a job's routing that is included in the workload calculation is indicated as the look-ahead limit l. Workload contributions of jobs to workstations that fall within the limit l are determined according to the workload calculation methods described in section 2.1. The adjusted methods for A and B′ are indicated as C and D respectively.

A question that arises is whether the calculated workload of a job for workstations falling outside the look-ahead limit should not be contributed at all, or whether it should be contributed once the job approaches these workstations. No clarity about this matter is given by Breithaupt et al. (2002), which leads to two variants of each workload calculation method:

1. The workload calculation methods do not contribute a job's calculated workload to a workstation s beyond the lth workstation at all.

2. The workload calculation methods contribute a job's calculated workload to a workstation s as the job progresses and arrives within the last l workstations before workstation s.

Both variants can be used when applying either method C or D, with variant 1 denoted as C1 and D1 respectively, and variant 2 as C2 and D2. It goes without saying that for methods C1 and D1 no workload contribution is accounted for workstations with a routing position njs > l, from the moment of release tRj until the moment the job leaves the shop floor tZj.


Figure 3. The contribution of job j across time for methods C2 and D2

Methods C2 and D2 do contribute a job's calculated workloads as the job progresses on the shop floor. Figure 3 shows that method C2 considers the converted workload for a workstation s from the moment of arrival at the workstation with sequence number (njs−l)+1. In other words, the workstation at routing position njs uses workload information of the last l workstations upstream of it in the job's routing, in order to estimate the input to its direct load.

For example, when a job's routing contains four workstations and the look-ahead limit is l=3, the calculated workloads of only the first three workstations increase at the release moment tRj. Once the fourth workstation falls within the look-ahead limit of job j, that is, when the job arrives at the workstation with sequence number (njs−l)+1 (in this case the second workstation), its workload is contributed according to the calculation methods.

The same holds for method D2, where the workload contribution is pjs/l at the moment a job arrives at the workstation with sequence number (njs−l)+1. Compared to method B′, this creates a higher workload contribution for workstations beyond the look-ahead limit. For example, when a job with nine workstations in its routing is released according to method B′, the calculated workload for the ninth workstation in the job's routing is pjs/9. Applying method D2 with l=6 results in a contribution of pjs/6 from the arrival at (njs−l)+1, in this case the fourth workstation. However, this contribution lasts for a shorter period than in method B′, and may therefore still provide a good indication of the indirect workloads.
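The following sketch (an illustration under the definitions above, not the paper's simulator) shows the two contribution moments of method D2: the first l workstations at release, and each further workstation once the job reaches routing position (njs−l)+1. Function names and the worked values are illustrative only.

```python
from typing import Dict, List

def d2_contribution_at_release(load: Dict[int, float], routing: List[int],
                               p: Dict[int, float], l: int) -> None:
    """Method D2 at release tRj: only the first l workstations receive p_js / l."""
    for n, s in enumerate(routing, start=1):
        if n <= l:
            load[s] += p[s] / l

def d2_contribution_on_arrival(load: Dict[int, float], routing: List[int],
                               p: Dict[int, float], l: int, a_jt: int) -> None:
    """When the job arrives at routing position a_jt, the workstation at position
    a_jt + l - 1 (if it exists and lies beyond the first l positions) enters the
    look-ahead window and now contributes p_js / l. Under variant 1 (D1) this
    later contribution is simply omitted."""
    n = a_jt + l - 1                     # the workstation for which a_jt == (n_js - l) + 1
    if l < n <= len(routing):
        s = routing[n - 1]
        load[s] += p[s] / l

# Worked example from the text: a 9-operation job with l = 6. Under B' the ninth
# workstation would receive p/9 at release; under D2 it receives p/6, but only
# once the job arrives at routing position 9 - 6 + 1 = 4.
routing = list(range(1, 10))
p = {s: 1.0 for s in routing}
load = {s: 0.0 for s in routing}
d2_contribution_at_release(load, routing, p, l=6)             # stations 1..6 get 1/6 each
d2_contribution_on_arrival(load, routing, p, l=6, a_jt=4)     # station 9 now gets 1/6
```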

3. Experimental Design

The previous section has indicated several suggested improvements for the existing WLC methods, to deal effectively with long routings. A simulation study is performed in order to evaluate the effectiveness of these suggestions.

Simulation models of e.g. Melnyk et al. (1991), Oosterman et al. (2000) and Land (2004) comprised six workstations, which is deemed sufficient to model variability in job shops. The suggestion of Breithaupt et al. (2002) is to simply exclude the last two workstations, i.e., a look-ahead limit of l=4. This will be verified for a GFS and PJS by means of experiments with six workstations. To create longer routings, shop configurations of twice the conventional size will be modeled. For this shop size the look-ahead limit will be varied over [3, 6, 9], since no indication of the effect of the length of the look-ahead limit can be given beforehand. The experimental variables are summarized in table 1.

Table 1

Model inputs: experimental variables

Routing sequence: directed, undirected
Shop size N and maximum routing length: 6, 12
Load calculation methods: A, B′, C1, C2, D1 and D2
Look-ahead limit l: 4 for N=6; 3, 6, 9 for N=12
Norm level: stepwise down from infinity

For each experimental setting, at least nine norm levels are simulated (including the infinite norm). The first norm level is infinite, resulting in unrestricted release of jobs to the shop floor. Next, a finite norm level is simulated, which is then successively reduced by 15% relative to the previous norm level. As norms are set tighter, STT will decrease. By stepping the norms down in this way, STT can be compared with PT and TTT for each experimental setting.
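As a small illustration of this norm grid (a sketch; the first finite level used here is taken from Table 4 purely for illustration), the levels can be generated as follows:

```python
import math

def norm_levels(first_finite: float, n_finite: int = 9, reduction: float = 0.15):
    """Infinite norm followed by finite levels, each 15% tighter than the previous one."""
    levels = [math.inf]
    norm = first_finite
    for _ in range(n_finite):
        levels.append(round(norm, 2))
        norm *= 1.0 - reduction
    return levels

print(norm_levels(8.87))
# [inf, 8.87, 7.54, 6.41, 5.45, 4.63, 3.94, 3.35, 2.84, 2.42]  (cf. Table 4)
```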


The release procedure consists of the following steps:

1. For each job j in the pool, a planned release date t*Rj is determined by backward scheduling from the due date δj, using the planned throughput times T*Ds of all workstations s in the routing set Sj. That is:

t*Rj := δj − Σ(s ∈ Sj) T*Ds

The planned throughput time T*Ds is defined as in Land (2004).

2. The job j with the earliest planned release date is considered first. A job can only be released when the calculated workloads Ls including the job’s contribution fit into the norms Λs.

Both workloads and norms are expressed in time units, enabling convenient comparison between the two. The calculated workloads are compared with the norms of the workstations: Ls + djst·pjs ≤ Λs for all s ∈ Sj (for A and B′), or Ls + djst·pjs ≤ Λs for all s ∈ Slj (for C1, C2, D1 and D2).

For A, C1 and C2, djst is defined as in Land (2004), eq. [3.9]; this creates a pattern as in figure 2 (method A). For B′, D1 and D2, djst = 1/njs, with njs the position of workstation s in the routing of job j and pjs the required processing time of job j at workstation s.

Methods A and B′ check the workloads of all workstations in the routing of j, i.e. the set Sj. Methods C1, C2, D1 and D2 check the workloads of only the first l workstations in the routing of j, i.e. the set Slj, where l is the look-ahead limit.

3. If job j fits the norms Λs, it will be released and the workloads are updated: Ls = Ls + djst·pjs for all s ∈ Sj (for A and B′), or Ls = Ls + djst·pjs for all s ∈ Slj (for C1, C2, D1 and D2).

4. If the pool contains any jobs that have not been considered yet, then return to step 2 considering the job with the next earliest release date. Else, the release procedure is finished and the selected jobs are released.

Methods C1 and D1 do not consider the workload contributions of workstations outside the set Slj once a job has been released to the shop floor. Methods C2 and D2 do consider the contributions to these workstations as the job progresses along its routing (see figure 3). From the moment njs − ajt < l, with ajt the actual routing position of job j at time t, the workload is updated as follows:

Ls = Ls + djst·pjs

For C2, djst is defined as in method A (see Land (2004), eq. [3.9]). For D2, djst = 1/l.
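As a compact illustration of release steps 1–4 for method B′ and its look-ahead variant D1 (a sketch, not the authors' simulator; methods A/C and the later contributions of C2/D2 are omitted):

```python
import math
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Job:
    routing: List[int]        # workstations in visit order
    p: Dict[int, float]       # processing times p_js
    planned_release: float    # t*_Rj = delta_j minus the sum of T*_Ds over the routing (step 1)

def release(pool: List[Job], load: Dict[int, float], norms: Dict[int, float],
            l: float = math.inf) -> List[Job]:
    """l is the look-ahead limit: l = inf reproduces B', a finite l gives D1."""
    released = []
    for job in sorted(pool, key=lambda j: j.planned_release):               # step 2: earliest first
        checked = [(n, s) for n, s in enumerate(job.routing, 1) if n <= l]  # S_j or S_j^l
        if all(load[s] + job.p[s] / n <= norms[s] for n, s in checked):     # d_jst = 1/n_js
            for n, s in checked:                                            # step 3: update loads
                load[s] += job.p[s] / n
            released.append(job)
    return released                                                         # step 4: release selected jobs
```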

All workstations consist of a single machine with equal, constant capacity. Both undirected (PJS) and directed (GFS) flows are considered. Further modeling assumptions are given in table 2.

Table 2

Model characteristics

Characteristic              N=6 workstations                                     N=12 workstations
Routing length              Uniform [1, N]                                       Uniform [1, N]
Inter-arrival time          Exponentially distributed (mean: 0.648 time units)   Exponentially distributed (mean: 0.602 time units)
Operation processing time   2-Erlang with µ = 1 time unit                        2-Erlang with µ = 1 time unit
Priority dispatching rule   FCFS                                                 FCFS


Once a job enters the pool, a due date δj is set by adding a random allowance to the job's entry time tEj. This creates a variable level of urgency among jobs: δj = tEj + a, with a uniformly distributed on [m, M]. The minimum allowance m is set at 5 + 5·N time units, sufficient to cover a planned workstation throughput time T*Ds of 5 time units for each of N operations, plus a pool time of 5 time units. The pool time results from the release period, which is set at 5 time units, as this shows the least sensitivity to routing length variability (Land, 2006). The maximum allowance M is chosen such that the basic set of experiments results in a percentage tardy between 5 and 20 per cent. The number of operations of a job is discrete uniformly distributed on [1, N]; thus each workstation is equally likely to be visited and is visited at most once in a job's routing. Observations in real-life job shops have shown that processing times can be approximated by a 2-Erlang distribution (Land, 2004); for normalization purposes, a mean of 1 time unit is used. Furthermore, the inter-arrival time of jobs is set so that a workstation utilization of 90% is achieved.
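A sketch of the job generation implied by Table 2 and the due-date rule above (illustrative; the value of M used here is an assumption, since the paper only gives the rule for tuning it):

```python
import random

def generate_job(t_entry: float, N: int = 12, M: float = 120.0, directed: bool = False):
    """Draw one job: routing, processing times and due date."""
    n_ops = random.randint(1, N)                            # routing length ~ U[1, N]
    routing = random.sample(range(1, N + 1), n_ops)         # each workstation visited at most once (PJS)
    if directed:
        routing.sort()                                      # dominant flow direction (GFS)
    p = {s: random.gammavariate(2, 0.5) for s in routing}   # 2-Erlang, mean 1 time unit
    m = 5 + 5 * N                                           # minimum allowance
    # M is a placeholder here; in the paper M is tuned to give 5-20% tardy jobs
    due_date = t_entry + random.uniform(m, M)               # delta_j = t_Ej + a, a ~ U[m, M]
    return routing, p, due_date
```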

For the experimental variables, i.e., routing sequence, workload calculation method (A, B′, C1, C2, D1 and D2), look-ahead limit (for C1, C2, D1 and D2) and norm level, a full factorial design is used. This results in 2 x 14 x 9 experiments, plus the fractional set of experiments for N=6. Each experiment consists of 100 replications. Experiments for N=6 each have a length of 6000 time units with a warm-up period of 3000 time units, as in Land (2006). For N=12, each experiment has a length of 13000 time units, as proposed by Henrich et al. (2004b); however, the warm-up period has been extended to 6000 time units, in order for the system to reach a steady state. The use of Common Random Numbers (CRN) ensures that each experiment processes the same set of jobs, which reduces variance between experiments (Law & Kelton, 1991).

To evaluate the relation between experimental settings and outcomes, several performance measures are used. Table 3 gives an overview of the performance indicators and the workload control aspects they measure.

Table 3

Performance measures                   Model outputs
Norm tightness / shop floor workload   STT
Lead time performance                  TTT
Balancing performance                  TTT, average direct load, direct load standard deviation
Timing performance                     Standard deviation of lateness

STT is used to represent norm tightness, to make norm levels for different methods comparable. By Little's law it is also linearly related to the number of jobs on the shop floor (Little, 1961). The key indicator is TTT, which is inherently related to balancing performance. Other measures of balancing performance are the average direct load and the standard deviation of the direct load. The timing performance is indicated by the standard deviation of lateness, showing the capability to reduce the dispersion of lateness among jobs. All indicators are measured in time units.
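As a worked restatement of this relation (not taken from the paper), Little's law gives

Nshop = λ · STT

with λ the job arrival rate. Using the mean inter-arrival time of 0.602 time units from Table 2 for the 12-station configuration, λ ≈ 1.66 jobs per time unit, so each time unit of average STT corresponds to roughly 1.66 jobs on the shop floor.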

4. Results

This section presents and discusses the results of the simulation study for the investigated experimental parameters. First, the results of the 6-station configurations are discussed; next, the results of the 12-station configurations are evaluated. Special attention is paid to TTT performance and the underlying balancing influences. Finally, the timing performance of the different workload calculation methods is discussed.

4.1. 6-station PJS and GFS performance


Figure 4 shows the results for the 6-station configurations, with TTT plotted against STT. Tightening the norms initially reduces TTT, down to a minimum at some norm level, except for A in a GFS. As norms are tightened beyond this minimum point, jobs will eventually have to wait extremely long in the job pool, resulting in an increase of TTT. When the curve of a method stays uniformly below another curve, the performance of that method is considered superior.


Figure 4. Lead time performance in a 6-station PJS and GFS with a look-ahead limit of l=4.

The performance of the release methods in the PJS is not improved by any of the adjusted methods. The corrections for method A (C1 and C2) even worsen performance slightly. Method A still shows superior performance, although the differences are rather small. The corrections for method B′ (D1 and D2) show performance equal to that of method B′ as norms get tighter.

(10)

4.2. 12-station PJS and GFS performance

The results for the 12-station configurations with a look-ahead limit of l=6 are shown in figure 5. Again, TTT is plotted against STT. Results of applying a look-ahead limit of l=3 and l=9 are presented and discussed in section 4.3.

The most interesting insight is that method A has lost the performance advantage over method B′ in the 12-station PJS that it had in the 6-station PJS. The long routings result in long upstream distances to workstations, making it more difficult to estimate the influence of job release on the direct load of each workstation. Thus, by mainly focusing on the direct loads, the input-estimating qualities of method A have worsened. Method B′ preserves a more stable flow of workloads to each workstation by not reacting to the actual distances to a workstation.


Figure 5. Lead time performance in a 12-station PJS and GFS with a look-ahead limit of l=6.

Applying a look-ahead limit of l=6 for C1 and C2 now improves on method A, which it did not do for the 6-station shop with shorter routings. Correcting method B′ now leads to the best performance, with D1 performing superior to B′ and D2. Recall that method D1 does not contribute a job's workload when njs > l, whereas method D2 contributes a job's workload from the moment a workstation falls within the look-ahead limit, that is, upon arrival at the workstation with sequence number (njs−l)+1. Method D1 thus ensures that the release decision does not react to the far-downstream arrivals of jobs.


In the 12-station GFS, it becomes impossible to reduce TTT by applying method A. In Oosterman et al. (2000) it has been shown that the performance problems of method A in a GFS can be attributed to cyclic effects. Due to the long routings and the directed character of the GFS, stronger cyclic effects appear in this 12-station GFS. Method A is not able to respond adequately to direct load changes at downstream workstations, resulting in periods of starvation followed by periods of overload at these workstations, and in extremely long PT. Applying a look-ahead limit for method A therefore improves performance significantly, since the downstream workstations are simply ignored.

Method B′ shows the best results in a GFS. Correcting for B′ leads to a deterioration when norms are set tighter, as was the case in the 6-station GFS. Method D1 performs worst of these three, causing higher average direct loads at downstream workstations. In addition, the standard deviation of direct loads at downstream workstations also has increased strongly.

In contrast, method D2 unnecessarily restricts jobs from being released, since jobs contribute more calculated workload to downstream workstations than under methods B′ and D1 from the moment a workstation falls within the look-ahead limit of a job, i.e., when the job arrives at the workstation with sequence number (njs−l)+1 in its routing (see figure 3). This may create cyclic patterns similar to those of method A, causing unnecessary restriction of orders.

Figure 6 shows these differences in downstream load influences. The average direct load and standard deviation of the direct load for the best performing norm setting (7.45) are depicted for each workstation. The bars compare methods D1 and D2 (with l=6) with method B′. The desired effectiveness of the load balancing function is achieved by neither D1 nor D2, leading to the deterioration in TTT shown in figure 5.


Figure 6. Direct loads at individual workstations in a GFS

4.3. Varying the look-ahead limit

The look-ahead limit has been varied over [3, 6, 9] for C1, C2, D1 and D2 in the 12-station shop configurations. This section elaborates on the sensitivity of the 12-station PJS and GFS to the look-ahead limit setting.



Figure 7. Lead time performance of method D1 in a 12-station PJS with varying l

This phenomenon may be explained by the fact that norm setting becomes a more delicate concern when the look-ahead limit is shortened. As l is decreased, tighter norms are required in order to reach the same STT reduction. This is depicted in figure 8. Yet again, norms lower than 4.6 are not plotted.


Figure 8. STT reduction related to norm setting within a 12-station PJS


Table 4

D1 (l=3) performance

Norm    Direct load    PT       STT      Utilization
8.87    6.92           3.71     46.12    90%
7.54    6.69           4.74     44.55    90%
6.41    6.35           6.21     42.31    90%
5.45    5.91           8.20     39.44    90%
4.63    5.43           10.77    36.22    90%
3.94    4.86           13.83    32.68    89%
3.35    4.31           17.31    29.03    89%
2.84    3.75           20.58    25.33    87%
2.42    3.17           23.11    21.71    84%

The average STT is drastically reduced at the cost of these unreleased jobs. The measured PT has only slightly increased, since the individual PT of unreleased jobs is infinite and could not be accounted for in the simulation. As soon as jobs remain unreleased during an experiment, the simulation experiment becomes unstable and cannot be included in the results.

Figure 5 already showed that correcting for B′ in the GFS leads to a deterioration when norms are set tighter, as was also the case in the 6-station GFS. Similar performance differences for other parameter settings are therefore not included in the results section. The shorter the look-ahead limit, the more the load balancing function is disturbed, and the same phenomena occur as depicted in figure 6. A look-ahead limit of l=3 shows the worst performance. Method D1 (l=3) does not control the workloads of workstations beyond l at all, resulting in too many jobs being released to the shop floor and thus a high STT. In contrast, method D2 (l=3) unnecessarily restricts jobs from being released, resulting in extremely long PT.

Since the corrections for method A show considerable performance improvements in the GFS, the question of whether l=6 is an appropriate look-ahead limit is of interest. C1 and C2 show similar performance curves for the different look-ahead limits l; therefore only the performance of C1 is depicted in figure 9. Both a shorter and a longer look-ahead limit result in worse performance. C1 with l=9 still faces the problem of reacting too strongly to the direct loads of downstream workstations, resulting in extremely long PT. Using a look-ahead limit of l=3 causes higher direct loads at downstream workstations, resulting in less STT reduction.

Figure 9. Lead time performance of method C1 in a 12-station GFS with varying l

4.4. Timing capabilities of the release decision by workload calculation methods

As explained in section 2.2, applying a look-ahead limit may have a positive effect on the timing capabilities of the release decision. As tight norms favor the release of jobs with short routings, jobs with long routings may normally have to wait longer in the job pool. By using a look-ahead limit, norms of downstream workstations within a job's routing do not restrict the release of these jobs.


In the 12-station GFS, the corrections for method A have resulted in a significant TTT reduction and also achieve a lower standard deviation of lateness. Within the other configurations, corrections for method A show a standard deviation of lateness equal to that of method A.

Methods D1 and D2 result in a slightly higher standard deviation of lateness than method B′. Taking the standard deviation of lateness as a reference, the look-ahead limit thus does not improve the timing qualities of release. Most probably, the new methods result in shorter PT for jobs with long routings, as fewer workload norms restrict their release; on the other hand, jobs with shorter routings may have to wait longer in the pool. This is shown by the weighted average routing length, compared to the unweighted average routing length of 6.5. The weighted average routing length is determined by multiplying each job's routing length by its pool time and dividing the sum of these weighted routing lengths by the sum of pool times. The weighted average routing length is plotted against STT in figure 10.
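A sketch of the pool-time-weighted average routing length as defined above (the numbers are illustrative only):

```python
from typing import List, Tuple

def weighted_avg_routing_length(jobs: List[Tuple[int, float]]) -> float:
    """jobs: (routing length, pool time) pairs of completed jobs."""
    total_pool_time = sum(pt for _, pt in jobs)
    return sum(n * pt for n, pt in jobs) / total_pool_time

# A long-routing job that leaves the pool quickly pulls the measure below the
# plain average routing length:
print(weighted_avg_routing_length([(2, 10.0), (12, 2.0)]))   # ~3.7 vs. a plain mean of 7.0
```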


Figure 10. Weighted average routing length in a PJS and GFS

Figure 10 shows that the weighted average routing length has decreased for the corrected methods of A and B′ with l=6, meaning that jobs with long routings have a shorter PT. This indeed implies that the release of jobs with shorter routings is slowed down. The same holds for the corrected methods in a GFS, with the exception of method D1, which results in a higher weighted average routing length.

5. Summary of results


As routings become longer, differences in performance appear. Method B′ shows better performance than method A in both the 12-station PJS and GFS. Modifying the aggregate load (method B′) by taking the variable workstation positions within a job's routing into account provides a better indication of the future flow of work to a workstation. Reacting strongly to the direct loads by means of load conversion (method A) deteriorates performance when flows become directed: the release decision then creates cyclic effects at downstream workstations. Applying a look-ahead limit to method A strongly improves performance, as shown by the simulation results. Still, a look-ahead limit that is set too short or too long does not lead to the best performance; the setting of the look-ahead limit thus requires careful consideration.

Long routings in a job shop with a completely undirected flow, as in a PJS, also favor the use of a look-ahead limit, regardless of the workload calculation method. In this case, it has proven to be useful to focus on the workloads close to the workstation. Completely neglecting workloads outside the look-ahead limit appears to be even more appropriate. By contributing a job's calculated workload only as the job approaches a workstation, the same phenomena (cyclic effects) as shown in Oosterman et al. (2000) may occur, resulting in erroneous restriction or release of jobs to the shop floor.

By shortening the look-ahead limit, norm setting becomes a more delicate concern in achieving the desired STT reduction. Too tight a norm setting might lead to the rejection of jobs whose individual processing times exceed the norm, as was the case for method B′ in a PJS.

Furthermore, the use of a look-ahead limit has not resulted in the desired improvement in timing performance of the release decision. For jobs with relatively long routings, average waiting time in the job pool has decreased, consequently resulting in longer waiting times for jobs with shorter routings. The standard deviation of lateness has therefore not decreased.

6. Conclusion

Routing length complexity has thus far only received little attention in WLC research. The presence of long routings in job shops is nonetheless irrefutable. The loss of control of the release decision for downstream workstations in job shops with long routings is evident. This requires effective adaptations of the WLC concept, to ensure wider applicability in job shops. The simulation study has indicated several improvements for the WLC methods in order to cope with long routings in job shops.

The WLC release method that performs best in a shop with relatively short routings performs worst in a shop with relatively long routings. Evidently, it becomes more difficult to estimate the direct load in front of a workstation as routings become longer. Focusing on the future flow of work to a workstation and maintaining this at a constant level, has proven to be a more effective approach in shops with long routings.

In order to deal effectively with long routings in job shops, the use of a look-ahead limit has been introduced. The look-ahead limit excludes downstream workstations that a job is planning to visit from the calculations in WLC release methods. Applying a look-ahead limit has proven to be effective in job shops with long routings, when the WLC concept focuses on the direct load of a workstation. Future events on the shop floor do not unnecessarily influence the release of jobs to the shop floor, preserving a more stable flow to workstations.

When the WLC release method already focuses on the future flow of work to a workstation, applying a look-ahead limit becomes a more sensitive and questionable matter. When routings are directed, the look-ahead limit disturbs the incoming flow of orders at downstream workstations. In job shops with undirected routings it is preferable to consider the flow of work that is relatively close to a workstation; a look-ahead limit is thus effective in this case, noting that a look-ahead limit that is set too short leads to norm-setting issues. Simulation results have shown that finding the optimal setting of the look-ahead limit requires consideration of routing characteristics and differs per WLC method. Further research is required to explore the sensitivity of the look-ahead limit under the circumstances arising from different job shop characteristics.


References

Bechte, W., 1988. Theory and practice of load-oriented manufacturing control, International Journal of Production Research, 26(3): 375–395

Bertrand, J.W.M., and Wortmann, J.C., 1981. Production control and information systems for component manufacturing shops, Elsevier Scientific Publishing Company, Amsterdam

Breithaupt, J.W., Land, M.J. and Nyhuis, P., 2002. The workload control concept: theory and practical extensions of load oriented order release, Production Planning & Control, 13(7): 625–638

Dogger, B., Land, M.J., van Foreest, N.D., 2010. Making CONWIP Work in High-Variety Manufacturing

Enns, S. T. 1995. An integrated system for controlling shop loading and work flow, International Journal of Production Research, 33(10):2801-2820

Henrich, P., Land, M.J., Gaalman, G.J.C., 2004a. Exploring applicability of the workload control concept, International Journal of Production Economics, 90: 187-198

Henrich, P., Land, M. J., Gaalman, G. J. C., & Zee, D. J. v. d., 2004b. Reducing feedback requirements of workload control, International Journal of Production Research, 42(24): 5235-5252

Land, M.J., 2004. Workload control in job shops: grasping the tap. Dissertation (PhD). University of Groningen, Labyrint Publications, The Netherlands

Land, M.J., 2006. Parameters and sensitivity in workload control, International Journal of Production Economics, 104: 625-638

Land, M.J., Gaalman, G.J.C., 1996. Workload control concepts in job shops: A critical assessment, International Journal of Production Economics, 46: 535–538

Little, J.D.C., 1961. A proof for the queuing formula: L = λW, Operations Research, 9: 383–387

Law, A.M., Kelton, W.D., 1991. Simulation modeling & analysis, 2nd edn., McGraw-Hill, Singapore

Melnyk, S.A., Ragatz, G.L., Fredendall, L.D., 1991. Load smoothing by the planning and order review/release systems: a simulation experiment, Journal of Operations Management, 10(4): 512–523

Melnyk, S.A., Ragatz, G.L., 1989. Order review/release: research issues and perspectives, International Journal of Production Research, 27(7): 1081 – 1096

Oosterman, B., Land, M.J., and Gaalman, G., 2000. The influence of shop characteristics on workload control, International Journal of Production Economics, 68(1): 107 – 119

Schönsleben, P., 2007. Integral Logistics Management, Operations and Supply Chain Management in Comprehensive Value-Added Networks, Auerbach Publications

Soepenberg, G.D., Land, M.J., Gaalman, G.J.C. 2010. Production Planning and control in job shops with high complexity.

Stevenson, M., Hendry, L.C., and Kingsman, B.G., 2005. A review of production planning and control: The applicability of key concepts to the make to order industry, International Journal of Production Research, 43(5): 869 - 898

Thürer, M., Silva, C., and Stevenson, M., 2010a. Workload control release mechanisms: From practice back to theory building, International Journal of Production Research, 48(12): 3593–3617

Thürer, M., Silva, C., Stevenson, M., 2010b. Optimizing workload norms: The influence of shop floor

Referenties

GERELATEERDE DOCUMENTEN

In particular, the effects of Simons’ levers-of-control (i.e. beliefs systems, boundary systems, diagnostic control systems and interactive control systems) for two different

Moreover, dynamic tension has a positive impact on autonomous motivation under an organic structure, and a negative impact when the organizational structure is

The conceptual schema, the constraints and the domains are defmed as follows. A data structure diagram of the conceptual schema is given in figure B.4. The

People who “live a virtual life” in a virtual world may consider this life to be a more desirable reality then the physical one, for the virtual world frees them from their

After analysis of the data, a planning approach is designed to combine input- and output control to establish efficient batches for a simultaneous batch process characterized

roche tout entièrement pour y faire ma maison ». Ce toponyme actuellement disparu, désigne la colline du chäteau, auquel mène encore une rue de la Roche, seul vestige

I1 s'agit de fragments de meules, de molette et polissoir en grès, un éclat de taille en grès bruxellien, un couteau à dos naturel, des déchets de taille,