
Applying A Variety Of Heuristics To Solve A Complex Flexible Job Shop


Master’s Thesis Econometrics, Operations Research and Actuarial Studies
Supervisor: dr. W. Romeijnders


Karstjan Kalma

Abstract

In this paper, we minimize the weighted tardiness of a flexible job shop problem with release times, precedence constraints and fixed unavailability. We develop several Dispatching Rules and a Shifting Bottleneck Heuristic for this problem. In addition, we present a Tabu Search procedure that may improve sub-optimal job shop schedules. Numerical experiments show that none of the heuristics significantly outperform the others. However, since the computation time of the Shifting Bottleneck Heuristic is much larger than the computation time of the Dispatching Rules, the latter are more suitable for practical application.

1 Introduction

With the growing and globalizing world economy, companies are faced with increasingly complex scheduling problems for their production processes. Additionally, to prevent unnecessary costs and to stay competitive in the international market, the quality of these schedules has to be high. The flexible job shop is among the most difficult of these scheduling problems, but very relevant in areas like manufacturing (King, 1976).

A flexible job shop represents a shop with different work centres, each containing a number of identical machines. In this shop, several jobs need to be processed, with each job consisting of a possibly different set of operations that need to be carried out in a predetermined order. Moreover, each operation can only be carried out on one of the machines of a specific work centre.

The objective in the flexible job shop that we consider in this paper is to schedule the operations of the jobs to minimize the weighted tardiness of the jobs. We consider this objective since, in practice, it is very important for a company to deliver its products on time.

To increase the applicability of the flexible job shop, we add several practice-inspired constraints to the classical version of the flexible job shop as described above. In particular, we assign release times and due dates to jobs, representing the time when the first operation of a job can start and the time when a job should be completed, respectively. Moreover, we also consider precedence constraints on operations of different jobs, and we introduce machine-specific time windows during which the machine cannot carry out any operations. We refer to the latter type of constraint as fixed unavailability.


an algorithm developed by Carlier and Pinson (1989), which indicates the complexity of the problem. Therefore, many useful heuristics have been developed for this problem over the years. The heuristics that are used in this paper are Dispatching Rule (DR), Shifting Bottleneck Heuristic (SBH) and Tabu Search (TS). Dispatching Rule is a simple technique where operations are scheduled by prioritizing some operations over others based on a given rule. The Shifting Bottleneck Heuristic divides the whole problem into several smaller problems of only one work centre each that can be solved relatively quickly. The Tabu Search starts with a given initial feasible solution and makes changes to potentially find better solutions. All of these techniques are evaluated in this paper to find the best heuristic for the flexible job shop with these constraints.

Summarizing, the main contribution of this paper is to consider the flexible job shop problem with release times, fixed unavailability and precedence constraints, which has not been studied in this combination before. As this specific flexible job shop problem has not been studied before, several heuristics are used so results can be compared. This paper then suggests the most appropriate heuristics to use for this problem.

Numerical experiments in Section 5 show that none of the heuristics outperform the others. None of the Dispatching Rules stand out from the rest, and no single configuration of the Shifting Bottleneck Heuristic works better than the other configurations for this problem. Yet, the Dispatching Rules often perform better than the Shifting Bottleneck Heuristic, but not consistently. However, since the computation time of the Dispatching Rules is many times lower than the computation time of the Shifting Bottleneck Heuristic, the former are more suitable for practical application. The Tabu Search can be applied to the schedules created by both the Dispatching Rules and the Shifting Bottleneck Heuristic. Results show that the Tabu Search can improve most solutions, at the cost of additional computation time.


2 Literature review

2.0.1 Non-flexible job shop

The job shop problem was first introduced by Muth et al. (1963). The objective in their job shop problem is to minimize the total completion time, called the makespan. Since then, several deterministic solution techniques have been developed to create schedules with minimizing the makespan as objective. A solution technique is considered deterministic if the algorithm can be run several times with the same input, and the results will be identical every time. One of the easiest and earliest of the solution techniques is an algorithm that uses Dispatching Rules, as used by Jones (1973), Holthaus and Rajendran (1997) and Dominic et al. (2004). The dispatching rules give priority to some operations according to predetermined rules. They are easy to implement, easy to extend to new rules and fast to run, but the lack of real optimization makes it unlikely that they generate the optimal solution.

To increase the quality of the job shop schedules, several heuristics have been developed and tested. One of these heuristics is called the Shifting Bottleneck Heuristic and works by dividing the problem into several small subproblems of only a single machine, which can often be solved optimally. The Shifting Bottleneck Heuristic was first introduced by Adams et al. (1988) and uses the branch and bound technique of Carlier (1982) to solve the subproblems. This method is also used by Ivens and Lambrecht (1996) to solve a job shop with release times and precedence constraints. Additionally, setup and transit times are also included for operations rather than just processing times. Like this paper, Ivens and Lambrecht (1996) also choose their constraints for practical application, but they focus on additional components of the processing time rather than on fixed unavailability.

Pezzella and Merelli (2000) combine the Shifting Bottleneck Heuristic with a deterministic variant of the Tabu Search to get even better solutions. The Tabu Search takes a feasible solution and tries to improve it by examining neighbourhoods around it. To prevent cycles and local optima, some previous solutions are considered tabu and will then not be accepted as a new solution. We develop a heuristic based on this procedure for the flexible job shop with release times, due dates, precedence constraints and fixed unavailability.

2.0.2 Flexible job shop

Extending the job shop to the flexible job shop, other techniques are examined to handle the increased complexity. One of the few papers that seek to solve a flexible job shop optimally is Birgin et al. (2014), where a Mixed-Integer Linear Program (MILP) is used. Although they can solve some problems optimally, their instances are relatively small and the corresponding solution time is quite substantial. The MILP provided is very useful in understanding the problem, and their results provide additional evidence of the need for heuristics.


that opposed to the flexible job shop, most of the methods used are non-deterministic, which means that the results will likely not be the same on successive runs. Non-deterministic heuristics are used because adding random elements to algorithms can improve the solution more quickly, at the cost of not being able to guarantee the quality of the solution and making the process harder to track and interpret. None of these 200 papers use the combination of constraints used in this paper. By definition of the flexible job shop, they all consider parallel machines. Besides this, the release times and the precedence constraints are among the most common of the constraints. The Tabu Search is traditionally non-deterministic and is also used a lot to solve the flexible job shop, as shown by Saidi-Mehrabad and Fattahi (2007) and Li et al. (2011). Another popular meta-heuristic is the Genetic Algorithm, as applied by Gao et al. (2006), Lee et al. (2012) and Pezzella et al. (2008).

Several other techniques have been used to solve the flexible job shop problem, but are not often referred to in the literature. Examples of these methods include the GRASP algorithm used by Rajkumar et al. (2011) and the filtered beam algorithm used by Wang and Yu (2010). These papers show that there exists a need for other and better algorithms to solve the flexible job shop, but that finding these alternative algorithms is difficult.

In the literature, by far the most common objective for optimization is to minimize the makespan of the problem. Some papers that, like this paper, minimize the weighted tardiness rather than the makespan are Brandimarte (1993), Mason et al. (2002) and Pinedo and Singer (1999). Brandimarte (1993) uses a Tabu Search to minimize the weighted tardiness of a flexible job shop. Both Mason et al. (2002) and Pinedo and Singer (1999) consider a classical job shop and apply a Shifting Bottleneck Heuristic that is modified to minimize the weighted tardiness.


3 Problem description

3.1 Problem outline

For the classical flexible job shop, we have a set of N jobs, each with a set of operations Oj. Each operation o ∈ O has a given deterministic processing time po and can only be completed at a specific work centre c ∈ C. However, the operation can be completed by any of the machines m ∈ Mc in that work centre. Since we consider identical machines, we assume that the processing time for any operation o ∈ O is independent of the machine. The processing time is assumed to be known and independent of time. For simplicity, we will assume setup and transportation times are either negligible or included in the processing time.

Every job j ∈ N has a due date dj. This due date represents the time at which the customer expects to receive the product in question. We assume the due date includes transport, final checks or packing that is not done by any of the work centres. Since we want to minimize the weighted tardiness, the due date is required to calculate how tardy a given job is. For any job that does not have a due date, we can specify a sufficiently large number, as we do not care how long before the due date the job is completed. Besides the due date, every job j also has a release time rj, which denotes the earliest time that the first operation of the job can start processing. This release time is non-negative and can, therefore, be 0. A common source for release times is that raw materials have to be ordered before production can begin. Also, companies can have planners that roughly plan the orders of the company over a large time-horizon, giving the jobs release times and due dates corresponding only to the time-windows in which they are relevant. Such planning can help limit the size of scheduling problems for a given time-window, as some of the jobs can be ignored. These planners are another common source of release times.

The objective is to minimize the total weighted tardiness. Tardiness Tj is defined as the time that job j is completed after its due date dj. Tardiness is related to lateness Lj, which is the difference between the completion time and the due date, and can, therefore, be negative:

Lj = tj − dj, (1)

where tj and dj denote the completion time and the due date of job j respectively. Since we do not consider any benefits or costs related to finishing a product early, we have chosen tardiness, which can also be written as the maximum between the lateness and 0:

Tj = max(Lj, 0). (2)

Every job will also get a weight wj, which gives the user the power to make some jobs more important than others. The objective function of our problem now becomes:

Σ_{j=1}^{N} wjTj (3)

The model includes several constraints in addition to the classical flexible job shop model. Besides the release times and due dates, we have precedence constraints and fixed unavailability. Precedence constraints are constraints that specify the order in which two different operations can be processed. These constraints are also found in the classical flexible job shop problem, as for any given job an operation has to be finished before the next operation of the job can start production. However, we want to make a distinction between constraints that link two operations of the same job and the constraints that link two operations of different jobs. If the operations are of the same job, we call it a routing constraint. If the operations are of different jobs, we use the term precedence constraints. Precedence constraints occur regularly at assembly, as products that are produced in the same shop are assembled and proceed with further operations as a single product.

Finally, we consider fixed unavailability. Since no machine can run 100% of the time, some preventive maintenance should be accounted for. In this model, we assume that preventive maintenance is known in advance and fixed. The unavailability is specified for a machine rather than a work centre. This means that for a work centre with multiple machines, most can still process operations even if one is unavailable. Fixed unavailability periods can also be used to model jobs that have been scheduled and set beforehand. Assuming these fixed jobs do not have precedence constraints with jobs that still need to be scheduled, the machines can be considered to be unavailable when they are processing the operations of the fixed job.

3.2 MILP

Table 1: The sets used in the MILP

Symbol  Description                  Sub-sets
N       Jobs
C       Work centres
O       Operations                   Oj, j ∈ N or Oc, c ∈ C
M       Machines                     Mc, c ∈ C
E       Order of operations          Er and Ep
B       Periods of unavailability    Bm, m ∈ M


Table 2: The parameters used in the MILP

Symbol  Description
po      processing time of operation o
rj      release time of job j
dj      due date of job j
wj      weight of job j

Table 3: The decision variables used in the MILP

Symbol  Description
yo      starting time of operation o ∈ O
xom     1 if operation o ∈ Oc is processed on machine m ∈ Mc for c ∈ C
zo1o2   1 if operation o1 ∈ O is processed before operation o2 ∈ O
vob     1 if operation o ∈ Oc is processed after unavailability period b ∈ Bm, where m ∈ Mc and c ∈ C

The routing and precedence constraints are contained in E = Er ∪ Ep, where Er is the set of routing constraints and Ep is the set of precedence constraints. An element from this set is given by a pair of operations (o1, o2), where o1 has to be completed before o2 can start. All unavailability periods are contained in the set B. An element b ∈ B is given by b = {bt, bd}, where bt and bd give the starting time and the duration of the unavailable period respectively. The release times, due dates and weights for every job j ∈ N are given in Table 2, as well as the processing time of every operation o ∈ O. L is used as a sufficiently large number.

The decision variable yo gives the starting time of operation o and xom is a dummy variable indicating whether operation o will be processed by machine m. Since the user wants to know when each operation has to begin and which machine has to process this operation, these two sets of variables are enough to generate the final schedule. However, to model the problem mathematically, a few more variables need to be added. To model the order of the operations, zo1o2 is used to indicate whether operation o1 is processed before o2. The dummy variable vob indicates whether operation o is processed after breakdown b.

For the objective function, values for the weights and the due dates for the jobs are needed, denoted by wj and dj respectively. Then the tardiness Tj of job j is given by the maximum of the lateness of job j and 0, where the lateness of job j is calculated by taking the difference between the completion time and the due date of job j, like in Equations 1 and 2. Then, the MILP formulation of the flexible job shop problem is given by:

min Σ_{j=1}^{N} wjTj                                                   (4)

s.t.
yo1 + po1 ≤ yo2                       ∀(o1, o2) ∈ E                    (5)
yo ≥ rj                               ∀o ∈ Oj, j ∈ N                   (6)
Σ_{m∈Mc} xom = 1                      ∀o ∈ Oc, c ∈ C                   (7)
zo1o2 + zo2o1 ≥ xo1m + xo2m − 1       ∀o1, o2 ∈ Oc, m ∈ Mc, c ∈ C      (8)
yo1 + po1 ≤ yo2 + (1 − zo1o2)L        ∀o1, o2 ∈ O                      (9)
yo + po ≤ bt + (1 − xom + vob)L       ∀o ∈ O, {bt, bd} ∈ Bm            (10)
bt + bd ≤ yo + (2 − xom − vob)L       ∀o ∈ O, {bt, bd} ∈ Bm            (11)
Tj ≥ yo + po − dj                     ∀j ∈ N, o ∈ Oj                   (12)
yo ≥ 0, Tj ≥ 0                        ∀o ∈ O, j ∈ N                    (13)
xo1m, zo1o2, vo1b ∈ {0, 1}            ∀o1, o2 ∈ O, m ∈ M, b ∈ B        (14)
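To make the formulation more concrete, the sketch below shows how the core of this MILP could be written with the open-source PuLP modelling library in Python. The small instance, the variable names and the omission of the unavailability constraints (10) and (11) are simplifications for illustration only; this is not the implementation used for the experiments in this paper.

```python
# Minimal PuLP sketch of the MILP: objective (4) and constraints (5)-(9) and (12).
# The unavailability constraints (10)-(11) are omitted for brevity, and the
# instance data below is purely illustrative.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

jobs = {1: ["o1", "o2"], 2: ["o3"]}            # job -> ordered operations (routing)
centre = {"o1": "c1", "o2": "c2", "o3": "c2"}  # operation -> work centre
machines = {"c1": ["m1"], "c2": ["m2", "m3"]}  # work centre -> machines
proc = {"o1": 4, "o2": 6, "o3": 5}             # p_o
release = {1: 0, 2: 2}                         # r_j
due = {1: 12, 2: 9}                            # d_j
weight = {1: 2, 2: 1}                          # w_j
E = [("o1", "o2")]                             # routing/precedence pairs: o1 before o2
L = 1000                                       # sufficiently large number

ops = list(proc)
model = LpProblem("flexible_job_shop", LpMinimize)
y = {o: LpVariable(f"y_{o}", lowBound=0) for o in ops}             # start times
T = {j: LpVariable(f"T_{j}", lowBound=0) for j in jobs}            # tardiness
x = {(o, m): LpVariable(f"x_{o}_{m}", cat=LpBinary)
     for o in ops for m in machines[centre[o]]}
z = {(a, b): LpVariable(f"z_{a}_{b}", cat=LpBinary)
     for a in ops for b in ops if a != b}

model += lpSum(weight[j] * T[j] for j in jobs)                     # objective (4)
for a, b in E:                                                     # (5)
    model += y[a] + proc[a] <= y[b]
for j, job_ops in jobs.items():                                    # (6) and (12)
    for o in job_ops:
        model += y[o] >= release[j]
        model += T[j] >= y[o] + proc[o] - due[j]
for o in ops:                                                      # (7)
    model += lpSum(x[o, m] for m in machines[centre[o]]) == 1
for a in ops:                                                      # (8) and (9)
    for b in ops:
        if a == b:
            continue
        model += y[a] + proc[a] <= y[b] + (1 - z[a, b]) * L
        if a < b and centre[a] == centre[b]:
            for m in machines[centre[a]]:
                model += z[a, b] + z[b, a] >= x[a, m] + x[b, m] - 1

model.solve()
print({o: y[o].value() for o in ops}, {j: T[j].value() for j in jobs})
```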


3.3 Graphical representation

This section gives an example of a flexible job shop problem through a graphical representation. The graph is given in Figure 1. Every node denotes an operation o ∈ O, and every arc denotes a sequence of operations. The solid directed arcs are called conjunctive arcs, which show the routing and precedence constraints and are known and fixed at the start of the problem. These arcs correspond to the constraints in E, given in Section 3.2. The other arcs are called disjunctive arcs, which are decisions that still have to be made, as they represent the order of operations within the work centres. These disjunctive arcs are undirected until a decision has been made, after which they either become directed or are removed. The disjunctive arcs graphically show the decision variables zo1o2 with o1, o2 ∈ O. To prevent clutter, only the disjunctive arcs related to work centre two are shown in Figure 1; the other such arcs are omitted from this figure.

Figure 1: A graphical representation of the flexible job shop problem with release times, precedence constraints and fixed unavailability.


not yet connected with any conjunctive arcs as they do not belong to a job. Rather than showing numbers for the work centre and the job, they get a pair of numbers in parentheses that indicates the work centre and the machine in that work centre respectively. The letter B is also shown in the node to indicate that the node corresponds to an unavailability constraint or a breakdown. In addition to the usual number beside the node that indicates the duration, these nodes also show a number in a box, which denotes the end time of the unavailable period. The other nodes will also get a similar value after the schedule has been generated. These end times will give the resulting schedule for the user. Since the end times for the unavailable period are known and relevant to the algorithm from the start, they are already included. In this example, both work centres 1 and 2 have one machine, but work centre 3 has two machines. Unfortunately, the graphical representation does not display the number of machines in any given work centre until a schedule has been made. This limitation stems from the fact that the graphical representation is created from the perspective of the jobs and the operations, which only consider the machines when the schedule is mostly complete.

Figure 2: A graphical representation of a schedule for the flexible job shop problem with release times, precedence constraints and fixed unavailability.


connected to each other. This is because work centre 3 has two machines, and can, therefore, sustain two chains of conjunctive arcs. As a consequence of these separated chains, some operations of the same work centre can be processed at the same time. Since the unavailability period has a fixed machine, the graph shows that operation (3, 2) has to be processed on machine 1 of work centre 3. That leaves machine 2 for the remaining two operations. If, for a different instance, the machine is not specified in such a manner, the machines can be chosen arbitrarily. Note that every node now has an end time, which is calculated by taking the maximum of the end times of all incoming arcs and adding the processing time to it. Given these end times, the total weighted tardiness related to this schedule can be calculated. Job 2 is on time and therefore adds nothing to the tardiness, but job 1 is two time units too late, and job 3 is seven time units too late. Including the weights of the jobs, we get the following weighted tardiness:

Σ_{j=1}^{3} wjTj = 2 · 2 + 1 · 0 + 1 · 7 = 11 (15)
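The calculation in equation (15) is easy to reproduce in a few lines of Python. The completion times and due dates below are hypothetical values chosen only so that they reproduce the lateness stated above (job 1 two units late, job 2 on time, job 3 seven units late); they are not read from Figure 2.

```python
def weighted_tardiness(completion, due, weight):
    # T_j = max(t_j - d_j, 0), weighted and summed as in equations (1)-(3)
    return sum(weight[j] * max(completion[j] - due[j], 0) for j in completion)

# Hypothetical completion times and due dates that reproduce equation (15)
print(weighted_tardiness(completion={1: 12, 2: 8, 3: 17},
                         due={1: 10, 2: 9, 3: 10},
                         weight={1: 2, 2: 1, 3: 1}))   # prints 11
```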

4 Solution technique

This section explains all the heuristics used in this paper to solve the flexible job shop. Each of the following subsections explains one of the heuristics. Within each subsection, first, the heuristic is described in light of the classical job shop, ignoring the release times, the precedence constraints, the fixed unavailability and parallel machines. Second, it is explained how this heuristic is modified to deal with the additional constraints.

4.1 Job Sequencing

Job Sequencing is among the simplest of the scheduling procedures, and considers one job at a time, temporarily ignoring future jobs and taking previously scheduled jobs as fixed. Every job is scheduled by fitting it into the partially completed schedule. Starting at the first operation of the job, every operation is fixed at the earliest time that can fully service the operation, taking precedence constraints into account. This strategy is repeated for every subsequent operation until the job is fully scheduled. Job Sequencing will create a schedule very fast, and it will be easy to see how the schedule is created. However, the strategy offers few guarantees regarding the quality of the schedule, as the procedure does not consider alternatives and does not take the due dates and weights into account when scheduling. Some optimization will be possible by trying different orders in which the jobs will be scheduled, but as precedence constraints will restrict the relative order of some jobs, this fix will have limited impact.


taking them into account when scheduling the first operation of every job. Since every operation other than the first can be seen as being released after the previous operation is completed, adding release times to the first operation hardly changes the algorithm. The parallel machines give more options to the operation for being scheduled, of which the machine is chosen that can process the operation the earliest. In the event of ties, the machine with the lowest index is chosen to process the operation. Unavailable periods can be considered as a previously scheduled job and therefore do not change the algorithm. Only the precedence constraints will affect the algorithm significantly. If an operation that has to be finished before an already scheduled operation is itself scheduled later, there may not be room to schedule it in such a way that the constraint is satisfied. This situation will then create an infeasible solution. To solve this, a specific order for the jobs can be used, such that every time an operation has to be scheduled, all its prerequisite operations have already been scheduled. If this is not possible because multiple precedence constraints have different directions, jobs can be split into multiple jobs and then ordered properly. Applying this extra precaution will guarantee a feasible solution, but limits optimization as the sequence of some of the jobs will be fixed.
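The following is a minimal sketch of the Job Sequencing idea described above, assuming identical machines and a given job order; the data structures (job_ops, machines_of, proc, release) are illustrative, and fixed unavailability and cross-job precedence checks are left out for brevity.

```python
# Sketch: insert jobs one at a time, each operation at the earliest moment that
# its machine and the previous operation of the job allow (the release time for
# the first operation). Unavailability and cross-job precedence are omitted here.
def job_sequencing(job_order, job_ops, machines_of, proc, release):
    machine_free = {}              # machine -> time at which it becomes available
    start, end = {}, {}            # operation -> scheduled start/end time

    for j in job_order:
        ready = release[j]         # the first operation cannot start before the release time
        for o in job_ops[j]:
            # pick the machine of the work centre that can start the operation earliest;
            # ties are broken by the order of machines_of[o] (lowest index first)
            m = min(machines_of[o], key=lambda mm: max(machine_free.get(mm, 0), ready))
            start[o] = max(machine_free.get(m, 0), ready)
            end[o] = start[o] + proc[o]
            machine_free[m] = end[o]
            ready = end[o]         # the next operation of the job is released now
    return start, end
```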

4.2 Dispatching Rules

Dispatching Rules will schedule operations by prioritizing some operations over others using some pre-determined rule. For example, using Earliest Due Date, every time a machine has finished processing a job, it will start with the operation that has the earliest due date of all the operations waiting to be processed on that machine. For any of the rules, a machine is assumed not to be idle if any operation can be processed at that moment. All operations waiting to start processing on a given machine form the queue of that machine. Several relevant Dispatching Rules are shown here:

• Earliest Due Date (EDD): This rule will sort every operation in the queue based on the due date of the corresponding job. At every decision step, the operation with the lowest due date is chosen. This rule is straightforward and easy and as such, is often chosen in the literature.

• Shortest/Longest Processing Time (SPT/LPT): These rules sort operations in the queue based on the next processing time. From these times, the shortest/longest time is chosen at every step. These rules are also widely used in literature.


Figure 3: A graphical representation of a partial schedule for the flexible job shop problem with additional constraints

• Minimal Weighted Slack (MWS): The slack is calculated by subtracting the sum of the current time and the remaining processing time from the due date. The job's weight is incorporated in this value by either multiplying the slack by the weight if the slack is positive or dividing the slack by the weight if the slack is negative. The operation with the lowest weighted slack is then chosen at every step. This rule is proposed in this paper to account for the weights that are important for the final objective.

Modifying the algorithm to account for the constraints is relatively simple. Like Job Sequencing, release times are already used for all operations other than the first of each job. Release times can then easily be added for the first operation of each job as well. The parallel machines create more opportunity for operations to be scheduled and can alternatively be seen as separate work centres with the same queues. Including fixed unavailability means that some operations cannot fit at a given time t, which would create infeasible solutions if those operations are scheduled at t. To fix this, the queue has to be restricted to only include the operations that can be completed in that particular time frame. Finally, for the precedence constraints, the operations under consideration need to be restricted even more. If an operation requires that a different operation is complete, the former should not be in the queue if the latter is not yet complete.
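As an illustration, the sketch below implements the MWS priority exactly as described above; the queue is assumed to contain only operations that are released, have all prerequisite operations finished and fit before the next unavailable period, so those checks are left to the caller. The attribute name op.job and the helper structures are assumptions made for this sketch.

```python
# Sketch of the Minimal Weighted Slack rule: slack = due date - (current time +
# remaining processing time of the job); positive slack is multiplied by the job
# weight, negative slack is divided by it, and the lowest value is dispatched first.
def weighted_slack(op, now, due, weight, remaining_proc):
    slack = due[op.job] - (now + remaining_proc[op.job])
    return slack * weight[op.job] if slack > 0 else slack / weight[op.job]

def next_operation(queue, now, due, weight, remaining_proc):
    # queue: operations that are released, have finished prerequisites and fit
    # before the machine's next unavailable period
    return min(queue, key=lambda op: weighted_slack(op, now, due, weight, remaining_proc))
```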

4.3 Shifting Bottleneck Heuristics


have a solution and is called solved. Note that this does not have to mean that the work centre is solved optimally. Once all work centres are solved, the best feasible schedule that is consistent with these solutions can be determined using the strategy described in Section 3.3.

The order of each work centre is determined using several smaller subproblems, each relating to only one work centre and all operations that are to be processed by that work centre. The SBH solves the job shop problem using the following main steps: solving the master problem, creating subproblems, solving the subproblems, choosing a work centre to get a solution and re-optimizing the already solved work centres. The master problem is the modified version of the original job shop problem, where some work centres are solved, and the other work centres have relaxed capacity constraints. Work centres that have relaxed capacity constraints allow multiple operations to be processed simultaneously on the same machine, as can be seen in Figure 3. In this example, where only work centre 2 has been solved, work centre 1 has operations that are processed simultaneously. Specifically, the operations of job 1 and 3 are both processed at work centre 1 between 19 and 24. Without relaxed capacity constraints, this would be infeasible. Once work centre 1 also has been solved, one of these operations will wait for the other to finish before starting to be processed itself. If the master problem has been solved, subproblems can be created. Only subproblems relating to work centres that have not yet been solved are required to be created. All those subproblems are solved using a branch-and-bound algorithm. Then one of the work centres is chosen to be solved. Finally, the work centres that have already been solved are re-optimized to improve the solution further. Each of these steps is explained in further detail in the following sections.

The steps and explanations in Sections 4.3.1 to 4.3.5 refer to the SBH applied to a classical job shop problem with minimizing makespan as the objective. Sections 4.3.6 to 4.3.10 will show the modifications required to adapt each of the steps of the SBH to allow for parallel machines, minimizing weighted tardiness and the release times, precedence constraints and fixed unavailability.

4.3.1 Master problem


The master problem is solved by calculating the earliest time at which every operation can start. Every operation can start when all prerequisite operations have been completed. A prerequisite operation is an operation that has to be completed before the other operation can start. Such restrictions can arise from solutions of work centres or from the routing constraints of the jobs. In Figure 3, the prerequisite operations for a given operation are given by all incoming arcs. The maximum end time of all these operations is used as the start time for the operation under consideration. The end time of any operation is calculated by adding the processing time to the start time. Note that the start times of all prerequisite operations have to be known in order to determine the start time. The only operations that can determine their start and end times without any knowledge of other operations are the operations without any prerequisite operations, which are operations that are both the first in their respective job and the first to be processed at their work centre. These operations can start at time 0, and calculate the end time by adding their processing time. Once these operations have been calculated, other operations can be as well. By determining the times of later operations whenever possible, the start and end times of all operations can be calculated.
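In graph terms, this is a forward pass over the arcs of the master problem. A minimal sketch, assuming the operations are already given in an order in which every prerequisite precedes its successors (a topological order), could look as follows.

```python
# Sketch of evaluating the master problem: start time = latest end time of all
# prerequisite operations (incoming arcs), end time = start time + processing time.
def earliest_times(topo_order, predecessors, proc):
    start, end = {}, {}
    for o in topo_order:                       # prerequisites appear before o
        start[o] = max((end[p] for p in predecessors[o]), default=0)
        end[o] = start[o] + proc[o]
    return start, end
```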

To find good solutions for work centres quickly, subproblems are used. Each subproblem deals with a single work centre at a time with only the operations that are to be processed at that work centre, to reduce the size and complexity of the problem.

4.3.2 Creating subproblems

For a master problem where some work centres are solved, subproblems are created for every work centre that has not yet been solved. A subproblem is created relating to a single work centre and contains all the operations that have to be processed by that work centre. The objective of every subproblem is to select the order of operations such that when that order is fixed in the master problem, it leads to the least increase in the makespan when comparing the schedules of the current and the updated master problem. To find this order, the parameters of the subproblem have to be chosen with care, to ensure that the increase predicted by the subproblem is indeed the increase that will be shown by updating the master problem.

For this subproblem, we need to determine the release times and due dates of all operations in order to generate good solutions. Also, some precedence constraints have to be determined to ensure a feasible schedule of the master problem with the solution of this subproblem. The objective of this problem is to minimize the maximum tardiness.


operations that have to be completed before the operation. Since this is also how the best schedule from the master problem is determined, the release time for an operation can be taken from the schedule of the current master problem. The due date for an operation in a subproblem is the latest possible time that the operation can be completed before the makespan increases in the schedule of the updated master problem. To find this value, first, the operation has to be identified that is responsible for the current value of the makespan. That is, the operation that has an end time equal to the makespan has to be found. This last operation is the most significant, since delaying this operation by any amount will also increase the makespan by that amount. To find when the operation of the subproblem affects the last operation, the longest path between the ends of both operations can be calculated. A path between the operation of the subproblem and the last operation of the latest job is a sequence of operations that are to be processed in order, starting at the operation of the subproblem and ending at the last operation of the latest job. The length of this path is the sum of the processing times of all operations on that path. Since the path leads from the ends of both operations, the last operation is included in the length, but the first is not. The longest path is the path that gives the biggest value for the length. Subtracting the length of the longest path from the makespan gives the due date for the operation in the subproblem. If the operation is completed any amount of time after the due date, the makespan increases by the same amount.
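A sketch of this due-date calculation, assuming the directed graph of the current master problem is available, is given below; the "tail" of an operation is the length of the longest path from its end to the end of the operation that defines the makespan.

```python
# Sketch: longest path lengths ("tails") computed over a reverse topological
# order, then due date = makespan - tail. The operation itself is excluded from
# the path length, as described above.
def subproblem_due_dates(reverse_topo_order, successors, proc, makespan):
    tail = {}
    for o in reverse_topo_order:               # successors are handled before o
        tail[o] = max((proc[s] + tail[s] for s in successors[o]), default=0)
    return {o: makespan - tail[o] for o in tail}
```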

The subproblem also needs to determine some precedence constraints. As can be seen from Figure 3, with work centre 2 fixed as shown, operation (3, 1) cannot be completed before operation (3, 2). If work centre 3 would process these operations in reverse order, the master problem would contain a cycle, making it impossible to generate a schedule in that scenario. Precedence constraints can be determined by taking combinations of operations and testing if one can be reached from the other while following the directed arcs. If so, this would require a precedence constraint for those two operations to prevent the possibility of a cycle. If all combinations are tested, all precedence constraints of the subproblem are found.


4.3.3 Solving the subproblem

The subproblems are solved using the algorithm used by Srinivasan (2012), which is a variant of the branch-and-bound algorithm. The algorithm iteratively builds the order of operations by fixing the first part of the order and creating branches where each branch adds one of the remaining operations to the back of the fixed part. Only orders that do not conflict with any precedence constraints are considered.

The algorithm can afford not to test every branch, by calculating lower and upper bounds for every branch and excluding some branches based on those values. The upper bound is the objective value of the best complete order found so far, since any order worth accepting has to be at least as good as the current best. The lower bound is a value that gives a minimal increase in makespan for a given branch, based on the given partial order. The lower bound is calculated by taking the completion time of the partial order and adding the processing time of the remaining operations to that. Subtracting the highest due date of the remaining operations from this sum gives a value for the objective that can only increase as more operations are fixed in the partial order. Now, for any branch whose lower bound is higher than the upper bound, we know that this branch can never get a better result than the current best. This branch can then be ignored completely. After all branches have either been completed or ignored, the best order is returned as the solution of the branch-and-bound.
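The pruning test can be sketched as follows; the names (partial_completion, remaining_ops) are placeholders for the branch-and-bound state, not part of the algorithm of Srinivasan (2012).

```python
# Sketch of the pruning step: lower bound = completion time of the partial order
# + total remaining processing time - largest due date among the remaining
# operations; a branch is discarded when this exceeds the best complete order.
def lower_bound(partial_completion, remaining_ops, proc, due):
    return (partial_completion
            + sum(proc[o] for o in remaining_ops)
            - max(due[o] for o in remaining_ops))

def can_prune(partial_completion, remaining_ops, proc, due, best_objective):
    return lower_bound(partial_completion, remaining_ops, proc, due) > best_objective
```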

4.3.4 Choosing a work centre to get a solution

When subproblems have been created and solved for each of the work centres that have not yet been solved, one of the work centres is chosen to be solved for the next iterations. The subproblem with the highest objective value shows that even in the current master problem, the best possible order already results in a significant increase in the makespan of the schedule. As solving more work centres only restricts the master problem more, the objective value of this subproblem can only get worse. This work centre is called the bottleneck and is chosen to be solved to make sure that the most problematic work centres are solved early. The work centres that are easier to schedule can later be planned around the bottlenecks. As the work centre that is considered the bottleneck changes with the progression of the algorithm, the bottleneck is shifting from work centre to work centre, which gives the heuristic its name.

4.3.5 Re-optimization


schedule of the unoptimized master problem, the re-optimization is stopped. Otherwise, a different work centre is re-optimized. The re-optimization continues until an improvement has been made in the re-optimized schedule, compared to the unoptimized schedule. If all work centres have been re-optimized and no improvement has been found, the re-optimization terminates as well. The re-optimization is used to check if, for a given work centre, a different order is now optimal in the more restricted master problem with more work centres solved.

4.3.6 Modifications for the master problem

One of the main contributions of this paper is to modify the SBH for the classical job shop problem such that the heuristic can solve a problem that includes multiple machines in some work centres, minimizes total weighted tardiness and includes release times, precedence constraints and fixed unavailability. This section explains how the master problem is modified. The modifications required for the other steps are explained in Sections 4.3.7 to 4.3.10.

The master problem has many aspects that do not change significantly. The objective function does not affect how the master problem is created or solved. The only change regarding the objective function is how the end times are used to calculate the objective value. For any work centre that has not yet been solved, the multiple machines also do not affect the master problem. Since a work centre that has not been solved relaxes its capacity constraints, adding machines and thus increasing its capacity does not affect the master problem. If a work centre has been solved, it does affect the master problem. Since the work centre contains multiple machines, the solution of a work centre is not merely the order of all operations that have to be processed at that work centre. Instead, an order of operations is needed for each of the machines in the work centre. As long as machines are identical, it is not necessary to specify which operations have to be processed on which machines, as those can be switched arbitrarily.

The release times only affect the first operation of each job. When calculating the start and end times for each operation, the first operation of each job now does not start at 0 if no operations have to be completed before it. Instead, it will start at its release time. If some operations do have to be completed before it, then it starts at the maximum of the end times of all prerequisite operations and the release time. After the first operation has been calculated, the other operations of the jobs are solved as before.


The fixed unavailability adds additional operations to the master problem that are not related to any job. These operations have a fixed start and end time and also specify both the work centre and the machine in that work centre that they are to be processed on. If the work centre that has an unavailable period is not yet solved, the relaxed capacity constraint allows operations to be processed at the same time as the unavailable period. Therefore, for unsolved work centres, adding fixed unavailability does not change the master problem. When a work centre is solved, the fixed unavailability does affect the master problem in a significant way. Since the unavailable period is specified for a given machine, it distinguishes some machines from the others. With machines now not being identical, the solution of a work centre needs to specify both the order in which operations need to be processed and also which machine a given order of operations will be processed on. Notably, the solution of a work centre does not need to specify the unavailable period in the order of operations, that is, the solution does not specify which operations will precede and succeed the unavailability period in the schedule derived from the master problem. The solution cannot fix the unavailable period in the order of operations since these periods already have fixed start and end times. If the operation before the unavailable period gets delayed due to other solutions, this operation may not be able to complete before the unavailable period starts. Hence, the fixed unavailability is excluded from the solutions of the work centres but still crucial in the calculations of the start and end times. For any given operation, it is now no longer enough just to check all prerequisite operations. If the start time given by the prerequisite operations means that the operation cannot be completed before an unavailability period on the machine of the operation starts, then the operation cannot begin at the time given by the prerequisite operations. Instead, it will start after the conflicting unavailability period. If the operation again cannot be completed because of another unavailability period, then the end of this next period is used as the next start time. This repeats until the operation can be completed.
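This repair step can be sketched compactly, assuming the unavailable periods of a machine are given as non-overlapping (start, duration) pairs sorted by start time; a single pass then pushes the operation past every window it would overlap.

```python
# Sketch: push the start time of an operation past every fixed unavailable period
# on its machine that it would overlap, as described above.
def push_past_unavailability(start, proc_time, unavailable_periods):
    for b_start, b_duration in unavailable_periods:       # sorted by b_start
        overlaps = start < b_start + b_duration and start + proc_time > b_start
        if overlaps:
            start = b_start + b_duration                   # restart after the period
    return start
```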

4.3.7 Modifying the creation of the subproblems

Minimizing weighted tardiness does significantly change the creation of the subproblems. The new objective function differs from the makespan in two significant ways. First, the objective value no longer depends on the completion time of a job, but on its tardiness. Second, the objective value is calculated using all jobs, not just the last. To measure the effect of operations on the tardiness instead of the completion time, the due dates of the operations have to be changed. If a job is already tardy in the master problem, a delay in that job does increase the tardiness of that job and we can calculate the due date as before, by taking the completion time of the job and subtracting the length of the longest path between the operation under consideration and the end of the job. If a job is not yet tardy, a delay does not immediately increase the tardiness of that job, but a delay that is long enough does increase the tardiness. The point where the job becomes tardy is the point where the job gets delayed until it is completed after its due date. Hence, we need to take the due date as the end time, where this was previously the completion time of the job. The due date for the operation is then calculated by subtracting the length of the longest path between the operation and the end of the job, from the due date of the job. Determining the due date in both scenarios, where a job is or is not yet tardy, can be summarized in a single formula. The length of the longest path between the operation and the end of the job is used in both formulas and now has to be subtracted from the maximum of the completion time of the job and the due date of the job. This formula gives the due date for a given combination of operation and job.

Since the total weighted tardiness depends on every job in the master problem, every operation in the subproblem requires several due dates, one for every job in the master problem. This creates a matrix of due dates with every combination of operation and job. If any operation in the subproblem does not affect a given job, then the due date is set at a sufficiently high number as that operation does not delay the job regardless of the end time of the operation. Since changing the objective function does not restrict or relax the order of operations, the precedence constraints are determined in the same way.

Adding release times to the problem affects the calculation of the release times and the due dates in the subproblem as release times change the start and end times in the master problem. However, since the release times are already incorporated into the master problem, the method of calculating the release times and due dates in the subproblems is not changed by adding release times to the problem. Similarly, the method of determining the precedence constraints is also not affected.

Adding precedence constraints to the problem changes the start and end times of many operations in the master problem. As with the release times, this addition does affect the release times and end times of the subproblems, but not the method of calculating them. The method of determining the precedence constraints also is not affected, but adding the precedence constraints to the problem does make the precedence constraints more numerous in the subproblems.


corresponding work centre is solved. Then the fixed unavailability can affect both the times in the master problem and the longest path between operations. However, if unavailability periods are treated like any other operation, the methods of determining the release times and the due dates for the subproblems are not changed. The unavailability periods are necessary for solving the subproblems and therefore do have to be recorded.

4.3.8 Modifications for solving the subproblems

The parallel machines expand the solution space that the branch-and-bound algorithm has to examine significantly. Yang et al. (2000) show that the branch-and-bound algorithm is too slow to solve subproblems that involve a work centre that contains multiple machines. In this paper, we adopt their solution of solving these subproblems using Dispatching Rules. We test several Dispatching Rules to find which rule produces the best results. A problem that is solved using Dispatching Rules requires the same parameters as a subproblem that is solved using branch-and-bound. As such, the subproblem can be created the same way, and the Dispatching Rules will be able to solve it using the method described in Section 4.2.

The branch-and-bound algorithm does have to be modified to account for the change in the objective function. Specifically, the calculations of the upper and lower bounds have to be modified for the weighted tardiness. The upper bound for any order of operations is the increase in total weighted tardiness in the solutions of the master problem if that order of operations is used as a solution for the work centre. For the makespan, this upper bound is calculated by determining the completion time of each operation, and the corresponding tardiness based on the due dates of the subproblem. The increase in the makespan is then given by the maximum of these values. Considering the makespan as the completion time of a single job, specifically the job with the highest completion time, then this can be extended to include all jobs, not just one. Since we already have a matrix of due dates, one for every combination of operation in the subproblem and job in the master problem, we can calculate the tardiness of every one of those combinations. The completion time of the operations in the subproblem can be calculated as before. Then a value for tardiness can be calculated for a given due date by using the completion time of the corresponding operation. The resulting tardiness is an expected delay for the job of that due date. Repeating this for every operation, we can get every expected delay for that job. The resulting delay for that job if the considered order of operations is applied to the master problem is the maximum of the expected delays. The delays for every job can be determined in the same way. Finally, the delays have to be multiplied by the corresponding weights, to get the increase in the total weighted tardiness, which is also used as the upper bound.
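A sketch of this bound, assuming completion[o] holds the completion time of every operation o in the candidate order and due[o][j] is the matrix of due dates described above:

```python
# Sketch: expected increase in total weighted tardiness for a candidate order.
# For every job, the delay is the largest tardiness over all operations of the
# subproblem; the bound is the weighted sum of those delays.
def weighted_tardiness_increase(completion, due, weight, jobs):
    increase = 0
    for j in jobs:
        delay = max(max(completion[o] - due[o][j], 0) for o in completion)
        increase += weight[j] * delay
    return increase
```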


makespan, we want to find the minimal increase in the objective function. One of the remaining operations has to be last in the completed order, which would give this last operation the highest weighted tardiness given this partial order. This weighted tardiness is calculated for each of the remaining operations, each time assuming that the operation in question is the last and thus has an end time equal to the calculated completion time. The lowest of these weighted tardiness values is known to be a minimal increase in the objective function. This lowest value is taken as the lower bound, as we can be sure that any order of operations involving that partial order will have an increase in weighted tardiness of at least that much.

Neither the release times nor the precedence constraints affect the solution technique significantly. Like before, release times and precedence constraints in the master problem do affect the release times, due dates and precedence constraints of the subproblem, but not the solution technique.


4.3.9 Modifications for choosing a solution

No modification is really necessary, as again we could choose the solution that adds the most to the objective value of the problem as the bottleneck and take that solution. However, due to the increased complexity of the objective function, parallel machines and fixed unavailability, we also test whether choosing the lowest increase at every iteration changes or possibly improves the result of the heuristic.

4.3.10 Modifications for re-optimization

The parallel machines, the change in the objective function and the constraints do change the re-optimization, but only in the sense that they change the master problem, the creation of the subproblems and the solving of the subproblems. These changes are outlined in the sections above. Apart from these changes, the re-optimization is not changed or modified by the additions of the more complex problem.

Algorithm 1 Pseudocode for the shifting bottleneck
  while any work centres are not solved do
    for all non-solved work centres do
      create a subproblem for the work centre
      solve the created subproblem
      store the result of the subproblem
    end for
    give the subproblem with the highest result a solution
    for all work centres that already have been solved do
      remove the sequence of the work centre
      create a subproblem for the work centre
      solve the created subproblem
      give the subproblem a solution
      if the solution has been improved then
        stop re-optimizing
      end if
    end for
  end while

4.4 Tabu Search


in a similar change in the time of the second operation in almost all cases. The links between these operations that form the path come from job routes, the machine sequence and the precedence constraints. The unavailable periods create a complication in the paths, as they separate operations that would otherwise have created a path. To make the paths long enough to work with, the path is continued using the first operation of the path. This operation will extend the path through the operation that precedes it, through its route if available and through the machine if not. A critical path is defined as the path that defines the completion time of a job, which therefore identifies all operations that are responsible for the completion time of the job. For the Tabu Search, all critical paths can be used, or only the critical path related to the job that adds the most to the weighted tardiness. Both these options are examined to see if either outperforms the other.

A critical path is further sub-divided into separate blocks. Each block contains some subsequent operations that are processed by the same machine. For example, the critical paths in the example in Subsection 3.3 are given by the critical paths of jobs 1 and 3. These paths are {B11, B12} and {B31, B32}, with B11 = B31 = {(2, 3), (2, 2), (2, 1)}, B12 = {(3, 1)} and B32 = {(3, 1), (3, 3)}. The changes to the solution can then be limited by changing the order of operations within individual blocks. The solution space can be limited to these changes as the arcs between the blocks are given by routing or precedence constraints and are therefore fixed. We first consider the neighborhoods used by Pezzella and Merelli (2000), N1(), N2() and N3(); second, we subdivide N2() and N3() into smaller neighborhoods:

N1(): We create new solutions by exchanging two subsequent operations in a block, of which neither is the first or the last operation of the block.

N2a(): We create new solutions by putting operations that are not the first two or the last into the second position in the block.

N2b(): We create new solutions by putting operations that are not the last two or the first into the one but last position in the block.

N3a(): We create new solutions by exchanging the first two operations of a block.

N3b(): We create new solutions by exchanging the last two operations of a block.

Like Pezzella and Merelli (2000), we classify neighborhoods N1(), N2a() and N2b() as internal changes, as the arcs between the different blocks will remain the same. Neighborhoods N3a() and N3b() are classified as external changes, as they change the arcs between the different blocks. Finally, one new neighbourhood is added to account for the parallel machines in a work centre. The additional neighborhood is chosen in advance from the following three: N4a(), N4b() and N4c().

N4a(): All operations of a critical block are moved to the back of all other blocks of parallel machines.

N4b(): The internal operations of a critical block are moved to the back of all other blocks of parallel machines.

N4c(): The external operations of a critical block are moved to the back of all other blocks of parallel machines.

Neighbourhood N4() is created to allow the Tabu Search to deal with parallel machines. Just like with the previously proposed neighbourhoods, a distinction is made between internal and external operations. Moving an internal operation to a different machine will make this block shorter, but keep the path that this block is located in relatively similar. Moving an external operation will change the block more significantly. Finally, it is also possible to combine the neighbourhoods and create new solutions by switching both internal and external operations.

The targets of the operation that will be switched are the ends of the blocks of all the other machines in the work centre. Remember that blocks are defined as a number of operations processed on the same machine without any idle time between the individual operations. The targets are the ends of the blocks, rather than every position, because, by definition, the machine will be idle after each block, increasing the odds of the operation fitting decently at that place. Contrary to neighborhoods N1(), N2() and N3(), the 3 neighborhoods of N4() are not all used in the heuristic, but only 1 is chosen. Since these neighbourhoods are introduced in this paper, they are tested against each other to find which works best for this problem.

The tabu list will contain illegal changes, rather than solutions, because storing and comparing solutions would take much time, defeating the purpose of the heuristic. Changes are considered illegal if they do the exact opposite of a change that was performed in a recent previous iteration, but also if they perform the same change as one that was used before. This both prevents most of the iterations from returning to a previous solution and prevents the same path from being taken if a previous solution does happen to be accepted. The length of the tabu list will be the number of jobs scaled with a given parameter. Since the tabu search can create repeating sequences if the list is too short, and can prevent good solutions or increase computation time if the list is too long, different values for this parameter will be examined, to find good values for this problem. Like Pezzella and Merelli (2000), every time a new better solution has been found, a local re-optimization like the re-optimization used in the SBH will be applied.


they create a cycle, and only changes that do not create cycles are considered. Parallel machines do require modifications, which are the new neighbourhoods proposed above to allow operations to switch machines.

The stopping criterion for the Tabu Search is to go s ∗ N iterations without finding a better solution, or after the tabu search has gone through T ∗ N iterations in total, where N is the number of jobs. The tabu list will contain at most l ∗ N forbidden moves in the problem. If the tabu list is full, the oldest move will be removed. All these parameters can be adjusted to get the best performance of the tabu search.
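The skeleton below sketches how these stopping rules and the tabu list could be wired together; the move generation, evaluation and application functions are placeholders for the neighbourhood machinery described above.

```python
# Sketch of the Tabu Search loop: at most T*N iterations in total, stop after
# s*N iterations without improvement, tabu list of at most l*N forbidden moves.
from collections import deque

def tabu_search(initial, evaluate, generate_moves, apply_move, N, s, T, l):
    current, best, best_value = initial, initial, evaluate(initial)
    tabu = deque(maxlen=int(l * N))            # the oldest move drops out automatically
    no_improvement = 0

    for _ in range(int(T * N)):
        moves = [m for m in generate_moves(current) if m not in tabu]
        if not moves:
            break
        move = min(moves, key=lambda m: evaluate(apply_move(current, m)))
        current = apply_move(current, move)
        tabu.append(move)
        value = evaluate(current)
        if value < best_value:
            best, best_value, no_improvement = current, value, 0
        else:
            no_improvement += 1
            if no_improvement >= s * N:
                break
    return best
```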

Algorithm 2 Pseudocode for the Tabu Search

start with an initial feasible solution
while the maximum number of iterations has not been reached do
    create all possible changes from the neighbourhoods
    exclude all changes that are on the tabu list
    exclude all changes that create a cycle
    test every change
    implement the change that creates the best schedule and add it to the tabu list
    if the tabu list exceeds its maximum length then
        remove the oldest change on the list
    end if
    if the new solution is better than the old solution then
        re-optimize the solution based on the SBH
    end if
end while
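To make the control flow of Algorithm 2 concrete, the following Python sketch outlines one possible implementation of the outer loop. The helper functions (generate_moves, creates_cycle, apply_move, weighted_tardiness, reoptimize_sbh) are hypothetical placeholders for routines described elsewhere in the paper; only the control flow is illustrated here.

def tabu_search(initial, n_jobs, generate_moves, creates_cycle, apply_move,
                weighted_tardiness, reoptimize_sbh, s=2.0, T=100.0, l=1.5):
    best = current = initial
    tabu = []                                   # forbidden changes, oldest first
    max_list = int(l * n_jobs)                  # tabu list length: l * N
    max_idle = int(s * n_jobs)                  # stop after s * N iterations without improvement
    idle = 0
    for _ in range(int(T * n_jobs)):            # hard cap of T * N iterations
        # All candidate changes from the neighbourhoods, minus tabu moves and
        # changes that would create a cycle.
        candidates = [m for m in generate_moves(current)
                      if m not in tabu and not creates_cycle(current, m)]
        if not candidates:
            break
        # Test every change and implement the one giving the best schedule.
        move = min(candidates,
                   key=lambda m: weighted_tardiness(apply_move(current, m)))
        current = apply_move(current, move)
        tabu.append(move)
        if len(tabu) > max_list:
            tabu.pop(0)                         # remove the oldest change
        if weighted_tardiness(current) < weighted_tardiness(best):
            current = reoptimize_sbh(current)   # local re-optimization as in the SBH
            best, idle = current, 0
        else:
            idle += 1
            if idle >= max_idle:
                break                           # s * N iterations without improvement
    return best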

5 Results

5.1 Computational experiments

5.1.1 Instance design

The heuristics are tested using a large set of randomly generated instances. An instance is created by specifying its number of jobs and work centres. To make the job shop flexible, several additional machines are added to the work centres. By default, the number of additional machines is half the number of work centres; they are added randomly one by one until all are allocated. The machines are allocated independently of the number of machines already allocated to any given work centre; hence, a work centre with five machines has the same probability of getting an additional machine as a work centre with one machine.


same job can have the same work centre. The processing time of each operation is chosen from a continuous uniform distribution between 2 and 10. This time is then multiplied by the number of machines in the work centre, to balance the load of each work centre. Next, the release times are generated for every job, with a time uniformly chosen from 2 to 10.

Once the operations and release times are created, the precedence constraints can be generated. For these constraints, two distinct jobs are randomly chosen, and a precedence constraint is created such that the last operation of the job with the lowest index has to be finished before a randomly chosen operation of the other job can begin. This convention, where a precedence constraint always goes from a job with a lower index to a job with a higher index, is chosen to prevent cycles from occurring in the problem itself, as these are very difficult to prevent otherwise. The reason why the precedence constraint goes from the last operation of the first job to a random operation of the second job is that such constraints are the most common in real job shops, as they simulate sub-assembly. In assembly, it is common that some operations require a fully completed version of a different job before the operation can begin.

After the precedence constraints are finished, the due dates for the jobs can be determined. These are not randomly generated but calculated from the values of the jobs. For every job, the maximum total completion time is determined, including additional time because of precedence constraints. If this job were the only one that has to be completed, this completion time would be enough to complete all operations on time. However, since jobs have to compete for time on the machines, this completion time will likely not be enough in practice. To account for this, the due date is set to 1.5 times this completion time, which gives jobs time to be scheduled but keeps the due dates reasonably tight so that the schedule will not be trivial. Finally, every job gets a random weight, which is a random integer from 1 up to and including 3.

The last things that need to be generated are the periods of unavailability. By default, unavailability periods are generated such that, on average, 80% of the work centres have an unavailable period. Every period is created by randomly choosing a machine and a period length that is generated in the same way as the processing time of any other operation. For the starting time, an unavailable period should be able to occur anywhere in the scheduling time window, but it is also undesirable for the maximum starting time to be nearly infinite, as this would put all unavailable periods after the scheduling window. Therefore, the latest due date is used as a reasonable indication of the time window of the problem, and the starting time is randomly chosen between 0 and this maximum due date.
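The sketch below outlines an instance generator along these lines. The distributions follow the text (processing times U(2,10) scaled by the number of machines in the work centre, release times U(2,10), due dates 1.5 times the job's completion time, integer weights 1-3, unavailability on roughly 80% of the centres); the number of operations per job, the number of precedence constraints and all names are assumptions made purely for illustration.

import random

def generate_instance(n_jobs, n_centres, seed=None):
    rng = random.Random(seed)

    # One machine per work centre plus n_centres // 2 extra machines, assigned
    # uniformly at random and independently of how many a centre already has.
    machines = [1] * n_centres
    for _ in range(n_centres // 2):
        machines[rng.randrange(n_centres)] += 1

    jobs = []
    for _ in range(n_jobs):
        ops = []
        for _ in range(rng.randint(2, n_centres)):      # assumed number of operations
            centre = rng.randrange(n_centres)
            ops.append({"centre": centre,
                        "ptime": rng.uniform(2, 10) * machines[centre]})
        jobs.append({"ops": ops,
                     "release": rng.uniform(2, 10),
                     "weight": rng.randint(1, 3)})

    # Precedence constraints: the last operation of the lower-index job must
    # finish before a random operation of the higher-index job (prevents cycles).
    precedences = []
    for _ in range(n_jobs // 2):                        # assumed number of constraints
        a, b = sorted(rng.sample(range(n_jobs), 2))
        precedences.append((a, b, rng.randrange(len(jobs[b]["ops"]))))

    # Due date: 1.5 times the job's maximum completion time. The thesis also
    # includes delays caused by precedence constraints; this sketch omits them.
    for job in jobs:
        job["due"] = 1.5 * (job["release"] + sum(op["ptime"] for op in job["ops"]))

    # Unavailability: on average 80% of the work centres get one unavailable
    # period, starting uniformly between 0 and the latest due date.
    horizon = max(job["due"] for job in jobs)
    unavailability = []
    for centre, m in enumerate(machines):
        if rng.random() < 0.8:
            unavailability.append({"centre": centre,
                                   "machine": rng.randrange(m),
                                   "start": rng.uniform(0, horizon),
                                   "length": rng.uniform(2, 10) * m})
    return jobs, precedences, machines, unavailability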

5.1.2 Experimental design

To find the best configuration of the heuristics, several different versions of every heuristic are tested. For the Dispatching Rules, the following rules are tested:


• Minimal Weighted Slack
• Shortest Processing Time
• Longest Processing Time
• Earliest Due Date
• Shortest Remaining Processing Time
• Longest Remaining Processing Time
• Job Sequencing

Although Job Sequencing is technically not a Dispatching Rule, the process is similar, so the results of the Job Sequencing heuristic are shown with the results of the Dispatching Rules.

The Shifting Bottleneck Heuristic has two different options that can be changed:

• subproblem choice (choice)
  – The smallest increase in tardiness is chosen at every iteration
  – The biggest increase in tardiness is chosen at every iteration
• The Dispatching Rule used to solve parallel machine subproblems (rule)
  – Minimal Weighted Slack
  – Earliest Due Date
  – Shortest Remaining Processing Time

The six different configurations of the SBH are created by taking all combinations of the options above.
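In other words, the configurations form the Cartesian product of the two subproblem choices and the three parallel-machine rules, as the minimal sketch below illustrates; the ordering here is arbitrary and need not match the numbering of Table 7.

from itertools import product

# The 2 x 3 = 6 SBH configurations: subproblem choice x rule for the parallel subproblems.
choices = ["lowest increase", "highest increase"]
rules = ["MWS", "EDD", "SRPT"]
sbh_configurations = [{"choice": c, "rule": r} for c, r in product(choices, rules)]
assert len(sbh_configurations) == 6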

Tabu Search also has some options that can be changed and tested. The first is the length of the tabu list. To make it scale with the instance size, the length is taken as a multiple of the number of jobs in the instance. The default multiple is 1.5, and the other tested values are 1 and 2. The next option for the Tabu Search is the stopping condition. The Tabu Search terminates when no improved solution has been found for a given number of iterations. This number is also scaled with the number of jobs to account for instances of different sizes. The default multiple is 2, and the other tested values are 1.5 and 2.5. As explained in Section 4.4, neighbourhood N4() has three different options. Of these three options, N4a() is taken as the default. Finally, different settings for the paths that are considered (the last column of Table 4) are tested.


Table 4: The different configurations for the Tabu Search

Configuration   length      stop        N4()    path
1               1.5 · |N|   2 · |N|     N4a()   all
2               1 · |N|     2 · |N|     N4a()   all
3               2 · |N|     2 · |N|     N4a()   all
4               1.5 · |N|   1.5 · |N|   N4a()   all
5               1.5 · |N|   2.5 · |N|   N4a()   all
6               1.5 · |N|   2 · |N|     N4b()   all
7               1.5 · |N|   2 · |N|     N4c()   all
8               1.5 · |N|   2 · |N|     N4a()   one

5.2 General instances

For the results in this section, a large number of instances are created using the method described in Section 5.1.1. A variety of sizes is used to test the consistency of the results. The main results are taken from 100 instances with 12 jobs and 12 work centres. Additional sizes with an equal number of jobs and work centres are 10 of each and 15 of each. To test how more lopsided instances perform, instances with 10 jobs and 15 work centres and instances with 15 jobs and 10 work centres are also considered. Increasing the number of jobs is difficult, as the solution time of the Shifting Bottleneck Heuristic is dominated by the solution time of the single-machine subproblems, which is largely determined by the number of jobs in the subproblem. To still test larger instances, the number of work centres is increased, resulting in instances with 15 jobs and 20 work centres, and instances with 10 jobs and 30 work centres. The results for these larger instances do not differ significantly from those of the standard instances. The only real difference is the solution time, with instances of 15 jobs sometimes requiring hours to solve.

In Section 5.2.1, we analyze the results of the different Dispatching Rules and Job Sequencing. Next, Section 5.2.2 analyzes the results of the Shifting Bottleneck Heuristic. Finally, Section 5.2.3 compares the Dispatching Rules to the Shifting Bottleneck Heuristic to determine how well they perform relative to each other.

5.2.1 Dispatching rules

The Dispatching Rules are tested using 100 instances with 12 jobs and 12 work centres. To determine which Dispatching Rule works best, Table 5 contains the percentage of times that each rule produces the best schedule. Note that these percentages do not add up to 100, as multiple rules can produce the same best schedule. Interestingly, the rules perform surprisingly similarly, with no rule being the best more than one-third of the time. Even the Longest Processing Time rule is not significantly worse than the other rules, despite achieving the worst result of all rules.


Table 5: Results of the Dispatching Rules

Rule                              MWS   SPT   LPT   EDD   SRPT   LRPT   Sequencing
% unconditioned best              31%   32%   16%   26%   21%    20%    22%
% unique best                     8%    13%   8%    2%    1%     10%    22%
average time required (seconds)   < 1   < 1   < 1   < 1   < 1    < 1    < 1

The first row counts ties, so multiple rules can be rated as equally good. The second row of the table shows the same results, but with every tie ignored, resulting in a total percentage of less than 100. Combining the rows, some interesting points can be observed. The most obvious is that the sequencing strategy has the same percentage in both rows, implying that when it is the best, it is the only best. This result is not illogical, as the sequencing heuristic works somewhat differently from the other rules and therefore tends to create schedules of a different nature. These different schedules are not the best for almost 80% of these instances, but when they are the best, they are uniquely the best. Of course, the creation of the schedule does have similarities to the other rules, which can result in the same schedule, but an identical schedule is unlikely. Another observation is that the other rules have low unique percentages despite their high unconditioned percentages, which implies that when these rules create the best schedule, a different rule often creates an equally good (but not necessarily identical) schedule. If any rule has to be used to make a schedule in practice, no single rule seems to be sufficient. Therefore, multiple rules will have to be used to create several schedules from which the best can be chosen. The best rules for this appear to be Minimal Weighted Slack, Shortest Processing Time and Sequencing. These rules perform well on their own and, when combined, account for nearly three-quarters of all best schedules. Since the time required to compute a schedule is at most 1 or 2 seconds for any instance or rule, these three rules can be used together in any real application. To achieve the best possible result, the other rules can also be applied when enough time is available to try them all.
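The two percentages in Table 5 can be computed from the weighted tardiness each rule obtains on each instance; the sketch below assumes a hypothetical results structure (one dict per instance, mapping rule name to objective value) that is not part of the thesis code.

def best_percentages(results, rules):
    n = len(results)
    unconditioned = {r: 0 for r in rules}   # rule ties with the best objective
    unique = {r: 0 for r in rules}          # rule is strictly the only best
    for instance in results:
        best = min(instance[r] for r in rules)
        winners = [r for r in rules if instance[r] == best]
        for r in winners:
            unconditioned[r] += 1
        if len(winners) == 1:
            unique[winners[0]] += 1
    return ({r: 100.0 * unconditioned[r] / n for r in rules},
            {r: 100.0 * unique[r] / n for r in rules})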


the job will not be affected by precedence constraints.

Table 6: The difference between MWS and the other rules with extreme instances

              SPT       LPT       EDD       SRPT      LRPT      Job Sequencing
% difference  247.80%   467.01%   211.67%   335.84%   161.47%   2869.72%

Table 6 shows that for these extreme instances, the other rules are many times worse than MWS. This result implies that in some special instances, MWS is by far the best rule to solve the problem. However, some practical research would be required to discover how likely such instances are to appear in real-life scheduling problems.

5.2.2 Shifting Bottleneck Heuristic

The main results for the Shifting Bottleneck Heuristic and the Tabu Search are generated with the same 100 instances with 12 jobs and 12 work centres. The different configurations that will first be compared are given in Table 7.

Testing the SBH, we first compare the choice of the subproblem. When the rule for the parallel subproblems is MWS, choosing the highest increase results on average in a total weighted tardiness that is 6.59% higher than when the subproblem with the lowest increase is chosen at each iteration. For the rules EDD and SRPT, choosing the highest increase increases the tardiness by 8.71% and 13.88%, respectively. Although this result appears to suggest that the lowest increase should be chosen, further data indicates that the highest increase is better than the lowest increase in many of the instances. The highest choice outperforms the lowest choice in 48%, 48% and 40% of the instances for rules MWS, EDD and SRPT, respectively. Both choices achieve the same result in 2%, 3% and 5% of the instances for the rules MWS, EDD and SRPT, respectively. These numbers suggest that each choice outperforms the other in almost half of the instances, but that when the lowest increase is better, the difference is larger than when the highest choice is better. As a result, both choices are best in about half of the instances, but the lowest choice still has an edge on average.

The rules to solve the parallel subproblems are tested in pairs, and the results are shown in Table 8. This table shows that when the highest increase is chosen

Table 7: The different configurations for the SBH

Rule    Choice


Table 8: Results of the rules for the parallel subproblems

Percentage increase   MWS to EDD   MWS to SRPT   EDD to SRPT
lowest choice         −0.50%       −4.03%        −2.01%
highest choice        −0.02%       −0.15%        0.00%

for the subproblems, it hardly matters which rule is used to solve the parallel subproblems. If the lowest increase is chosen, the difference is slightly larger, but still not significant. We do see that the SRPT rule performs best of the three. However, in most cases the rules produce the same result, which implies that this choice is of limited importance.

Since these instances are still fairly small, the time required to create a solution is also relatively small. When the highest increase is chosen, the average solution time is 3.07 seconds; when the lowest increase is chosen, it is 3.76 seconds.

5.2.3 Comparison

Table 9: The difference between the SBH and the Dispatching Rules

                        MWS       SPT       EDD       Job Sequencing
SBH low choice SRPT     -27.86%   -28.84%   -28.93%   -23.38%
SBH high choice SRPT    -26.75%   -31.46%   -28.31%   -23.15%

Table 9 shows the percentage difference between some configurations of the SBH and some of the Dispatching Rules. Other types of instances will use the same format to compare the SBH and the DR. The SBH configurations shown use both types of subproblem choice, with the parallel subproblems solved by the SRPT rule; looking at Table 7, these are configurations 3 and 6. They are compared to the rules MWS, SPT, EDD and Job Sequencing.

It is clear that the Dispatching Rules create significantly better schedules than the SBH. Job Sequencing is the worst of the rules, but still better than the SBH. To find reasons why the SBH performs this badly, different types of instances will be tested in Section 5.3. In that section, the constraints will be ignored one by one to see if any of them are responsible for the poor performance of the SBH.

5.2.4 Tabu Search


Table 10: Main results of Tabu Search

Configuration   Tabu improvement   % instances reaching max iterations
1               -22.59%            34%
2               -8.83%             0%
3               -22.30%            32%
4               -19.47%            33%
5               -22.12%            33%
6               -22.58%            29%
7               -21.27%            20%
8               -22.92%            41%

Table 10 shows the results of the different Tabu Search configurations from Table 4. Although there is some variation between the results, most configurations reduce the total weighted tardiness by a little over 20%. Configuration 4 does not quite achieve a 20% reduction but comes very close. Only configuration 2 significantly underperforms compared to the rest. Looking at this configuration, we see that it is characterized by the fact that only the internal operations are used for changes. Clearly, limiting the changes to only internal operations is too restrictive, and the search is therefore unable to find enough good solutions.

Of all configurations, only number 8 was able to achieve a better result than the default configuration. It is not surprising that this option performs better, since it prolongs the Tabu Search, which can never make the final objective value worse. There is, however, a trade-off between solution quality and solution time: configuration 8 is 0.35% better in solution quality, but 6.96% worse in solution time. This trade-off implies that, at least for these instances, it is not worthwhile to choose configuration 8 over configuration 1.

The last column shows in what percentage of instances the Tabu Search reaches its maximum number of iterations. This maximum is set at 100 times the number of jobs in the problem and exists to prevent the Tabu Search from entering an infinite loop. Most of the configurations reach the limit quite frequently. This can occur if the Tabu Search creates a cycle that is longer than the length of the tabu list. For the final iterations, the Tabu Search will then repeat the same solutions until the limit is reached. Although a high number in this column may seem undesirable, the results show that the best performing configurations also reached the limit very frequently. Only configuration 2 never reached the limit, but this configuration also performed poorly in general. A possible reason is that choosing only internal operations was so restrictive that no changes could be made at all. This would terminate the Tabu Search early and ensure a low value in both columns.
