Sorting out the order batching problem



W.G. Scherphof


Supervisors:

Prof. Dr. K.J. Roodbergen (RUG)

J. Bouman (Centric)

Co-assessor:


July 17, 2015

Abstract

We extend the standard Order Batching Problem to include a sorting component. The aim is to divide a set of customer orders into batches such that the overall effort, consisting of both tour length and sorting effort, is minimized. Currently, it is customary to sort orders either during or after the picking process. Both strategies are special cases of the more general model we introduce. We present an Iterated Local Search Algorithm with Tabu Thresholding to solve the Order Batching Problem with Sorting Effort. The algorithm is based on state-of-the-art literature concerned with Order Batching Problems and contains a new sorting method. The proposed algorithm is extensively tested on a set of 17 settings with different pick cart layouts, order structures, and sorting speeds. Each setting contains 100 randomly generated instances. Results show that the algorithm outperforms the second best performing heuristic on average by 3.1%. Furthermore, our proposed algorithm produces on average the best solution in 16 out of the 17 settings. Finally, we show that the sorting method is easily implemented in current order batching algorithms making it attractive to be used in practice.


Contents

1 Introduction
  1.1 Background
  1.2 Literature
2 Problem Formulation
  2.1 Warehouse
  2.2 Routing
  2.3 Sorting
  2.4 The model
3 The Sorting Procedure
  3.1 The algorithm
  3.2 Bin packing
  3.3 No singles
  3.4 Capacity exceeds singles
  3.5 Capacity equals singles
  3.6 More singles than capacity
4 Solution approaches
  4.1 Iterated Local Search with Tabu Thresholding


1 Introduction

In this section we introduce the Order Batching Problem with Sorting Effort (OBPSE). In subsection 1.1 we discuss the background of the problem and in subsection 1.2 we review current literature related to order batching.

1.1 Background

Warehouses and distribution centers play a key role in supply chain management. Within warehouses various processes such as receiving, shipping, storage, and picking take place that need to be coordinated to allow for a smooth day-to-day operation. Of these processes, order picking tends to be the most labor intensive and thereby the most costly. Hence, it is essential to strive for operational excellence in the picking process. One possible approach to reduce labor intensity is to combine multiple orders in one pick tour. To this end one can divide a set of customer orders into subsets according to some decision rule. Each subset is then collected in a single tour on a pick cart according to some routing strategy. This process is known as batching.

When considering batching, care must be taken that orders of different customers are not mixed up. To achieve this, one can sort orders either during or after the picking process. The former strategy requires that the number of orders not exceed the number of compartments on the pick cart, restricting the batch composition possibilities. Consequently, a collection of batches with a small total tour length may be infeasible. On the other hand, if sorting takes place after the picking process it is possible that multiple orders are placed in the same compartment. This requires extra effort to separate the orders and calls for additional floor area to set up a sorting mechanism. Currently, it is common practice to either sort during or after the picking process.

Another possibility is to combine the strategies to fruitfully utilize the strengths of both. That is, we could assign the orders to the pick cart such that some compartments contain only one order whereas others may contain multiple orders. Hence, rather than solely applying one strategy it may prove to be beneficial to let the strategy depend on the collection of orders. To the best of our knowledge this approach has not received any attention in literature.

In this thesis we extend the traditional order batching problem to include a sorting component. We show that the current views on order batching of sorting either during or after the picking process are special cases of our proposed model. Furthermore, we introduce a sorting procedure that minimizes the sorting effort given a batch. We use this procedure in combination with the results from state-of-the-art literature devoted to the order batching problem in order to construct an Iterated Local Search algorithm with Tabu Thresholding that takes into account sorting effort. The performance of this algorithm is tested by comparing results with other known order batching heuristics.

The remainder of this thesis is structured as follows. In section 2 we introduce some definitions as well as a mathematical formulation of the problem. Section 3 contains a detailed description of our proposed sorting method. In section 4 we introduce the iterated local search OBPSE algorithm. We present the results of this algorithm in section 5. The influence of the characteristics of the pick cart is discussed in section 6. We end with a conclusion in section 7.

1.2 Literature


the order picking field lacks a general design procedure as most literature is devoted to a particular warehouse layout.

So far, there have only been a few papers in which the OBP is solved to optimality. In Gademann et al. (2001), for example, the authors consider the batching problem by minimizing the maximum lead time. They propose an exact branch-and-bound algorithm able to provide optimal solutions for up to 30 orders and a maximum of 6 batches in a parallel aisle warehouse. By using an efficient preprocessing step they are able to reduce the running time of their algorithm significantly.

Gademann and van de Velde (2005) also consider the parallel-aisle layout but with a different objective. The authors model the OBP as a set partitioning problem and develop a branch-and-bound algorithm in order to minimize total travel time. Another contribution is their proof of the NP-hardness in the strong sense of the OBP.

Several heuristics have been developed for the OBP in case of more realistic sized problems. For example, in Ho and Tseng (2006) the OBP is considered in a specific warehouse consisting of two cross-aisles by varying route-planning methods as well as aisle-picking distributions and the use of seed algorithms. The seed algorithm consists of an order selection and addition rule. The former determines when to construct a new batch whereas the latter selects the orders that should be included in that batch. They propose 9 different seed-order selection rules in combination with 10 different addition rules.

In de Koster et al. (1999) the authors use the classic savings heuristic developed by Clarke and Wright (1964), referred to as CW(i), to solve the OBP. Also, they introduce an improved heuristic, CW(ii), with better performance at the cost of an increased computational complexity. In order to determine solution quality they use the S-shape and largest gap routing heuristic in combination with three different warehouse designs and compare the performance of the savings algorithms to various seed algorithms. Their main result is that the seed algorithm is best combined with the S-shape heuristic together with a large batch size. On the other hand, the savings algorithm is superior in case batch size is small in combination with the largest gap routing heuristic.

Xu et al. (2014) propose an analytical model to approximate the expected customer order throughput time in a warehouse with random storage locations and an S-shape routing strategy. The implemented batching principle is based on a First Come First Serve decision rule. Their results suggest that for an arbitrary customer order the expected customer order throughput is a convex function of the batch size. This result is in line with previous findings of the existence of an optimal batch size.

A 2-block warehouse with the S-shape routing heuristic is considered in Le-Duc and de Koster (2007). Orders are assumed to arrive according to a Poisson process. The authors estimate the average order throughput time in order to calculate the optimal batch size. Simulation experiments support the conclusion that their approach provides a good accuracy level. Furthermore, they claim their method is simple and therefore easily applied in practice.

Another way to tackle the OBP is by use of meta-heuristics. For example, in Henn and Wäscher (2012) an Attribute Based Hillclimber (ABHC) algorithm is proposed as well as a Tabu Search. Extensive numerical experiments support the superior quality of the proposed meta-heuristics compared with different priority-rule based heuristics such as Earliest Due Date. Besides that, the low computational effort makes the meta-heuristics very suitable to be implemented in software systems.

In Hsu et al. (2005) a genetic algorithm (GA) is introduced. The authors claim that the proposed algorithm shows potential for medium- to large-sized problems. Furthermore, the structure of the GA makes sure that it can be implemented for various batch structures as well as warehouse layouts. A major drawback of the GA is the significant computational effort required to obtain results.


and efficiency.

During the past decade different extensions of the OBP were introduced. In Matusiak et al. (2014), for example, precedence constraints are taken into account. The authors propose a simulated annealing method which outperforms other known heuristics, such as the CW(ii) savings algorithm and the ABHC heuristic. In addition, they perform a case study on a large order picking warehouse and report savings of over 16% in travel distance in 3 months compared to the current method.

Another large-scale order batching situation is discussed in Hong et al. (2012b) in which a warehouse with 10 aisles and over 2000 orders is considered. The authors introduce a route-selecting order batching formulation which basically acts as a matching problem between batches and routes. To this end they use the S-shape routing heuristic to construct all possible routes in the warehouse. Given a batch the proposed route-packing based order batching procedure (RBP) finds a best fit route. Solution quality is determined by comparing the RBP with both seed and savings algorithms, with the RBP outperforming the other heuristics. Finally, it turns out that the RBP is relatively robust to congestion.

In case of narrow aisles Hong et al. (2012a) propose an integrated batching and sequencing procedure that considers picker blocking. The authors show that their proposed method can reduce total retrieval time by 5-15% primarily by avoiding picker blocking.


2 Problem Formulation

In this section we discuss the layout of a warehouse as well as different strategies to construct order picking routes. Furthermore, we introduce some definitions and notation to explain the sorting concept. Then, we mainly follow the notation used in Gademann and van de Velde (2005) in order to formally present the OBPSE.

Table 1: All relevant variables and parameters for the OBPSE.

Parameter                    Description
J = {1, ..., n}              set of orders
K = {1, ..., r}              set of compartments on each pick cart
C = {c_1, ..., c_n}          set of all order sizes
C                            capacity in items of each compartment
I                            set of all possible batches
d_i                          distance required to pick batch i
t_i                          sorting effort associated with batch i
a^i = (a^i_{11}, ..., a^i_{1r}, ..., a^i_{n1}, ..., a^i_{nr})   representation of batch i
a^i_{jk}                     1 if order j is stored in compartment k in batch i, 0 otherwise
x_i                          1 if batch i is used, 0 otherwise

2.1 Warehouse

Warehouses accommodate storage locations from which products can be picked. A typical warehouse layout consists of a number of aisles. Each aisle supports a number of storage locations on both the left and right side that, in turn, offer space to a single product type. Aisles can be entered either at the front or at the end from vertical aisles, called cross-aisles. A block is defined by all aisles that share the same two cross-aisles.

A customer order consists of a number of orderlines that correspond to both the type and quantity of the requested products. Given a collection of customer orders one has to decide when to collect what orders. Once such a decision is made an order picker uses a pick cart with several homogeneous compartments to collect a set of orders in a tour. We assume that there are no orders larger than the capacity of a single compartment. Furthermore, we impose the restriction that each order must be assigned to exactly one compartment. Consequently, it is not possible that a single order is picked in multiple tours.

Without loss of generality we assume that an order picker starts and ends in the depot located in front of the leftmost aisle. A graphical representation of a single block warehouse with 8 aisles each offering 20 storage locations is given in Figure 1.

2.2 Routing


Figure 1: Graphical representation of a single block warehouse with 8 parallel aisles each offering 20 storage locations.

is mainly due to the fact that the resulting routes are often confusing for the order picker because of their apparent complexity. Also, multiple routes can easily cause congestion. Simpler routing heuristics that cause less congestion and confusion are therefore often an adequate alternative.

The so-called S-shape or traversal heuristic is one of the simplest heuristics, with the following rules. Aisles are visited from left to right such that only aisles in which an item needs to be picked may be entered. Once an aisle is entered it has to be traversed completely. The only exception to this rule is the rightmost aisle in case there is no other way to return to the depot.

Another simple routing heuristic is the midpoint method. According to this policy we pick the upper half of the warehouse from left to right and the lower half from right to left. Each aisle containing a pick is traversed to at most the midpoint after which the order picker returns to the cross-aisle he came from. Only the first and last aisle that need to be visited may be used to change between the front and back cross-aisle.

The final routing strategy we discuss is the largest gap which is to a large extent similar to the midpoint method. A gap is defined as the space between either two adjacent picks, the first pick and the front aisle, or the last pick and the back aisle (de Koster et al., 2007). Then, rather than traversing to at most the midpoint the order picker now enters the aisle up to the largest gap. An illustration of all mentioned routing strategies for a given set of orders is provided in Figure 2.
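To make the S-shape rules concrete, the following Python sketch estimates the tour length of the heuristic in a single-block warehouse. It is our own illustration under simplifying assumptions (depot in front of the leftmost aisle, pick positions measured from the front cross-aisle, aisle spacing supplied as a parameter); the function name and interface are not taken from the thesis.

```python
def s_shape_length(picks, aisle_len, aisle_gap):
    """Approximate S-shape tour length in a single-block warehouse.

    picks: dict mapping aisle index -> list of pick depths (0..aisle_len),
    measured from the front cross-aisle; the depot sits in front of aisle 0.
    Illustrative simplification of the heuristic described above.
    """
    aisles = sorted(a for a, pos in picks.items() if pos)
    if not aisles:
        return 0.0
    # Travel along the front cross-aisle to the rightmost pick aisle and back.
    horizontal = 2 * aisle_gap * aisles[-1]
    if len(aisles) % 2 == 0:
        # Even number of pick aisles: each one is fully traversed.
        vertical = len(aisles) * aisle_len
    else:
        # Odd number: the last aisle is entered up to the farthest pick and
        # left the way the picker came (the one allowed exception).
        vertical = (len(aisles) - 1) * aisle_len + 2 * max(picks[aisles[-1]])
    return horizontal + vertical

# One pick aisle with picks at depths 3 and 7: go right, enter, turn back.
print(s_shape_length({2: [3.0, 7.0]}, aisle_len=20.0, aisle_gap=5.0))
```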

2.3 Sorting

In order to formally state the sorting process we introduce the following definitions. We say that an allocation of a set of orders with respect to a pick cart is feasible if we can assign each order to exactly one compartment and no compartment violates the capacity constraint. Furthermore, a compartment is called clean if it either contains one order or all orders request a single item only. In the remainder we will refer to the latter as singles. If a compartment is not clean it is called non-clean. Each clean compartment is sorted at speed v_f per item whereas a non-clean compartment requires v_s > v_f per item. The intuition behind this is that if a non-clean compartment reaches the


Figure 2: Different routing strategies for a particular set of orders in a single block warehouse consisting of 6 parallel aisles each offering space to 20 storage locations.

2.4 The model

First, we introduce the following notation. Let J = {1, ..., n} denote the set of orders. Each order j ∈ J requests c_j items, where each item has a known storage location. Furthermore, we make the assumption that each item requires unit volume. Orders are manually collected by order pickers who use a pick cart consisting of r different compartments, each capable of storing C items. We let K = {1, ..., r} correspond to the set of compartments. Hence, the total capacity of the pick cart equals rC. We assume that congestion is not possible.

A batch i is characterized by a vector a^i = (a^i_{11}, ..., a^i_{1r}, ..., a^i_{n1}, ..., a^i_{nr}), where a^i_{jk} = 1 if in batch i order j is assigned to compartment k, and a^i_{jk} = 0 otherwise. Then, we let I denote the set containing all possible different batches, which implies that |I| = 2^{nr}. With each batch i we associate a certain effort to collect all the orders assigned to that batch. This effort consists of the travel distance, d_i, and the sorting effort, t_i.

In order to calculate the travel distance we basically have to solve a traveling salesman problem. However, at this point we focus on the main structure of the OBPSE and therefore omit the complex calculations required to compute di. Besides, the travel distance forms no new aspect and is explained in more detail in numerous papers devoted to the OBP, see for example Roodbergen and de Koster (2001). For now it suffices to note that di corresponds to the travel distance necessary to collect all items in batch i given the rules of a certain routing policy.


Now we can define the total sorting effort for a batch i by t_i = Σ_{k∈K} t_{ik}, where

t_{ik} = v_f Σ_{j∈J} c_j a^i_{jk}   if Σ_{j∈J} a^i_{jk} = 1 or Σ_{j∈J} a^i_{jk} = Σ_{j∈J} c_j a^i_{jk}, and
t_{ik} = v_s Σ_{j∈J} c_j a^i_{jk}   otherwise.
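The case split for t_{ik} can be expressed directly in code. The following Python sketch is our own illustration (function names are hypothetical): it computes the effort of one compartment and sums over the compartments of a batch.

```python
def compartment_effort(order_sizes, v_f, v_s):
    """Sorting effort t_ik of one compartment, following the case split above.

    order_sizes: the sizes c_j of the orders placed in this compartment.
    A compartment is clean if it holds exactly one order, or only singles
    (size-1 orders); clean items sort at rate v_f, non-clean ones at v_s.
    """
    total_items = sum(order_sizes)
    clean = len(order_sizes) == 1 or len(order_sizes) == total_items
    return (v_f if clean else v_s) * total_items

def batch_effort(compartments, v_f, v_s):
    """Total sorting effort t_i: sum of t_ik over non-empty compartments."""
    return sum(compartment_effort(c, v_f, v_s) for c in compartments if c)

# One clean compartment (single order of 5), one of singles, one non-clean.
print(batch_effort([[5], [1, 1, 1], [2, 3]], v_f=1.0, v_s=4.0))
```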

In order to decide which batches are part of the solution we introduce the following decision variables:

x_i = 1 if batch i is chosen, and x_i = 0 otherwise.

Then, we can represent the Order Batching Problem with Sorting Effort by the following mathematical program.

minimize     Σ_{i∈I} (d_i + t_i) x_i                         (1)

subject to   Σ_{i∈I} Σ_{k∈K} a^i_{jk} x_i = 1,   j ∈ J       (2)

             Σ_{j∈J} c_j a^i_{jk} ≤ C,   i ∈ I, k ∈ K        (3)

             x_i ∈ {0, 1},   i ∈ I                           (4)

Since our goal is to find a collection of batches such that the overall effort required to collect these batches is minimized, we use (1) as objective function. Furthermore, (2) makes sure that we choose a collection of batches such that all orders are assigned to exactly one compartment. Note that this also implies that all orders are assigned to exactly one batch. Another requirement is that the total order size in each compartment cannot exceed a compartment's capacity, which is imposed by (3). Finally, (4) guarantees integrality of the batch selection. All relevant parameters and decision variables are summarized in Table 1.
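Because I is finite, the program (1)-(4) can be brute-forced for toy instances. The sketch below is our own illustration, not part of the thesis: it enumerates all assignments of orders to (batch, compartment) slots, with a stub callable standing in for the routing distance d_i.

```python
from itertools import product

def solve_obpse_tiny(sizes, n_batches, r, C, v_f, v_s, dist):
    """Brute-force the model (1)-(4) for a tiny instance.

    Each order gets one (batch, compartment) slot, enforcing (2); the
    capacity check below is (3).  `dist` stands in for d_i: it maps the
    list of order indices in a batch to a tour length (routing is out of
    scope here, so any callable will do).
    """
    n = len(sizes)
    slots = list(product(range(n_batches), range(r)))
    best = (float("inf"), None)
    for assign in product(slots, repeat=n):
        # contents[b][k] = order sizes in compartment k of batch b
        contents = [[[] for _ in range(r)] for _ in range(n_batches)]
        for j, (b, k) in enumerate(assign):
            contents[b][k].append(sizes[j])
        if any(sum(comp) > C for batch in contents for comp in batch):
            continue  # violates the capacity constraint (3)
        effort = 0.0
        for b, batch in enumerate(contents):
            orders = [j for j, (bb, _) in enumerate(assign) if bb == b]
            if not orders:
                continue
            effort += dist(orders)  # d_i stub
            for comp in batch:      # t_i per the case split above
                if comp:
                    items = sum(comp)
                    clean = len(comp) == 1 or len(comp) == items
                    effort += (v_f if clean else v_s) * items
        if effort < best[0]:
            best = (effort, assign)
    return best

# Four orders, two carts with r = 2 compartments of capacity C = 4;
# the stub distance charges 10 per non-empty batch.
effort, assign = solve_obpse_tiny([1, 1, 3, 4], 2, 2, 4, v_f=1.0, v_s=3.0,
                                  dist=lambda orders: 10.0)
print(effort)
```

Here the total sizes (9) exceed one cart's capacity (8), so two tours are unavoidable and the optimum keeps every compartment clean.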

We should note that the model defined by (1)-(4) is similar to the one in Gademann and van de Velde (2005) in case sorting is effortless and the pick cart consists of a single compartment, i.e., in case v_s = v_f = 0 and r = 1. In Gademann and van de Velde (2005) it is shown that the order batching problem is NP-hard in the strong sense whenever there is a batch containing more than one order. Since their model is a special case of our proposed model, it follows that the OBPSE is NP-hard in the strong sense as well.

On the other hand, in case sorting of clean items is effortless and requires infinite effort for non-clean items, i.e., v_f = 0 and v_s = ∞, the feasible region with respect to (1)-(4) solely consists


3 The Sorting Procedure

Once orders have been grouped into batches, a new challenge arises: assigning the orders to compartments on the pick cart. This section stresses the importance of a good assignment as well as an approach to derive one. Before we describe each step in detail we review the main outline of the algorithm.

3.1 The algorithm

The general outline of our proposed sorting algorithm is as follows. Recall that a clean compartment is defined as a compartment that either contains one order or all orders request a single item only. Then, we first create as many clean compartments completely filled with singles as possible. Next, we keep transferring the largest order contained in the non-clean compartments to a new clean compartment as long as an assignment of the remaining orders is feasible.

Since we do not know beforehand whether it is optimal to create the maximum number of clean compartments containing singles only we iteratively reduce this number. After each reduction we again find the optimal content of the non-clean compartments given this new situation and ultimately pick the one resulting in the least sorting effort.

Finally, if it turns out that this result is such that the number of singles in the non-clean compartments is less than a compartment's capacity, we need to explore the possibility of creating an additional clean compartment containing these singles. By doing so, we minimize the total size of orders in non-clean compartments and thereby the sorting effort. Algorithm 1 contains the pseudocode that corresponds to this procedure.

3.2 Bin packing

It turns out that the sorting problem is closely related to that of the famous bin packing problem. In order to see this we first state the definition of a feasible assignment.

Definition 1. We are given r compartments each of capacity C and a set I = {1, ..., n} of items with sizes C = {c_1, ..., c_n} such that 1 ≤ c_i ≤ C for all i ∈ I. A feasible assignment of the items over the compartments is given by B = {B_1, ..., B_r} with B_i ⊆ I, i = 1, ..., r, such that

∩_{j=1}^{r} B_j = ∅,   ∪_{j=1}^{r} B_j = I,   and   Σ_{i∈B_j} c_i ≤ C for all j ∈ {1, ..., r}.

Then, we can formally define the order assignment problem as follows.

Definition 2. We are given a set B = {B_1, ..., B_r} of compartments each of capacity C and a set I = {1, ..., n} of items with sizes C = {c_1, ..., c_n} such that 1 ≤ c_i ≤ C for all i ∈ I. Let v_f < v_s denote the sorting rate per item for clean and non-clean compartments, respectively, and let g: B → R be defined by

g(B_k) = v_f Σ_{i∈B_k} c_i   if |B_k| = 1 or |B_k| = Σ_{i∈B_k} c_i, and
g(B_k) = v_s Σ_{i∈B_k} c_i   otherwise.


Algorithm 1 getSortEffort
Input: C: set of order sizes; C: compartment capacity; r: number of compartments; n_s: number of singles
Output: set of sizes C^1 allocated to the non-clean compartments

 1: k ← −1; n_cs ← 0;
 2: C^0 ← C; c_max ← ∅;
 3: check ← false; maxSingles ← false;
 4: n^+_cs ← ⌊n_s/C⌋;
 5: while f(C \ {c_max}) ≤ r − k − 1 do
 6:     C ← C \ {c_max};
 7:     k ← k + 1;
 8:     c_max ← max{C};
 9:     maxSingles ← false;
10:     if n_s ≥ C then
11:         check ← true; maxSingles ← true;
12:         c_max ← C;
13:         n_cs ← n_cs + 1;
14:         n_s ← n_s − C;
15:         C ← C ∪ {C} \ {1, ..., 1};
16: if maxSingles then
17:     C ← C ∪ {1, ..., 1} \ {C};
18:     n_cs ← n_cs − 1;
19: C ← getNonCleanOrders(C, C, r, k);
20: C^1 ← C;
21: if check then
22:     C^1 ← restore(C^0, C, C, r, n_cs);
23: if Σ C^1 = C and n_cs = n^+_cs and k > −1 and n_s > max{C} then
24:     C^1 ← checkFraction(C^1, C, r, k);
25: return C^1;

Since orders are sorted at either the slow or the fast rate, it follows that minimizing the total sorting effort is equivalent to minimizing the sum of all order sizes in the non-clean compartments. Now we focus on the bin packing problem, which is formally stated in Definition 3.

Definition 3. We are given a set of bins B = {B_1, ..., B_n} each of capacity C and a set I = {1, ..., n} of items with sizes C = {c_1, ..., c_n} such that 1 ≤ c_i ≤ C for all i ∈ I. Then, the bin packing problem is concerned with finding a feasible assignment a: I → B such that the number of non-empty bins is minimized.

An important distinction between the bin packing and the order assignment problem is that in the former the goal is to minimize the number of bins used to store the items whereas the latter focuses on the minimization of the total order size in non-clean compartments. Since the number of compartments on each pick cart is fixed we do not need to find the minimum number of compartments required to collect a set of orders.


to develop an algorithm that minimizes the sorting effort. The reason for this is as follows. Since minimizing the total order size in the non-clean compartments is equivalent to maximizing the total order size in the clean compartments, we can try to allocate as many items as possible to clean compartments, as long as the bin packing problem guarantees that the remaining orders can be assigned to the remaining compartments.
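For small instances the feasibility function f(·) can be computed exactly. The sketch below is our own illustration (not from the thesis); it uses the standard observation that packing two items into the same bin is equivalent to merging them into one pseudo-item of their combined size, so an exact branch explores "isolate the largest item" versus "merge it with a smaller one that still fits".

```python
from functools import lru_cache

def min_bins(sizes, C):
    """Exact minimum number of bins (the z_n* of the text) for small
    instances.  For realistic sizes a heuristic such as first-fit
    decreasing would be substituted for this exhaustive search."""
    @lru_cache(maxsize=None)
    def best(rem):
        if len(rem) <= 1:
            return len(rem)
        first, rest = rem[-1], rem[:-1]  # rem sorted ascending: largest last
        result = 1 + best(rest)          # branch 1: largest gets its own bin
        tried = set()
        for i, s in enumerate(rest):     # branch 2: merge with a smaller item
            if s in tried or first + s > C:
                continue
            tried.add(s)
            merged = tuple(sorted(rest[:i] + rest[i + 1:] + (first + s,)))
            result = min(result, best(merged))
        return result

    return best(tuple(sorted(sizes)))

print(min_bins([2, 3, 4, 5], 5))  # prints 3: e.g. bins {5}, {2, 3}, {4}
```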

3.3 No singles

In our effort to derive an optimal sorting method we first consider the scenario in which orders of size one do not occur. This assumption provides helpful insights which can be used later in more complex situations. Now, we make the following claim.

Lemma 1. Let there be n orders assigned to r compartments of capacity C such that there are k ≤ r clean compartments. Furthermore, let v_s and v_f denote the sorting speed for items in non-clean and clean compartments, respectively. If v_s > v_f the minimum sorting effort is achieved by assigning the largest k of the n orders to the clean compartments.

Proof. Let C^0 = {c_1, ..., c_k} and C = {c_{k+1}, ..., c_n} denote the sizes of the orders assigned to the clean and non-clean compartments, respectively. Then, the sorting time, t, with respect to this assignment is

t = v_s Σ C + v_f Σ C^0.

Let c_1 ∈ C and c_2 ∈ C^0 be such that c_1 > c_2. That is, there is a clean compartment that does not contain one of the k largest orders. Suppose this allocation is optimal. Since c_2 < c_1 we know we can interchange these orders without violating the capacity constraint. Hence, an alternative allocation C̃ = C \ {c_1} ∪ {c_2} and C̃^0 = C^0 \ {c_2} ∪ {c_1} is feasible. Let t̃ be the total sorting effort with respect to this allocation. That is,

t̃ = v_s Σ C̃ + v_f Σ C̃^0.

Note that

t − t̃ = v_s(c_1 − c_2) + v_f(c_2 − c_1) = (v_s − v_f)(c_1 − c_2) > 0,

which contradicts the assumption of optimality of t.

Now that we know how to utilize clean compartments we will focus on how to determine the right number of clean compartments. Obviously, a first guess would be to strive for a maximum number of clean compartments. It turns out that in absence of singles this approach indeed results in minimal sorting effort. In order to see this we will use results from the bin packing problem. That is, given a capacity, we know there exists a function f: N^n → N returning the minimal number of compartments, denoted by z_n^*, required to store n orders. This observation leads to the following lemma.

Lemma 2. Let there be r compartments on a pick cart, each of capacity C. Without loss of generality let C = {c_1, ..., c_n} denote the set of order sizes such that 2 ≤ c_1 ≤ ... ≤ c_n ≤ C. Let f: C → N be a function returning the minimal number of compartments, denoted by z_n^*, required to store the n orders. Then the maximum number of clean compartments, k^*, is given by k^* = max_{m∈N} {m : z^*_{n−m} ≤ r − m}.


Proof. Suppose it is possible to assign the orders such that there are k > k^* clean compartments. Due to the construction of k^* we know there is at least one clean compartment that does not contain one of the largest k orders. Hence, we can replace the order in this clean compartment by a larger order from the non-clean compartments without violating capacity constraints. This implies that for any number of clean compartments admitting feasibility it is possible to relocate the orders such that the largest are assigned to the clean compartments. Therefore, the maximum number of clean compartments, k^*, is given by k^* = max_{m∈N} {m : z^*_{n−m} ≤ r − m}.

According to Lemma 2 we can iteratively try to assign the largest order to a clean compartment as long as an assignment of the remaining orders admits feasibility. This leads to the following theorem.

Theorem 1. Let there be r compartments on a pick cart, each of capacity C. Without loss of generality let C = {c_1, ..., c_n} denote the set of order sizes such that 2 ≤ c_1 ≤ ... ≤ c_n ≤ C. Let f: C → N be a function returning the minimal number of compartments, denoted by z_n^*, required to store the n orders. The maximum number of clean compartments, k^*, is given by k^* = max_{m∈N} {m : z^*_{n−m} ≤ r − m}. Furthermore, let v_s and v_f denote the sorting speed for items in non-clean and clean compartments, respectively. If v_s > v_f the minimum sorting effort is acquired by creating k^* clean compartments and storing the largest k^* orders in them.

Proof. From Lemma 2 we know it is not possible to create k > k^* clean compartments. Suppose it is optimal to create k < k^* clean compartments. Then, without violating the capacity constraint, we can create an additional clean compartment such that one order is now sorted at v_f rather than v_s, reducing the overall sorting effort. Furthermore, according to Lemma 1 it is optimal to store the largest k^* orders in the clean compartments. This completes the proof.
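Theorem 1 can be sanity-checked numerically on a tiny instance without singles by brute-forcing all compartment assignments. The code below is our own illustration; for sizes {2, 3, 3, 5}, r = 3, and C = 6 the theorem gives k^* = 2 (clean compartments for the orders of size 5 and 3, the remaining {2, 3} sharing one non-clean compartment), and exhaustive search agrees.

```python
from itertools import product

def sort_effort(assign, sizes, C, v_f, v_s, r):
    """Effort of assigning order j to compartment assign[j]; None if the
    assignment violates a compartment's capacity."""
    comps = [[] for _ in range(r)]
    for j, k in enumerate(assign):
        comps[k].append(sizes[j])
    if any(sum(c) > C for c in comps):
        return None
    total = 0.0
    for c in comps:
        if not c:
            continue
        items = sum(c)
        clean = len(c) == 1 or len(c) == items  # no singles here, so len == 1
        total += (v_f if clean else v_s) * items
    return total

# Orders of size >= 2 (no singles), r = 3 compartments of capacity C = 6.
sizes, r, C, v_f, v_s = [2, 3, 3, 5], 3, 6, 1.0, 4.0
feasible = []
for a in product(range(r), repeat=len(sizes)):
    e = sort_effort(a, sizes, C, v_f, v_s, r)
    if e is not None:
        feasible.append(e)
best = min(feasible)
print(best)  # v_f*(5 + 3) + v_s*(2 + 3) = 28.0, matching Theorem 1
```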

We can use Theorem 1 to construct an algorithm that returns the orders that need to be assigned to the non-clean compartments such that overall sorting effort is minimized. Algorithm 2 provides the pseudocode of this algorithm.

Algorithm 2 getNonCleanOrders
Input: C: order sizes in non-clean compartments; C: compartment capacity; r: number of compartments; k: current number of clean compartments
Output: set of sizes C allocated to the non-clean compartments

1: k ← −1;
2: c_max ← ∅;
3: while f(C \ {c_max}) ≤ r − k − 1 and |C| > 0 do
4:     C ← C \ {c_max};
5:     k ← k + 1;
6:     c_max ← max{C};
7: return C;
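As an illustration, the loop of Algorithm 2 can be sketched in Python. This is our own sketch, not the thesis's code: first-fit decreasing stands in for the exact feasibility function f(·), which makes the check conservative (it may stop early, never too late), and all names are illustrative.

```python
def ffd_bins(sizes, C):
    """First-fit decreasing: an upper bound on the minimal number of bins,
    standing in here for the exact f(.) of the text."""
    bins = []
    for s in sorted(sizes, reverse=True):
        for b in bins:
            if sum(b) + s <= C:
                b.append(s)
                break
        else:
            bins.append([s])
    return len(bins)

def get_non_clean_orders(sizes, C, r):
    """Sketch of Algorithm 2: repeatedly move the largest remaining order
    to its own clean compartment while the rest still fits into the
    compartments left over."""
    remaining = sorted(sizes)
    k = 0  # clean compartments created so far
    while remaining and ffd_bins(remaining[:-1], C) <= r - k - 1:
        remaining.pop()  # largest order becomes a clean compartment
        k += 1
    return remaining

# r = 3 compartments of capacity 6; orders {2, 3, 3, 5} (no singles).
print(get_non_clean_orders([2, 3, 3, 5], C=6, r=3))  # -> [2, 3]
```

On this instance the sketch reproduces the Theorem 1 outcome: orders 5 and 3 get clean compartments, and {2, 3} remains non-clean.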

3.4 Capacity exceeds singles

Now we will consider the case in which some orders are of size one but there are not enough singles to completely fill a compartment. That is, if n_s denotes the number of singles in the set of given orders we assume that 0 < n_s < C. Suppose we have a set of n orders and we have found a feasible allocation that allows the creation of k clean compartments. Let C^0 = {c_1, ..., c_k} and C = {c_{k+1}, ..., c_n} denote the sets containing the order sizes assigned to the clean and non-clean compartments, respectively, and let c− = min{C^0} and c+ = max{C}. According to Theorem 1, sorting effort is minimized if the orders are assigned such that c− ≥ c+ with |C^0| = k^*, whenever n_s = 0. Let c_s denote the actual number of singles we consider as one larger order and suppose we have c_s such that c− ≤ c_s < n_s. Since we can add the remaining singles from the non-clean compartments to the clean compartment without violating capacity constraints and thereby reduce the overall sorting effort, this solution can never be optimal. Therefore, we only need to distinguish between either treating all singles as one larger order or not, i.e., it suffices to consider c_s = 1 and c_s = n_s.

In absence of singles Theorem 1 states it is optimal to assign the largest orders to the clean compartments. Hence, one might suspect that we only need to consider grouping the singles in case their number exceeds the smallest order present in the clean compartments, i.e., if n_s > c−. In order to see that this need not be the case, note that the function f(·) only provides information on the minimum number of required compartments and not all potential allocations. Consequently, it may be possible to arrange the orders in the non-clean compartments such that there is one containing singles only. We will show this can only be the case if the number of singles is strictly larger than the largest order in the non-clean compartments, i.e., if n_s > c+. The intuition behind this is that in case n_s ≤ c+ and there exists a feasible allocation such that one of the non-clean compartments contains the singles, then there must be an alternative assignment such that one compartment contains the order corresponding to c+. However, if such an assignment were possible Algorithm 2 would have already found it. Hence, only if n_s > c+ do we need to explore the possibility of combining the singles. We summarize this result in the following lemma.

Lemma 3. Let there be r compartments on a pick cart, each of capacity C. Let C = {c1, ..., cn} denote the set of order sizes. The number of singles is given by n_s such that 0 < n_s < C. Let C̃ denote an alternative set of order sizes in which the singles are replaced by one order, i.e., C̃ = C ∪ {n_s} \ {1, ..., 1}. Furthermore, let C′ and C̃′ be the output of Algorithm 2 with respect to C and C̃, respectively. Then, the following must hold:

∑C̃′ < ∑C′ =⇒ n_s > max{C′}.

Proof. Let c⁺ and c⁻ denote the largest order in the non-clean and the smallest order in the clean compartments, respectively, corresponding to the order sizes in C. That is, c⁺ = max{C′} and c⁻ = min{C \ C′}. Due to the construction of Algorithm 2 we must have c⁻ ≥ c⁺. Let k denote the number of clean compartments with respect to the original order sizes, i.e., k = |C \ C′|. Similarly, we let k̃ correspond to the number of clean compartments with respect to the alternative set of order sizes, C̃. Now, suppose that ∑C̃′ < ∑C′. First we consider the case k = k̃. Since ∑C̃′ < ∑C′ we must have a clean compartment containing the singles such that n_s > c⁻, which implies that n_s > c⁺. Second, consider the case k̃ < k. Similar argumentation implies that again we must have n_s > c⁻ ≥ c⁺. Finally, suppose k̃ > k. Since the original order sizes allowed only k clean compartments, in the new situation there must be a clean compartment containing the singles. If n_s ≤ c⁺, it is possible to exchange the orders with respect to n_s and c⁺. However, this would imply that we can create k + 1 clean compartments with respect to the original order sizes, which is impossible. Hence, if ∑C̃′ < ∑C′ it is not possible that n_s ≤ c⁺. This completes the proof.

Now we can use Lemma 3 to develop the following Theorem that produces the minimal sorting effort in case 0 < n_s < C.

Theorem 2. Let there be r compartments on a pick cart, each of capacity C. Let C = {c1, ..., cn} denote the set of order sizes with 0 < n_s < C singles, and let C′ denote the corresponding output of Algorithm 2. Then, only if n_s > max{C′} do we replace the singles by one order in a new set of order sizes, denoted by C̃, i.e., C̃ = C ∪ {n_s} \ {1, ..., 1}. Let C* denote the set of order sizes placed in the non-clean compartments associated with minimum sorting effort. Then, C* is defined by

C* = C′ if ∑C′ ≤ ∑C̃′, and C* = C̃′ otherwise.

Proof. Let C′ denote the order sizes assigned to non-clean compartments as a result of Algorithm 2 with respect to C. That is, given that we do not group the singles, the set C′ allows for the minimum sorting effort. Let C̃ denote the set containing the order sizes if we replace all singles by one larger order. Furthermore, let C̃′ be the result of Algorithm 2 with respect to C̃. It follows that C̃′ corresponds to an optimal content of the non-clean compartments, given that we group the singles. Since in an optimal allocation we either group all singles or none, either C̃′ or C′ provides the optimal content of the non-clean compartments. Finally, according to Lemma 3 we only need to consider grouping the singles if n_s > max{C′}.

Then, we can use Theorem 2 to construct Algorithm 3, which verifies whether it is beneficial to group the singles and outputs the corresponding set of order sizes in the non-clean compartments.

Algorithm 3 checkFraction

Input: C: order sizes that need to be assigned to compartments; C: compartment capacity; r: number of compartments; k: number of clean compartments; n_s: number of singles;
Output: set of sizes C allocated to the non-clean compartments

1: C1 ← C;
2: C2 ← getNonCleanOrders(C, C, r, k);
3: if n_s > max{C2} then
4:   C1 ← C ∪ {n_s} \ {1, ..., 1};
5:   C1 ← getNonCleanOrders(C1, C, r, k);
6: if ∑C1 ≤ ∑C2 then
7:   return C1;
8: else
9:   return C2;
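To make the decision in Algorithm 3 concrete, the following Python sketch pairs it with a simplified stand-in for getNonCleanOrders (Algorithm 2 itself is not reproduced in this section): clean compartments are reserved for the largest orders as long as the remainder still fits, by the first fit decreasing heuristic, into the remaining compartments. The function names and the greedy stand-in are illustrative, not the thesis' exact implementation.

```python
def ffd_bins(sizes, cap):
    """Bin count according to the first-fit-decreasing heuristic, i.e. f(C)."""
    bins = []
    for s in sorted(sizes, reverse=True):
        for i, load in enumerate(bins):
            if load + s <= cap:
                bins[i] = load + s
                break
        else:
            bins.append(s)
    return len(bins)


def get_non_clean_orders(sizes, cap, r):
    """Simplified stand-in for Algorithm 2 (assumption): keep reserving a clean
    compartment for the next-largest order while the remaining orders still fit
    (by FFD) into the remaining compartments; return the non-clean sizes."""
    sizes = sorted(sizes, reverse=True)
    k = 0  # number of clean compartments
    while k < min(len(sizes), r) and ffd_bins(sizes[k + 1:], cap) <= r - k - 1:
        k += 1
    return sizes[k:]


def check_fraction(sizes, cap, r):
    """Algorithm 3 (checkFraction): group the singles into one order only if
    their number exceeds the largest non-clean order, and keep the grouping
    when it does not increase the sorting effort (sum of non-clean sizes)."""
    n_singles = sizes.count(1)
    non_clean = get_non_clean_orders(sizes, cap, r)
    if non_clean and n_singles > max(non_clean):
        grouped = [c for c in sizes if c != 1] + [n_singles]
        alt = get_non_clean_orders(grouped, cap, r)
        if sum(alt) <= sum(non_clean):
            return alt
    return non_clean
```

For example, with sizes {9, 9, 2, 1, 1, 1, 1}, capacity 10 and r = 3, grouping the four singles into one order of size 4 keeps the non-clean content at {4, 2}, so the grouped allocation is returned.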

3.5 Capacity equals singles

Now we consider the special case n_s = C. Again, we will analyze the consequences of a clean compartment consisting of singles by comparing two different scenarios. To this end we let C and C̃ denote the order sizes in case the singles are not grouped and grouped, respectively. If we let C′ and C̃′ denote the optimal content of the non-clean compartments with respect to these order sizes, it follows that

C′ = getNonCleanOrders(C, C, r, 0), and
C̃′ = getNonCleanOrders(C̃, C, r, 0).

Let k = |C \ C′| and k̃ = |C̃ \ C̃′| denote the optimal number of clean compartments in case the original order sizes are given by C and C̃, respectively. First, suppose that k ≤ k̃. Since n_s = C


the singles, we have at least two other clean compartments containing order sizes, say, c₁⁺ and c₂⁺, with c₁⁺ = max{C̃′} and c₂⁺ = max{C̃′ \ {c₁⁺}}. It follows that if c₁⁺ + c₂⁺ > C we know that ungrouping the singles may decrease the sorting effort. Hence, we need to verify whether the creation of an additional clean compartment allows for a feasible assignment of the remaining orders to the non-clean compartments. Contrary to the case n_s < C, there is a way to predict whether or not this is possible. To see this, observe the following. Since c₁⁺ + c₂⁺ > C we know that the orders corresponding to c₁⁺ and c₂⁺ were assigned to different non-clean compartments. Consequently, there must have been two alternative order sizes, say, c₁′ and c₂′, possibly consisting of multiple orders, such that c₁⁺ + c₁′ ≤ C and c₂⁺ + c₂′ ≤ C. Since c₁⁺ + c₂⁺ > C it follows that c₁′ + c₂′ ≤ C. Hence, if the orders in C̃′ can be assigned to r − k̃ non-clean compartments, the orders in C̃′ \ {c₁⁺, c₂⁺} can be assigned to r − k̃ − 1 non-clean compartments. Now we only need to verify whether there is sufficient space to add the C singles to these r − k̃ − 1 compartments. That is, we need to verify if

(r − k̃ − 1)C ≥ ∑C̃′ − c₁⁺ − c₂⁺ + C, or
(r − k̃ − 2)C ≥ ∑C̃′ − c₁⁺ − c₂⁺.

Hence, only if both

c₁⁺ + c₂⁺ > C, and (5)
(r − k̃ − 2)C ≥ ∑C̃′ − c₁⁺ − c₂⁺, (6)

hold it follows that ∑C′ < ∑C̃′. Now we can summarize the previous result in the following Theorem.

Theorem 3. Suppose we have a pick cart with r compartments, each of capacity C. Let C = {c1, ..., cn} denote the set of order sizes such that there are C singles. Furthermore, we let C̃ correspond to the set of order sizes in case we group the singles. Let C̃′ denote the output of Algorithm 2 with respect to C̃ and k̃ be the corresponding number of clean compartments. Let C* denote the set of order sizes in the non-clean compartments such that the total sorting effort is minimized. Then, if either (5) or (6) is violated we have C* = C̃′. Otherwise, it follows that C* = C′, where C′ corresponds to the output of Algorithm 2 with respect to C.

Proof. First note that, given that we group the singles, Algorithm 2 outputs the optimal set of non-clean orders. According to our previous discussion, only if both (5) and (6) hold does there exist an alternative allocation in which the singles are not grouped such that the total sorting effort is reduced. Hence, ungrouping the singles results in less sorting effort only if both (5) and (6) hold.
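Conditions (5) and (6) are cheap to evaluate. A small Python sketch (names are illustrative; `non_clean_grouped` plays the role of C̃′ and `k_tilde` of k̃):

```python
def ungrouping_helps(non_clean_grouped, cap, r, k_tilde):
    """Check conditions (5) and (6): with the n_s = C singles grouped into one
    clean compartment, ungrouping can only reduce the sorting effort if the
    two largest non-clean orders cannot share a compartment (5) and the freed
    space still suffices for the remaining orders (6)."""
    sizes = sorted(non_clean_grouped, reverse=True)
    if len(sizes) < 2:
        return False
    c1, c2 = sizes[0], sizes[1]
    cond5 = c1 + c2 > cap                                   # condition (5)
    cond6 = (r - k_tilde - 2) * cap >= sum(sizes) - c1 - c2  # condition (6)
    return cond5 and cond6
```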

3.6 More singles than capacity

So far we have discussed the cases n_s = 0, 0 < n_s < C, and n_s = C. Now we will focus on the scenario in which the number of singles exceeds the compartment's capacity, i.e., n_s > C. Let n_cs denote the number of clean compartments completely filled with singles. Similar to the case 0 < n_s < C, it does not make sense to partly fill a clean compartment with singles as long as there are singles in the non-clean compartments. Hence, in an optimal allocation we must have n_cs ∈ {0, 1, ..., ⌊n_s/C⌋}.


this new situation we can again verify whether it is worthwhile to sacrifice an additional clean compartment consisting entirely of singles. We can repeat this procedure until either (5) or (6) is violated in order to determine the right value of n_cs. Algorithm 4 gives the pseudocode that determines the optimal value of n_cs by iteratively verifying whether decreasing n_cs results in less sorting effort.

Algorithm 4 restore

Input: C0: original order sizes; C: order sizes in non-clean compartments; C: compartment capacity; r: number of compartments; n_cs: number of clean compartments containing singles;
Output: set of sizes C allocated to the non-clean compartments

1: k ← |C0| − |C| − (C − 1)·n_cs;
2: c₁⁺ ← max{C}; c₂⁺ ← max{C \ {c₁⁺}};
3: while c₁⁺ + c₂⁺ ≥ C and (r − k − 2)C ≥ ∑C − c₁⁺ − c₂⁺ and n_cs > 0 do
4:   C ← C ∪ {1, ..., 1} \ {c₁⁺, c₂⁺};
5:   n_cs ← n_cs − 1; k ← k + 1;
6:   C ← getNonCleanOrders(C, C, r, k);
7:   c₁⁺ ← max{C}; c₂⁺ ← max{C \ {c₁⁺}};
8:   k ← |C0| − |C| − (C − 1)·n_cs;
9: return C;

Furthermore, only if n_cs = ⌊n_s/C⌋ do we need to consider whether grouping the remaining (n_s mod C) singles


4 Solution approaches

In this section we discuss our approach to solve the OBPSE. To this end we use the sorting procedure described in Section 3 to construct a meta-heuristic, based on the Iterated Local Search Algorithm with Tabu Thresholding (ILST) introduced in Öncan (2015), that is able to solve the order batching problem in the presence of sorting effort. First, we sketch the general outline of our proposed algorithm. In the remainder of this section we discuss each step in detail.

4.1 Iterated Local Search with Tabu Thresholding

The meta-heuristic we propose is based on the Iterated Local Search Algorithm with Tabu Thresholding introduced in Öncan (2015). To distinguish between our local search and the one in Öncan (2015), we refer to the former as ILST(ii) whereas the latter is addressed as ILST. The general outline of ILST(ii) is described as follows.

Algorithm 5 Iterated Local Search with Tabu Thresholding

Input: s: initial solution;
Output: s_best: best solution found;

1: s̃ ← LST(s);
2: s_best ← s̃;
3: repeat
4:   s1 ← Perturbation(s̃);
5:   s2 ← LST(s1);
6:   s̃ ← AcceptanceCriterion(s̃, s2);
7:   if h(s̃) < h(s_best) then
8:     s_best ← s̃;
9: until termination condition is satisfied
10: return s_best;
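The control flow of Algorithm 5 can be sketched in a few lines of Python; the local search, perturbation move, and acceptance criterion are passed in as callables, so the skeleton itself is generic (the toy problem in the usage example is purely illustrative):

```python
def ilst(s, h, lst, perturb, accept, max_iter=5):
    """Sketch of Algorithm 5 (Iterated Local Search with Tabu Thresholding).

    h       -- objective function of a solution
    lst     -- local search with tabu thresholding (the LST sub-procedure)
    perturb -- diversification move
    accept  -- acceptance criterion comparing current and candidate solutions
    """
    s_cur = lst(s)          # improve the initial solution once
    s_best = s_cur
    for _ in range(max_iter):
        s1 = perturb(s_cur)             # diversification
        s2 = lst(s1)                    # intensification
        s_cur = accept(s_cur, s2)       # keep s2 only if acceptable
        if h(s_cur) < h(s_best):
            s_best = s_cur
    return s_best
```

As a minimal usage example, minimizing h(x) = (x − 4)² over the integers with a greedy descent as the "local search" and x → x + 2 as the perturbation returns 4.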

Given an initial solution, ILST(ii) consecutively diversifies and intensifies. In the diversification phase a new solution is formed by perturbing the initial solution. This new solution is improved by a local search, LST, during the intensification phase. Only if the outcome of this local search satisfies the acceptance criterion is the initial solution replaced. This process is repeated a predetermined number of times, after which the best solution found so far is returned.

The pseudocode of our algorithm is exactly the same as the one presented in Öncan (2015) and given in Algorithm 5, where h(s) denotes the objective value of a solution s. The algorithm is implemented on an Intel(R) Core(TM) i5-3470 with a 3.20GHz clock in R using the Rcpp package, which allows coding in C++.

4.2 Initial solution

The initial solution should be generated quickly and be of sufficient quality. To this end we use the CW(ii) savings algorithm introduced in de Koster et al. (1999), which remains one of the best performing OBP heuristics, especially in terms of computation time.

Given an initial set of n orders we characterize a solution s by an n-dimensional vector, i.e., s = (s1, ... , sn) such that order j ∈ {1, ... , n} is assigned to batch sj. Since the sorting effort of a


to only store the content of each batch. In order to retrieve the content of each compartment in the final solution we can easily employ a variant of the sorting algorithm to find which orders are stored in clean and which in non-clean compartments. Then, we can use a bin packing algorithm to verify the content of each compartment.

4.3 Objective function

Unlike in Öncan (2015), we let the objective value of a batch i consist of a weighted sum of the distance, d_i, and sorting time, t_i, required to collect all orders in batch i. Since good solutions are likely to be found on the border of feasibility, we allow solutions to be infeasible during each iteration of ILST(ii).

We introduce a penalty, p_i, that corresponds to the number of compartments required to store the items in batch i minus the number of compartments on a pick cart, with a minimum of 0. That is, p_i = max{0, f(C) − r}, where r is the number of compartments on each pick cart, C is the set of order sizes in batch i, and f(C) returns the minimum number of bins required to store the items in C according to the first fit decreasing heuristic.

Note that we do not use an exact algorithm to calculate the minimum number of required compartments, because doing so is simply too computationally expensive. Then, for a solution s the objective value, h(s), is calculated by the formula

h(s) = ∑_{i=1}^{max(s)} (d_i + γ·t_i + ζ·p_i).

The parameter γ gives control over the nature of the OBP at hand. That is, for γ=0 the objective function corresponds to a pick-and-sort environment in which the focus is primarily on minimizing the total travel distance.

For γ = M, where M is a sufficiently large number, we mimic a sort-while-pick environment. To see this, recall that the sorting effort, t_i, depends on the speeds at which clean and non-clean items are sorted, captured by the parameters v_f and v_s, respectively. Then, for v_f = 0 and v_s > 0 the sorting effort t_i measures the number of items in non-clean compartments. Hence, for γ = M only solutions in which the compartments of all pick carts are clean have acceptable objective values. Finally, γ = 1 corresponds to the situation in which we dynamically determine which compartments will be sorted during and which after the picking process.

The parameter ζ determines the weight of the penalty p_i and is used to steer the local search towards either more feasible or more infeasible solutions. To this end we update ζ after each run of LST. If LST returns an infeasible solution we multiply ζ by 1 + e, with e > 0. On the other hand, in case a feasible solution is returned we divide ζ by 1 + e.
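Putting the pieces together, a hedged Python sketch of the objective: the penalty p_i uses the first fit decreasing bin count f(C), while the distance and sorting-time evaluators are supplied by the caller (in the thesis they follow from the routing method and the sorting speeds v_f and v_s; the callables here are illustrative stand-ins):

```python
def ffd_bins(sizes, cap):
    """f(C): bin count according to the first-fit-decreasing heuristic."""
    bins = []
    for s in sorted(sizes, reverse=True):
        for i, load in enumerate(bins):
            if load + s <= cap:
                bins[i] = load + s
                break
        else:
            bins.append(s)
    return len(bins)


def objective(batches, cap, r, dist, sort_time, gamma=1.0, zeta=1e7):
    """h(s) = sum over batches of d_i + gamma*t_i + zeta*p_i,
    with p_i = max(0, f(C_i) - r). Each batch is a list of order sizes;
    `dist` and `sort_time` evaluate a single batch."""
    total = 0.0
    for batch in batches:
        p = max(0, ffd_bins(batch, cap) - r)   # capacity-violation penalty
        total += dist(batch) + gamma * sort_time(batch) + zeta * p
    return total
```

With the toy evaluators dist = 10·(number of orders) and sort_time = total items, a batch {4, 4, 4} on a 2-compartment cart of capacity 5 needs 3 bins and hence incurs one penalty unit.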

4.4 Neighborhoods

The neighborhood structure determines to a large extent both the overall solution quality and the running time. Therefore, we let the neighborhood of a solution s, N(s), consist of the union of an add and a swap move. More specifically, we let

N1(s) = {inter(i,j)(s) | s_i ≠ s_j, 1 ≤ i < j ≤ n},

where inter(i,j)(s) is the operator which interchanges the elements in positions i and j of solution s. Furthermore, we let add(i,j)(s) be the operator which sets the i-th element equal to the value of the j-th element of solution s. Then, we define the neighborhood with respect to the add operator by

N2(s) = {add(i,j)(s)


Finally, we define the neighborhood of a solution s by N(s) = N1(s) ∪ N2(s). In order to reduce the computational effort we only consider a randomly determined fraction, φ, of a solution's neighborhood. This can easily be achieved by associating with each neighbor a uniformly distributed random number in the interval [0, 1]. Only if this random number is smaller than φ is the neighbor part of the neighborhood of s. We refer to this fractional neighborhood as N_φ(s).
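A minimal Python sketch of the fractional neighborhood N_φ(s). Note that the membership condition of the add neighborhood is cut off in the text above, so the condition s_i ≠ s_j used below for add moves is an assumption:

```python
import random


def fractional_neighborhood(s, phi, rng=random):
    """Yield a random fraction phi of the swap (inter) and add neighbors of a
    batch-assignment vector s, where s[i] is the batch of order i. Each
    candidate neighbor is kept independently with probability phi."""
    n = len(s)
    for i in range(n):
        for j in range(n):
            # swap move inter(i, j): exchange batch assignments of i and j
            if i < j and s[i] != s[j] and rng.random() < phi:
                t = list(s)
                t[i], t[j] = t[j], t[i]
                yield t
            # add move add(i, j): move order i to the batch of order j
            # (condition s[i] != s[j] is assumed, see the remark above)
            if i != j and s[i] != s[j] and rng.random() < phi:
                t = list(s)
                t[i] = s[j]
                yield t
```

With φ = 1 the full neighborhood is enumerated; with φ = 0 it is empty.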

4.5 LST

The LST sub-procedure is a local search with tabu thresholding in which an initial solution s acts as the starting local best solution. Each iteration of the LST consists of both an improvement and a mixed phase.

The improvement phase finds a local minimum of the initial solution, s, with respect to the neighborhood N_φ(s) and updates the local best solution if necessary.

The mixed phase consists of the following steps. The neighborhood of the solution found in the improvement phase is explored again. However, rather than selecting the neighbor with the lowest objective value, we consider a random neighbor solution, s̃, of sufficient quality. In the next step we move to the best neighbor, s, of the solution s̃. If the objective value of s is strictly less than that of the local best solution, the mixed phase stops. Otherwise, a new iteration starts with selecting a random neighbor of s. In each iteration of the LST the mixed phase runs at most ψ times. For the sake of completeness we provide the pseudocode of the LST in Algorithm 6.

Algorithm 6 LST

Input: s: initial solution;
Output: s*: the best solution found;

1: z* ← h(s);
2: s* ← s;
3: τ ← 1;
4: repeat
5:   while s is not a local minimum do
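Since the printed pseudocode of Algorithm 6 is cut off at the page break, the following Python sketch reconstructs the improvement and mixed phases from the description above. The "sufficient quality" threshold for the random neighbor is modeled by a rank bound q (driven by the parameter α in the thesis); all names and the exact thresholding rule are illustrative, not the thesis' exact procedure:

```python
import random


def lst(s, h, neighbors, psi=30, iters=10, q=5, rng=random):
    """Hedged sketch of the LST sub-procedure.

    h          -- objective function
    neighbors  -- returns a list of neighbor solutions (e.g. N_phi(s))
    psi        -- maximum number of mixed-phase steps per iteration
    q          -- only the q best neighbors count as 'sufficient quality'
    """
    best, z_best = s, h(s)
    for _ in range(iters):
        # improvement phase: steepest descent until a local minimum
        while True:
            cand = min(neighbors(s), key=h, default=s)
            if h(cand) >= h(s):
                break
            s = cand
        if h(s) < z_best:
            best, z_best = s, h(s)
        # mixed phase: jump to a random good neighbor, then to its best neighbor
        for _ in range(psi):
            ranked = sorted(neighbors(s), key=h)[:q]
            if not ranked:
                break
            s_tilde = rng.choice(ranked)                         # random good neighbor
            s = min(neighbors(s_tilde), key=h, default=s_tilde)  # its best neighbor
            if h(s) < z_best:
                best, z_best = s, h(s)
                break
    return best
```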


4.6 Perturbations

To avoid being trapped in a local optimum we introduce two perturbation moves, each with a probability of 0.5 of being selected. Once the perturbation move is determined and performed, the resulting solution is accepted only if it is feasible. We should note that in Öncan (2015) an additional perturbation move is considered. However, from their explanation it is not entirely clear how that move is defined, and therefore we choose to omit it.

The first perturbation move is the addition of a new batch, which works as follows. First, we randomly pick an order j ∈ J and remove it from its current batch. Then, we create a new batch to which we assign order j. By considering the savings formula C_ij = e_i + e_j − e_ij we determine the remaining content of the newly created batch. Here, e_i (e_j) corresponds to the distance required to collect all items in order i (j). Similarly, e_ij denotes the distance in case orders i and j are picked in a single tour.

We consider all possible pairs that contain order j and compute the corresponding savings. Then, we start with the order associated with the highest savings. Only if the capacity constraints are satisfied do we reassign the order to the new batch, after which the order corresponding to the second highest savings is selected. The process stops when either the new batch is full or all combinations have been considered. Note that we express the savings in terms of distance and leave out the sorting component, because savings in terms of sorting effort are not relevant when there are only two orders under consideration.

The second perturbation move is the removal of a batch: a random batch is chosen and destroyed. The orders that belonged to the removed batch are assigned to other batches by considering the same savings formula as employed in the addition perturbation. That is, an order i from the destroyed batch is relocated to the batch that contains the order j for which C_ij is largest and the capacity constraints are satisfied. If no such batch exists for at least one of the orders in the destroyed batch, the removal is undone and the perturbation phase ends.
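The savings-based reinsertion used by both perturbation moves can be sketched as follows; `e_single` and `e_pair` are assumed precomputed tour lengths (a data layout chosen for illustration, not the thesis' exact structures):

```python
def best_batch_for_order(i, batches, e_single, e_pair, cap, size):
    """Find the batch for order i whose member j maximizes the saving
    C_ij = e_i + e_j - e_ij, subject to the capacity constraint.

    batches  -- list of batches, each a list of order ids
    e_single -- e_single[i]: tour length to pick order i alone
    e_pair   -- e_pair[i][j]: tour length to pick orders i and j together
    size     -- size[i]: number of items in order i
    Returns the index of the best batch, or None if no batch has room."""
    best, best_saving = None, float("-inf")
    for b, batch in enumerate(batches):
        if sum(size[o] for o in batch) + size[i] > cap:
            continue  # capacity constraint violated
        for j in batch:
            saving = e_single[i] + e_single[j] - e_pair[i][j]
            if saving > best_saving:
                best, best_saving = b, saving
    return best
```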

4.7 Acceptance criterion

Each iteration of ILST ends with the output of the LST procedure, i.e., a solution s2. Since we do not want to discard promising solutions, we keep a solution s2 if h(s2) ≤ ρ·h(s*), with ρ > 0. Furthermore, we only accept solution s2 if the capacity constraints are satisfied. Of course, we only update the current best solution if the objective value of s2 is strictly less than the objective value of the current best solution.

4.8 Stopping criterion


5 Results

This section presents the main results with respect to the ILST(ii) algorithm. To this end we consider a pick-and-sort environment, a sort-while-pick environment, and a combination of both. We randomly create instances for different kinds of warehouse layouts and examine how the ILST(ii) algorithm performs compared to other heuristics.

5.1 Parameter Tuning

The performance of the ILST(ii) algorithm relies heavily on a suitable choice of parameter values. For clarity, we present an overview of all relevant parameters as they occur in the ILST(ii) algorithm, together with a short description, in Table 2.

Table 2: A description of all parameters that occur in the ILST(ii) algorithm.

parameter  description
χ          determines the number of iterations in ILST(ii)
θ          determines the number of iterations in LST
ψ          determines the number of iterations in the mixed phase in LST
φ          determines the size of each neighborhood
α          influences the value of q to obtain the q-th best neighbor
γ          determines the weight of the sorting effort in the objective function
ζ          determines the weight of the penalty associated with capacity violation
e          influences the change in ζ after each iteration of ILST(ii)
ρ          influences whether solutions are accepted after each run of LST

In our experiments we do not consider the parameter that determines the weight of the sorting effort in the objective function, γ, because this parameter merely provides control over the sorting environment. That is, for γ = 0, ILST(ii) is suitable to solve the OBP in a pick-and-sort environment

Table 3: Characteristics of different warehouses, order structures, and pick carts.

Parameter                Warehouse 1   Warehouse 2
number of orders         40            60
order size               U[2, 8]       U[2, 10]
fraction of singles      0             0.3
number of aisles         6             10
locations per aisle      2 × 40        2 × 30
aisle length             40 m          30 m
distance between aisles  5 m           5 m
travel speed             1 m/s         1 m/s
v_s                      1 s/item      2 s/item
v_f                      0 s/item      0 s/item
compartment capacity     10 items      10 items
number of compartments   2             3


During preliminary testing we determined a default value for each parameter. Then, based on these default settings, we construct a range of three values for each parameter that will be tested. Since the number of parameters is large we do not consider all possible combinations. In each experiment we set all but one parameter to their default. In Table 4 we present all values that are tested. For each parameter we let the first row depict the default value. The remaining three rows correspond to the three different values that are tested.

We should note that the default values for the parameters corresponding to the number of iterations in ILST(ii) and LST, i.e., χ and θ, differ significantly from the values used in the ILST algorithm in Öncan (2015). It turns out that the running time of our algorithm is no longer acceptable for those values.

Table 4: Values of the parameters in the ILST(ii) algorithm that are tested as well as the average objective value and computation time. The bold values correspond to the parameter values we will employ in ILST(ii). For each parameter the first row corresponds to the default value. The remaining rows denote the different values that are tested.

par  value  objective  CPU    par  value  objective  CPU
χ    5      1820.68    4.28   φ    0.3    1820.68    4.28
     2      1835.30    1.71        0.1    1830.24    1.02
     10     1818.13    8.67        0.5    1817.66    14.08
     20     1809.68    17.28       0.7    1806.36    30.56
θ    10     1820.68    4.28   e    0.2    1820.68    4.28
     5      1826.48    1.81        0.1    1812.68    4.47
     15     1812.43    7.69        0.3    1823.68    4.29
     20     1809.92    10.08       0.4    1825.68    4.32
ψ    30     1820.68    4.28   ζ    10^7   1820.68    4.28
     10     1825.53    1.82        10^6   1821.34    4.34
     20     1821.14    2.67        10^5   1820.98    4.44
     50     1818.22    6.79        10^4   1829.23    4.28
α    15     1820.68    4.28   ρ    0.050  1820.68    4.28
     10     1821.88    4.24        0.025  1821.57    4.18
     20     1822.37    4.33        0.075  1819.88    4.39
     30     1820.34    4.41        0.100  1822.94    4.33

For each experiment we create 100 random order list instances, each consisting of 40 orders. Order sizes are uniformly distributed in the range [2, 8]. The characteristics of the warehouse we consider for these experiments are given by Warehouse 1 as defined in Table 3. Furthermore, we consider random storage locations such that all products have equal probability of being ordered. We use ILST(ii) to solve all instances using both the S-shape and largest gap routing methods. In Table 4 we show the average objective value and running time for each of the experiments.


In Table 4 we use a bold font to emphasize the values that each parameter will take in the final version of ILST(ii).

5.2 Heuristics

The OBP is a computationally very complex problem. In fact, there exists no algorithm that can optimally solve realistically sized instances in reasonable time. To further complicate matters, there is no appropriate lower bound, making it hard to measure solution quality. Therefore it is customary in the OBP literature to compare newly proposed methods with existing heuristics. Hence, we will compare the solution quality of our ILST(ii) algorithm with the following heuristics.

First we consider the two savings algorithms, CW(i) and CW(ii), as discussed in de Koster et al. (1999), and an alternative version of the savings algorithm that takes into account the sorting effort, CW(s). Then, we construct another ILST algorithm based on Öncan (2015). We briefly discuss the outline of each of the heuristics.

• CW(i): the basic variant of the savings heuristic consists of the following 5 steps as given in de Koster et al. (1999).

1. Calculate the savings for all possible pairs of orders that do not violate the capacity constraint.
2. Sort the savings in decreasing order.
3. Select the pair with the highest savings, where ties are broken randomly.
4. Distinguish between the following cases:
(a) Assign both orders to a new batch if neither order is already contained in a route and the capacity constraint is not violated.
(b) If only one order is assigned to a batch and the capacity constraints remain satisfied, put the unassigned order in this batch.
(c) If both orders have been assigned to a batch, proceed with step 5.
5. Select the next pair of orders and repeat step 4 until all orders are assigned to a batch.

• CW(ii): the structure is similar to that of CW(i). However, rather than computing all savings only once, the savings matrix is updated after each merging of orders. Evidently, this increases the computational effort.

• CW(s): the structure is similar to that of CW(ii). The only difference is that we use an alternative objective function that not only consists of the travel distance but also takes into account the sorting effort. In fact, we use the same objective function as employed in ILST(ii). That is, for a batch i the objective value is d_i + γ·t_i. However, rather than setting γ equal


5.3 Test Instances

To test the performance of our ILST(ii) meta-heuristic we consider a warehouse with 10 aisles. The main characteristics are given by Warehouse 2 defined in Table 3. Based on this warehouse we create different settings by varying the fraction of singles in each order, the number of orders, the sorting speed of non-clean compartments, the number of compartments on each pick cart, and the compartment’s capacity.

Order lists are created as follows. Suppose that we consider a warehouse with n orders and a fraction of singles equal to β. Then, with each of the n orders we associate a random number between 0 and 1. If this random number is smaller than β, we let the size of the order under consideration be 1. On the other hand, in case the random number exceeds β, we let the order size be uniformly distributed in the range given in Table 3.
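This generation scheme is a two-step mixture and can be sketched directly; the function name and the rng argument are illustrative:

```python
import random


def generate_order_sizes(n, beta, size_range, rng=random):
    """Draw n order sizes: with probability beta an order is a single (size 1),
    otherwise its size is uniform over size_range, e.g. (2, 10) for
    Warehouse 2 in Table 3."""
    lo, hi = size_range
    return [1 if rng.random() < beta else rng.randint(lo, hi) for _ in range(n)]
```

For example, generate_order_sizes(60, 0.3, (2, 10)) produces an order list matching setting 1 of Table 5.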

The intuition behind the focus on orders of size 1 is that, for web shops for example, a significant share of the orders consists of singles. In fact, it is not uncommon that up to 50% of the orders request a single item only. Next to that, the nature of the order assignment problem, in which singles play an important role, makes it interesting to see how our sorting method performs in the presence of singles.

Table 5: Different settings of the instances that are used to test the performance of ILST(ii).

setting  singles fraction  number of orders  v_s  capacity  number of compartments
1        0.3               60                2    10        3
2        0.1               60                2    10        3
3        0.2               60                2    10        3
4        0.4               60                2    10        3
5        0.5               60                2    10        3
6        0.3               30                2    10        3
7        0.3               80                2    10        3
8        0.3               100               2    10        3
9        0.3               200               2    10        3
10       0.3               60                1    10        3
11       0.3               60                5    10        3
12       0.3               60                10   10        3
13       0.3               60                2    15        3
14       0.3               60                2    20        3
15       0.3               60                2    10        2
16       0.3               60                2    10        4
17       0.3               60                2    10        6

Note that we normalize the sorting speeds such that v_f = 0. To see why this is without loss of generality, observe the following. Suppose that we first solve an instance with v_f = x and v_s = y, x < y. Then, if we solve the same instance with v_f = 0 and v_s = y − x, we obtain the same solution, given that we use the same seed value to control for randomness. Hence, in all settings we let v_f = 0 and vary v_s in order to indicate how the sorting speeds of clean and non-clean compartments relate.


5.4 Pick-and-sort

Since our proposed algorithm bridges the gap between pick-and-sort (PAS) and sort-while-pick (SWP), we will compare solutions with heuristics for both environments. We should note that in the traditional PAS environment the sorting effort is left out of the equation, such that our proposed algorithm with γ = 0 is similar to the one introduced in Öncan (2015), where its competitiveness is demonstrated.

5.5 Sort-while-pick

If we do not allow non-clean compartments we consider the SWP environment. To demonstrate the quality of ILST(ii) we compare results with CW(i), CW(ii) and FCFS. We adjusted the heuristics such that they obey the rules with respect to SWP. That is, no compartment may contain more than one order unless the compartment is completely filled with orders of size one. Furthermore, we use the adjusted CW(ii) heuristic as initial solution to make sure that ILST(ii) starts with a solution solely containing clean compartments. We set γ = 10^12 to make sure that ILST(ii) will not consider any non-clean compartments during the tabu search.

Table 6: Computational results of the 100 random instances for each setting in case of a sort-while-pick environment.

     ILST(ii)            CW(ii)         CW(i)          FCFS
set  obj       CPU       %dev   CPU     %dev   CPU     %dev   CPU
1    4007.30   28.27     9.38   0.18    13.12  0.04    34.17  0.00
2    4913.90   27.08     4.43   0.20    6.98   0.04    25.93  0.00
3    4435.00   27.89     7.02   0.19    10.12  0.04    30.26  0.00
4    3480.92   29.68     12.16  0.18    17.27  0.04    39.59  0.00
5    3005.41   30.34     15.17  0.18    21.98  0.04    44.17  0.00
6    2115.20   5.98      9.02   0.04    12.23  0.01    29.91  0.00
7    5162.57   54.26     9.50   0.39    13.82  0.07    36.67  0.00
8    6392.83   90.33     9.21   0.78    13.72  0.11    37.56  0.00
9    12114.94  511.52    7.92   5.51    14.28  0.37    40.39  0.01
10   4006.81   28.14     9.02   0.19    12.97  0.04    34.40  0.00
11   3996.85   27.91     9.42   0.18    13.33  0.04    34.35  0.00
12   3986.45   28.77     8.94   0.19    13.33  0.04    34.48  0.00
13   3955.82   27.62     9.44   0.18    13.46  0.04    34.52  0.00
14   3977.93   26.82     9.53   0.18    13.57  0.04    34.66  0.00
15   5464.00   22.48     6.14   0.17    9.44   0.04    28.56  0.00
16   3206.81   31.68     11.03  0.19    15.19  0.04    36.41  0.00
17   2392.30   44.34     13.25  0.21    16.73  0.03    35.32  0.00

The results with respect to this environment are given in Table 6. For each setting we let the boldfaced value correspond to the lowest average objective value. We observe that ILST(ii) produces on average the lowest objective values in all 17 settings. Although both savings algorithms are fairly simple, we should note that the objective value of the second best performing heuristic, CW(ii), exceeds that of ILST(ii) on average by approximately 9.45%.


In case the number of compartments is 6, the objective value of CW(ii) exceeds that of ILST(ii) on average by 13.25%. In case the fraction of singles is 0.5, the objective value of CW(ii) exceeds that of ILST(ii) on average even by 15.17%. This shows that the sorting method is able to successfully allocate orders to compartments.

The computation times of ILST(ii) are significantly higher than those of the simpler heuristics. Furthermore, we observe that the running time of ILST(ii) strongly depends on both the number of orders and the number of compartments on each pick cart.

5.6 Combination

In case we allow some compartments to be clean and others to be non-clean, we combine the PAS and SWP environments. We suspect that our algorithm performs best in this scenario. To verify this we consider the same settings as before. That is, we use the same set of 17 different warehouse settings and compare results with ILST(i), CW(s), CW(ii), CW(i), and FCFS. Note that ILST(i), CW(ii), CW(i), and FCFS do not take into account any sorting effort, such that they represent a PAS environment.

Table 7 contains the computational results of the 100 random instances for each setting, where the boldfaced value corresponds to the lowest average objective value. We observe that the average total objective values are significantly lower than in the SWP case. The reason for this is that compartments can now hold more than one order, decreasing the number of required batches and the travel distance.

Table 7: Computational results of the 100 random instances for each setting in case of a pick-and-sort environment.

     ILST(ii)          ILST(i)        CW(s)          CW(ii)         CW(i)          FCFS
set  obj       CPU     %dev   CPU     %dev   CPU     %dev   CPU     %dev   CPU     %dev   CPU
1    3392.28   41.43   0.94   39.54   1.94   0.42    2.32   0.21    4.78   0.03    25.05  0.00
2    4128.31   40.20   0.67   41.37   3.05   0.41    2.26   0.23    5.45   0.03    26.07  0.00
3    3733.71   41.02   0.45   41.80   2.19   0.43    2.24   0.20    4.87   0.04    24.45  0.00
4    2979.78   42.38   0.79   39.52   1.40   0.41    1.97   0.18    4.93   0.03    24.18  0.00
5    2631.94   41.60   0.84   38.96   1.48   0.38    2.76   0.21    4.83   0.03    23.47  0.00
6    1763.17   7.12    1.28   5.62    2.65   0.07    3.20   0.04    5.56   0.01    20.00  0.01
7    4322.91   76.74   1.45   76.19   4.07   0.92    4.07   0.42    6.83   0.06    29.30  0.01
8    5310.97   121.83  2.15   117.08  5.14   1.71    4.62   0.84    7.75   0.04    32.52  0.01
9    10057.20  660.81  2.61   636.49  5.29   12.71   4.06   5.62    8.01   0.08    37.44  0.03
10   3232.72   40.95   0.37   38.89   3.94   0.44    2.06   0.21    4.96   0.04    27.63  0.00
11   3697.58   40.01   7.51   37.94   2.10   0.43    8.52   0.21    10.5   0.04    24.76  0.00
12   3866.92   38.34   25.38  39.38   7.36   0.42    25.96  0.21    28.52  0.04    35.71  0.00
13   2674.85   46.10   1.18   43.83   5.47   0.41    3.98   0.19    6.51   0.04    18.01  0.00
14   2361.06   46.20   -0.43  46.82   6.86   0.37    3.05   0.20    5.82   0.04    5.83   0.00
15   4489.77   34.76   0.63   33.06   2.26   0.41    1.46   0.18    2.76   0.04    29.43  0.00
16   2761.09   45.63   1.58   45.15   2.94   0.38    3.63   0.22    7.24   0.03    18.17  0.00
17   2114.63   63.04   3.13   58.35   4.07   0.39    7.08   0.24    9.60   0.03    8.61   0.00


The total average computation time of ILST(ii) equals 84.01 seconds, whereas ILST(i) requires on average 81.18 seconds. Hence, ILST(ii) requires on average 3.4% more computation time than ILST(i). Furthermore, we observe that both the average computation time and the solution quality increase with the number of orders and the number of compartments.

Note that ILST(ii) performs best if the sorting speed with respect to the non-clean compartments is high. In this case it is important to find a good assignment of orders to compartments, since non-clean compartments are heavily penalized in terms of sorting effort.

The only setting in which ILST(ii) does not provide the best solutions on average corresponds to a pick cart layout in which each compartment has a capacity of 20. A possible explanation is the following. Since the maximum order size is only 10, and the number of orders equals 60 of which 30% request a single item, it may not be beneficial to create clean compartments: a clean compartment consisting of a single order is at most half full, which implies that more batches are needed to collect all orders.

Figure 3: The average objective value per heuristic for all instances. The percentage in each bar corresponds to the sorting effort’s share in the objective value.

Figure 3 shows the average objective value per heuristic for all instances. The percentage in each bar represents the sorting effort's share in the total objective value. Although the solutions produced by ILST(ii) require on average more travel distance than those of ILST(i), the efficiency of the sorting procedure compensates for this by reducing the sorting effort. In fact, the average overall objective value is smallest for ILST(ii), outperforming ILST(i) by approximately 3.1% on average. Furthermore, we observe that our version of the savings algorithm outperforms CW(i) and CW(ii) by 3.6% and 1.1%, respectively. This suggests that the sorting algorithm can successfully be implemented in existing heuristics to take the sorting effort into account.
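The percentage deviations reported in Table 7 can be reproduced with a one-line computation. The sketch below assumes %dev is the relative deviation of a heuristic's average objective value from the ILST(ii) reference, which is consistent with the negative entry for setting 14 where ILST(i) beats ILST(ii); the exact definition is not restated in this section, so the function name and formula are illustrative.

```python
def pct_dev(obj_heuristic, obj_reference):
    """Percentage deviation of a heuristic's average objective value from
    the reference heuristic (here assumed to be ILST(ii)).
    A negative value means the heuristic outperformed the reference."""
    return 100.0 * (obj_heuristic - obj_reference) / obj_reference

# Illustrative numbers only (not taken from Table 7):
# a heuristic averaging 3458.1 against a reference of 3392.28.
deviation = pct_dev(3458.1, 3392.28)
```

Under this convention a %dev of 0.94 for ILST(i) in setting 1 means its average objective value is 0.94% above that of ILST(ii).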


6 Experimental Insights

In this section we discuss how characteristics of the pick cart affect the solution quality of ILST(ii). To do so we consider a warehouse design similar to the one used in section 5, i.e., we use the characteristics given by Warehouse 2 in Table 3. Also, the order lists are created in the same manner as in section 5. This implies that the number of orders is set to 60, of which a fraction of 0.3 has size 1; the remaining orders request a random number of items in the range [2, 10]. We let the total capacity of each pick cart take on the values 20, 30, 48, and 60. Furthermore, for each capacity we vary the number of compartments such that the total capacity of the pick cart remains the same. All configurations are shown in Table 8.
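The order-list generation described above is fully specified by the text: 60 orders, 30% singles, and the rest uniform in [2, 10]. The following is a minimal sketch of such a generator; the function name and the use of Python's `random` module are assumptions, as the thesis does not describe its implementation.

```python
import random

def generate_order_list(n_orders=60, single_frac=0.3,
                        min_size=2, max_size=10, seed=None):
    """Generate a random order list as in section 5: a fraction
    `single_frac` of the orders request a single item, and the remaining
    orders request a uniform number of items in [min_size, max_size].
    (Illustrative sketch; not the thesis's actual generator.)"""
    rng = random.Random(seed)
    n_singles = round(single_frac * n_orders)
    orders = [1] * n_singles
    orders += [rng.randint(min_size, max_size)
               for _ in range(n_orders - n_singles)]
    rng.shuffle(orders)  # mix singles and multi-item orders
    return orders

orders = generate_order_list(seed=42)
# len(orders) == 60, of which exactly 18 are single-item orders
```

With these defaults every instance contains exactly 18 single-item orders, matching the 0.3 fraction used in the experiments.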

Table 8: The average objective value of ILST(ii) as a fraction of the average objective value of ILST(i) for different designs of the pick cart, obtained by varying the number of compartments, r, and the capacity C per compartment.

experiment 1                       experiment 2
r    C   S-shape  largest gap      r    C   S-shape  largest gap
1   20    0.999      1.007         1   30    1.002      1.000
2   10    0.988      0.989         2   15    0.999      1.006
                                   3   10    0.980      0.990

experiment 3                       experiment 4
r    C   S-shape  largest gap      r    C   S-shape  largest gap
1   48    1.002      1.016         1   60    0.998      1.006
2   24    1.001      1.020         2   30    1.000      1.018
3   16    0.977      1.012         3   20    0.988      1.006
4   12    0.960      0.981         4   15    0.979      0.991
                                   5   12    0.971      0.986
                                   6   10    0.960      0.981

For each configuration we solve the OBP for 100 random order list instances with both ILST(i) and ILST(ii), using an S-shape and a largest gap routing method. In Table 8 we provide the average total objective value of ILST(ii) as a fraction of the average objective value of ILST(i). For the S-shape routing method we observe a clear pattern: compared to ILST(i), the performance of ILST(ii) improves as the number of compartments increases. The intuition behind this is that the more compartments a pick cart has, the more clean compartments can be created, which is the main feature of ILST(ii).
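The fractions in Table 8 aggregate the two heuristics over the same 100 instances. A minimal sketch of this aggregation, assuming a plain arithmetic mean of the per-instance objective values (the function name is illustrative):

```python
def avg_objective_ratio(obj_ilst2, obj_ilst1):
    """Ratio reported in Table 8: the average ILST(ii) objective value
    divided by the average ILST(i) objective value over the same set of
    instances. Values below 1 indicate that ILST(ii) performs better."""
    mean2 = sum(obj_ilst2) / len(obj_ilst2)
    mean1 = sum(obj_ilst1) / len(obj_ilst1)
    return mean2 / mean1
```

For example, a ratio of 0.960 (r = 4, C = 12, S-shape) means ILST(ii) reduces the average objective value by 4.0% relative to ILST(i) in that configuration.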
