

On a residual algorithm for the one-dimensional cutting

stock problem and the effect of order characteristics on the

waste percentage

Public version

Kevin John Mann


Master’s Thesis Econometrics, Operations Research and Actuarial Studies
Specialization: Operations Research

University of Groningen

K.J. Mann (s1773313) Groningen, February 26, 2015

Supervisors:

Prof. Dr. R.H. Teunter (RUG)
FB (company X)

JB (company X)

Co-assessor:


On a residual algorithm for the one-dimensional cutting

stock problem and the effect of order characteristics on the

waste percentage

Kevin John Mann

February 26, 2015

Abstract

In this thesis a residual algorithm is presented to solve one-dimensional cutting stock problems at company X. We consider a special case of this problem, dealing with multiple stock lengths in finite supply and a number of restrictions imposed by the cutting machine. Compared to the currently implemented solution algorithm, in a set of 41 real problem instances, the residual algorithm reduces the amount of waste by 1.0%. Furthermore, we provide an extensive analysis of how a number of characteristics of an order affect the amount of waste that is generated when this order is cut. This analysis is carried out for both one-dimensional and two-dimensional cutting stock problems, confirming the similarity of these problems with respect to how the waste percentage is affected by order size, size of the demanded items and the use of specified material.


Executive summary

This thesis investigates the one-dimensional and two-dimensional cutting stock problem, the latter often called the nesting problem, at company X. The main goal is to implement a modern solution algorithm for the one-dimensional cutting stock problem and subsequently to investigate whether this algorithm can be used in future simulation studies aiming to obtain insights in the behaviour of the waste percentage in two-dimensional problem instances.

We have implemented a residual algorithm from the literature and adjusted some of the problem parameters such that the obtained solution is compatible with the additional restrictions imposed by the cutting machine. The performance of this algorithm was tested on all material types of sections 20, 40, 60 and 140, all from order 1430. Compared to the currently implemented solution approach, a 1% decrease in waste was achieved.

The majority of cutting layouts for the two-dimensional cutting stock problem are generated manually, unless the material type is medium-density fibreboard, in which case the automatically generated solutions from SigmaNEST are used. In order for SigmaNEST to perform to the best of its ability, we have tested several of its settings and compared their solutions on 22 historical nesting problems. Based on this comparison, we recommend using the Lookahead algorithm with lookahead factor 3 and strategy 5.

We then performed a statistical analysis to investigate how the size of an order, the size of the demanded items and the use of specification material influence the waste percentage associated with cutting that order. We performed this analysis for both the one-dimensional and the two-dimensional cutting stock problem and found very similar results:

(a) When the size of an order increases (for example, when we combine several sections), the expected waste percentage decreases as more efficient cutting layouts are available.

(b) Orders are nested inefficiently when they contain too many large items. The amount of small items in an order does not affect the waste percentage significantly.

(c) The use of specification material decreases the waste percentage associated with an order and allows standard sized objects to be nested more efficiently.


Acknowledgements

With this thesis I will conclude my master’s in Econometrics, Operations Research and Actuarial Studies, and with it the wonderful years I have had as a student in Groningen.

By writing this thesis at company X, I have been able to put many years of theory into practice, which has been a very broadening and educational experience and will definitely be of value in my further career.

I am very thankful to my supervisors FB and JB, who have supported me from the day I joined company X back in June and have always made me feel a part of this company. Furthermore, they have had a significant role in shaping this thesis, whilst giving me the freedom to steer the general theme and approach into the direction of my personal interests.

I would also like to thank all my other colleagues from the Contract Management and IT departments, who have been very patient in showing me their work and were always available for small talk and a cup of tea. I would particularly like to thank EM for the interest he showed in my mathematical model and solution algorithm, which have been greatly improved in the process of explaining them.

Writing this thesis would not have been possible without the great help of my supervisor Ruud Teunter. He has taught me about conducting scientific research in general, showed me the right direction at times I was slightly lost and has proofread this thesis many times.

Last, but not least, I would like to thank my parents, friends and sister who were always there to support and motivate me when I needed it.


Contents

1 Introduction
  1.1 Situation
  1.2 Research goal
  1.3 Scope
  1.4 Thesis outline
2 Literature review
  2.1 Our contribution
3 Problem definition
  3.1 Assumptions
  3.2 Mathematical problem description
  3.3 Practical considerations
4 Solution approach
  4.1 Column generation
  4.2 Obtaining an integer solution
5 Performance of the algorithm
  5.1 Results by Poldi and Arenales
  5.2 Production data comparison
  5.3 Comparing with a theoretical lower bound
6 The two-dimensional cutting stock problem
  6.1 Literature review
  6.2 SigmaNEST
7 Input analysis
  7.1 The one-dimensional problem
  7.2 The two-dimensional problem
  7.3 Specification objects
8 Conclusion
  8.1 Future research
References
A Appendices
  A.1 Input example


1 Introduction

1.1 Situation

Company X supplies aluminium and steel building kits to the shipbuilding industry and architecture projects worldwide. A major production step in the manufacturing of these building kits is to cut the parts of which the building kit consists from raw material, i.e. from aluminium or steel bars and sheets. Because raw materials account for 65% of total production costs, cutting efficiently, that is, in such a way that the amount of wasted material is kept small, is one of the company’s primary goals.

In combinatorial optimization, the problem of cutting a set of objects available in stock into a set of items required by a customer is known as the cutting stock problem (CSP), of which there are many variants. Cutting stock problems arise naturally in the production planning in many industries, such as the paper, glass, textile and, of course, shipbuilding industry. Due to their relevance a large body of research on cutting stock problems is available in the literature, which we will discuss in Sections 2 and 6.1.

Remark. Throughout this thesis we will keep the distinction between the terms objects and items. The term objects refers to the stocked sheets and bars of raw material, whereas the term items refers to the parts that are demanded by the customer and will be cut from the raw material at company X.

Cutting stock problems are generally divided into two groups based on their dimensionality. The first group is that of one-dimensional cutting stock problems, in which steel bars of a given length are to be divided into the demanded items using only vertical cuts. Even though the physical objects and items obviously exist in three dimensions, they can be distinguished from one another solely by their length dimension. In Figure 1 a graphical example is given where two steel bars are cut into eight smaller steel bars. The area in grey indicates the two pieces of waste that have now been created.

Naturally, the second group of cutting stock problems is the group of two-dimensional cutting stock problems. Here the stocked objects are steel sheets of given length and width. These sheets are cut into two-dimensional items of whichever shapes and sizes are demanded. Two-dimensional cutting stock problems of this nature are often referred to as nesting problems, as the items have to be nested on the object’s surface before being cut from it. In Figure 2, a real-world example is given where seven items are nested in a single sheet. After cutting the items from the sheet, the remaining material is considered waste.

Figure 1 – Example of a one-dimensional CSP layout, where two objects are cut into eight items (striped) creating two pieces of waste (in grey).

Figure 2 – Real-world example of a two-dimensional CSP, where seven items are nested on the surface of a single sheet.


Both the one-dimensional and the two-dimensional cutting stock problem are faced on a day-to-day basis at company X, and finding cutting layouts that generate a low percentage of waste is not only crucial from a cost perspective, but in fact demanded by many of their customers. We will investigate a number of aspects that are dealt with at company X when solving these problems.

In regard to the one-dimensional problem, a computer algorithm is in place to generate cutting layouts based on a provided set of demanded items and available objects. However, it is unknown how exactly this algorithm works, making it impossible for the company’s IT department to implement such an algorithm into their own software products. Therefore, we will implement a modern algorithm available in the literature and compare its solutions with those of the currently implemented solution method.

In regard to the two-dimensional problem, most cutting layouts are produced manually. The nesting software product, SigmaNEST, does have the ability to nest automatically, but due to the many practical side-constraints and the superior quality of human-made layouts this option is rarely used for cutting the expensive steel or aluminium sheets. The cutting layouts on cheaper material however, such as medium-density fibreboard, are generated automatically, but the quality of the solution varies. We will therefore compare various settings within SigmaNEST to make sure its solution algorithm performs to the best of its ability.

Another interesting aspect is the splitting of large orders into a number of smaller orders, called sections (coming from the Dutch word (scheeps)secties). All demanded items within a single section are nested together and not with items from other sections, even if a second section has to be produced in the same week and belongs to the same customer. One can imagine that in a section with a large number of items many cutting layouts are possible and a low percentage of scrap is expected. On the other hand, in a section with only a small number of items, a smaller number of cutting layouts is possible and one might expect a higher percentage of scrap. Similarly, one might suspect that a section consisting mostly of large items to be nested will yield a high waste percentage, as only few cutting layouts are possible without exceeding the size of the bar or sheet. On the other hand, sections containing a larger number of small parts may be expected to generate a smaller percentage of scrap as more cutting layouts are possible.

Furthermore, the possibility exists to order metal sheets of specified size in order to nest large items efficiently. That is, when an order is known far enough in advance, sheets can be ordered that are similar in size to the large items in that order, increasing the efficiency with which those items are nested. We will investigate how these order characteristics (order size, the distribution of item sizes and the use of specification material) influence the amount of waste that is generated when an order is cut.

As such, we aim to provide insight into the usefulness of recombining sections and ordering specification material. That is, when certain characteristics of two sections are known beforehand, one can make an informed decision on whether or not to combine these sections or for which items specification material should be ordered.


1.2 Research goal

Our main research question is:

Can we implement a modern solution algorithm for the one-dimensional cutting stock problem and use this model to obtain insights in the behaviour of the solution of two-dimensional problem instances?

In order to answer the main research question we should find answers to the following supporting research questions:

1. What is the current state of the academic literature on the one-dimensional and two-dimensional cutting stock problems?

2. How can we model the one-dimensional cutting stock problem mathematically and implement a modern solution algorithm to solve real-world problem instances?

(a) How does the performance of a modern solution algorithm compare to that of the currently implemented solution approach at company X?

3. How do certain characteristics of both one-dimensional and two-dimensional problem instances influence the amount of waste that is generated when that order is cut?

(a) How does the size of an order influence the waste percentage?

(b) How does the size of the items in an order influence the waste percentage?

(c) What is the influence of using specification objects for large items on the waste percentage?

4. How does the influence of the above characteristics compare between one-dimensional and two-dimensional cutting stock problems?

1.3 Scope

The scope of this thesis is limited to the geometrical aspect of cutting stock problems. That is, given an order, our main focus is on the generation of efficient cutting layouts and the study of which characteristics of this order affect the amount of waste when it is cut.

In reality, company X needs to find a delicate balance between minimizing the amount of waste and the costs associated with doing so. In view of minimizing the waste percentage it might, for example, be beneficial to combine as many sections as possible. However, when different sections are cut from the same set of objects, afterwards more time and space would be needed in order to sort all demanded items based on customer and delivery moment than would have been needed were all items belonging to the same section. Similarly, any decrease in waste achieved by using specification material for large items must be weighed against the higher price and longer delivery time of such material compared to material of standard sizes.


1.4 Thesis outline


2 Literature review

In this section we treat the literature on the one-dimensional cutting stock problem. An additional review of the literature on the two-dimensional problem is given in Section 6.1.

The origins of the one-dimensional cutting stock problem can be traced back to Russian economist Kantorovich (1960). Already in 1939 he gave a mathematical description of the problem at hand, although his work was not published in English for another twenty-one years.

A great advance in solving practical sized problem instances came with the work of Gilmore and Gomory (1961, 1963) in which they described the now famous column generation technique. Today, their work is considered a true milestone in operations research and from it a rich body of literature on cutting stock problems has sprung.

Reviews on the one-dimensional cutting problem and its solution procedures can be found in Haessler and Sweeney (1991) and Haessler (1992). More recently, Wäscher et al. (2007) improved the typology of cutting and packing problems introduced by Dyckhoff (1990) and provided a categorization of the relevant literature up to 2005. According to them, cutting stock problems can be distinguished on a number of criteria: dimensionality, kind of assignment, assortment of small items, assortment of large objects, and the shape of small items.

The one-dimensional problem under investigation in this thesis is categorized by Wäscher et al. (2007) as the multiple stock-size cutting stock problem (MSSCSP), which is a generalization of the standard CSP and has received a relatively small amount of attention by researchers. Early works on the MSSCSP include the heuristic sequencing approach by Gradišar et al. (1999) and the rounding approach by Holthaus (2002), who uses the column generation technique to solve a relaxation of the problem and a number of rounding procedures to obtain a much smaller residual problem, which is solved by an ILP-solver. Belov and Scheithauer (2006) developed an exact approach combining column generation with Chvátal-Gomory cuts, a method that has been improved further by Alves and De Carvalho (2008).

Poldi and Arenales (2009) generalize the problem even further, by assuming the stock of objects is limited. They propose a residual algorithm, iteratively using column generation and a rounding procedure until all demand is met. Recently, Gracia et al. (2013) proposed a hybrid approach based on a genetic algorithm to solve the MSSCSP with a finite stock of objects.

2.1 Our contribution

In the first part of this thesis, we will implement the residual algorithm from Poldi and Arenales (2009) to solve the multiple stock-size cutting stock problem at company X. We will adjust this algorithm such that it deals with a number of different restrictions imposed by the cutting machine and compare its performance with the currently implemented solution algorithm and a generic lower bound.


3 Problem definition

Following standard notation in the majority of cutting stock literature, we formally define the one-dimensional cutting stock problem (1D-CSP) with multiple stock lengths of limited quantity as follows:

A stock of objects of one material, available in $K$ different types, is maintained to meet an order for $I$ different types of items of the same material. Object type $k$ has length $L_k \in \mathbb{R}_{>0}$ and is available in quantity $e_k \in \mathbb{N}$, for $k = 1, \dots, K$. In an order, item type $i$ has length $\ell_i \in \mathbb{R}_{>0}$ and demand $d_i \in \mathbb{N}$, for $i = 1, \dots, I$. As the demanded item lengths generally differ from the stocked object lengths, the objects are cut into smaller pieces corresponding to the demanded items; a specific way to cut an object is called a cutting pattern (of which a formal definition will be provided in Section 3.2). The objective of the problem is to cut the demanded items from the available stock, such that the total waste of material is minimized. Finally, a solution of the problem specifies the number of times each cutting pattern is utilised.

3.1 Assumptions

Without loss of generality, we assume that $L_1 > L_{k'}$, for $k' = 2, \dots, K$. That is, we assume the object of type 1 to be the longest object. As a necessary condition for a feasible solution to exist, we then also assume that $\ell_i \le L_1$ for $i = 1, \dots, I$, i.e. we require that each item can be cut from at least one type of object.
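For concreteness, the problem data and the feasibility assumption above can be bundled in a small container. The following is a minimal Python sketch; the class and attribute names are our own, not notation from the thesis:

```python
from dataclasses import dataclass

# A minimal container for a 1D-CSP instance as defined above.
@dataclass
class Instance:
    object_lengths: list   # L_k, k = 1, ..., K
    stock: list            # e_k, available quantity per object type
    item_lengths: list     # l_i, i = 1, ..., I
    demand: list           # d_i, demanded quantity per item type

    def items_fit(self):
        """Necessary feasibility condition: every item fits in the longest object."""
        return max(self.item_lengths) <= max(self.object_lengths)

inst = Instance([5], [10], [3, 1], [9, 2])
print(inst.items_fit())  # True
```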

3.2 Mathematical problem description

Kantorovich’s formulation

Assuming a single object type, i.e. assuming K = 1, Kantorovich (1960) was the first to give a mathematical programming formulation of the 1D-CSP. In his model, we denote by $H$ a known upper bound on the number of objects needed, for example the total number of demanded items, and define variables

$$y_h = \begin{cases} 1, & \text{if object } h \text{ is used,} \\ 0, & \text{otherwise,} \end{cases} \tag{1}$$

$$x_{ih} = \text{the number of items of type } i \text{ that are cut from object } h, \tag{2}$$

for $h = 1, \dots, H$ and $i = 1, \dots, I$. Kantorovich's model is then given by:

$$\begin{aligned}
\underset{x,\,y}{\text{minimize}} \quad & \sum_{h=1}^{H} y_h && (3) \\
\text{subject to} \quad & \sum_{h=1}^{H} x_{ih} = d_i, && i = 1, \dots, I, && (4) \\
& \sum_{i=1}^{I} \ell_i x_{ih} \le L_1 y_h, && h = 1, \dots, H, && (5) \\
& y_h \in \{0, 1\}, && h = 1, \dots, H, && (6) \\
& x_{ih} \ge 0 \text{ and integer}, && i = 1, \dots, I,\ h = 1, \dots, H. && (7)
\end{aligned}$$

As there is only one type of object, minimizing the total amount of objects used in (3) is equivalent to minimizing waste. Constraints (4) guarantee that each item is cut exactly as often as it is demanded. When an object is used, constraints (5) set its corresponding y-variable equal to 1 and at the same time prevent too many items from being cut from it. Finally, constraints (6) and (7) restrict the values of the variables.

Despite its intuitive appeal, Kantorovich's model is not commonly used in practice due to having a very weak continuous relaxation, which, in the worst case, approaches 1/2 of the optimal solution (see Brandão and Pedroso, 2012). Another formulation of the one-dimensional cutting stock problem, which does in fact take into account multiple object lengths and has a much stronger continuous relaxation, is known as Gilmore and Gomory's formulation and is presented below.

Gilmore and Gomory’s formulation

We formally define the concept of a cutting pattern.

Definition 3.1. (Cutting pattern) Given the demanded items and an object of type $k$, one can construct $N_k$ cutting patterns $a_{jk}$, for $j = 1, \dots, N_k$. Vector $a_{jk} = (a_{1jk}, \dots, a_{Ijk})$ is the $j$th cutting pattern for object type $k$, if it satisfies

$$\sum_{i=1}^{I} \ell_i a_{ijk} \le L_k, \qquad 0 \le a_{ijk} \le d_i \text{ and integer}, \quad i = 1, \dots, I. \tag{8}$$

That is, $a_{ijk}$ specifies how many times item type $i$ is cut from object type $k$ in its $j$th cutting pattern, for $i = 1, \dots, I$, $j = 1, \dots, N_k$ and $k = 1, \dots, K$. Considering all object types, we denote by $J = \sum_{k=1}^{K} N_k$ the total number of cutting patterns.

Let the costs associated with cutting pattern $a_{jk}$ be equal to the amount of waste that is generated by the cutting pattern, i.e.

$$c_{jk} = L_k - \sum_{i=1}^{I} \ell_i a_{ijk}. \tag{9}$$

    Stock               Demand
    k   L_k   e_k       i   l_i   d_i
    1    5    10        1    3     9
                        2    1     2

Table 1 – Example problem instance

Example 3.2. Consider a problem instance where a stock of $K = 1$ object type is maintained to meet an order with $I = 2$ demanded item types. In Table 1 one finds the other problem parameters. A total of $J = N_1 = 5$ cutting patterns can then be constructed, given by $a_{11} = (1, 0)$, $a_{21} = (1, 1)$, $a_{31} = (1, 2)$, $a_{41} = (0, 1)$ and $a_{51} = (0, 2)$. When we consider, for example, the second cutting pattern, one item of length 3 and one item of length 1 are cut from an object of length 5, leaving waste of length $c_{21} = 1$.
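The pattern enumeration of this small example can be reproduced by brute force. The sketch below (function name ours) is practical only for tiny instances, since the number of patterns grows exponentially:

```python
from itertools import product

# Example instance: one object of length 5; items of lengths 3 and 1
# with demands 9 and 2.
L1 = 5
lengths = [3, 1]
demand = [9, 2]

def cutting_patterns(Lk, lengths, demand):
    """Enumerate all non-empty vectors a with sum(l_i * a_i) <= Lk
    and 0 <= a_i <= d_i, i.e. all cutting patterns as in (8)."""
    ranges = [range(d + 1) for d in demand]
    return [a for a in product(*ranges)
            if sum(l * x for l, x in zip(lengths, a)) <= Lk
            and any(a)]  # exclude the empty pattern

patterns = cutting_patterns(L1, lengths, demand)
waste = {a: L1 - sum(l * x for l, x in zip(lengths, a)) for a in patterns}
print(len(patterns))   # 5
print(waste[(1, 1)])   # 1, as for pattern a21 in the example
```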

When we introduce decision variable $x_{jk}$, which specifies how many times an object of type $k$ is cut according to its $j$th cutting pattern $a_{jk}$, the integer linear programming (ILP) formulation of the problem is given by

$$\begin{aligned}
\underset{x}{\text{minimize}} \quad & \sum_{k=1}^{K} \sum_{j=1}^{N_k} c_{jk} x_{jk} && (10) \\
\text{subject to} \quad & \sum_{k=1}^{K} \sum_{j=1}^{N_k} a_{ijk} x_{jk} = d_i, && i = 1, \dots, I, && (11) \\
& \sum_{j=1}^{N_k} x_{jk} \le e_k, && k = 1, \dots, K, && (12) \\
& x_{jk} \ge 0 \text{ and integer}, && j = 1, \dots, N_k,\ k = 1, \dots, K. && (13)
\end{aligned}$$

The objective (10) is to minimize the total amount of waste that is generated. Constraints (11) ensure demand for each item type is met and no overproduction of items takes place, whereas constraints (12) guarantee that only available objects will be cut. Finally, constraints (13) specify the non-negativity and integrality of the decision variables.

Following Cherri et al. (2009) we rewrite the ILP using matrix notation as

$$\begin{aligned}
\underset{x}{\text{minimize}} \quad & c^\top x \\
\text{subject to} \quad & Ax = d, \\
& Ex \le e, \\
& x \ge 0 \text{ and integer.}
\end{aligned} \tag{14}$$

In (14), a column of the $I \times J$ matrix $A$ corresponds to a cutting pattern, whereas $E$ is a binary matrix in which $E_{kj} = 1$ if and only if the $j$th cutting pattern in $A$ is cut from an object of type $k$, for $k = 1, \dots, K$ and $j = 1, \dots, J$.


Associated with the cutting patterns is the generated amount of waste, given by $c = (2, 1, 0, 4, 3)$. Demand and stock vectors $d$ and $e$ follow immediately from Table 1:

$$d = \begin{pmatrix} 9 \\ 2 \end{pmatrix} \quad \text{and} \quad e = \begin{pmatrix} 10 \end{pmatrix}. \tag{15}$$

The constraint matrices in (14) are given by

$$A = \begin{pmatrix} 1 & 1 & 1 & 0 & 0 \\ 0 & 1 & 2 & 1 & 2 \end{pmatrix} \quad \text{and} \quad E = \begin{pmatrix} 1 & 1 & 1 & 1 & 1 \end{pmatrix}. \tag{16}$$

Notice that a column of the I + K by J matrix (A, E) represents a unique cutting pattern. Its first I entries specify how many items of each type are cut, whilst from the last K entries one reads which object type is used.

The optimal solution to this problem instance is given by $x^{ilp} = (8, 0, 1, 0, 0)$. That is, in the optimal solution, cutting pattern $a_{11} = (1, 0)$ is used eight times and cutting pattern $a_{31} = (1, 2)$ is used once, whereas the remaining cutting patterns are not used. One of the objects is not needed to meet demand and can stay in stock to be used for future orders. Total waste is then given by $8c_{11} + c_{31} = 16$.
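As a sanity check, the feasibility and objective value of this solution can be verified directly from the matrices above. A small self-contained snippet (variable names follow the notation; the helper `dot` is ours):

```python
# Data of the example instance and the candidate solution.
A = [[1, 1, 1, 0, 0],   # items of type 1 per pattern
     [0, 1, 2, 1, 2]]   # items of type 2 per pattern
E = [[1, 1, 1, 1, 1]]   # all five patterns use object type 1
c = [2, 1, 0, 4, 3]     # waste per pattern
d = [9, 2]              # demand
e = [10]                # stock
x = [8, 0, 1, 0, 0]     # candidate solution x_ilp

def dot(row, v):
    return sum(r * u for r, u in zip(row, v))

assert all(dot(row, x) == di for row, di in zip(A, d))  # Ax = d
assert all(dot(row, x) <= ek for row, ek in zip(E, e))  # Ex <= e
print(dot(c, x))  # total waste: 16
```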

3.3 Practical considerations

Before presenting our solution approach, a number of practical considerations need to be addressed and taken into account. These considerations deal with the restrictions the cutting machine imposes on the cutting patterns:

1. In order to compensate for the thickness of the blade, a space of length $\delta \in \mathbb{R}_{\ge 0}$ must be left free between each consecutive pair of items to be cut from an object.

2. Because of the poor quality of the edges of many objects, it has been decided that no items can be cut from the first and last $\lambda \in \mathbb{R}_{\ge 0}$ units of the object's length.

These restrictions are illustrated in Figure 3. We will assume that space left unused in order to meet these restrictions is not considered waste.

Figure 3 – An example where three items are cut from a single object. Between two consecutive items a space of length δ has been left free, and the first and last λ units of the object's length are unused. The object's grey coloured area is the waste that is generated by this cutting pattern.

Lemma 3.4. Solving (14) whilst taking into account the machine restrictions is equivalent to solving (14) with adjusted item lengths $\hat\ell_i = \ell_i + \delta$ and adjusted object lengths $\hat L_k = L_k - 2\lambda + \delta$, for $i = 1, \dots, I$ and $k = 1, \dots, K$.


Proof. Consider a cutting pattern $a_{jk}$ that satisfies (8) in the adjusted problem. We can then write condition (8) of the adjusted problem as

$$\sum_{i=1}^{I} \ell_i a_{ijk} + \delta \;\le\; \sum_{i=1}^{I} \hat\ell_i a_{ijk} \;\le\; \hat L_k = L_k - 2\lambda + \delta \;\le\; L_k + \delta, \tag{17}$$

which immediately implies that (8) is satisfied when the original lengths are used.

Furthermore, when cutting pattern $a_{jk}$ is applied to the objects and items of their original lengths, an additional $2\lambda + (n_{jk} - 1)\delta$ units of space become available, where $n_{jk} = \sum_{i=1}^{I} a_{ijk}$ denotes the number of items in cutting pattern $a_{jk}$. This exactly equals the amount of space that is needed to meet the machine restrictions.

Lastly, as space that is unused in order to meet the machine restrictions is not counted as waste, any optimal solution of the adjusted problem is also an optimal solution to the original problem, and we conclude the proof.
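The length adjustment of Lemma 3.4 is straightforward to apply in code. A minimal helper with our own naming; the example values for δ and λ are illustrative:

```python
# Map an instance with blade thickness delta and edge margin lam onto an
# equivalent standard instance with adjusted lengths, as in Lemma 3.4.
def adjust_instance(object_lengths, item_lengths, delta, lam):
    """Return (L_hat, l_hat) with L_hat_k = L_k - 2*lam + delta
    and l_hat_i = l_i + delta."""
    L_hat = [L - 2 * lam + delta for L in object_lengths]
    l_hat = [l + delta for l in item_lengths]
    return L_hat, l_hat

# Example: object of length 5, items of lengths 3 and 1,
# blade thickness 0.1 and edge margin 0.2.
L_hat, l_hat = adjust_instance([5.0], [3.0, 1.0], delta=0.1, lam=0.2)
print(L_hat, l_hat)
```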


4 Solution approach

Since our problem is an extension of the standard one-dimensional cutting stock problem, which is known to be NP-complete (see Fowler et al., 1981), there are no efficient algorithms to solve instances of (14) to optimality, unless P = NP. Not only the integrality of $x$, but also the size of $J$ makes the problem very hard to solve. In fact, a moderate-size real-world problem published by Haessler (1975), with K = 1 and I = 27, already resulted in a model with more than 75 million cutting patterns. Both of these difficulties are dealt with in subsequent sections.

4.1 Column generation

Gilmore and Gomory (1961) considered the continuous relaxation of the standard cutting stock problem. That is, they considered (14) in which stock constraints (12) are not considered and the integrality conditions on x are dropped. They then solved the problem to optimality by making use of a technique that is now known as (delayed) column generation.

Using column generation, no longer does one need to generate all J columns (cutting patterns) in (14) explicitly. Instead, the algorithm is started from an initial set of I + K cutting patterns, representing a basis of the constraint matrix. Subsequently, until a global optimum is reached, at each simplex iteration one of the columns in the basis is replaced by a new column with negative reduced costs, improving the current basic solution. The improving column is found by solving a series of K knapsack problems, utilising the shadow prices of constraints (11) and (12). We shall look into each of these steps in more detail below.

The initial basic matrix

The simplex algorithm requires an initial basic matrix $B$ with $I + K$ columns, the number of constraints in the continuous relaxation of (14). First, we define $I$ simple cutting patterns.

Definition 4.1. (Simple cutting patterns) There are $I$ simple cutting patterns, each associated with a unique item and the first object type. Let $\lfloor \cdot \rfloor$ denote the floor function and $b_j = \min\{d_j, \lfloor L_1/\ell_j \rfloor\}$. The simple cutting patterns are then given by

$$a_{j1} = (0, \dots, b_j, 0, \dots, 0), \qquad j = 1, \dots, I. \tag{18}$$

Example 4.2. Consider again the problem instance given in Table 1. We then have

$$b_1 = \min\{9, \lfloor 5/3 \rfloor\} \quad \text{and} \quad b_2 = \min\{2, \lfloor 5/1 \rfloor\}, \tag{19}$$

yielding simple cutting patterns $a_{11} = (1, 0)$ and $a_{21} = (0, 2)$.
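Definition 4.1 translates directly into code. A short sketch (function name ours), reproducing Example 4.2:

```python
from math import floor

# One simple pattern per item type, each cutting only that item from the
# longest object type, with frequency b_j = min(d_j, floor(L1 / l_j)).
def simple_patterns(L1, lengths, demand):
    I = len(lengths)
    patterns = []
    for j in range(I):
        b = min(demand[j], floor(L1 / lengths[j]))
        a = [0] * I
        a[j] = b
        patterns.append(a)
    return patterns

print(simple_patterns(5, [3, 1], [9, 2]))  # [[1, 0], [0, 2]]
```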

We associate the first $I$ columns of $B$ with the simple cutting patterns. Introducing slack variables $\mu_k$, for $k = 1, \dots, K$, Poldi and Arenales (2009) then rewrite stock constraints (12) as

$$\sum_{j=1}^{N_k} x_{jk} + \mu_k = e_k, \qquad k = 1, \dots, K, \tag{20}$$

and associate the last $K$ columns of $B$ with empty cutting patterns $0$, one for each of the slack variables $\mu_k$, for $k = 1, \dots, K$. Furthermore, they describe a big-M method for slightly adjusting the initial basic matrix when stock $e_1$ is insufficient to meet demand using only the simple cutting patterns. We refer the interested reader to their paper for the technicalities.

A knapsack problem

From mathematical programming theory, we know that at a current basic solution, adding to the basis a variable with negative reduced costs decreases objective value (10). Let $\pi = (\pi^d, \pi^s)$ denote the $(I + K)$-vector of shadow prices associated with the constraints and $N_{jk}$ the column in the constraint matrix of the continuous relaxation of (14) associated with non-basic variable $x_{jk}$. The reduced costs for non-basic variable $x_{jk}$ are then given by

$$\hat{c}_{jk} = c_{jk} - \pi^\top N_{jk}, \tag{21}$$

which, using (9), we rewrite as

$$\hat{c}_{jk} = L_k - \sum_{i=1}^{I} (\ell_i + \pi_i^d) a_{ijk} - \pi_k^s. \tag{22}$$

For a given object type $k$, the most profitable cutting pattern $a_k^*$, with reduced costs $\hat{c}_k^*$, is thus found by solving

$$\begin{aligned}
\underset{a_k}{\text{maximize}} \quad & \sum_{i=1}^{I} (\ell_i + \pi_i^d) a_{ik} \\
\text{subject to} \quad & \sum_{i=1}^{I} \ell_i a_{ik} \le L_k, \\
& 0 \le a_{ik} \le d_i \text{ and integer}, \quad i = 1, \dots, I,
\end{aligned} \tag{23}$$

where the constraints ensure $a_k$ is indeed a valid cutting pattern for an object of type $k$, as defined in (8). We recognise that (23) is the NP-hard bounded knapsack problem, which can be solved in pseudo-polynomial time using dynamic programming (see Martello and Toth, 1990).
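The pricing problem (23) can be sketched as a bounded knapsack solved by dynamic programming. The sketch below assumes integer lengths (the problem definition allows real-valued lengths); the function and variable names are ours, and in the pricing step `values[i]` would be $\ell_i + \pi_i^d$:

```python
def bounded_knapsack(Lk, lengths, values, bounds):
    """Maximise sum(values[i]*a[i]) s.t. sum(lengths[i]*a[i]) <= Lk and
    0 <= a[i] <= bounds[i], integer; returns (best value, pattern a)."""
    I = len(lengths)
    # best[c] = (value, pattern) achievable with capacity at most c,
    # using only the item types considered so far.
    best = [(0.0, [0] * I) for _ in range(Lk + 1)]
    for i in range(I):
        new = best[:]
        for c in range(Lk + 1):
            # decide how many copies of item i to use at capacity c
            for copies in range(1, bounds[i] + 1):
                used = copies * lengths[i]
                if used > c:
                    break
                val, pat = best[c - used]
                cand = val + copies * values[i]
                if cand > new[c][0]:
                    pat = pat[:]
                    pat[i] = copies
                    new[c] = (cand, pat)
        best = new
    return best[Lk]

# With zero shadow prices the objective reduces to the used length:
val, pattern = bounded_knapsack(5, [3, 1], [3.0, 1.0], [9, 2])
print(val, pattern)  # 5.0 [1, 2]
```

A generated pattern would then be accepted for the basis only if its reduced costs (22) are negative.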

Reaching the optimum

If $\hat{c}^* = \min\{\hat{c}_1^*, \dots, \hat{c}_K^*\} < 0$, the new cutting pattern $a_{k^*}^*$ is added to the basis by a pivot step, improving the current basic solution and updating the shadow prices $\pi$. The process of finding profitable cutting patterns is repeated until $\hat{c}^* \ge 0$, at which point we have reached the global minimum of the relaxed problem with corresponding cutting frequencies $x^{lp}$. In the remainder of this thesis, we shall also use $x^{lp}$ to refer to the solution of the continuous relaxation of (14).

4.2 Obtaining an integer solution


Various rounding procedures for obtaining integer solutions have been proposed, for example by Holthaus (2002). We present the well-performing residual algorithm from Poldi and Arenales (2009), which is based upon ideas presented in the former references.

Essentially, the algorithm solves the relaxed problem using column generation and then finds an integer solution in the neighbourhood of $x^{lp}$, using some rounding procedure. This solution satisfies stock constraints (12) and is constructed in such a way that, for all item types, the number of produced items cannot exceed their demand. The part of demand that is unmet, together with the remaining stock, constitutes another one-dimensional cutting stock problem, called the residual problem. The algorithm then solves the residual problem and all subsequent residual problems using column generation and rounding its solution, until all demand is met. We elaborate on these concepts in more detail in subsequent sections.

Remark. For the sake of convenience, we introduce a slight change of notation. So far, we have referred to x_{jk} as the frequency of the jth cutting pattern on object type k, where j = 1, \dots, N_k for k = 1, \dots, K. Using this notation, however, we are unable to loop through all cutting patterns using a single index.

Therefore, from now on x_{jk(j)} refers to the frequency of the cutting pattern that is associated with the jth column of the constraint matrix (A, E), for j = 1, \dots, J. The second index, k(j), returns the object type on which this cutting pattern is used. With this notation we can loop through all cutting patterns using only the index j.

Suppose, for example, that there are five cutting patterns. The first three cutting patterns are associated with the first object type and, using the old notation, their corresponding frequencies would be denoted by x_{11}, x_{21} and x_{31}. The last two cutting patterns are associated with the second object type and have corresponding frequencies x_{12} and x_{22}. From now on we will refer to these frequencies as x_{11}, x_{21}, x_{31}, x_{42} and x_{52}, respectively.

Greedy rounding heuristic

The rounding procedure used by the residual algorithm to find an integer solution in the neighbourhood of x^lp is called a greedy rounding heuristic, which we present in this section. First, we formally define the notion of an approximate integer solution, to avoid the more ambiguous term neighbourhood.

Definition 4.3. (Approximate integer solution) Denote by x^lp the solution of the continuous relaxation of (14) and by y a vector of non-negative integers that is obtained from x^lp by some rounding procedure. We then call y an approximate integer solution if it satisfies Ay ≤ d and Ey ≤ e.

Example 4.4. Perhaps the most intuitive way to obtain an approximate integer solution is to round down the frequency of each cutting pattern, i.e.

y_{jk(j)} = \left\lfloor x^{lp}_{jk(j)} \right\rfloor, \quad j = 1, \dots, J.    (24)

As y ≤ x^lp and x^lp satisfies (11) and (12), clearly Ay ≤ d and Ey ≤ e.


Rounding down every frequency can, however, be quite wasteful: a frequency just below an integer, say 14.99, would lead to a solution in which that frequency is rounded down to 14, leaving a relatively large part of demand unmet. We therefore present a different, more sophisticated, rounding procedure.

Consider x^lp and let T denote the number of its non-zero elements. The greedy rounding heuristic, as given in Algorithm 1, iteratively rounds all T non-zero frequencies, giving priority to high frequencies.

Algorithm 1: Greedy rounding heuristic

Data: Solution x^lp of the continuous relaxation of (14)
Result: Approximate integer solution y of x^lp
initialize T ← the number of non-zero elements in x^lp and y ← 0
Sort x^lp in non-increasing order
for j = 1 → T do
    y_{jk(j)} ← min{⌈x^lp_{jk(j)}⌉, e_{k(j)}}    // ⌈·⌉ denotes the ceiling function
    e_{k(j)} ← e_{k(j)} − y_{jk(j)}
    while Ay ≰ d do
        y_{jk(j)} ← y_{jk(j)} − 1
        e_{k(j)} ← e_{k(j)} + 1
    end
end
return y

In order to prioritise high frequencies, the heuristic starts by sorting x^lp in non-increasing order. The approximate integer solution y of x^lp is initialized as y = 0. Starting with the first cutting pattern, we try to round up its frequency, i.e. we try

y_{1k(1)} = \left\lceil x^{lp}_{1k(1)} \right\rceil \quad \text{and} \quad y = (y_{1k(1)}, 0, \dots, 0).    (25)

If y is then an approximate integer solution, i.e. if Ay ≤ d and Ey ≤ e, the rounded frequency is accepted. If, however, y is not an approximate integer solution, i.e. if Ay ≰ d or Ey ≰ e, we reduce y_{1k(1)} by one until y satisfies the conditions of an approximate integer solution.

Next, the frequency of the second cutting pattern is rounded. Again, the rounding heuristic first tries

y_{2k(2)} = \left\lceil x^{lp}_{2k(2)} \right\rceil \quad \text{and} \quad y = (y_{1k(1)}, y_{2k(2)}, 0, \dots, 0).    (26)

If y is not an approximate integer solution we reduce y_{2k(2)} by one until Ay ≤ d and Ey ≤ e. This process is repeated up to cutting pattern T and the greedy rounding heuristic returns y = (y_{1k(1)}, \dots, y_{Tk(T)}, 0, \dots, 0).

Example 4.5. Consider a 1D-CSP instance with K = 2, I = 3 and parameters defined in Table 2. First, we solve the continuous relaxation of (14) using column generation and obtain the solution presented in Table 3, in which x^lp is already sorted in order of non-increasing frequencies.


       Stock          |       Demand
  k   L_k   e_k       |   i   ℓ_i   d_i
  1    50    10       |   1    29     4
  2    70     3       |   2     9    10
                      |   3    16     8

Table 2 – Example problem instance

  j   k(j)   x^lp_{jk(j)}   a_{jk(j)}    c_{jk(j)}
  1    2         2.6        (1, 1, 2)       0
  2    1         1.4        (1, 2, 0)       3
  3    1         1.2        (0, 2, 2)       0
  4    2         0.4        (0, 6, 1)       0

Table 3 – Solution of the relaxed problem

Note that the relevant parts of A and E are given by

A = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 1 & 2 & 2 & 6 \\ 2 & 0 & 2 & 1 \end{pmatrix}
\quad \text{and} \quad
E = \begin{pmatrix} 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix}.    (27)

As all other cutting patterns have frequency zero, we omit them from our notation.

To obtain the approximate integer solution y of x^lp we follow a number of steps according to Algorithm 1.

1. Consider cutting pattern a_{12} = (1, 1, 2). We round up its frequency and obtain y = (3, 0, 0, 0), producing item vector Ay = (3, 3, 6). As no items are overproduced, i.e. as Ay ≤ d, and enough objects are available in stock, i.e. as Ey = (0, 3) ≤ e, y satisfies the conditions of an approximate integer solution and we may move on to the next cutting pattern. Notice that the full stock of objects of the second type is now used.

2. We now consider the second cutting pattern a_{21} = (1, 2, 0). Rounding up its frequency gives y = (3, 2, 0, 0) and corresponding production vector Ay = (5, 7, 6) ≰ d, overproducing the first item. Therefore we reduce its frequency by one and try y = (3, 1, 0, 0), which produces item vector Ay = (4, 5, 6) ≤ d. Enough objects are available in stock, as Ey = (1, 3) ≤ e.

3. We now consider cutting pattern a_{31} = (0, 2, 2). We obtain y = (3, 1, 2, 0) and production vector Ay = (4, 9, 10), overproducing the third item. We try y = (3, 1, 1, 0), yielding produced items Ay = (4, 7, 8) ≤ d and using stocked objects Ey = (2, 3) ≤ e.

4. Finally, we immediately observe that rounding up the frequency of cutting pattern a_{42} = (0, 6, 1) will not be possible, as there are no more objects of type 2 remaining in stock. The greedy rounding heuristic returns the approximate integer solution y = (3, 1, 1, 0) of x^lp = (2.6, 1.4, 1.2, 0.4).
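The steps above can be replayed in code. The sketch below is our own NumPy illustration of Algorithm 1 (the function and variable names are not from the thesis); on the data of Example 4.5 it reproduces y = (3, 1, 1, 0).

```python
import numpy as np

def greedy_rounding(x_lp, A, E, d, e):
    """Algorithm 1: round the LP frequencies to an integer y with
    Ay <= d (no overproduction) and Ey <= e (stock respected)."""
    x_lp, d, e = np.asarray(x_lp), np.asarray(d), np.array(e, dtype=int)
    y = np.zeros(len(x_lp), dtype=int)
    k = np.argmax(E, axis=0)                # object type k(j) of pattern j
    for j in np.argsort(-x_lp):             # non-increasing frequencies
        if x_lp[j] == 0:
            continue
        y[j] = min(int(np.ceil(x_lp[j])), e[k[j]])   # round up, capped by stock
        e[k[j]] -= y[j]
        while np.any(A @ y > d):            # overproduction: back off by one
            y[j] -= 1
            e[k[j]] += 1
    return y

# Data of Example 4.5 / equation (27)
A = np.array([[1, 1, 0, 0],
              [1, 2, 2, 6],
              [2, 0, 2, 1]])
E = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1]])
y = greedy_rounding([2.6, 1.4, 1.2, 0.4], A, E, d=[4, 10, 8], e=[10, 3])
print(y)  # → [3 1 1 0], as in Example 4.5
```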

Residual algorithm


Wäscher and Gau (1996) tested the performance of several solution algorithms for the one-dimensional cutting stock problem with a single object type and noted the excellent performance of the residual algorithm. As our problem is closely related, we feel this approach is also reasonable for the one-dimensional cutting stock problem with multiple stock types.

Definition 4.6. (Residual problem) We shall let customer demand be denoted by d^0 and the original stock by e^0. Let y^0 be the approximate integer solution of (14) with d = d^0 and e = e^0. We shall then refer to (14), with demand d = d^r, stock e = e^r and approximate integer solution y^r, as the rth residual problem when, for r ∈ N,

d^r = d^{r-1} - A y^{r-1},    (28)

and

e^r = e^{r-1} - E y^{r-1}.    (29)

Example 4.7. Consider again the 1D-CSP instance from Table 2 with approximate integer solution y^0 = (3, 1, 1, 0) and cutting patterns as in (27). The first residual problem is then given by (14) with demand and stock equal to

d^1 = \begin{pmatrix} 4 \\ 10 \\ 8 \end{pmatrix} - A y^0 = \begin{pmatrix} 0 \\ 3 \\ 0 \end{pmatrix}
\quad \text{and} \quad
e^1 = \begin{pmatrix} 10 \\ 3 \end{pmatrix} - E y^0 = \begin{pmatrix} 8 \\ 0 \end{pmatrix},    (30)

respectively. This problem is easily solved by using cutting pattern (0, 3, 0) on object type 1.

The residual algorithm, as given in Algorithm 2, first solves the original problem and obtains approximate integer solution y^0. The non-zero frequencies in y^0 are stored in vector x̃ and the corresponding cutting patterns in matrices Ã and Ẽ. Then, the first residual problem is determined and the process is repeated until there is no more residual demand. That is, until d^r = 0, each new residual problem is solved by column generation, after which the greedy rounding heuristic finds an approximate integer solution, of which the non-zero frequencies are added to x̃ and the corresponding cutting patterns to Ã and Ẽ.

When there is no more residual demand, the residual algorithm returns the cutting patterns stored in Ã and Ẽ with corresponding integer frequencies x̃. If, at some point, the residual stock e^r = 0 while d^r > 0, the residual algorithm is unable to find a feasible integer solution.


Algorithm 2: Residual algorithm

Data: Demand d and stock e
Result: Matrix of cutting patterns (Ã, Ẽ) and corresponding integer frequencies x̃
initialize d^0 ← d and e^0 ← e    // the 0th residual problem is the original problem
r ← 0
while d^r ≠ 0 do
    if e^r = 0 then
        return "No feasible solution can be found"    // algorithm is stopped
    else
        Solve the continuous relaxation of (14) with d = d^r and e = e^r using column generation
        Obtain approximate integer solution y^r using the greedy rounding heuristic
        Add the non-zero elements of y^r to x̃ and the corresponding cutting patterns to (Ã, Ẽ)
        d^{r+1} ← d^r − Ay^r and e^{r+1} ← e^r − Ey^r
        r ← r + 1
    end
end
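A minimal sketch of the residual loop, again our own illustration: the `solve_relaxation` stub simply replays the relaxed solutions of Examples 4.5 and 4.7, whereas in the full method this call would be the column generation procedure. All names here are our own.

```python
import numpy as np

def greedy_rounding(x_lp, A, E, d, e):
    """Algorithm 1 (condensed): round LP frequencies to Ay <= d, Ey <= e."""
    x_lp, d, e = np.asarray(x_lp), np.asarray(d), np.array(e, dtype=int)
    y = np.zeros(len(x_lp), dtype=int)
    k = np.argmax(E, axis=0)                  # object type k(j) of pattern j
    for j in np.argsort(-x_lp):
        if x_lp[j] > 0:
            y[j] = min(int(np.ceil(x_lp[j])), e[k[j]])
            e[k[j]] -= y[j]
            while np.any(A @ y > d):
                y[j] -= 1
                e[k[j]] += 1
    return y

def residual_algorithm(solve_relaxation, d, e):
    """Algorithm 2: solve residual problems until all demand is met."""
    d, e = np.array(d), np.array(e)
    patterns, freqs = [], []
    while np.any(d > 0):
        if not np.any(e > 0):
            raise RuntimeError("no feasible integer solution found")
        x_lp, A, E = solve_relaxation(d, e)   # column generation in the thesis
        y = greedy_rounding(x_lp, A, E, d, e)
        for j in np.flatnonzero(y):
            patterns.append(A[:, j])
            freqs.append(int(y[j]))
        d, e = d - A @ y, e - E @ y           # residual updates (28) and (29)
    return patterns, freqs

def solve_relaxation(d, e):
    """Stub replaying Examples 4.5 and 4.7 instead of column generation."""
    if d.tolist() == [4, 10, 8]:              # original problem, Table 3
        A = np.array([[1, 1, 0, 0], [1, 2, 2, 6], [2, 0, 2, 1]])
        E = np.array([[0, 1, 1, 0], [1, 0, 0, 1]])
        return [2.6, 1.4, 1.2, 0.4], A, E
    # first residual problem: cut (0, 3, 0) once on object type 1
    return [1.0], np.array([[0], [3], [0]]), np.array([[1], [0]])

patterns, freqs = residual_algorithm(solve_relaxation, d=[4, 10, 8], e=[10, 3])
print(freqs)  # → [3, 1, 1, 1]: three patterns from y^0 plus one residual pattern
```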


5 Performance of the algorithm

5.1 Results by Poldi and Arenales

To determine the quality of Algorithm 2, Poldi and Arenales (2009) compared its performance with that of a number of different, albeit much simpler, algorithms they developed in the same paper. Furthermore, for 28 problem classes, they reported the relative gaps between the solu-tion values provided by the exact method of Belov and Scheithauer (2002) and the algorithms developed in their paper.

Two other algorithms they implemented were so-called constructive algorithms, called first-fit decreasing (FFD) and greedy. Both of these algorithms were outperformed by Algorithm 2 on all of the 18 problem classes they tested, confirming the earlier mentioned results on the performance of residual algorithms by Wäscher and Gau (1996) for the CSP with only one object type.

Additionally, they investigated an approach where, as in the residual algorithm, the continuous relaxation of the problem is solved by column generation, but where all frequencies are simply rounded down as in Example 4.4 and the residual problems are solved by either FFD or greedy. This approach too was outperformed by Algorithm 2 on all 18 problem classes, which suggests the rounding procedure of residual algorithms indeed needs to be constructed carefully.

The performance of Algorithm 2 using two different greedy rounding heuristics was also investigated. Note that the greedy rounding heuristic defined in Algorithm 1 is referred to by Poldi and Arenales (2009) as GRH1, the other greedy rounding heuristics being GRH2 and GRH3. On the 18 problem classes that were tested, none of the rounding heuristics dominated the others. These results were further confirmed by the results on the 28 problem classes from the Belov and Scheithauer (2002) paper. By a large margin, the residual algorithms with greedy rounding outperformed the two constructive algorithms and the two algorithms where the residual problem was solved by a constructive algorithm. On average, the amount of waste in the solution found by Algorithm 2 was 4.4% higher than the amount of waste in the provable optimum.

5.2 Production data comparison

In this section we compare the performance of Algorithm 2 with the performance of the solution algorithm that is currently implemented at company X using real production data from four different sections. These are sections 20, 40, 60 and 140, all from order 1430.

Each of these sections consists of items that are ordered on a specific type of material, being Holland-profiel (HP), strip (ST) and hoek (HK), and each of these materials is available in a number of different widths and thicknesses. As such, each set of items ordered with a certain thickness, width and type of material constitutes a one-dimensional cutting stock problem of its own. Combined, a total of 41 problem instances of varying sizes were solved.


last λ = 30 millimetres of the object. In the case of the saw, these numbers are given by δ = 4 and λ = 0.

To measure the performance of the algorithm that is currently used by company X, we looked at the relevant production reports and counted the number of objects that were used in the actual production of the section. Conveniently, for the robot cutting machine, all objects that were used in production had a length of L_r = 12 metres, whereas all objects cut by the saw had a length of L_s = 6 metres.

In Table 4, the columns labelled Used indicate the number of objects used in actual production. For example, to produce the HP 80×6 items from section 1430-20, 49 twelve-metre objects were used, as they were cut by the robot. Similarly, to produce the HK 75×50×7 items from that same section, 2 six-metre objects were cut by the saw.

The columns labelled Sol. give the solution found by Algorithm 2 in terms of objects of the same size that were used in actual production. So, to produce the HP 80×6 items from section 1430-40, a solution was found using 40 twelve-metre objects, whereas in actual production 41 of these objects were used. The Sol. column entry is printed in boldface when our solution differs from the solution found by the currently implemented algorithm.

Out of the 41 problem instances we tested, in six cases the solution we found improved upon the solution from actual production by a single object, and in one case, where some unfortunate rounding took place, our solution was worse by a single object. In these problem instances, a total of 337 twelve-metre objects were used in actual production on the cutting robot, whereas 333 twelve-metre objects are needed using the solution from Algorithm 2, a decrease of 1.2%. Furthermore, an additional 201 six-metre objects were cut by the saw, versus 200 six-metre objects needed in our solution, a decrease of 0.5%. Overall, a decrease of 1.0% in metres of used objects was achieved.

5.3 Comparing with a theoretical lower bound

In order to give some additional insight into the quality of the solutions that were found, we also compare them to the minimal number of twelve- or six-metre objects that are needed to fit the items, just by looking at their (adjusted) lengths.

As in Lemma 3.4, we denote by \hat{\ell}_i the adjusted length of item type i, for i = 1, \dots, I, and by \hat{L} the adjusted length of the object. A trivial lower bound on the number of used objects on either the robot or the saw is then given by

LB_h = \left\lceil \frac{\sum_{i=1}^{I} d_i \hat{\ell}_i}{\hat{L}_h} \right\rceil, \quad h = \text{robot or saw}.    (31)
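The bound (31) is a one-line computation. The demands, adjusted lengths and adjusted object length below are assumed values for illustration, not data from Table 4.

```python
import math

def lower_bound(demands, adj_lengths, L_hat):
    """Trivial lower bound (31): total adjusted demand length over L_hat."""
    return math.ceil(sum(d * l for d, l in zip(demands, adj_lengths)) / L_hat)

# Hypothetical instance: adjusted item lengths in mm on a twelve-metre object
# whose adjusted length is assumed to be 11 970 mm.
print(lower_bound(demands=[4, 10, 8], adj_lengths=[2900, 900, 1600], L_hat=11970))  # → 3
```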

In Table 4, the column labelled LB gives the lower bound on the number of needed objects for all 41 problem instances, given that the robot only cuts objects of L_r = 12 metres and the saw only cuts objects of L_s = 6 metres. In 40 of these instances the algorithm finds the best possible number of needed objects.


Robot (λ = 30 and δ = 20)              | Saw (λ = 0 and δ = 4)
Material       LB   Used   Sol.        | Material         LB   Used   Sol.

Section 1430-20
HP 80×6        49    49    49          | HK 75×50×7        2     2     2
HP 100×6       46    46    46          | HK 90×60×8       15    16    15
HP 100×7        2     2     2          | HK 120×80×8      16    16    16
ST 100×10      35    35    35          | ST 60×8           2     2     2
ST 120×10       2     2     2          |
ST 140×10       2     2     2          |
ST 150×10       2     2     2          |

Section 1430-40
HP 80×6        40    41    40          | HK 75×50×7       29    29    29
HP 100×6       16    17    16          | HK 80×65×6        1     1     1
HP 100×8       14    14    14          | HK 90×60×8        6     6     6
ST 80×8         1     1     1          | ST 40×5          36    36    36
ST 80×10        1     1     1          | ST 40×10          1     1     1
ST 100×10      20    21    20          | ST 50×8           1     1     1

Section 1430-60
HP 80×6        22    23    22          | HK 75×50×7        5     5     5
HP 100×6        4     4     4          | HK 90×60×8        9     9     9
HP 120×6       22    22    22          | HK 160×80×10      2     2     2
ST 100×10      11    11    11          | ST 40×5          18    18    18
ST 150×15       2     2     2          |

Section 1430-140
HP 80×6        25    25    25          | HK 75×50×7       17    17    17
HP 100×6        1     1     1          | HK 160×80×10      2     2     3
ST 100×10       9     9     9          | ST 40×5          36    37    36
ST 110×10       1     1     1          | ST 75×8           1     1     1
ST 150×15       6     6     6          |

Table 4 – Lower bound (LB), number of objects used in actual production (Used) and number of objects in the solution of Algorithm 2 (Sol.) for the 41 problem instances.


6 The two-dimensional cutting stock problem

Recall that for our main research question, we wish to investigate to which extent the behaviour of one-dimensional cutting stock problems translates to that of two-dimensional problem instances and, consequently, whether our algorithm can be used to obtain insights into the behaviour of the solution of two-dimensional problem instances.

Therefore, in this chapter we focus our attention on the two-dimensional cutting stock problem. We first review the literature on this problem, after which we take a closer look at SigmaNEST.

6.1 Literature review

In the literature, the two-dimensional cutting stock problem faced at company X is often referred to as the nesting or irregular strip packing problem, the main characteristic of these problems being the irregular shape of the items to be cut. In contrast to the one-dimensional CSP, where items can be distinguished solely by their length, in the two-dimensional problem items can have a large variety of shapes and can be cut from the object at any rotation, see Figure 2. Furthermore, in addition to placing items next to each other, they can also be placed on top of each other or even in holes of bigger items.

Not surprisingly, the nesting problem is a very difficult problem to solve. An interesting note on its NP-hardness is given by Nielsen and Odgaard (2003). The difficulty of nesting problems is well illustrated by the fact that an overwhelming majority of the solution methods reported in the literature are heuristic in nature, and that by the end of the last century, in an invited review on nesting problems, Dowsland and Dowsland (1995) state that "many studies comparing layouts obtained using automatic processes with those produced by an in-house expert show the computer generated solutions to be inferior in terms of trim-loss". More recently, studying a generalization of the nesting problem, Baldacci et al. (2014) found their algorithm to be "competitive with human nesters for relatively nicely behaved part sets and surfaces".

An exact method using constraint logic programming was proposed by Carravilla et al. (2003), but it could solve to optimality only problems with no more than seven items. Another exact method was proposed by Toledo et al. (2013), who present a mixed-integer model with binary decision variables associated with each discrete point of the object and each item type. However, the precision of this method depends on the degree of discretization of the objects, and the model was unable to solve instances with more than 56 items.


6.2 SigmaNEST

The software product that is used by company X to solve nesting problems is called SigmaNEST. Within the SigmaNEST environment, two-dimensional cutting stock problems can be solved manually or automatically.

Most orders processed by company X are for items of various steel types, which, due to the high price of steel and many side-constraints from production, are nested manually. Some of these items are bent into three-dimensional shapes later in the production process. To aid the bending process, a set of wooden molds is used, which are cut from a large sheet of medium-density fibreboard (MDF). As the price of MDF is considerably lower than that of steel, the cutting layouts of the molds are generated automatically.

To solve nesting problems automatically, a number of settings can be used within SigmaNEST. In this section, a number of problem instances is solved using various settings and the resulting waste percentages are compared. The reason for this small investigation is twofold:

• To determine whether the settings that are currently used to nest automatically produce the optimal results.

• To determine which SigmaNEST settings we should use to solve a set of two-dimensional cutting stock problem instances in Section 7.2.

Type of nesting algorithm

We will consider three of the built-in nesting algorithms: Advanced True Shape (which produced the exact same results as the True Shape algorithm), Lookahead and Auto Select. The other nesting algorithms, which are designed to produce certain types of layouts or to handle, for example, only rectangular items, are not considered as they do not match our type of problem. Currently, the Auto Select algorithm is used.

Strategy

Given a type of nesting algorithm, one can choose to nest the items vertically (strategy 1), horizontally (strategy 5) or according to intermediate combinations of both (strategies 2 - 4). That is, using strategy 1 long items are nested standing up, whereas they are nested lying down using strategy 5. As the objects provided to nest the items from are also lying down (i.e. such that the object's x-dimension is greater than its y-dimension in the standard (x, y)-plane), we intuitively expect strategy 5 to produce solutions of superior quality over strategy 1. Each type of nesting algorithm will be considered using both strategy 1 and strategy 5. Currently, strategy 5 is used.

Lookahead factor

Lastly, in case the Lookahead algorithm is chosen, the user can choose a lookahead factor between 1 and 5, which, according to the SigmaNEST manual, is an efficiency parameter. We will test the Lookahead algorithm using both extremes of the lookahead factor and refer to them as Lookahead 1 and Lookahead 5 respectively.


Results

We selected 21 historical two-dimensional problem instances and solved each with the settings described above. We then recorded the percentage of waste in each solution, using the weight of the demanded items and the weight of the objects needed to cut them from:

\text{Waste percentage} = \left(1 - \frac{\text{total weight of the items}}{\text{total weight of the objects}}\right) \times 100\%.    (32)
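In code, (32) is a direct transcription (the weights below are hypothetical):

```python
def waste_percentage(items_weight, objects_weight):
    """Waste percentage (32): share of object weight not turned into items."""
    return (1 - items_weight / objects_weight) * 100

# Hypothetical weights in kilograms
print(round(waste_percentage(630.0, 1000.0), 1))  # → 37.0
```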

After solving the first eight problem instances, we observed that strategy 5 outperformed strategy 1 on each problem instance and for each type of nesting algorithm and decided to drop the first strategy from the experiment.

In Table 5 one finds the waste percentages in the solutions of the 21 problem instances for Advanced True Shape (ATS), Lookahead 1, Lookahead 5 and Auto Select, each using the fifth strategy. In seven problem instances all four sets of settings produced the same solution in terms of the corresponding waste percentage (even though the layout of the items on the objects could have been completely different). In another four problem instances the waste percentage across settings differs only marginally, due to some settings choosing smaller objects to cut the items from. In the remaining ten problem instances the solutions across settings differed not only in waste percentage, but also in the number of objects that were needed to solve the problem.

In five instances the best solution (compared to the other settings) is found by Lookahead 1 and in seven instances Lookahead 5 found the best solution. In none of the instances did the Advanced True Shape algorithm or the Auto Select algorithm find a solution better than the one found by either of the two Lookahead algorithms. In terms of computing time, Advanced True Shape and Lookahead 1 are the fastest algorithms, taking no more than a minute on most problem instances and five minutes on the largest problem instances. The Lookahead 5 and Auto Select algorithms take considerably longer to obtain a solution, on average being roughly three and five times as slow as Advanced True Shape, respectively.


Instance        1    2    3    4    5    6    7
ATS            37%  60%  29%  50%  37%  66%  46%
Lookahead 1    24%  60%  29%  50%  37%  66%  46%
Lookahead 5    29%  51%  29%  50%  35%  67%  46%
Auto Select    37%  61%  29%  50%  41%  67%  46%

Instance        8    9   10   11   12   13   14
ATS            63%  37%  81%  33%  80%  37%  75%
Lookahead 1    63%  33%  81%  26%  76%  24%  75%
Lookahead 5    59%  34%  67%  25%  75%  36%  75%
Auto Select    63%  33%  81%  28%  76%  37%  75%

Instance       15   16   17   18   19   20   21
ATS            30%  58%  56%  36%  64%  19%  67%
Lookahead 1    24%  52%  53%  36%  64%  19%  67%
Lookahead 5    21%  53%  56%  36%  64%  19%  67%
Auto Select    30%  58%  56%  36%  73%  19%  67%

Table 5 – Waste percentages of the solutions of the 21 problem instances under the different SigmaNEST settings.


7 Input analysis

In this section we investigate how a number of order characteristics influence the amount of waste that is generated when the order is produced. These characteristics are the size of an order, the size of the ordered items and the possible use of specification objects for large items. We will first carry out the sensitivity analysis for the one-dimensional CSP using Algorithm 2. A similar analysis for the two-dimensional cutting stock problem is carried out using SigmaNEST. We then compare these results to observe if, in both problems, the waste percentage is affected similarly by the order characteristics.

7.1 The one-dimensional problem

To study the behaviour of the one-dimensional problem with respect to order characteristics, we randomly generate and solve 4000 problem instances. The parameters of the problem are such that they resemble historical orders at company X.

In each problem instance an infinite stock of six- and twelve-metre objects is kept to meet the randomly generated demand. The number of item types, I, is randomly chosen between 10 and 55, each integer with equal probability, so that both small and large orders are generated. Then, for each item type i, its length `i and demand di are also randomly generated.

By analysing the same sections we used to measure the performance of our algorithm in Table 4, we observed that roughly 50% of demand is for items smaller than a metre, 42.5% of demand is for items longer than a metre but smaller than four metres and the remaining 7.5% of demand is for items larger than four metres. The item lengths are randomly generated between 0.1 metres and 11.9 metres with 20 millimetre increments, such that, on average, the distribution of item lengths matches that of real orders.

Similarly, we observed that roughly 30% of the item types were demanded only once and that, on average, 50% of the item types were demanded twice. With 10% probability the demand for an item type is 3, 4 or 5, and the remaining 10% of the item types have a demand randomly generated between 6 and 40, each integer with equal probability. We generate the demands in the instances such that they resemble this distribution.
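The instance generation described above can be sketched as follows. This is an approximation for illustration only: the lengths are drawn uniformly on the 20 mm grid and do not reproduce the 50/42.5/7.5 length split, and all names are our own.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_instance():
    """Generate one synthetic 1D-CSP order in the spirit of the text above."""
    I = int(rng.integers(10, 56))                             # 10, ..., 55 item types
    lengths = rng.choice(np.arange(100, 11920, 20), size=I)   # mm, on a 20 mm grid
    # demand: ~30% once, ~50% twice, ~10% in {3,4,5}, ~10% in {6,...,40}
    u = rng.random(I)
    demand = np.where(u < 0.3, 1,
             np.where(u < 0.8, 2,
             np.where(u < 0.9, rng.integers(3, 6, I), rng.integers(6, 41, I))))
    return lengths, demand

lengths, demand = random_instance()
print(len(lengths), int(demand.min()), int(demand.max()))
```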

Regression

For each problem instance, we record the waste percentage and a number of characteristics of the order, with the aim to gain insight on how these characteristics influence the waste percentage. We first introduce some notation.

First, let total item demand be denoted by D, i.e.

D = \sum_{i=1}^{I} d_i.    (33)


Second, denote by p_{[\underline{\ell}, \bar{\ell}]} the percentage of demanded items with length in the interval [\underline{\ell}, \bar{\ell}]. Formally,

p_{[\underline{\ell}, \bar{\ell}]} = \sum_{\substack{i=1 \\ \underline{\ell} \le \ell_i \le \bar{\ell}}}^{I} \frac{d_i}{D} \times 100\%.    (34)

When, for example, we find that p_{[500,1000]} = 20%, we know that twenty percent of all demanded items have length between 500 and 1000 millimetres. As such, we can vary the values of \underline{\ell} and \bar{\ell} and gain insight into the distribution of the lengths of the demanded items.
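For a toy order, (34) can be evaluated directly (the lengths and demands below are made up):

```python
def p_interval(lengths, demands, lo, hi):
    """Equation (34): share of total demand (in %) for lengths in [lo, hi]."""
    D = sum(demands)
    return 100.0 * sum(d for l, d in zip(lengths, demands) if lo <= l <= hi) / D

# Hypothetical order: item lengths in mm with their demands
print(round(p_interval([290, 700, 1600], [4, 10, 8], 551, 1000), 2))  # → 45.45
```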

To measure the size of the order in instance k, k = 1, \dots, 4000, we record the number of item types I_k and the total item demand D_k, and to measure the distribution of the lengths of the demanded items we record p^k_{[100,550]}, p^k_{[551,1000]}, p^k_{[1001,2000]}, \dots, p^k_{[10001,11000]} and p^k_{[11001,11900]}.

We then regress the waste percentage on these characteristics, leaving out p_{[11001,11900]} to prevent multicollinearity. That is, when y denotes the vector of 4000 waste percentages and the kth row of the matrix of observations X equals

x_k^\top = (1, I_k, D_k, p^k_{[100,550]}, p^k_{[551,1000]}, p^k_{[1001,2000]}, \dots, p^k_{[10001,11000]}), \quad k = 1, \dots, 4000,    (35)

we estimate the vector of coefficients β in

y = X\beta + \varepsilon,    (36)

where \varepsilon denotes a vector of error terms.
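Estimating (36) is ordinary least squares. The sketch below runs it on synthetic data; the regressors, true coefficients and noise level are invented for illustration and are not the thesis's 4000 generated instances.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
I = rng.integers(10, 56, n).astype(float)        # number of item types
D = I * rng.uniform(1.0, 5.0, n)                 # total demand (toy construction)
p_large = rng.uniform(0.0, 30.0, n)              # % of demand for large items
X = np.column_stack([np.ones(n), I, D, p_large])
beta_true = np.array([6.4, -0.09, -0.01, 0.2])   # invented coefficients
y = X @ beta_true + rng.normal(0.0, 1.5, n)      # waste percentages with noise
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None) # OLS estimate of beta in (36)
print(np.round(beta_hat, 2))
```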

The results of this regression can be found in Table 6. Note that both the waste percentages in y and the different values of p_{[\underline{\ell}, \bar{\ell}]} are measured in percentages and not as decimal fractions.

Number of observations: 4000        Adjusted R-squared: 0.49

Variable        Coeff.      Variable          Coeff.     Variable           Coeff.
Intercept       6.39***     p[1001,2000]     −0.02       p[6001,7000]       0.37***
I              −0.09***     p[2001,3000]     −0.02*      p[7001,8000]       0.27***
D              −0.01***     p[3001,4000]     −0.01       p[8001,9000]       0.17***
p[100,550]      0.01        p[4001,5000]      0.04**     p[9001,10000]      0.14***
p[551,1000]    −0.01        p[5001,6000]      0.00       p[10001,11000]     0.07***

Table 6 – Regression results according to (36). *, **, *** indicates significance at the 90%, 99%, and 99.9% level, respectively.

Remark. An interpretation of the regression results in Table 6 is provided here for the reader who is less familiar with linear regressions.

We observe an adjusted R-squared value of 0.49, indicating that 49% of the variation in the waste percentages can be explained by variation in the explanatory variables. This indicates that model (36) would not be very well suited to predict waste percentages a priori. However, the high significance of a number of variables, indicated by *** next to the corresponding coefficient, shows that significant trends do exist in the data.


For example, the coefficient of −0.09 corresponding to I indicates that when the number of item types in an order is increased by ten, we expect the waste percentage to decrease by 10 × 0.09 = 0.9, keeping all other variables fixed. The coefficient of 0.37 corresponding to p_{[6001,7000]} indicates that when the percentage of items of length between six and seven metres is increased by one, we expect the waste percentage to increase by 0.37, again keeping all other variables fixed.

Order size

Looking at Table 6, we find that increasing the size of an order (measured by either I or D) decreases the expected waste percentage.

We grouped the problem instances according to the number of item types and provide a boxplot of the waste percentages of each group in Figure 4a. As expected from the regression results, we observe a downward sloping trend. The rationale behind this trend is that when more item types are added to an order, the number of possible cutting patterns increases too, of which the algorithm can use the most efficient ones. Furthermore, when the number of item types increases, the variability of the waste percentage seems to decrease as well.

Similarly, the same trend is observed in Figure 4b, where a boxplot of the waste percentages is shown for problem instances grouped on the size of their total demand. The rationale again lies with the number of cutting patterns, as efficient cutting patterns can be used more often due to the higher demand for the items. Furthermore, on average, larger orders require a larger number of objects to be cut from, decreasing the impact that a single inefficient cutting pattern, which we observe quite often, has on the final waste percentage.

Size of the items

From the regression results in Table 6, we observe that the share of small (say, items smaller than 550 millimetres) or medium (say, items of length between 551 and 6000 millimetres) sized items in an order does not strongly and significantly affect the resulting waste percentage, at least not linearly. In Figure 4c we grouped the instances according to the share of demand that is for items smaller than 550 millimetres, i.e. for different intervals of p_{[100,550]}, and show a boxplot of the waste percentages of each group. The median waste percentage decreases slightly when p_{[100,550]} increases to 30%, but when p_{[100,550]} increases even further, we observe that the median waste percentage increases too. We believe the reason behind this pattern is the following. Initially, additional small items can be combined efficiently with large items, creating cutting patterns with only small amounts of waste. However, when the share of small items is increased even further, at some point the small items can only be combined with each other, and often a whole object is needed to cut only the last couple of small items, increasing the waste percentage.

From the regression it is, however, apparent that the share of large items in an order does affect the expected waste percentage, as the coefficients of p_{[6001,7000]}, p_{[7001,8000]}, p_{[8001,9000]}, p_{[9001,10000]} and p_{[10001,11000]} are all positive and significant. This can also be seen in Figure 4d, where we observe an increasing median waste percentage as the share of large items (measured by p_{[6001,11900]}) is increased. The rationale here is that items of length exceeding six metres can not


[Figure 4 consists of four boxplot panels:]
(a) Problem instances grouped on the number of item types, I.
(b) Problem instances grouped on total demand, D.
(c) Problem instances grouped on the share of small items, p_{[100,550]}.
(d) Problem instances grouped on the share of large items, p_{[6001,11900]}.

Figure 4 – Boxplots of the waste percentage where the problem instances are grouped according to different values of I, D, p_{[100,550]} and p_{[6001,11900]}. Widths of the boxes are proportional to the


instances, with p[6001,7000] = 0, also has larger waste percentages, due to having too many small items, a problem we discussed above.
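For concreteness, the interval shares p[a,b] used as regressors in this section can be computed directly from an order's item lengths and demands. The sketch below is ours, not from the thesis; the example order, function and variable names are hypothetical:

```python
def interval_share(lengths, demands, lo, hi):
    """Percentage of total demand for items whose length (in mm) lies in [lo, hi]."""
    total = sum(demands)
    in_range = sum(d for length, d in zip(lengths, demands) if lo <= length <= hi)
    return 100.0 * in_range / total

# Hypothetical order: item lengths in millimetres with their demanded quantities.
lengths = [400, 550, 3000, 6500, 10500]
demands = [10, 5, 20, 4, 1]

p_small = interval_share(lengths, demands, 100, 550)     # p[100,550]  -> 37.5
p_large = interval_share(lengths, demands, 6001, 11900)  # p[6001,11900] -> 12.5
```

Each regressor in Table 6 is one such share for a fixed length interval, so the shares over all intervals sum to 100% for every order.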

7.2 The two-dimensional problem

We now turn our attention to the two-dimensional cutting stock problem. In particular, we wish to investigate whether the relationship we found between certain order characteristics and the waste percentage for the one-dimensional problem in Section 7.1 also exists for two-dimensional problems.

As such, we proceed, as in the one-dimensional case, by generating a set of 120 problem instances. We cannot generate a very large number of problem sets, due to the large amount of computational time needed to solve two-dimensional CSP instances, especially for large orders. In each problem instance an infinite stock of 12 by 3 metre objects is available to meet the randomly generated demand for I = 11 different item types, as shown in Figure 5.

Figure 5 – The eleven item types (diagonally striped) of which the orders are composed, and the object (in grey) from which they are cut. The item types are ordered from large to small.

The item types differ in size and shape and their demand is generated randomly. In the first set of 20 problem instances, demand for each item type is randomly drawn between 1 and 5, each integer with equal probability. In the second set of 20 problem instances, demand for each item type is randomly drawn between 1 and 10. This pattern is continued until the sixth set of 20 problem instances, where demand for each item type is randomly drawn between 1 and 30. As such, we are guaranteed that our dataset consists of both small and large problem instances. Each problem instance is solved by SigmaNEST using the settings we found to be optimal in Section 6.2, and the waste percentage is determined according to (32).
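The generation scheme described above can be sketched as follows. This is our reconstruction; the function name and the fixed random seed are assumptions, not part of the thesis:

```python
import random

def generate_instances(num_item_types=11, num_sets=6, per_set=20, seed=0):
    """Generate six sets of 20 demand vectors; in set s (s = 1, ..., 6), the
    demand for each of the 11 item types is drawn uniformly from {1, ..., 5*s}."""
    rng = random.Random(seed)
    instances = []
    for s in range(1, num_sets + 1):
        for _ in range(per_set):
            instances.append([rng.randint(1, 5 * s) for _ in range(num_item_types)])
    return instances

instances = generate_instances()  # 120 demand vectors of length 11
```

Note that `random.randint` draws from an inclusive range, matching "between 1 and 5*s, each integer with equal probability".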

Remark. It is important to remark here that, due to the high level of abstraction in the creation of these problem instances, the resulting waste percentages (the average waste percentage being 25%) are by no means illustrative of the actual level of waste at company X. Even though the items shown in Figure 5 resemble items from actual orders in terms of variety in shapes and sizes, because only a single object type is used, large leftovers are not considered reusable, and the solutions are generated without any involvement of a human nester, a strong resemblance with reality in terms of realistic waste percentages is lost.


Regression and boxplots

As the number of item types in each problem instance is fixed at I = 11, problem size is measured solely by total demand $D = \sum_{i=1}^{I} d_i$. To measure the distribution of item sizes, for $i = 1, \ldots, I$, we define $p_i$ as the share of demand that is for item type $i$, or formally,

$$p_i = \frac{d_i}{D} \times 100\%, \qquad i = 1, \ldots, I. \qquad (37)$$

To find out whether there exists a relationship between the waste percentage as dependent variable and the size of an order and the distribution of item sizes as independent variables, another ordinary least squares regression is performed, yielding the coefficients β̂ in Table 7. As in the previous regression, p11 is not used as a right-hand-side variable, to prevent multicollinearity (the shares p1, ..., p11 sum to 100%).
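A regression of this form can be reproduced with a basic least-squares fit. The sketch below is ours and uses synthetic data, not the thesis dataset; when the regressors are shares summing to 100%, one column (here p11) would be dropped from X first, exactly as in the text:

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares with an intercept; returns (intercept, slopes...).
    Drop one share column from X beforehand to avoid perfect multicollinearity."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

# Synthetic illustration: y = 1 + 2*x1 - x2 exactly, so OLS recovers it.
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])
y = 1.0 + 2.0 * X[:, 0] - X[:, 1]
beta = ols(X, y)  # close to [1.0, 2.0, -1.0]
```

The thesis does not state which software produced Table 7; this sketch only illustrates the estimator, not the significance tests reported there.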

Number of observations: 120        Adjusted R-squared: 0.60

Variable    Coeff.      Variable    Coeff.      Variable    Coeff.
Intercept   24.29***    p3           0.18*      p7          −0.08
D           −0.02**     p4           0.02       p8          −0.06
p1           0.74***    p5           0.10       p9          −0.01
p2          −0.22*      p6          −0.24**     p10         −0.04

Table 7 – Results of regressing the waste percentage on D, p1, ..., p10. *, **, *** indicate significance at the 90%, 99%, and 99.9% level, respectively.

In Table 7, we observe that the coefficient of D equals −0.02 and is significant at the 99% confidence level. So, as in the one-dimensional case, the expected waste percentage decreases when the number of demanded items increases, keeping other variables fixed. In Figure 6a, boxplots of the waste percentage are provided for instances of varying sizes of D, revealing, as expected, the same trend.

From this observation we learn that, from a geometrical point of view, when different sections are nested together, the waste percentage is expected to decrease. Note, however, the diminishing marginal decrease in the expected waste percentage in both Figure 4b for one-dimensional problem instances and Figure 6a for two-dimensional problem instances.

Furthermore, we observe that the coefficient of p1 equals 0.74 and is significant even at the 99.9% confidence level. Looking at Figure 5, we see that item type 1 is the largest item type. That is, as we have seen before in the one-dimensional case, when the share of demand that is for large items increases, the expected waste percentage increases accordingly, keeping all other variables fixed. The boxplots revealing this trend are given in Figure 6b. Visually inspecting the cutting layouts provided by SigmaNEST confirms that this relation is explained by the fact that when the share of large items increases, at some point there are no more small items to combine the large items with, and cutting the last few large items is necessarily done very inefficiently. Interestingly, the coefficient of p2 equals −0.22 and is significant at the 99% confidence level,


to fill the remaining space. A similar observation was made in Section 7.1, where the threshold length of a large item type was set such that no two large items could be cut from a single object. The coefficients of p8, p9 and p10, corresponding to the smallest item types, are negative, but at the same time far from significant. Only the coefficient of item type 6, another small item type, is in fact both negative and significant at the 99% confidence level.

Additionally, we considered the four smallest item types as a whole, i.e. item types 8, 9, 10 and 11, and grouped the instances according to different ranges of p8 + p9 + p10 + p11. Figure 6c compares the boxplots of waste percentages in these groups. As we observed in the one-dimensional case, there does not seem to be a strong relationship between the share of demand that is for small items and the waste percentage.
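The grouping behind these boxplots amounts to binning the combined small-item share. A minimal helper is sketched below; the bin edges are read off Figure 6c, while the handling of exact boundary values is our assumption:

```python
def small_item_group(share, edges=(25, 30, 35, 40, 45)):
    """Map a combined small-item share p8+p9+p10+p11 (in %) to its group label."""
    if share < edges[0]:
        return "<25%"
    for lo, hi in zip(edges, edges[1:]):
        if share < hi:
            return f"{lo}%-{hi}%"
    return ">45%"

small_item_group(27.0)  # "25%-30%"
```

Collecting the waste percentages per label then gives the groups plotted in Figure 6c.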

[Figure 6 – Boxplots of the waste percentage where the two-dimensional problem instances are grouped according to different values of D, p1 and the percentage of demand for small items. Panels: (a) grouped on total demand, D; (b) grouped on value of p1; (c) grouped on percentage of demand for small items. Vertical axes: waste percentage. Widths of the boxes are proportional to the …]
