Applying the cross-entropy method in multi-objective optimisation of dynamic stochastic systems

James Bekker

Dissertation presented for the degree of Doctor of Philosophy in the Faculty of Engineering at Stellenbosch University

Promoter: Professor JH van Vuuren

December 2012

Declaration

By submitting this dissertation electronically, I declare that the entirety of the work contained therein is my own, original work; that I am the sole author thereof (save to the extent explicitly otherwise stated); that reproduction and publication thereof by Stellenbosch University will not infringe any third-party rights, and that I have not previously in its entirety or in part submitted it for obtaining any qualification.

Copyright © 2012 Stellenbosch University All rights reserved

Abstract

A difficult subclass of engineering optimisation problems is the class of optimisation problems which are dynamic and stochastic. These problems are often not available in closed form and are thus studied by means of computer simulation. Simulation production runs of these problems can be time-consuming due to the computational burden implied by statistical inference principles. In multi-objective optimisation of engineering problems, large decision spaces and large objective spaces prevail, since two or more objectives are simultaneously optimised and many problems are also of a combinatorial nature. The computational burden associated with solving such problems is even larger than for most single-objective optimisation problems, and hence an efficient algorithm that searches the vast decision space is required. Many such algorithms are currently available, with researchers constantly improving these or developing more efficient algorithms. In this context, the term “efficient” means providing near-optimal results with as few objective function evaluations as possible. Thus far, research has often focused on solving specific benchmark problems, or on adapting algorithms to solve specific engineering problems.

In this research, a multi-objective optimisation algorithm, based on the cross-entropy method for single-objective optimisation, is developed and assessed. The aim of this algorithm is to reduce the number of objective function evaluations, particularly when time-dependent (dynamic), stochastic processes, as found in Industrial Engineering, are studied. A brief overview of scholarly work in the field of multi-objective optimisation is presented, followed by a theoretical discussion of the cross-entropy method. The new algorithm is developed based on this information, and is assessed considering continuous, deterministic problems, as well as discrete, stochastic problems. The latter include a classical single-commodity inventory problem, the well-known buffer allocation problem, and a newly designed, laboratory-scale reconfigurable manufacturing system. Multi-objective optimisation of two practical problems was also performed using the proposed algorithm. In the first case, some design parameters of a polymer extrusion unit are estimated using the algorithm. The management of carbon monoxide gas utilisation at an ilmenite smelter is complex, with many decision variables, and the application of the algorithm in that environment is presented as a second case.

Quality indicator values are estimated for thirty-four multi-objective optimisation test problem instances in order to quantify the quality performance of the algorithm, and it is also compared to a commercial algorithm.

The algorithm is intended to interface with dynamic, stochastic simulation models of real-world problems. It is typically implemented in a programming language, while the simulation model is developed in a dedicated, commercial software package.

The proposed algorithm is simple to implement and proved to be efficient on test problems.

Opsomming

’n Moeilike deelklas van optimeringsprobleme in die ingenieurswese is optimeringsprobleme van ’n dinamiese en stogastiese aard. Sulke probleme is dikwels nie-geslote en word gevolglik met behulp van rekenaarsimulasie bestudeer. Die beginsels van statistiese steekproefneming veroorsaak dat produksielopies van hierdie probleme tydrowend is weens die rekenlas wat genoodsaak word. Groot besluitnemingruimtes en doelwitruimtes bestaan in meerdoelige optimering van ingenieursprobleme, waar twee of meer doelwitte gelyktydig geoptimeer word, terwyl baie probleme ook ’n kombinatoriese aard het. Die rekenlas wat met die oplos van sulke probleme gepaard gaan, is selfs groter as vir die meeste enkeldoelwit optimeringsprobleme, en ’n doeltreffende algoritme wat die meesal uitgebreide besluitnemingsruimte verken, is gevolglik nodig. Daar bestaan tans verskeie sulke algoritmes, terwyl navorsers steeds poog om hierdie algoritmes te verbeter of meer doeltreffende algoritmes te ontwikkel. In hierdie konteks beteken “doeltreffend” dat naby-optimale oplossings verskaf word deur die minimum evaluering van doelwitfunksiewaardes. Navorsing fokus dikwels op oplossing van standaard toetsprobleme, of aanpassing van algoritmes om ’n spesifieke ingenieursprobleem op te los.

In hierdie navorsing word ’n meerdoelige optimeringsalgoritme gebaseer op die kruis-entropie-metode vir enkeldoelwit optimering ontwikkel en geassesseer. Die mikpunt met hierdie algoritme is om die aantal evaluerings van doelwitfunksiewaardes te verminder, spesifiek wanneer tydafhanklike (dinamiese), stogastiese prosesse soos wat dikwels in die Bedryfsingenieurswese teëgekom word, bestudeer word. ’n Bondige oorsig van navorsing in die veld van meerdoelige optimering word gegee, gevolg deur ’n teoretiese bespreking van die kruis-entropiemetode. Die nuwe algoritme se ontwikkeling is hierop gebaseer, en dit word geassesseer deur kontinue, deterministiese probleme sowel as diskrete, stogastiese probleme, soos in ’n klassieke enkelitem voorraadprobleem, die bekende buffer-toedelingsprobleem, en ’n nuut-ontwerpte, laboratorium-skaal herkonfigureerbare vervaardigingstelsel. Meerdoelige optimering van twee praktiese probleme is met die algoritme uitgevoer. In die eerste geval word sekere ontwerpparameters van ’n polimeer-uittrekeenheid met behulp van die algoritme beraam. Die bestuur van koolstofmonoksiedbenutting in ’n ilmeniet-smelter is kompleks met verskeie besluitnemingveranderlikes, en die toepassing van die algoritme in daardie omgewing word as ’n tweede geval aangebied.

Verskeie gehalte-aanwyserwaardes word beraam vir vier-en-dertig toetsgevalle van meerdoelige optimeringsprobleme om die gehalte-prestasie van die algoritme te kwantifiseer, en dit word ook vergelyk met ’n kommersiële algoritme.

Die algoritme is veronderstel om te skakel met dinamiese, stogastiese simulasiemodelle van regtewêreldprobleme. Die algoritme sal tipies in ’n programmeertaal geïmplementeer word terwyl die simulasiemodel in doelmatige, kommersiële programmatuur ontwikkel sal word. Die voorgestelde algoritme is maklik om te implementeer en dit het doeltreffend gewerk op toetsprobleme.

Acknowledgements

I am grateful to the following people who supported me in completing this dissertation:

• Professor Jan H van Vuuren, my promoter, role model and esteemed colleague, for his guidance and wisdom on a strategic level, and his meticulous attention to detail.

• The Chairman of the Department of Industrial Engineering at Stellenbosch University, Doctor André van der Merwe, for affording me the major scarce commodities, namely time and financial support.

• My colleagues, for absorbing a substantial part of my workload and supporting me in many ways.

• My mentors, Nicolaas du Preez and the late Willem van Wijck, for introducing me to computer simulation and supporting me when I needed it the most.

• My parents, for encouraging me to study.

• My family, for their support, encouragement and, above all, patience.

• Marlene Rose, for proofreading the document and making valuable suggestions.

• The many students I worked with and will work with, for enriching my life and teaching me more than I can ever teach them.

Contents

Declaration i
Abstract ii
Opsomming iv
Acknowledgements vi
Nomenclature xxv

1 Introduction 1

1.1 Background to the research hypothesis . . . 1

1.2 The research hypothesis . . . 7

1.3 Aim and objectives . . . 7

1.4 Structure of the document . . . 8

2 Multi-objective optimisation: Literature 10
2.1 Introduction to MOO . . . 11

2.2 Definitions used in MOO . . . 12

2.3 Evolutionary algorithms and MOO . . . 14

2.3.1 Fitness assignment and ranking of solutions . . . 16

2.3.2 Proximity and diversity . . . 17

2.3.3 Test problems for MOO . . . 18


2.4 Other MOO metaheuristics . . . 22

2.4.1 Simulated annealing . . . 22

2.4.2 Tabu search . . . 23

2.4.3 Ant systems . . . 24

2.4.4 Particle swarm optimisation . . . 25

2.4.5 Hill-climbing techniques . . . 26

2.4.6 Distributed reinforcement learning . . . 26

2.4.7 Differential evolution . . . 27

2.4.8 Artificial immune systems . . . 28

2.4.9 Evolution strategy . . . 28

2.4.10 Memetic algorithms . . . 30

2.4.11 Firefly algorithm . . . 31

2.5 Hyperheuristics . . . 31

2.6 General applications of MOO . . . 32

2.7 MOO applications in Industrial Engineering . . . 34

2.8 MOO applications in Process Engineering . . . 35

2.9 Robust MOO . . . 37

2.10 Summary: Chapter 2 . . . 38

3 The cross-entropy method for optimisation 40
3.1 The CEM for optimisation . . . 40

3.1.1 The CEM for continuous optimisation . . . 42

3.1.2 The CEM for discrete optimisation . . . 43

3.2 The CEM and single-objective optimisation . . . 45

3.2.1 De Jong’s first function . . . 46

3.2.2 The Rosenbrock function . . . 46

3.2.3 The Shekel function . . . 47

3.2.4 The Rastrigin function . . . 50

3.3 The CEM in other research and applications . . . 50

3.4 Summary: Chapter 3 . . . 53

4 Multi-objective optimisation with the cross-entropy method 54
4.1 The proposed MOO using the CEM . . . 54

4.2 MOO CEM assessment and the continuous case . . . 61


4.3.1 MOO and the VRP . . . 69

4.3.2 Benchmark problems for the VRP . . . 69

4.3.3 The VRP with soft time windows . . . 70

4.3.4 The VRP and the CEM . . . 72

4.3.5 Results: experimenting with the VRP . . . 74

4.4 Summary: Chapter 4 . . . 78

5 Multi-objective optimisation applications of the MOO CEM algorithm 79
5.1 An inventory problem . . . 80

5.2 The buffer allocation problem . . . 84

5.2.1 Background on the BAP . . . 84

5.2.2 BAP: Simulation-optimisation model validation . . . 89

5.2.3 Finding buffer allocations with an equality constraint . . . . 90

5.2.3.1 The BAP with equality constraint: results of approximation sets found . . . 91

5.2.3.2 The BAP with equality constraint: results of approximation set quality indicators . . . 94

5.2.3.3 The BAP with equality constraint: Trends in buffer size allocation . . . 95

5.2.4 Finding buffer allocations with an inequality constraint . . . 99

5.2.4.1 Experimenting with the BAP WIP under the inequality constraint . . . 100

5.2.4.2 A new objective for the BAP . . . .104

5.2.4.3 Experimental setup for the new BAP objective . . .108

5.2.4.4 Results with the new BAP objective . . . .109

5.2.5 The BAP: Summary and conclusions . . . .116

5.3 A reconfigurable manufacturing system . . . 117

5.4 An extrusion equipment design problem . . . .119

5.5 CO gas management at an ilmenite smelter . . . .124

5.5.1 Background on the CO gas problem domain . . . .124

5.5.2 Formulation of the CO gas problem . . . .125

5.5.3 Results of the CO gas problem . . . .126


6 Comparative assessment of the proposed algorithm 130

6.1 Introduction to algorithm assessment . . . .130

6.2 Quality indicators . . . 131

6.3 Assessment experiment . . . .133

6.4 Algorithm assessment results . . . .135

6.5 Comparison between the MOO CEM algorithm and OptQuest® . .137

6.5.1 Experimental setup for the MOO CEM and OptQuest® comparison with the inventory problem . . . 141

6.5.2 Experimental results for the MOO CEM and OptQuest® comparison with the inventory problem . . . 141

6.5.3 Experimental setup for the MOO CEM and OptQuest® comparison with BAP17 . . . 141

6.5.4 Experimental results for the MOO CEM and OptQuest® comparison with BAP17 . . . .144

6.6 Conclusions: Algorithm performance quality assessment . . . 147

7 Research summary and conclusions 149
7.1 Project summary and conclusions . . . 149

7.2 Further research . . . .152

7.3 Philosophy . . . .153

References 154

A Plots for the approximate Pareto fronts of the BAP A-1

B Solutions for the vehicle routing problem B-1

B.1 Results for VRP 50 d1 tw4 . . . B-2
B.2 Results for VRP 250 d1 tw4 . . . B-5

C Box-whisker plots for hyperarea and the epsilon quality indicators C-1

D Implementation guidelines D-1

D.1 Integration of Matlab® and Arena® . . . D-2
D.2 Requirements for executing the optimisation . . . D-2
D.3 Matlab® code for the MOO CEM algorithm . . . D-5

List of Figures

1.1 MOO mapping. . . 4

1.2 Pareto front explained for two minimised objectives. . . 5

3.1 De Jong’s first function. . . 46

3.2 The Rosenbrock function with D= 2 and −2 ≤ xi ≤ 2. . . 47

3.3 Rosenbrock function optimisation: progress of the µi. . . 48

3.4 Negative Shekel function with 10 peaks. . . 49

3.5 Shekel function optimisation: progress of the v vector. . . 49

3.6 Rastrigin function with D= 2. . . 50

3.7 Rastrigin function optimisation: progress of the v vector. . . 51

4.1 Truncated normal distribution on −1 ≤ x ≤ 2, µ = 1, σ = 1. . . 56

4.2 Example of a histogram for the DV xi and r= 5. . . 58

4.3 The effect of adjusting histogram frequencies for the DV xi. . . 59

4.4 Approximate fronts for MOP1 and MOP2 obtained by the MOO CEM. . . 63

4.5 Approximate fronts for MOP3 and MOP4 obtained by the MOO CEM. . . 63

4.6 Approximate fronts for MOP6 and ZDT1 obtained by the MOO CEM . . . 64
4.7 Approximate fronts for ZDT2 and ZDT3 obtained by the MOO CEM . . . 64
4.8 Trends of the CE vector v for MOP1 . . . 65

4.9 Trends of the CE vector v for MOP4. . . 66


4.11 Structure of the VRP optimisation model. . . 73

4.12 Front progression of 50 d1 tw4 for Z1 vs Z3. . . 76

4.13 Final approximate front of 50 d1 tw4 for Z1 vs Z3. . . 76

4.14 Front progression of 50 d1 tw4 for Z1 vs Z5. . . 76

4.15 Final approximate front of 50 d1 tw4 for Z1 vs Z5. . . 76

4.16 Map of routes of solution A1, 50 d1 tw4 for Z1 vs Z3. . . 77

5.1 Some characteristics of the generalised (s, S) inventory process. . . . 82

5.2 Pareto fronts for the (s, S) inventory process. . . 83

5.3 Typical series of machines in a queuing network. . . 86

5.4 Graphic results for m= 5 machines and n = 10 niches, exponential processing times. . . 92

5.5 Sixteen-node network. . . 92

5.6 Buffer allocations for m= 5, n = 10, exponential processing times. . . 95

5.7 Buffer allocations for m= 5, n = 10, Erlang2 processing times. . . 96

5.8 Buffer allocations for m= 5, n = 40, exponential processing times. . . 96

5.9 Buffer allocations for m= 5, n = 40, Erlang2 processing times. . . 97

5.10 Buffer allocations for m= 10, n = 10, exponential processing times. . 97

5.11 Buffer allocations for m= 10, n = 40, exponential processing times. . 98

5.12 Throughput and WIP requirements in a serial processing line. . . 99

5.13 Approximate fronts for m = 5 and various values of ni, with the maximum estimated WIP. . . 101

5.14 Estimated maximum physical buffer space required for m= 5 and various ni (derived). . . .102

5.15 Estimated maximum physical buffer space required for m= 5 and various values of ni (complete optimisation). . . .104

5.16 A simple graph illustrating WIP intensities over time. . . .106

5.17 Progression of the values of ˆλi and ˆσi for the case of BAP17. . . 111

5.18 Progression of the values of ˆλi for the case of BAP20. . . 111

5.19 Progression of the values of ˆσi for the case of BAP20. . . .112

5.20 Progression of the values of ˆλi and ˆσi for the case of BAP23. . . . .112

5.21 Approximate Pareto front and archive for BAP17. . . .113

5.22 Approximate Pareto front and archive for BAP18. . . .114

5.23 Approximate Pareto front and archive for BAP19. . . .114


5.25 Approximate Pareto front and archive for BAP21. . . .115

5.26 Approximate Pareto front and archive for BAP22. . . .115

5.27 Approximate Pareto front and archive for BAP23. . . .116

5.28 Schematic of a reconfigurable manufacturing system. . . .118

5.29 Results of an exhaustive enumeration of solutions for the reconfig-urable manufacturing system. . . .119

5.30 True and approximate Pareto front for the reconfigurable manufac-turing system. . . .120

5.31 Schematic of a polymer extrusion unit. . . 121

5.32 Population subset and optimality approximation set for Design 1 of an extrusion process. . . .122

5.33 Population subset and optimality approximation set for Design 2 of an extrusion process. . . .123

5.34 The complete solution set for the CO gas problem with the true Pareto front. . . 127

5.35 The true and approximate Pareto fronts for the CO gas problem. . .128

6.1 Example of a hyperarea and reference point. . . .133

6.2 Box plot for the hyperarea comparison of the MOO CEM algorithm and OptQuest® using the inventory problem. . . .143

6.3 Best and worst approximation fronts found by the MOO CEM algorithm and OptQuest, for the inventory problem. . . .143

6.4 Box plot for the hyperarea comparison of the MOO CEM algorithm and OptQuest® using BAP17. . . .146

6.5 Best and worst approximation fronts found by the MOO CEM algorithm and OptQuest, for BAP17 . . . 146
A.1 Graphic results for m = 5 machines and n = 20 niches, exponential processing times . . . A-2
A.2 Graphic results for m = 5 machines and n = 40 niches, exponential processing times . . . A-2
A.4 Graphic results for m = 5 machines and n = 20 niches, Erlang2 processing times . . . A-3
A.3 Graphic results for m = 5 machines and n = 10 niches, Erlang2 processing times
A.5 Graphic results for m = 5 machines and n = 40 niches, Erlang2 processing times . . . A-4
A.6 Graphic results for m = 10 machines and n = 10 niches, exponential processing times . . . A-4
A.7 Graphic results for m = 10 machines and n = 20 niches, exponential processing times . . . A-5
A.8 Graphic results for m = 10 machines and n = 40 niches, exponential processing times . . . A-5
A.9 Graphic results for m = 16 machines and n = 10 niches, exponential processing times . . . A-6
A.10 Graphic results for m = 16 machines and n = 20 niches, exponential processing times . . . A-6
A.11 Graphic results for m = 16 machines and n = 40 niches, exponential processing times . . . A-7
A.12 Graphic results for m = 16 machines and n = 50 niches, exponential processing times . . . A-7
A.13 Graphic results for m = 16 machines and n = 60 niches, exponential processing times . . . A-8
B.1 Front progression of 50 d1 tw4 for Z2 vs Z3 . . . B-2
B.2 Final approximate front of 50 d1 tw4 for Z2 vs Z3 . . . B-2
B.3 Front progression of 50 d1 tw4 for Z2 vs Z5 . . . B-3
B.4 Final approximate front of 50 d1 tw4 for Z2 vs Z5 . . . B-3
B.5 Front progression of 50 d1 tw4 for Z4 vs Z3 . . . B-3
B.6 Final approximate front of 50 d1 tw4 for Z4 vs Z3 . . . B-3
B.7 Front progression of 50 d1 tw4 for Z4 vs Z5 . . . B-4
B.8 Final approximate front of 50 d1 tw4 for Z4 vs Z5 . . . B-4
B.9 Front progression of 250 d2 tw1 for Z2 vs Z3 . . . B-5
B.10 Final approximate front of 250 d2 tw1 for Z2 vs Z3 . . . B-5
B.11 Front progression of 250 d2 tw1 for Z4 vs Z5 . . . B-5
B.12 Final approximate front of 250 d2 tw1 for Z4 vs Z5 . . . B-5
B.13 Map of routes of solution G, 250 d2 tw1 for Z4 vs Z5, Part 1 . . . B-6
B.14 Map of routes of solution G, 250 d2 tw1 for Z4 vs Z5, Part 2 . . . B-6
B.15 Map of routes of solution G, 250 d2 tw1 for Z4 vs Z5, Part 3 . . . B-6
B.16 Map of routes of solution G, 250 d2 tw1 for Z4 vs Z5, Part 4 . . . B-6


C.1 Box-whisker plot for MOP1 . . . C-2
C.2 Box-whisker plot for MOP2 . . . C-2
C.3 Box-whisker plot for MOP3 . . . C-3
C.4 Box-whisker plot for MOP4 . . . C-3
C.5 Box-whisker plot for MOP6 . . . C-4
C.6 Box-whisker plot for ZDT1 . . . C-4
C.7 Box-whisker plot for ZDT2 . . . C-5
C.8 Box-whisker plot for ZDT3 . . . C-5
C.9 Box-whisker plot for BAP1 . . . C-6
C.10 Box-whisker plot for BAP2 . . . C-6
C.11 Box-whisker plot for BAP3 . . . C-7
C.12 Box-whisker plot for BAP4 . . . C-7
C.13 Box-whisker plot for BAP5 . . . C-8
C.14 Box-whisker plot for BAP6 . . . C-8
C.15 Box-whisker plot for BAP7 . . . C-9
C.16 Box-whisker plot for BAP8 . . . C-9
C.17 Box-whisker plot for BAP9 . . . C-10
C.18 Box-whisker plot for BAP10 . . . C-10
C.19 Box-whisker plot for BAP11 . . . C-11
C.20 Box-whisker plot for BAP12 . . . C-11
C.21 Box-whisker plot for BAP13 . . . C-12
C.22 Box-whisker plot for BAP14 . . . C-12
C.23 Box-whisker plot for BAP15 . . . C-13
C.24 Box-whisker plot for BAP16 . . . C-13
C.25 Box-whisker plot for BAP17 . . . C-14
C.26 Box-whisker plot for BAP18 . . . C-14
C.27 Box-whisker plot for BAP19 . . . C-15
C.28 Box-whisker plot for BAP20 . . . C-15
C.29 Box-whisker plot for BAP21 . . . C-16
C.30 Box-whisker plot for BAP22 . . . C-16
C.31 Box-whisker plot for BAP23 . . . C-17
C.32 Box-whisker plot for the (s, S) inventory problem . . . C-18
C.33 Box-whisker plot for the reconfigurable manufacturing problem . . . C-19
C.34 Box-whisker plot for the CO gas problem . . . C-19

List of Tables

2.1 Some of the standard MOO test functions used for evaluation of MOO algorithms . . . 19

2.2 Publications pertaining to MOO in Computers & Industrial Engineering . . . 36

4.1 Structure of the working matrix. . . 55

4.2 Quality indicator values obtained for the test problems of Table 2.1 . . . 62
4.3 Specific values of v for MOP1 . . . 66

4.4 Objectives of the VRPSTW. . . 71

4.5 Routes of solution A1 in Figure 4.13, 50 d1 tw4 (Z1 vs Z3). . . 77

4.6 Routes of solution B in Figure 4.15, 50 d1 tw4 (Z1 vs Z5). . . 77

5.1 Notation for the (s, S) inventory problem. . . 81

5.2 Simulation validation values for reference instances (Set 1). . . 89

5.3 Simulation validation values for reference instances (Set 2). . . 90

5.4 Simulated MOO CEM results for some reference instances (Set 1 and 2). . . 91

5.5 Simulated MOO CEM results for instances with m= 10 and exponential processing times (Set 3). . . 93

5.6 Simulated MOO CEM results for the 16-node instance. . . 93

5.7 Values for quality indicators of the MOO CEM algorithm applied to instances of the BAP. . . 94


5.8 Average buffer size allocation for the 16-node non-serial topology and various values of n . . . 99
5.9 Buffer allocation estimations for an inequality constraint . . . 100
5.10 Solutions for highest throughput rates and sums of buffer allocations . . . 103
5.11 Example of WIP intensity proportions . . . 106
5.12 Model parameters for BAP17–19 . . . 110
5.13 Simulated results for the seven BAP instances, for given maximum buffer sizes . . . 110
5.14 Ranges for design parameters of extrusion equipment . . . 122
5.15 Extreme solution values of the extrusion design (Design 1) . . . 122
5.16 Proposed design and operating values for extrusion unit (Design 2) . . . 123
5.17 Decision variables used in the CO gas problem . . . 126
5.18 Decision variables used in two scenarios pertaining to the CO gas problem . . . 127
6.1 Mean indicator values found during comparative testing . . . 136
6.2 Mean values and 95% confidence interval half-widths for the hyperarea and epsilon indicator . . . 138
6.3 Outcomes of the hypothesis tests for the hyperarea indicator: continuous problems . . . 139
6.4 Outcomes of the hypothesis tests for the epsilon indicator: continuous problems . . . 139
6.5 Outcomes of the hypothesis tests for the hyperarea indicator: discrete problems . . . 140
6.6 Hyperareas for the MOO CEM and OptQuest® comparison using the inventory problem. The Simio random number stream indices per trial are included . . . 142
6.7 Outcome of the hypothesis test for the hyperarea indicator of the inventory problem: MOO CEM and OptQuest® . . . 144
6.8 Hyperareas for the MOO CEM and OptQuest® comparison using BAP17. The Simio random number stream indices per trial are included . . . 145
6.9 Outcome of the hypothesis test for the hyperarea indicator of BAP17: MOO CEM and OptQuest®
B.1 Routes of solution C in Figure B.2, 50 d1 tw4 (Z2 vs Z3) . . . B-2
B.2 Routes of solution D in Figure B.4, 50 d1 tw4 (Z2 vs Z5) . . . B-2
B.3 Routes of solution E in Figure B.6, 50 d1 tw4 (Z4 vs Z3) . . . B-3
B.4 Routes of solution F in Figure B.8, 50 d1 tw4 (Z4 vs Z5) . . . B-3

Nomenclature

Roman Symbols

Bi Size of buffer space i, page 85 Cv Vehicle capacity, page 70

CV Pareto front convergence indicator, page 21

D Number of decision variables in an optimisation problem, page 4 Di Number of units demanded by customer i, page 81

di Euclidian distance, page 20

D′i Internal diameters of extrusion equipment, page 120 Er Number of rows in elite vector, page 57

et Flight thickness of extrusion equipment, page 120

f, fi Mathematical function, including probability mass and density function, page 3

GD Generation Distance indicator, page 20

gi Mathematical function, including probability mass and density function, page 4

hi Mathematical function, including probability mass and density function, page 4

hw Confidence interval half-width, page 134

I Indicator function, page 41

I Average inventory carried over period T , page 81 Iǫ+ Epsilon quality indicator, page 131

IH Hypervolume quality indicator, page 131 Iq Unary quality indicator, page 130

It Inventory level at time t, page 81

K Number of objectives in an optimisation problem, page 4 l Rare-event probability in importance sampling, page 41 Li Decision variable upper limit, page 56


li Decision variable lower limit, page 56

lsi Section lengths of extrusion equipment, page 120 M Number of inequality constraints, page 4

m Number of machines in the buffer allocation problem, page 85 M E Maximum Pareto front error indicator, page 21

N User-specified population size for population-based algorithms, page 16

n Number of members in a set, for example, the number of nodes in a VRP or number of buffers in a BAP, page 70

Na Size of solution archive, page 88

NC Number of discrete events in period T , page 81

nd Number of elements in the discrete decision vector, page 45 Ns Extrusion equipment screw rotation rate, page 120

P Probability distribution of a discrete optimisation with the cross-entropy method, page 45

ph Probability of inverting MOO CEM histogram counts, page 58 pj Probabilities of elements in a discrete decision vector, page 45 Q Number of equality constraints, page 4

r Number of classes of the elite vector of the MOO CEM algorithm, page 57

ri Mean exponential repair rate, page 89

s Reorder level, page 80

S Reorder quantity, page 80

Sc Inventory shortage per customer, page 81 SL Service level, page 81

sp Screw pitch of extrusion equipment, page 120 SP Pareto front spacing indicator, page 21

T General indicator of end of a time period, page 81 Tbi Barrel temperature of extrusion equipment, page 120 Tj Time duration of work-in-progress level j, page 104 TR Measure of throughput rate, page 87

tr Residence time inside extrusion equipment, page 120
tνd Total delay time on a route in the VRPSTW, page 71
tνw Total waiting time to start on a route in the VRPSTW, page 71
U Random number, uniformly distributed on (0, 1), page 58
WM Maximum work-in-progress observed over a time period, page 104
Wj Instantaneous work-in-progress level, page 104


Greek Symbols

α Smoothing parameter for the cross-entropy method, page 43 β Mean interarrival time for a Poisson process, page 80

βi Mean exponential failure rate, page 89

δ Termination counter of the cross-entropy method, page 43 ǫc MOO CEM common termination threshold, page 61 ǫ Box size of ǫ-dominance to regulate convergence, page 16 γ Cross-entropy optimisation rare-event threshold value, page 40 κ MOO CEM histogram class index, page 57

λn Throughput rate estimation of a buffer allocation problem, n niches, page 89

λ∗ Exact throughput rate of a buffer allocation problem, page 89 µ, µi Mean of a distribution, page 46

ω Stochastic component of a decision problem, page 3

ν Number of vehicles in the vehicle routing problem, page 70 Ω Feasible region of an optimisation problem, page 4

Ωq Set of all approximation sets, page 131 φi Truncated normal distribution, page 55

ρ Rank value of multi-objective solution vector, page 55 ρE MOO CEM algorithm ranking threshold, page 56 σ Standard deviation of a distribution, page 46

τij MOO CEM histogram frequency count, decision variable i, class j, page 57

τm Maximum number of evaluations, page 75 θ Input vector to an optimisation problem, page 3

Θ The complete input domain of an optimisation problem, page 3 ̺ User-specified rare-event threshold value for the cross-entropy

method, page 42

Υ Objective function in the case of discrete cross-entropy optimisa-tion, page 43

Zi Objectives of the vehicle routing problem, page 70

Other Symbols

Ci MOO CEM algorithm histogram class boundaries of decision variable xi, page 57

C The set of customers in the VRP, page 70 D Kullback-Leibler distance, page 41

E Mathematical expectation, page 41 G A directed graph in the VRP, page 70


N The set of vertices in the VRP, page 70

P Probability symbol, page 41

PK Approximate Pareto set, page 18

PT True Pareto set, representing the true Pareto front, page 18 PR Reference Pareto set, page 131

V The set of vehicles in the VRP, page 70

Vp Parameter vector set of the cross-entropy method, page 42 W Working matrix of the MOO CEM algorithm, page 56

X Feasible region of cross-entropy optimisation problem, page 40

Abbreviations and Acronyms

ARMOGA Adaptive range multi-objective genetic algorithm, page 15

AS Ant systems, page 24

BAP Buffer allocation problem, page 84

CCPSO Cooperative Co-evolutionary Multi-objective Particle Swarm Optimisation, page 25

CE Cross-entropy, page 6

CEM Cross-entropy method, page 6

CMA-ES Covariance matrix adaptation evolution strategy, page 29 CMOIA Constrained multi-objective immune algorithm, page 28 CONPIP Constant number of projects in progress, page 51

DE Differential evolution, page 27

DEMO Differential evolution for multi-objective optimisation, page 27 DES Discrete event simulation, page 2

EA Evolutionary Algorithm, page 12

EMOO Evolutionary multi-objective optimisation, page 15 EP Evolutionary Programming, page 12

ES Evolution Strategy, page 12

GA Genetic Algorithm, page 12

HA Hyperarea, page 134

HMOIA Hybrid Multi-objective Immune Algorithm, page 28 INVN Inventory problem, page 134

LRP Location routing problem, page 23 MADM Multi-attribute decision-making, page 3

MA Memetic Algorithm, page 30

MDQL Multi-objective Distributed Q-learning, page 26 MIMD Multiple Instruction Multiple Data, page 51

MISA Multi-objective Immune System Algorithm, page 28 MOAQA Multi-objective Ant-Q algorithm, page 24


MOCBA Multi-objective optimal computational budget allocation, page 6 MO-CMA-ES Multi-objective covariance matrix adaptation evolution strategy,

page 29

MOEA Multi-objective evolutionary algorithm, page 14 MOGA Multi-objective Genetic Algorithm, page 14

MOMGA Multi-objective Messy Genetic Algorithm, page 14

MOO CEM Multi-objective optimisation using the cross-entropy method, page 54

MOO Multi-objective optimisation, page 4

MOPSO Multi-objective Particle Swarm Optimisation, page 17 MORS Multi-objective ranking and selection, page 6

MOSADE Measure for a self-adaptive differential evolution, page 17 MOSS Multiple-objective Scatter Search, page 14

MOTS Multi-objective Tabu Search, page 24 NPGA Niched-Pareto Genetic Algorithm, page 14

NP Nondeterministic polynomial; used in computational complexity theory, page 12

NSDE Non-dominated Sorting Differential Evolution, page 27 NSGA Non-dominated Sorting Genetic Algorithm, page 14 ODF Operation dependent failures, page 85

PAES Pareto Archived Evolution Strategy, page 14 PA Perturbation Analysis, page 32

ParEGO Parameterised Efficient Global Optimisation, page 6 pdf Probability density function, page 42

PESA Pareto Envelope-based Selection Algorithm, page 14

PF Pareto front, page 3

pmf Probability mass function, page 43 PSO Particle Swarm Optimisation, page 25

RMOO Robust Multi-objective Optimisation, page 37 RMS Reconfigurable manufacturing system, page 116

RPSGAe Reduced Pareto set genetic algorithm with elitism, page 123 RSM Response Surface Methodology, page 32

SA Simulated annealing, page 22

SFLA Shuffled Frog-leaping Algorithm, page 30

SPEA Strength Pareto Evolutionary Algorithm, page 14

TOPSIS Technique for preference by similarity to the ideal solution, page 3 TSP Travelling salesperson problem, page 68

TS Tabu search, page 23


VRPTW Vehicle routing problem with time windows, page 68 VRP Vehicle routing problem, page 68


INTRODUCTION

This chapter serves as an introduction to the research presented in this dissertation. The reasons for the inception of the research hypothesis are explained, followed by the formal statement of the research hypothesis. Finally, the structure of the document is explained.

1.1 Background to the research hypothesis

We make decisions daily in our lives and often have to consider several outcomes of a decision all at once. If one for example has to buy a car, several requirements can be considered: the acquisition cost of the car, its maintenance cost, the fuel consumption, its luxury features, power, torque and acceleration. These requirements are conflicting, since one can usually not obtain a luxurious, fast car at a low cost. So the decision maker has to compromise and look for a candidate car that satisfies most of these requirements to some extent. On the other hand, the decision maker may choose to accept the cheapest candidate car and relax or even ignore the other requirements. If one formalises the decision problem of this example, one may refer to the requirements as objectives and the stated attributes of candidate cars as decision variables. Since there is more than one conflicting objective, it is a multi-objective decision problem, while the decision variables are non-commensurate. A satisfactory candidate is considered as near-optimum, while a set of candidates which cannot be improved upon is the Pareto-optimal set. This set contains a few candidates which will all satisfy the decision maker.

The focus of this research is on aspects of multi-objective engineering decision making. Naturally, the approaches in this discipline are much more formal than in the given example. Mathematical models to support decision making are arguably the preferred approach, and the outcomes of decisions are reflected in the value(s) of an objective function f in the case of single-objective optimisation. Constraints that model practical limitations are specified as part of the optimisation model. Optimisation methods have been developed and studied for decades and many techniques exist to find extremal values of f. The nature of f, e.g. linear, non-linear, deterministic or stochastic, and the nature of the decision variables (deterministic [discrete, continuous], stochastic [discrete, continuous]) are important, as is whether the constraint functions are linear or non-linear.

While exact analytical methods have many advantages, decision-making problems are often hard to formulate and model using these methods, while many problems have no closed form. In such cases, the decision maker can use computer simulation. It is an appealing tool for problem solving and decision making, since it allows one to realistically mimic real-world operations and processes, while it is regarded by some as the “last resort” when other problem-solving tools become inadequate. This is typically the case when studying time-dependent (dynamic) stochastic processes. Computer simulation is a wide discipline, of which the sub-discipline discrete-event simulation (DES) (Banks, 1998; Law & Kelton, 2000) of dynamic, stochastic processes has found its rightful place in engineering. “Dynamic” implies time-dependency of some model variables, and “stochastic” implies distribution-dependency of some model variables.

Traditionally, DES is used to study point problems. After finding a solution, it is rejected by management, or implemented, or refined and then implemented, or partially implemented. Recently, DES models have been used for optimisation. The models are thus often reused due to changes in business and new needs for more answers. More complicated business problems and increasing computing power naturally lead to the combination of multi-objective optimisation and computer simulation.

Usually, a performance measure (objective) is defined in terms of profit or cost and used to determine the quality of a solution. In DES, the approaches to finding the best solution for one or more performance measures (objectives) are as follows:

1. Considering a single objective and a finite number of alternatives (scenarios), the decision maker can apply statistical methods to determine a distinct scenario, if it exists. The KN-algorithm of Kim & Nelson (2001) is the state-of-the-art approach to find such a solution.

2. The decision maker defines a finite number of scenarios for the set of decision variables and the set of two or more performance measures, and the simulation model is executed for each of these input sets. Since the outputs are non-homogeneous, the estimated objective values of the finite solution set are normalised into one value using, for example, the Technique for Preference by Similarity to the Ideal Solution (TOPSIS) (see Jahanshahloo et al. (2006)), and the best scenario is found; a brief illustrative sketch of this idea is given after this list. This case is known as multi-attribute decision-making (MADM).

3. A mathematical programming approach is followed, where one of the objectives is optimised and the other objectives are treated as constraints. See for example Bettonvil et al. (2009).

4. A Pareto front (PF) is estimated via some guided search to consider many possible solutions, and conclusions are made from this front. See Gil et al. (2007).

The latter area has received little research attention in the context of DES and industrial engineering applications, according to Rosen et al. (2008).
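As a rough illustration of the aggregation step mentioned in approach 2 above, the sketch below ranks a handful of simulated scenarios by their closeness to an ideal solution in the TOPSIS sense. It is a minimal, assumed implementation written only for illustration (the function name, weights and example values are not taken from the dissertation); see Jahanshahloo et al. (2006) for the formal method.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) scored on several criteria (columns)."""
    m = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    b = np.asarray(benefit, dtype=bool)
    # Vector-normalise each criterion column and apply the weights.
    v = w * m / np.linalg.norm(m, axis=0)
    # Ideal (best) and anti-ideal (worst) value per criterion.
    ideal = np.where(b, v.max(axis=0), v.min(axis=0))
    anti = np.where(b, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    return d_minus / (d_plus + d_minus)  # closeness coefficient: larger is better

# Hypothetical example: three scenarios scored on throughput (maximise) and WIP (minimise).
closeness = topsis([[95, 40], [90, 25], [85, 20]],
                   weights=[0.5, 0.5], benefit=[True, False])
best_scenario = int(np.argmax(closeness))
```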

Rosen et al. (2007) define the traditional simulation optimisation problem as

Minimise f(θ)    (1.1)
subject to θ ∈ Θ,    (1.2)

where f(θ) = E[ψ(θ, ω)] is the expected system performance value, and is estimated by f̂(θ) from samples of a simulation model using instances of discrete or continuous feasible and possibly constrained input θ ∈ Θ ⊂ R^D. The stochastic effects of the model are represented by ω.
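To make the black-box estimation of f(θ) concrete, the following minimal sketch averages independent simulation replications and attaches an approximate 95% confidence-interval half-width. The `simulate` argument is a placeholder assumption standing in for one run of a DES model; it is not part of the dissertation.

```python
import numpy as np

def estimate_objective(simulate, theta, replications=30, seed=0):
    """Estimate f(theta) = E[psi(theta, omega)] by averaging simulation replications."""
    rng = np.random.default_rng(seed)
    samples = np.array([simulate(theta, rng) for _ in range(replications)])
    mean = samples.mean()
    half_width = 1.96 * samples.std(ddof=1) / np.sqrt(replications)  # approx. 95% CI
    return mean, half_width
```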

Since f cannot be mathematically defined (in closed form), computer simulation is used to imitate its behaviour, and f is thus viewed as a black box with inputs and outputs. Also, f can be extended to define multiple objectives. These objectives can have different units of measurement, exhibit different scales, and are usually in conflict. This leads to the Multi-objective Optimisation (MOO) problem

Minimise f(x) = [f1(x), f2(x), . . . , fK(x)]^T    (1.3)
subject to x ∈ Ω    (1.4)
Ω = {x | gi(x) ≤ 0, i = 1, 2, . . . , M;    (1.5)
     hj(x) = 0, j = 1, . . . , Q}    (1.6)

in D decision variables, K objectives and M + Q constraints in (1.3)–(1.6) (Tsou, 2008), where x = [x1, x2, . . . , xD]^T is a D-dimensional vector of decision variables, and each xi (i = 1, 2, . . . , D) can be real-valued, integer-valued or boolean-valued. No assumptions in terms of linearity or non-linearity of fi, gi and hj are made.

Many combinations of decision variables in the domain R^D form solutions in the domain R^K. This is illustrated in Figure 1.1, for D = 2 and K = 2 (Coello Coello et al., 2007). The multi-objective optimisation problem is solved if a vector x∗ = [x∗1, x∗2, . . . , x∗D]^T is found which satisfies the M + Q constraints gi and hj while minimising f. This set of solutions forms the Pareto set of Pareto-optimal solutions, and formal definitions will be presented in Section 2.2. These solutions can be shown graphically as the Pareto front.

[Figure 1.1: MOO mapping — the decision space (x1, x2) is mapped to the objective space (f1, f2).]

The main task in MOO is to find the Pareto-optimal solutions, or the Pareto front (Coello Coello et al., 2007; Deb, 2001). There are many approaches to this problem; many of them are based on, for example, metaheuristics, while other methods are also used. An example of a Pareto front (blue dots) is shown in Figure 1.2, where both objectives are to be minimised.

[Figure 1.2: Pareto front explained for two minimised objectives — members of the Pareto front plotted in (f1, f2) space.]

The dots in the figure are the result of evaluating both objective functions for a given set of decision variables, each dot representing a solution vector (f1, f2) for a given decision vector (x1, x2). Note that a “good” solution method will return dots that are on or near the Pareto front, but also sufficiently widely distributed.

When a decision-making problem with many objectives can only be modelled using computer simulation, each solution vector (represented by the blue and red dots in Figure 1.2) is estimated by means of a simulation run. If the problem is stochastic, such a run can be computationally expensive because the stochastic components ω in (1.3) (the “noise”) must be sufficiently estimated to control the statistical estimation error. Goh & Tan (2007) investigated the effect of noisy environments in evolutionary multi-objective optimisation, and state that for many problems, the evolutionary optimisation process degenerates into a random search when the noise level in a problem increases. Estimation errors (and by implication the choice of sample size) and outliers contribute to a slower convergence rate and possibly sub-optimal solutions. Finding the Pareto front in MOO is generally a difficult task, and a stochastic component makes it even harder.
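As a small, assumed numerical illustration of this point, the sketch below shows how estimation error can hide a true dominance relation between two solutions, and how the error rate shrinks as the number of replications grows. The objective values and noise level are hypothetical and serve only to illustrate the effect of noise on dominance decisions.

```python
import numpy as np

rng = np.random.default_rng(1)
true_a = np.array([1.0, 1.0])   # solution A truly dominates solution B (minimisation)
true_b = np.array([1.2, 1.2])

def wrong_call_rate(replications, noise_sd=0.5, trials=2000):
    """Fraction of trials in which noisy estimates fail to show that A dominates B."""
    wrong = 0
    for _ in range(trials):
        # The sample mean of n replications has standard deviation noise_sd / sqrt(n).
        est_a = true_a + rng.normal(0.0, noise_sd / np.sqrt(replications), 2)
        est_b = true_b + rng.normal(0.0, noise_sd / np.sqrt(replications), 2)
        if not (np.all(est_a <= est_b) and np.any(est_a < est_b)):
            wrong += 1
    return wrong / trials

print(wrong_call_rate(1), wrong_call_rate(30))  # the error rate drops with more replications
```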

Lee et al. (2010) proposed a method for finding the non-dominated Pareto set for multi-objective simulation models using Multi-objective Ranking and Selection (MORS). They consider a set of scenarios (which they term “designs”), each having K independent, normally distributed objectives, and find an optimal allocation of simulation replications to each design through a sequential procedure, the Multi-objective Optimal Computational Budget Allocation (MOCBA) algorithm. This algorithm ensures that the Pareto set is found with high confidence and at the least simulation computation expense. The algorithm has proved to be very economical, but the number of scenarios must be known beforehand and must be relatively small.

However, in many MOO problems the Pareto front is unknown when analysis commences, and since the solution space is also potentially very large, the MOCBA algorithm is not generally applicable. To reduce the computational burden and the time needed to obtain results, an efficient algorithm which directs the search for the Pareto front is desirable. Here, “efficient” means that the algorithm must find the Pareto front with as few evaluation trials as possible. Knowles (2006) studied this problem in the context of wet experiments, in which very few evaluations are possible. In such experiments, the time required to perform one evaluation is of the order of minutes or hours, only one evaluation is possible at a time (parallel work is not possible), no realistic simulator for approximating the evaluation is available, and the total number of evaluations is limited by financial, time or resource constraints. He proposes the Parameterised Efficient Global Optimisation (ParEGO) algorithm for this setting, and assumes, among other things, that noise is low, the search landscape is locally smooth but multimodal, and the dimensionality of the search space is low to medium.

In the study presented in this dissertation, problems with similar characteristics but with high noise will be studied using computer simulation. A preliminary literature survey indicated that the cross-entropy method (CEM) for optimisation, developed by Rubinstein & Kroese (2004), converges fairly fast when performing single-objective optimisation. This leads to the question: can the cross-entropy method be adapted for multi-objective optimisation, and will it still converge fast? It is presumed that if such an adaptation can be made, then solutions to multi-objective stochastic problems can be obtained with a relatively low computational effort and in acceptable time.
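For readers unfamiliar with the CEM, the sketch below outlines the standard single-objective cross-entropy method for continuous minimisation in the spirit of Rubinstein & Kroese (2004): sample candidates from a parameterised normal distribution, keep an elite fraction, and smooth the distribution parameters toward the elite sample statistics. The parameter names and values are illustrative assumptions; this is not the MOO CEM algorithm developed later in the dissertation.

```python
import numpy as np

def cem_minimise(f, mu, sigma, n_samples=100, elite_frac=0.1, alpha=0.7, iterations=50):
    """Minimal cross-entropy method sketch for continuous minimisation."""
    mu, sigma = np.asarray(mu, dtype=float), np.asarray(sigma, dtype=float)
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(iterations):
        # Sample candidate solutions from the current normal sampling distribution.
        x = np.random.normal(mu, sigma, size=(n_samples, mu.size))
        scores = np.apply_along_axis(f, 1, x)
        elite = x[np.argsort(scores)[:n_elite]]     # best-performing (elite) samples
        # Smoothed update of the sampling parameters toward the elite statistics.
        mu = alpha * elite.mean(axis=0) + (1 - alpha) * mu
        sigma = alpha * elite.std(axis=0) + (1 - alpha) * sigma
    return mu

# Example: De Jong's first (sphere) function, one of the test functions used in Chapter 3.
best = cem_minimise(lambda x: float(np.sum(x**2)), mu=[5.0, 5.0], sigma=[3.0, 3.0])
```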


The research hypothesis is based on these considerations and is presented next.

1.2 The research hypothesis

The research hypothesis considered in this study is:

The cross-entropy method reduces the computational burden when applied to multi-objective optimisation of dynamic, stochastic processes.

If the research hypothesis can be substantiated, the contribution to the body of knowledge will be achieved in the following two ways:

1. Extension of the cross-entropy method for multi-objective optimisation. Currently, a single objective is formulated for each of the cross-entropy-based problems found in the literature.

2. Application of the cross-entropy method in dynamic, stochastic processes. The emphasis here is on the word dynamic, which refers to processes that evolve over time, such as, for example, a manufacturing process. The term stochastic refers to the statistical variation in such processes. If the cross-entropy method converges fast in the multi-objective case, then optimisation problems in this domain with a large computational burden can be studied.

The aim of the research and the objectives pursued serve to support the hypothesis, and these are discussed next.

1.3 Aim and objectives

The research aim, which is the macropurpose of the study (Muller, 2008), is to demonstrate that the cross-entropy method can be used in multi-objective optimisation, with application to dynamic, stochastic processes, in the context of the industrial engineering problem domain. Problems that include constraints are implied.

The research objectives are the specific research tasks that need to be performed (Muller, 2008), which are:

1. Review the literature.

2. Determine if the CEM can speed up the evaluation of objective functions of dynamic, stochastic processes.


3. Determine if the Pareto front for a given problem can be approximated economically in terms of computational effort and time.

4. Determine if the Pareto fronts obtained are effective and efficient using appropriate performance quality indicators.

5. Determine if the CEM can be applied to problems with discrete stochastic as well as continuous deterministic decision variables.

The report on the research task execution forms the core of this document. The structure of the document is presented next.

1.4 Structure of the document

This chapter contains a contextual description of MOO and the problem when objective functions have to be evaluated by time-consuming means, typically via computer simulation. This led to the formulation of a research hypothesis, a research aim and objectives.

In Chapter 2, a literature study on multi-objective optimisation is presented. This includes references to methods of multi-objective optimisation, test problems, application areas and the latest research trends in this field.

Chapter 3 contains a description of the cross-entropy method (CEM). The theoretical foundation of the method is presented, as well as its formulation and application to optimisation. Some single-objective optimisation studies are included.

The theoretical background and literature surveys culminate in the development of the multi-objective optimisation using the cross-entropy method (MOO CEM) algorithm, as described in Chapter 4. The proposed method is assessed using known benchmark problems from the literature. These are all continuous or piece-wise continuous mathematical functions that exhibit specific characteristics. Four basic quality indicators are provided to judge the quality of the solutions.

Since the aim of the research is to assess the suitability of the MOO method developed with respect to dynamic, stochastic problems, the classical stochastic inventory problem of a single commodity is studied in Chapter 5 as a first and fairly simple problem of this nature. Further applications in buffer allocation in queueing networks, a reconfigurable manufacturing system, and a polymer extrusion unit are also reported. Each of these application descriptions contains a focused literature survey, problem description, quality indicators, results and conclusions. This structure was followed since some of these application studies were also submitted for publication in research journals. In a final experiment, a dynamic stochastic process at a heavy minerals mining operation was studied.

The quality performance of the proposed algorithm is compared to that of two commercially available products, and an extensive experiment is presented in Chapter 6. This experiment serves to provide evidence that supports the research hypothesis.

The summary and general conclusions of the research are presented in Chapter 7. The chapter and this study are concluded with some philosophical remarks. Graphical test results are included in Appendix A, Appendix B and Appendix C, and algorithm implementation guidelines are outlined in Appendix D.

This concludes Chapter 1; the scholarly overview on multi-objective optimisation is presented next.


MULTI-OBJECTIVE OPTIMISATION: OVERVIEW OF SCHOLARLY LITERATURE

Information has become accessible to humanity from almost any place on the planet, and a large part of the collective information base is free. With that in mind, a brief overview of the scholarly literature on multi-objective optimisation (MOO) is presented in this chapter. The aim is to provide the reader with pointers to the major topics in the field, which include a short development history, some cornerstone definitions used in the field that are required for better understanding, and a discussion of the various algorithms or approaches used to perform MOO. These include the ubiquitous evolutionary algorithms and other metaheuristics like simulated annealing and particle swarm optimisation. Hyperheuristics are discussed in a section of their own. Each MOO approach is presented according to a common micro-structure, as far as possible: a very brief outline of the mechanism of each algorithm, how it was adapted to MOO (where applicable), a few applications and, where available, reference to recent survey(s) and relevant textbooks.

Topics like ranking of solutions, fitness assignment, proximity and diversity of solutions, test problems for multi-objective optimisation algorithms and test indicators for algorithm performance are discussed under evolutionary algorithms, but these are also applicable to other approaches. Some applications of MOO algorithms are discussed in general, but also specifically in the domain of Industrial Engineering and Process Engineering.

A summary at the end of the chapter includes the author’s views and interpretation of what was observed while doing this overview.


2.1 Introduction to MOO

Practical decision-making requires evaluation of different decision objectives that are conflicting and often measured in different units. An example is an investment decision, where two objectives are present, namely risk and profit. If one wants to increase the profit, one has to accept increased risk, while low risk usually yields low profit. In this decision problem, risk is dimensionless and profit is measured in monetary units. Problems of this nature often have a common feature, namely a set of acceptable trade-off solutions. The risk range of the investment problem has an associated profit range, and the decision maker has to choose one of the solutions.

Multi-objective theory originated in the field of economics, and since it is part of economic equilibrium, its origin can be traced back to 1776, when Adam Smith’s work The Wealth of Nations was published (Coello Coello et al., 2007). Léon Walras introduced the concept of economic equilibrium, and Vilfredo Pareto, among others, did important work in this regard between 1874 and 1906. Game strategy is related to multi-objective optimisation, and Félix Édouard Émile Borel established Game Theory in 1921. The origin of game theory is attributed by most to the famous mathematician and computer scientist John von Neumann, who presented work on this topic in 1926, followed by a publication in 1928. Tjalling C Koopmans was the first to apply multi-objective optimisation to domains outside of economics. He worked on production theory and established the concept of an “efficient” vector in 1951 (Koopmans, 1951). The first engineering application seems to be by Lotfi Zadeh (Zadeh, 1963). John Buzacott, in Lu et al. (2009), introduced the term “line-specific output curve” for production lines in 1967.

The work by Harold W Kuhn and Albert W Tucker in 1951 in the context of the vector maximum problem laid the mathematical foundation of multi-objective optimisation (Kuhn & Tucker, 1951). The Kuhn-Tucker Conditions for Non-inferiority are often applied in research papers such as Kleijnen & Wan (2007). A further significant development was the introduction of Goal Programming by Abraham Charnes and William Cooper (Coello Coello et al., 2007).

The search and optimisation techniques developed over the past decades to solve decision problems are classified by Coello Coello et al. (2007) into three main categories: enumerative, deterministic and stochastic (p. 21). Deterministic approaches include greedy, hill-climbing, branch-and-bound, depth-first, breadth-first, best-first and calculus-based algorithms. These algorithms have been successfully applied to many problems, but they have drawbacks. Generally, the presence of local optima, discontinuities, plateaus and ridges in the solution space reduces algorithm effectiveness. Problems can be discontinuous, high-dimensional, multi-modal and/or NP-complete. A problem with one or more of these properties is called irregular (Coello Coello et al., 2007), and many real-world scientific and engineering problems are irregular. Deterministic algorithms, when applied to this problem type, often suffer due to their requirement for problem domain-specific knowledge to direct or limit their search.

Stochastic search and optimisation methods such as Simulated Annealing (SA), Tabu Search (TS), Monte Carlo Methods (MCM) and Evolutionary Computation (EC) were developed to address irregular problems. EC is a generic term for those algorithms that computationally imitate the natural evolutionary process. Specifically, Evolutionary Algorithms (EAs) include the techniques of Genetic Algorithms (GAs), Evolution Strategies (ESs) and Evolutionary Programming (EP) (Coello Coello et al., 2007). Stochastic methods provide good solutions to a wide range of problems, but their results cannot be guaranteed to be optimal. The decision maker can only assume the results are near-optimal.

2.2 Definitions used in MOO

In MOO there is usually no single optimal solution, but rather a set of good solutions which form the Pareto optimal front (Gil et al., 2007). Rosen et al. (2007) provide a good overview of the literature in this field. The terms Pareto front, Pareto optimal and dominance have been used before, and these and other terms are now formally defined.

Definition 1: Decision variables: The vector x = [x1, x2, . . . , xD]^T of variables for which numerical quantities are to be chosen in the optimisation problem.

Restrictions are often imposed on an optimisation problem due to practical requirements, which must be satisfied for a solution to be acceptable. The constraints define the dependencies among decision variables and problem parameters (constants). The M inequality constraints are described by

gi(x) ≤ 0, i = 1, . . . , M,    (2.1)

and the Q equality constraints by

hj(x) = 0, j = 1, . . . , Q.    (2.2)

The number of degrees of freedom is given by D − Q, and it is required that Q < D to avoid an overconstrained problem.

The MOO problem with K objectives and M + Q constraints was formulated in Chapter 1 and is repeated here (Tsou, 2008):

Minimise f(x) = [f1(x), f2(x), . . . , fK(x)]^T    (2.3)
subject to x ∈ Ω    (2.4)
Ω = {x | gi(x) ≤ 0, i = 1, 2, . . . , M;    (2.5)
     hj(x) = 0, j = 1, . . . , Q}.    (2.6)

In multi-objective optimisation, two Euclidean spaces are considered:

1. In the D-dimensional space in which each coordinate axis corresponds to a component of the vector x.

2. In the M -dimensional space in which each coordinate axis corresponds to a component of the objective function vector f(x).
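As a concrete, purely illustrative sketch of this formulation, the following Python fragment represents a small hypothetical bi-objective problem with one inequality constraint; the specific functions and the helper names (is_feasible, evaluate) are assumptions introduced here and do not correspond to any problem studied in this dissertation.

```python
# A minimal, illustrative sketch of the formulation in (2.3)-(2.6): a list of K
# objective functions, M inequality constraints g_i(x) <= 0 and Q equality
# constraints h_j(x) = 0. The problem itself is hypothetical.
objectives = [
    lambda x: x[0] ** 2 + x[1] ** 2,        # f1: squared distance from the origin
    lambda x: (x[0] - 2) ** 2 + x[1] ** 2,  # f2: squared distance from (2, 0)
]
inequality_constraints = [lambda x: x[0] + x[1] - 4.0]  # g1(x) <= 0
equality_constraints = []                               # no h_j(x) in this example

def is_feasible(x, tol=1e-9):
    """x lies in Omega if every g_i(x) <= 0 and every h_j(x) = 0 (within tol)."""
    return (all(g(x) <= tol for g in inequality_constraints)
            and all(abs(h(x)) <= tol for h in equality_constraints))

def evaluate(x):
    """Map a decision vector to its objective vector f(x) = [f1(x), ..., fK(x)]."""
    return [f(x) for f in objectives]

x = [1.0, 0.5]
print(is_feasible(x), evaluate(x))   # True [1.25, 1.25]
```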

Since MOO problems usually have at least two conflicting objectives, many acceptable solutions for a given problem exist. These form the Pareto optimal set. A few definitions pertaining to Pareto optimality are necessary, and the basic definitions in Coello Coello (2009) are repeated here for convenience (assuming minimisation):

Definition 2: Given two vectors u = (u1, ..., uK) and v = (v1, ..., vK) ∈ IR^K, then u ≤ v if ui ≤ vi for i = 1, 2, ..., K, and u < v if u ≤ v and u ≠ v.

Definition 3: Given two vectors u and v in IR^K, then u dominates v (denoted by u ≺ v) if u < v.

Definition 4: A vector of decision variables x∗ ∈ Ω (Ω is the feasible region) is Pareto optimal if there does not exist another x ∈ Ω such that f(x) ≺ f(x∗).

Definition 5: The Pareto optimal set P∗ is defined by P∗ = {x ∈ Ω ∣ x is Pareto optimal}.

Definition 6: The Pareto front PT∗ is defined by PT∗ = {f(x) ∈ IR^K ∣ x ∈ P∗}.

The vectors in P∗ are called non-dominated: for x∗ ∈ P∗ there is no x ∈ Ω such that f(x) dominates f(x∗).

Solving an MOO problem requires that the Pareto optimal set be found from the set of all decision variable vectors that satisfy constraints (2.1) and (2.2).
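To make Definitions 2 to 5 concrete, a minimal Python sketch of Pareto dominance and a brute-force non-dominated filter follows (minimisation of all objectives is assumed); the function names and the three sample points are illustrative only.

```python
def dominates(u, v):
    """u dominates v (u < v componentwise in the sense of Definitions 2 and 3):
    u is no worse in every objective and strictly better in at least one."""
    return (all(ui <= vi for ui, vi in zip(u, v))
            and any(ui < vi for ui, vi in zip(u, v)))

def nondominated(points):
    """Brute-force filter returning the non-dominated subset of a list of objective
    vectors, i.e. the Pareto-optimal members of the sampled points."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Three bi-objective vectors (both objectives minimised): (3, 3) is dominated by (2, 2).
points = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0)]
print(nondominated(points))   # [(1.0, 4.0), (2.0, 2.0)]
```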


With these definitions in mind, the focus now changes to some MOO algorithms and some of their properties.

2.3 Evolutionary algorithms and MOO

Multi-objective Optimisation using Evolutionary Algorithms (MOEAs) has been widely used and actively researched over the past 25 years (see Coello Coello et al., 2007:64). The best-known references are those by Coello Coello et al. (2007) and Deb (2001), while a survey of the state of the art of MOEAs was performed by Zhou et al. (2011). In a recent article, Coello Coello (2009) highlighted current research trends and open topics in the field of MOEAs, including a discussion of alternative metaheuristics for solving MOO problems. He also notes that much attention is given to designing MOEAs that reduce the number of objective function evaluations, because these evaluations can be very expensive when solving some real-world optimisation problems.

Some of the topics pertaining to MOEAs are also applicable to other approaches in MOO, for example ensuring proximity and diversity (Subsection 2.3.2), design and use of test functions (Subsection 2.3.3), and the development and use of performance quality indicators (Subsection 2.3.4).

GAs and other biologically inspired metaheuristics (e.g. Ant Colony and Particle Swarm Optimisation) have been widely applied in solving MOO problems. Arguably the best-known evolutionary-based algorithms are the Multi-objective Genetic Algorithm (MOGA) of Fonseca & Fleming (1993), the Niched-Pareto Genetic Algorithm (NPGA) of Erickson et al. (1999), the Strength Pareto Evolutionary Algorithm (SPEA) of Zitzler & Thiele (1999), the Pareto Archived Evolution Strategy (PAES) of Knowles & Corne (2000), the Multi-objective Messy Genetic Algorithm (MOMGA) of Van Veldhuizen & Lamont (2000), the Pareto Envelope-based Selection Algorithm (PESA) of Corne et al. (2000) and the Non-dominated Sorting Genetic Algorithm (NSGA-II) of Deb et al. (2002). These algorithms and some of their variants are discussed in Coello Coello et al. (2007).

EAs are widely used in MOO research and applications. Beausoleil (2006) applies a Multiple-objective Scatter Search (MOSS) to test problems from the literature, and Deb et al. (2002) improve on existing algorithms with their NSGA-II. This algorithm includes a dominance principle, a diversity preservation principle and an elite preservation principle, and is currently the most widely used algorithm. Coello Coello et al. (2004) apply particle swarm optimisation while incorporating Pareto dominance. Summanwar et al. (2002) solve constrained optimisation problems using MOGAs, while Zitzler & Thiele (1999) apply the SPEA to the 0/1 knapsack problem. Gil et al. (2007), for example, developed a hybrid method for solving MOO problems by combining PESA and NSGA-II.

In other applications, specific methods are developed to solve MOO problems. For example, Lee (2007) developed a trajectory-informed search methodology and applied it to several test problems. The Adaptive Range Multi-objective Genetic Algorithm (ARMOGA) of Sasaki & Obayashi (2005) requires relatively few objective function evaluations to find the approximate Pareto front. The ParEGO algorithm of Knowles (2006) was mentioned in Section 1.1 in this context: few evaluations are performed because of very tight resource constraints. This algorithm uses a normalised objective function set, and so the range of each objective must be known.

Chapter 7 in Coello Coello et al. (2007) contains comprehensive references to applications in engineering, science, industry and miscellaneous fields (e.g. investment portfolio optimisation and stock ranking). A summary of applications of MOEAs is also provided in Zhou et al. (2011), covering scheduling, data mining, assignment and management, communication, bio-informatics, control systems and robotics, image processing, artificial neural networks, manufacturing, traffic and transportation, and others. A comprehensive list of references is also maintained at the Evolutionary Multi-objective Optimisation (EMOO) home page (www.lania.mx/~ccoello/, cited on 10 August 2012).

Current research trends in evolutionary MOO are discussed by Coello Coello (2009). He notes that researchers focus on new algorithms, efficiency, relaxed forms of dominance, scalability and alternative metaheuristics. New algorithms are regularly proposed, but only some become widely used, as was pointed out earlier in this section.

The term efficiency refers to algorithm design which reduces the number of instructions performed, typically by making the ranking algorithm more efficient and by reducing the number of objective function evaluations. This is also an objective of the research presented in this dissertation, as was motivated in Chapter 1. The relaxed forms of Pareto dominance attempt to regulate convergence, and ǫ-dominance is perhaps the most popular of these. A set of boxes is assumed to cover the Pareto front, with the box size determined by the user-defined parameter ǫ. Only one non-dominated solution is allowed within each box; a large value of ǫ speeds up convergence, but the quality of the Pareto front might suffer as a result, while a small value of ǫ yields a high-quality Pareto front at the cost of convergence speed. Choosing the value of ǫ is still an open problem, particularly when nothing is known about the true Pareto front, as is the case in practical problems.
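The box-archiving idea can be sketched as follows in Python. This is a simplified illustration and not the full update rule of Laumanns et al. (2002): when two points fall in the same box, the sketch simply keeps the newer one unless the incumbent dominates it, whereas the full scheme uses a distance-to-corner tie-break. All function names and the sample data are assumptions made for the example.

```python
import math

def dominates(u, v):
    """Componentwise 'no worse everywhere, better somewhere' (minimisation)."""
    return all(a <= b for a, b in zip(u, v)) and u != v

def box(f, eps):
    """The epsilon-box containing objective vector f; eps is the user-defined box size."""
    return tuple(math.floor(fi / eps) for fi in f)

def eps_update(archive, f, eps):
    """Accept f only if no archived point has a dominating (or shared-and-better) box;
    an accepted point displaces archived points whose boxes it dominates or shares."""
    bf = box(f, eps)
    for g in archive:
        bg = box(g, eps)
        if dominates(bg, bf) or (bg == bf and dominates(g, f)):
            return archive                                    # candidate rejected
    archive[:] = [g for g in archive
                  if not (dominates(bf, box(g, eps)) or box(g, eps) == bf)]
    archive.append(f)
    return archive

archive = []
for f in [(0.90, 0.20), (0.85, 0.22), (0.30, 0.80), (0.31, 0.79)]:
    eps_update(archive, f, eps=0.1)
print(archive)   # [(0.85, 0.22), (0.3, 0.8)]
```

In this small example the point (0.31, 0.79) is rejected even though no archived point dominates it exactly, because its box is dominated; this illustrates how a larger ǫ accelerates convergence at the cost of front resolution.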

MOO algorithms are almost always sensitive to scalability, as they do not automatically scale to problems with many objectives. It has been shown that the proportion of non-dominated solutions in a population increases with the number of objectives (Purshouse & Fleming, 2007).

Apart from genetically inspired algorithms, there are alternative biologically inspired metaheuristics such as artificial immune systems, ant colony optimisation and particle swarm optimisation. Non-biologically inspired algorithms include simulated annealing, tabu search and scatter search. The algorithm proposed in this dissertation is non-biological in nature and is based on statistical principles.

Coello Coello (2009) recommends that constraint handling, the incorporation of users' preferences and parameter control be researched further in future work. The idea with parameter control is that the MOEA adapts its parameters automatically, without user intervention. Incorporating user preferences may render MOO more suitable for practical problems, and algorithms may even become more efficient, since preferences may reduce the size of the solution space.

2.3.1 Fitness assignment and ranking of solutions

An EA has both objective and fitness functions associated with it. The values of the objective functions give an indication of attainment of the various optimality criteria, while the fitness function assumes a real value indicating how well a particular set of objective function values satisfies the optimality condition (Coello Coello et al., 2007). A population has to be ranked to distinguish good solutions from bad ones, and the fitness values are used for this purpose.

The best-known ranking method is Pareto ranking, based on the work of Goldberg (1989), which is of complexity O(KN²), where N is the user-specified population size. Faster algorithms have been developed by Qu & Suganthan (2009) and Fang et al. (2008). A new fast sorting algorithm by Mishra & Harit (2010) has a worst-case complexity of O(KN²) and a best-case complexity of O(N log N). In an application, Wang & Yang (2009) developed a particle swarm optimisation algorithm using the preference order scheme (Das, 1999; Pierro et al., 2007), which is more efficient than Pareto ranking, particularly when the number of objectives is large. D'Souza et al. (2010) improved the NSGA-II by reducing its time complexity through a better ranking scheme.
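For illustration, the following Python sketch implements the basic Pareto-ranking idea by peeling off successive non-dominated fronts. It is a naive implementation intended only to clarify the concept, not the faster sorting schemes cited above, and the sample objective vectors are invented.

```python
def dominates(u, v):
    """Minimisation: u is no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(u, v))
            and any(a < b for a, b in zip(u, v)))

def pareto_rank(population):
    """Assign rank 1 to the non-dominated solutions of the population, rank 2 to the
    non-dominated solutions of what remains, and so on (naive front peeling)."""
    remaining = set(range(len(population)))
    ranks = [0] * len(population)
    rank = 1
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(population[j], population[i])
                            for j in remaining if j != i)}
        for i in front:
            ranks[i] = rank
        remaining -= front
        rank += 1
    return ranks

objs = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 4.5)]
print(pareto_rank(objs))   # [1, 1, 2, 3]
```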

Jaimes et al. (2009) present a comparative study of several ranking methods, and also provide a useful taxonomy of such methods, which includes ranking methods with and without parameters, favour ranking, preference order ranking and Pareto ranking. They found that the preference order ranking method achieves the best scalability, while different ranking methods produce different subsets of the Pareto optimal set. The quality of the solutions produced by an MOO algorithm may thus be affected by the ranking method selected.

2.3.2 Proximity and diversity

A good MOO algorithm ensures that the Pareto approximation set is close to the true front and that it is also well populated with solutions. An algorithm is expected to properly explore and exploit the solution space in order to fulfil these two requirements. Laumanns et al. (2002) developed the concept of ǫ-dominance and constructed updating strategies for iterative searches that allow for the desired convergence and distribution of solutions. Finding a close and dense Pareto front approximation is in itself a multi-objective problem, as seen in the performance of MOEAs (Bosman & Thierens, 2003). Wang et al. (2010) proposed a crowding entropy diversity measure for a self-adaptive differential evolution algorithm called MOSADE. Their algorithm performed better than NSGA-II, SPEA2 and Multi-objective Particle Swarm Optimisation (MOPSO) on 18 different test problems, measured in terms of convergence and diversity.
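The crowding entropy measure of Wang et al. (2010) is not reproduced here, but the following sketch of the NSGA-II-style crowding distance illustrates how a diversity measure of this kind is typically computed; the four-point front used in the example is arbitrary.

```python
def crowding_distance(front):
    """NSGA-II-style crowding distance: for each objective, sort the front and add the
    normalised gap between each solution's neighbours; boundary solutions receive
    infinity. Larger values indicate solutions in less crowded regions."""
    n = len(front)
    if n == 0:
        return []
    k = len(front[0])
    distance = [0.0] * n
    for m in range(k):
        order = sorted(range(n), key=lambda i: front[i][m])
        fmin, fmax = front[order[0]][m], front[order[-1]][m]
        distance[order[0]] = distance[order[-1]] = float("inf")
        if fmax == fmin:
            continue
        for pos in range(1, n - 1):
            i = order[pos]
            distance[i] += (front[order[pos + 1]][m] - front[order[pos - 1]][m]) / (fmax - fmin)
    return distance

# An arbitrary four-point non-dominated front (both objectives minimised).
front = [(0.0, 1.0), (0.2, 0.75), (0.5, 0.5), (0.9, 0.1)]
print(crowding_distance(front))   # boundary points get inf; interior points get finite values
```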

Purshouse & Fleming (2007) showed that the behaviour of MOEAs changes with an increasing number of conflicting objectives. The configuration of an algorithm for few objectives cannot necessarily be generalised to larger numbers of objectives, and they found that diversity-promoting mechanisms can be highly influential and even harmful to the optimisation outcome. Dominance resistance, the phenomenon which makes it difficult to produce new solutions that dominate poor solutions, also contributes to preserving locally non-dominated solutions, which in turn confines diversity. Other researchers (Purshouse & Fleming, 2007) have confirmed that dominance resistance may increase with an increasing solution space. Purshouse and Fleming suggested that the non-dominated set be pruned on a solution-by-solution basis to reduce the number of dominance-resistant solutions. Hájek et al. (2010) developed a mechanism to improve diversity and implemented it with the µARMOGA of Szőllős et al. (2009).

Good proximity and diversity create confidence in the approximation solution set. Test problems and quality indicators to assess the ability of an algorithm to achieve these are the topics of the next two subsections.

2.3.3 Test problems for MOO

Several standard MOO test problems with known Pareto fronts have been proposed in the literature, and these have been consolidated in Chapter 4 of the book by Coello Coello et al. (2007). These test problems were designed to embody a mixture of non-linear, time-independent and deterministic properties, two or more objective functions, disconnected and asymmetric regions in solution space, and a mixture of concave and convex Pareto front shapes. Some of the test functions are listed in Table 2.1; MOP1–MOP6 are referred to in the literature as belonging to the Van Veldhuizen test suite (Veldhuizen, 1999), while ZDT1–ZDT3 were developed by Zitzler et al. (2000). Test problems and their requirements were analysed in detail by Huband et al. (2006), who proposed test problems in the Walking Fish Group (WFG) Toolkit. Igel et al. (2007) developed the IHR test suite, which allows for testing whether an algorithm is invariant to rescaling and rotation. Among the requirements identified are that problems should in general be non-separable, have no extremal parameters, and have a scalable number of parameters and objectives.

The test problems selected for this study all have known true Pareto fronts PT and may be obtained from the EMOO home page (www.lania.mx/~ccoello/, cited on 10 August 2012). These test functions will be used for the algorithm assessment described in Chapter 4. Numerical quality indicators exist to evaluate the quality of the generated solutions relative to the known solutions; some of these are discussed next.
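As an illustration of how such a test function is evaluated, a minimal Python implementation of ZDT1 (as defined in Table 2.1) is given below; the sample decision vector is chosen to lie on the true Pareto front, for which g(x) = 1 and f2 = 1 − √f1.

```python
import math

def zdt1(x):
    """ZDT1 as defined in Table 2.1: n = 30 decision variables in [0, 1],
    two objectives (both minimised)."""
    n = len(x)
    f1 = x[0]
    g = 1.0 + 9.0 / (n - 1) * sum(x[1:])
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return f1, f2

# On the true Pareto front x2 = ... = xn = 0, so that g(x) = 1 and f2 = 1 - sqrt(f1).
x = [0.25] + [0.0] * 29
print(zdt1(x))   # (0.25, 0.5)
```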

2.3.4 Quantifying the performance of MOO algorithms

Several quality performance indicators for MOO algorithms exist. They typically estimate the deviation between the approximate front found by the test algorithm (PK) and the true Pareto front (PT) of a benchmark problem, such as those listed in Table 2.1. The term quality performance will be used in this dissertation instead of the general term performance, because the latter pertains to time and computational resources.
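As one example of such an indicator (not necessarily the indicator adopted later in this dissertation), the sketch below computes the generational distance, a commonly used measure of the average distance from the points of PK to their nearest neighbours on PT; the three-point approximation set and the sampled ZDT1 front are invented for the example.

```python
import math

def generational_distance(approx_front, true_front):
    """Generational distance: the square root of the sum of squared Euclidean distances
    from each point of the approximation front PK to its nearest point on the true
    front PT, divided by the number of points in PK. Smaller is better; zero means
    PK lies entirely on PT."""
    def nearest(p):
        return min(math.dist(p, q) for q in true_front)
    return math.sqrt(sum(nearest(p) ** 2 for p in approx_front)) / len(approx_front)

# Illustrative data: a dense sample of the ZDT1 front and a rough three-point approximation.
true_front = [(f1 / 100.0, 1.0 - math.sqrt(f1 / 100.0)) for f1 in range(101)]
approx_front = [(0.05, 0.80), (0.40, 0.38), (0.90, 0.06)]
print(round(generational_distance(approx_front, true_front), 4))
```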


MOP1 (Min)
  f1(x) = x^2,  f2(x) = (x − 2)^2
  Constraints: −10^5 ≤ x ≤ 10^5

MOP2 (Min)
  f1(x) = 1 − exp(−Σ_{i=1}^{n} (xi − 1/√n)^2),  f2(x) = 1 − exp(−Σ_{i=1}^{n} (xi + 1/√n)^2)
  Constraints: −4 ≤ xi ≤ 4, i = 1, ..., n, n = 3

MOP3 (Max)
  f1(x, y) = −[1 + (A1 − B1)^2 + (A2 − B2)^2],  f2(x, y) = −[(x + 3)^2 + (y + 1)^2]
  where A1 = 0.5 sin 1 − 2 cos 1 + sin 2 − 1.5 cos 2,  A2 = 1.5 sin 1 − cos 1 + 2 sin 2 − 0.5 cos 2,
        B1 = 0.5 sin x − 2 cos x + sin y − 1.5 cos y,  B2 = 1.5 sin x − cos x + 2 sin y − 0.5 cos y
  Constraints: −π ≤ x, y ≤ π

MOP4 (Min)
  f1(x) = Σ_{i=1}^{n−1} (−10 exp(−0.2 √(xi^2 + x_{i+1}^2))),  f2(x) = Σ_{i=1}^{n} (|xi|^a + 5 sin(xi)^b)
  Constraints: −5 ≤ xi ≤ 5, i = 1, 2, 3, a = 0.8, b = 3

MOP6 (Min)
  f1(x, y) = x,  f2(x, y) = (1 + 10y)[1 − (x/(1 + 10y))^α − (x/(1 + 10y)) sin(2πqx)]
  Constraints: 0 ≤ x, y ≤ 1, q = 6, α = 2

ZDT1 (Min)
  f1(x) = x1,  f2(x, g) = g(x)(1 − √(f1/g(x))),  g(x) = 1 + 9/(n − 1) Σ_{i=2}^{n} xi
  Constraints: 0 ≤ xi ≤ 1, n = 30

ZDT2 (Min)
  f1(x) = x1,  f2(x, g) = g(x)(1 − (f1/g(x))^2),  g(x) = 1 + 9/(n − 1) Σ_{i=2}^{n} xi
  Constraints: 0 ≤ xi ≤ 1, n = 30

ZDT3 (Min)
  f1(x) = x1,  f2(x, g) = g(x)(1 − √(f1/g(x)) − (f1/g(x)) sin(10πf1)),  g(x) = 1 + 9/(n − 1) Σ_{i=2}^{n} xi
  Constraints: 0 ≤ xi ≤ 1, n = 30

Table 2.1: Some of the standard MOO test functions used for evaluation of MOO algorithms.
