
Interval Methods for Global Optimization

by

Belaid Moa
B.Eng., Ecole Hassania des Travaux Publics, Morocco, 1998
M.Eng., Ecole Nationale Supérieure d'Electrotechnique, d'Electronique, d'Informatique, d'Hydraulique et des Télécommunications, France, 2000

A Dissertation Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy in the Department of Computer Science.

© Belaid Moa, 2007
University of Victoria

All rights reserved. This dissertation may not be reproduced in whole or in part, by photocopy or other means, without the permission of the author.

Interval Methods for Global Optimization

by

Belaid Moa
B.Eng., Ecole Hassania des Travaux Publics, Morocco, 1998
M.Eng., Ecole Nationale Supérieure d'Electrotechnique, d'Electronique, d'Informatique, d'Hydraulique et des Télécommunications, France, 2000

Supervisory Committee

Dr. M.H. van Emden, Supervisor (Department of Computer Science)
Dr. W.W. Wadge, Co-supervisor (Department of Computer Science)
Dr. J. Ellis, Department Member (Department of Computer Science)
Dr. F. Gebali, Outside Member (Department of Electrical and Computer Engineering)
Dr. M. Ibnkahla, External Examiner (Department of Electrical Engineering, Queen's University)

ABSTRACT

We propose interval arithmetic and interval constraint algorithms for global optimization. Both of these compute lower and upper bounds of a function over a box, and return a lower and an upper bound for the global minimum. In interval arithmetic methods, the bounds are computed using interval arithmetic evaluations. Interval constraint methods instead use domain reduction operators and consistency algorithms. The usual interval arithmetic algorithms for global optimization suffer from at least one of the following drawbacks:

- Mixing the fathoming problem, in which we ask for the global minimum only, with the localization problem, in which we ask for the set of points at which the global minimum occurs.
- Not handling the inner and outer approximations for the ε-minimizer, which is the set of points at which the objective function is within ε of the global minimum.

- Saying nothing about the quality of their results in actual computation. The properties of the algorithms are stated only in the limit of infinite running time, infinite memory, and infinite precision of the floating-point number system.

To handle these drawbacks, we propose interval arithmetic algorithms for fathoming problems and for localization problems. For these algorithms we state properties that can be verified in actual executions of the algorithms. Moreover, the algorithms proposed return the best results that can be computed with given expressions for the objective function and the conditions, and given hardware.

Interval constraint methods combine interval arithmetic and constraint processing techniques, namely consistency algorithms, to obtain tighter bounds for the objective function over a box. The basic building block of interval constraint methods is the generic propagation algorithm. This explains our efforts to improve the generic propagation algorithm as much as possible. All our algorithms, namely the dual, clustered, deterministic, and selective propagation algorithms, are developed as attempts to improve the efficiency of the generic propagation algorithm.

The relational box-consistency algorithm is another key algorithm in interval constraints. This algorithm keeps squashing the left and right bounds of the intervals of the variables until no further narrowing is possible. A drawback of this way of squashing is that the process of squashing becomes slow as we proceed. Another drawback is that, in some cases, the actual narrowing occurs late. To address these problems, we propose the following algorithms:

1. Dynamic box-consistency algorithm: instead of pruning the left and then the right bound of each domain, we alternate the pruning between all the domains.
2. Adaptive box-consistency algorithm: the idea behind this algorithm is to get rid of the boxes as soon as possible: start with small boxes and extend them or shrink them depending on the pruning outcome. This adaptive behavior makes this algorithm very suitable for quick squashing.

Since the efficiency of interval constraint optimization methods depends heavily on the sharpness of the upper bound for the global minimum, we must make some effort to find an appropriate point or box to use for computing the upper bound, rather than randomly picking one as is commonly done. We therefore introduce interval constraints with exploration. These methods use non-interval methods as an exploratory step in solving a global optimization problem. The results of the exploration are then used to guide interval constraint algorithms, and thus improve their efficiency.

Table of Contents

Supervisory Committee
ABSTRACT
Table of Contents
List of Tables
List of Figures
List of Abbreviations
Notations
Glossary
Acknowledgement
Dedication

1 Introduction
  1.1 Motivation
  1.2 The problem statement
  1.3 Methods for solving global optimization problems
  1.4 Layout of the dissertation

2 Interval Arithmetic
  2.1 From floating-point numbers to interval arithmetic
  2.2 Advantages of interval arithmetic
  2.3 Basics of interval arithmetic
    2.3.1 Intervals
    2.3.2 Interval extensions and Inclusion functions
  2.4 Solving inequalities via interval arithmetic
  2.5 Functional box consistency

3 Unconstrained global optimization via interval arithmetic
  3.1 Motivation
  3.2 The fathoming problem
    3.2.1 The first fathoming problem
    3.2.2 The second fathoming problem
  3.3 The localization problem
    3.3.1 The outer approximation
    3.3.2 The inner approximation

4 Constrained global optimization via interval arithmetic
  4.1 Overview
  4.2 The feasibility problem
    4.2.1 The outer approximation
    4.2.2 The inner approximation
  4.3 The fathoming problem
  4.4 The localization problem

5 Interval constraints
  5.1 Constraint satisfaction problems
    5.1.1 Constraints
    5.1.2 Constraint propagation
  5.2 Interval constraints and interval constraint solver

6 Dual, clustered, deterministic, and selective propagation algorithms
  6.1 Introduction
  6.2 Relations and Recursive Functions on Power Sets
  6.3 Dual Propagation Algorithms
  6.4 Clustered Propagation Algorithms
  6.5 Deterministic propagation algorithms
  6.6 Selective propagation algorithms
  6.7 Implementation

7 Dynamic and adaptive box-consistency algorithms
  7.1 Box consistency
  7.2 Dynamic box consistency
  7.3 Adaptive box consistency
  7.4 Other improvements

8 Global optimization via interval constraints
  8.1 Overview
    8.1.1 Computing a lower bound for f*
    8.1.2 Computing an upper bound for f*
    8.1.3 Branch-and-bound in interval constraints
  8.2 Exploration and interval constraints for global optimization
    8.2.1 The fundamental rule of P-NP
    8.2.2 Motivation of exploration
    8.2.3 Interval constraints with exploration
    8.2.4 Effect of exploration on the upper bound of f*
    8.2.5 Effect of exploration on the lower bound of f*
    8.2.6 Effect of exploration on splitting the search space
    8.2.7 Propagating or squashing?
  8.3 Interval constraints as a verification tool
  8.4 Interval constraint solver with exploration
    8.4.1 Effect of exploration on splitting and propagation
    8.4.2 Effect of exploration on squashing algorithms
    8.4.3 Does a box contain a solution to S?
      8.4.3.1 S involves inequalities only
      8.4.3.2 S involves equalities also

9 Experimental results
  9.1 Description of BelSyste
    9.1.1 BelSyste architecture
    9.1.2 BelSyste components
  9.2 Experimental results
    9.2.1 An application using squashing and propagation
    9.2.2 Box consistency
    9.2.3 Box satisfaction
    9.2.4 Box unsatisfaction
    9.2.5 Largest box in a convex set
    9.2.6 Booth function
    9.2.7 Rosenbrock banana function
    9.2.8 Six-hump camel function
    9.2.9 Discrete examples
    9.2.10 An inequality on integers
    9.2.11 Integral points on elliptic curves
      9.2.11.1 Fermat's last theorem
      9.2.11.2 Taxicab number
    9.2.12 BelSyste as a verification tool

10 Conclusion
  10.1 Contributions
    10.1.1 Interval arithmetic
    10.1.2 Interval constraints
      10.1.2.1 Propagation
      10.1.2.2 Relational box-consistency
      10.1.2.3 Interval constraints with exploration
    10.1.3 Implementation
  10.2 Future work
    10.2.1 Dependency effect
    10.2.2 Parametric CSPs
    10.2.3 Criteria for choosing a consistency algorithm
    10.2.4 Confidence quantification
    10.2.5 Disjunctive constraints
    10.2.6 Iterative widening with exploration

Bibliography

Appendix A Source Code
  A.1 Box consistency
  A.2 Adaptive box consistency
  A.3 Dynamic box consistency
  A.4 Box satisfaction algorithms
  A.5 Default explorer and explorer with propagation
  A.6 Interval constraint global optimization solver with exploration

List of Tables

Table 9.1 Propagating and squashing in an interval constraint solver. The unit of Duration is seconds.
Table 9.2 Comparison between BC, ABC, and DBC. The unit of Duration is microseconds.
Table 9.3 Performance of four variants of the MS algorithm. The unit of D is seconds.
Table 9.4 Comparing ExplLB with squashLftC.

List of Figures

Figure 1.1 A flow chart indicating chapters' prerequisites.
Figure 2.1 Left bound squashing algorithm using interval arithmetic.
Figure 2.2 A squashing algorithm that uses interval arithmetic to obtain functional box consistency.
Figure 3.1 The algorithm MS. The function atomic specifies whether its box argument is atomic or not.
Figure 3.2 The algorithm MS. The function w has as value the width of its interval argument. The tolerance is assumed to be sufficiently large, as termination of the algorithm is not assured otherwise.
Figure 3.3 The algorithm MS. In case of success, an interval of the desired width has been found. Otherwise, a lower bound for the global minimum has been found that is the best possible one, given the expression for f and the precision of the arithmetic.
Figure 3.4 The algorithm MS. In case of success, an interval of the desired width has been found. Otherwise, an interval for the global minimum has been found that is best, given the expression for f and the precision of the arithmetic.
Figure 3.5 The algorithm MS. After termination, the best outer approximation to the ε-minimizer is the set of boxes left in the cover.
Figure 3.6 A terminal situation of the algorithm MS for outer approximation of the ε-minimizer.
Figure 3.7 The algorithm MS. After termination, the set of boxes left in the cover is the best inner approximation for the ε-minimizer.
Figure 3.8 A terminal situation of the algorithm MS for inner approximation of the ε-minimizer.
Figure 4.1 An algorithm that finds an inner and an outer approximation for System (1.2). The function atomic specifies whether its box argument is atomic or not.
Figure 4.2 The algorithm FS. The function atomic specifies whether its box argument is atomic or not.
Figure 4.3 The algorithm FS. The function atomic specifies whether its box argument is atomic or not.
Figure 4.4 The algorithm FS. The function atomic specifies whether its box argument is atomic or not.
Figure 4.5 An improved version of the algorithm FS. This version avoids processing boxes whose parents are proved to satisfy some of the conditions involved in the constrained global optimization problem.
Figure 4.6 The algorithm FPFS. In case of success, an interval of the desired width has been found. Otherwise, an interval for the global minimum has been found that is best, given the expression for f and the precision of the arithmetic.
Figure 4.7 The algorithm FPFS. In case of success, an interval of the desired width has been found. Otherwise, an interval for the global minimum has been found that is best, given the expression for f and the precision of the arithmetic.
Figure 4.8 The algorithm LPOFS. After termination, the set of boxes left in the cover is the best outer approximation to the ε-minimizer.
Figure 4.9 The algorithm LPIFS. After termination, the set of boxes left in the cover is the best inner approximation to the ε-minimizer.
Figure 5.1 A pseudo-code for GPA.
Figure 5.2 A system of non-linear inequalities without multiple occurrences of variables. The set of variables is partitioned into equivalence classes; an equality constraint asserts that its arguments are equal.
Figure 5.3 Hardware model for a CSP.
Figure 5.4 Left bound squashing algorithm using interval constraints.
Figure 6.1 A propagation algorithm based on recursive functions on power sets.
Figure 6.2 Dual propagation algorithm based on recursive functions on power sets.
Figure 6.3 The dual generic propagation algorithm.
Figure 6.4 A clustered propagation algorithm based on recursive functions on power sets.
Figure 6.5 Architecture of our implementation.
Figure 6.6 UML diagram for our implementation.
Figure 6.7 A graphical user interface for our implementation showing the modeling (clusters of constraints) and the intervals resulting from applying propagation on the user's CSP.
Figure 7.1 Illustration of BC steps.
Figure 7.2 Illustration of DBC steps.
Figure 7.3 Illustration of the boundary intervals removed by BC.
Figure 7.4 Illustration of the boundary intervals removed by DBC.
Figure 7.5 Structure of the dynamic box consistency algorithm with a tolerance.
Figure 7.6 A pseudo-code for the adaptive box consistency algorithm with a tolerance.
Figure 7.7 An improved box-consistency algorithm.
Figure 8.1 A visual for the steps involved in interval constraints with exploration.
Figure 8.2 Computing a lower bound for f* using exploration.
Figure 8.3 Architecture of interval constraints with exploration for global optimization.
Figure 8.4 Interval constraint algorithm with exploration. The function reduce applies propagation or squashing and makes use of the function explLB; it returns the reduced domain and the interval of f. The function explore explores the optimization problem and returns a point and a real number. The function w returns the width of the largest component of its argument.
Figure 8.5 Left bound stretching algorithm using interval constraints.
Figure 8.6 Zoomin algorithm using interval constraints. The function w gives the width of the largest component of its argument.
Figure 8.7 Zoomout algorithm using interval constraints.
Figure 8.8 An algorithm for proving the existence of solutions of a system.
Figure 8.9 A slicing algorithm.
Figure 9.1 Architecture of BelSyste.
Figure 9.2 Components of BelSyste and their dependencies.
Figure 9.3 The graph of a function, drawn using BelSyste.
Figure 9.4 The graph of a function, drawn using BelSyste.
Figure 9.5 A graph drawn using BelSyste.

List of Abbreviations

inf = the greatest lower bound
ABC = Adaptive box consistency algorithm
BC = Box consistency algorithm
CSP = Constraint satisfaction problem
DBC = Dynamic box consistency algorithm
GOP = Global optimization problem
ICSP = Interval constraint satisfaction problem

Notations

F: the feasible region.
f̄: the upper bound, found so far, for the global minimum.
𝔽: the set of floating-point numbers.
ℝ: the set of real numbers.
S: the ICSP representing the feasible region F.
f̂: the interval arithmetic evaluation of f associated with a given expression for f.
hullF(S): a function that takes a set of reals S and gives the smallest floating-point interval containing S.
hullR(S): a function that takes a set of reals S and gives the smallest real interval containing S.
f: the objective function.
f*: the global minimum.
I𝔽: the set of floating-point intervals.
Iℝ: the set of real intervals.

Glossary

Box: A Cartesian product of intervals.

Box consistency: Squashing all the components of a box until no narrowing is possible.

Box satisfaction: Computing a box that cannot be stretched to contain solutions and only solutions.

Box unsatisfaction: Computing a box that cannot be stretched to contain no solutions.

Constrained global optimization: A global optimization problem for which the feasible region is not a box.

Cover for S: A set of non-empty subsets of S such that their union is S.

Cover in S: A set of non-empty subsets of S.

Dependency effect: The excessive overestimation of the range of a function that arises when at least one of the variables occurs more than once in the expression of the function. For example, on X = [0, 1] the expression x − x gives X − X = [−1, 1], and not [0, 0].

Elementary operations: Simple operations such as +, −, ×, and ÷.

Endian: Computer architectures are characterized as big-endian or little-endian depending on how memory cells are arranged.

Exploration: The technique of using explorers to improve the speed of interval constraints.

Explorer: A polynomial-time algorithm that extracts informal information about the problem considered.

Feasible point: A point that satisfies all the conditions in a constrained global optimization problem.

Functional box consistency: Obtaining box consistency using interval arithmetic.

Inner approximation for an ε-minimizer: A set of boxes that are subsets of the ε-minimizer.

Iterative widening: A technique that starts from a point near a solution and tries to find a box that may contain a solution.

ε-minimizer: The set of points for which the objective function is within ε of the global minimum.

Minimizer: The set of points for which the objective function attains the global minimum.

Outer approximation for an ε-minimizer: A set of boxes whose union encloses the ε-minimizer.

Relational box consistency: Obtaining box consistency using interval constraints.

Reliable: Qualifies any algorithm with guaranteed accuracy. A reliable algorithm usually returns a set of boxes containing the solutions of the problem at hand. The results of a reliable algorithm are said to be reliable.

Rounding mode: To control the round-off errors of the arithmetic operations, the IEEE standard 754 specifies four rounding modes: to the nearest, to zero, to plus infinity, and to minus infinity.

Squashing/Narrowing/Pruning: Removing from the sides of a box the parts that are proved not to contain any solution.

Stretching: Enlarging a box to contain more solutions and only solutions.

Unconstrained global optimization: A global optimization problem for which the feasible region is a box.

Acknowledgement

I cannot thank my supervisor, Professor Maarten van Emden, enough for accepting me as his student. Coming from an educational system in which theories are put first, I was very lucky to be under his supervision. "Hello Belaid, I found this interesting puzzle, ...": with a smile on his face, this is how Professor van Emden starts when I drop by his office. Any time I meet him, he makes my day by enriching my knowledge. Full of ideas, full of patience, full of support, full of respect, .... Indeed, what a wise teacher you are. Without your help, your support, your encouragement, and your financial aid, I never would have completed the program. Without your technical discussions, suggestions, and enthusiastic supervision, this thesis could not have been completed.

I would like to warmly thank my co-supervisor, Professor Bill Wadge, for his insights, comments, and suggestions regarding my thesis. Many thanks to Professor John Ellis, Professor Fayez Gebali, and Professor Mohamed Ibnkahla for taking the time to read over the thesis, for their comments, and for their encouragement. I would also like to thank my colleagues in the Department of Computer Science for their help and support.

I am very grateful to my wife, Ghaida Hammado, for her love and support. I am also very grateful to all my friends and my teachers for their help during the course of my research. Finally, I am forever indebted to my mother, who taught me how to be strong, and to my father, who showed me how to be patient.

Dedication

To My Parents
To My Wife
To My Teachers
To My Friends

Chapter 1

Introduction

This chapter is intended to provide a general overview of the global optimization problem.

1.1 Motivation

Nowadays, optimization seems to be a necessity in almost every field. Various disciplines, from business to engineering, require some sort of optimization in order to improve the performance of everyday tasks. To achieve this goal, we need to develop a mathematical model consisting of a numerical objective function that depends on numerical decision variables that are subject to certain conditions. The global optimization problem is to find the values of the decision variables that satisfy the conditions and minimize the objective function not only locally but also globally.

A photon minimizes the time traveled between two points. More generally, the variational principle, from which several laws of physics can be derived and which states that the trajectory of a particle is the one that minimizes a certain quantity called action (see [11]), is an instance of the global optimization problem. In statistical mechanics, molecules tend to be distributed in a way that maximizes their entropy. A taxi driver minimizes the distance between his position and the destination. An engineer, when faced with many design options, tries to choose the optimal one. In any approximation, the error between a proposed model and a set of data must be minimized. The list of examples where optimization is involved is endless. As claimed in [37], almost every mathematical problem can be transformed into an optimization problem.

This betrays, of course, a narrow view of mathematics. But it is true that surprisingly many problems can be so transformed.

For the more realistic mathematical models, the global optimization problem is difficult to solve. Faithfully modeling real systems introduces objective functions with an unknown and large number of local minima, nonlinear conditions, and a mixture of discrete and continuous decision variables. In the presence of these attributes, numerical analysis has no satisfactory method to solve the global optimization problem. For a long time, it was thought that no numerical method could guarantee solving the global optimization problem in the presence of nonlinearity [56]. The reasoning behind such a thought was based on the finite precision of computers: there is no way to faithfully approximate the values of a function, especially when it is nonlinear, using a finite number of points. This argument is true when the function to be optimized is evaluated at sampled points, but when it is evaluated over a range the argument does not hold. Indeed, interval arithmetic and interval constraints are both capable of returning a satisfactory interval containing the global optimum.

1.2 The problem statement

Informally, the global optimization problem deals with finding the greatest lower bound, when it exists, of a given real-valued function in a set described by certain conditions. Formally, the global optimization problem is to find

    inf { f(x) : x ∈ F },                                   (1.1)

where inf stands for the greatest lower bound, the objective function f is a real-valued function from ℝⁿ to ℝ, and F, often called the feasible region, is a subset of ℝⁿ. The greatest lower bound of f, when it exists, is called the global minimum and is denoted by f*. The symbol ℝ denotes the set of real numbers, and ℝⁿ is the Euclidean n-space. If F is a Cartesian product of n intervals, then the global optimization problem is said to be unconstrained; otherwise it is constrained.

The restrictions we impose on f are the following:

- For the use of interval arithmetic, we suppose that f has an expression containing operations for which we have interval extensions.
- For the use of interval constraints, we suppose that f can be decomposed into constraints for which domain reduction operators are available.

These concepts are covered in the upcoming chapters. The set F is a subset of ℝⁿ described by a system of linear and nonlinear inequalities similar to System (1.2). Without loss of generality, we consider the generic system of both linear and nonlinear inequalities below:

    g₁(x₁, ..., xₙ) ≤ 0
    g₂(x₁, ..., xₙ) ≤ 0
    ...                                                     (1.2)
    gₘ(x₁, ..., xₙ) ≤ 0

where, for every i ∈ {1, ..., m}, gᵢ has the same restrictions as f. That is, in the case of interval arithmetic we suppose that gᵢ is defined by an expression containing operations for which interval extensions are available, and in the case of interval constraints, gᵢ can be decomposed into constraints for which domain reduction operators exist.

The optimization problem can be decomposed into the following subproblems: the fathoming problem and the localization problem.

Fathoming problem: This is concerned with finding what the global minimum is. That is, we compute the value f* (or an interval that contains it), and we are not concerned about the values of x at which such a value occurs.

Localization problem: In the localization problem, we are concerned with locating where the global minimum occurs.

That is, finding the minimizer, which is defined as

    { x ∈ F : f(x) = f* }.

Due to machine limitations, the minimizer cannot always be computed. Instead, however, we compute, for a suitable ε > 0, the ε-minimizer defined as

    { x ∈ F : f(x) ≤ f* + ε }.

We call special attention to the fact that we are dealing with global optimization and not local optimization. In local optimization, only the determination of local minima is desired.

1.3 Methods for solving global optimization problems

Global optimization methods can be classified into the following two main categories:

1. point methods, and
2. bounding methods.

The point methods are the classical methods for global optimization, which include deterministic and stochastic techniques. They are based on computing function values at sampled points. Most literature about such methods covers one of the following: Newton-like methods [12, 13], clustering methods [33, 56], evolutionary algorithms [17, 38], simulated annealing [34], statistical methods [43], tabu search [16], and hybrid methods that combine some of these (see for example [39]). The similarity among all these methods is that none of them guarantees the correctness of its results. In fact, these methods may not terminate, or may terminate with a result far from the global minimum. For an objective function with a large number of local minima, even the best point method can fail to spot any value in the minimizer and, in general, may get trapped in a local minimum that is not the global minimum.

The bounding methods, on the other hand, compute an interval that is proved to contain the global optimum. These methods can be classified into one of the following classes, depending on the way the bounds of the interval are computed:

1. Lipschitzian approach. This approach is based on decomposing the box on which the function is to be bounded into boxes in which the function is Lipschitzian. A function f is Lipschitzian in a box B if and only if there exists a constant L ≥ 0 such that for any x, y ∈ B,

       |f(x) − f(y)| ≤ L ||x − y||.

   If the value of the function at a point x ∈ B is given, then an interval containing the range of f over B is [f(x) − L·d, f(x) + L·d], where d = max { ||x − y|| : y ∈ B }. A small sketch of this bound appears at the end of this section.

2. Interval arithmetic. This approach uses interval arithmetic evaluation to obtain the bounds of f on a given box B. This evaluation is based on the notion of interval extension defined in the next chapter.

3. Interval constraints. In this approach, the bounds of the function are computed by applying the constraint propagation algorithm to a "moving" constraint system. This will be explained later.

The common characteristic of all bounding methods is that they can be implemented so that the computation is a proof of the result. Their power in solving any global optimization problem of the form (1.1) relies on bounding the value of the global minimum between two floating-point numbers. The global minimum is proved to lie between these two numbers; thus the qualification "bounding".

One interesting property of interval methods is that they return bounds even in cases where it is difficult to compute a point in the feasible set F. For example, in such cases, the interval given by the interval arithmetic evaluation of f over the initial box is an interval containing the global minimum. Because such an interval is in general too wide, we will suppose that F is specified by a system similar to System (1.2) in order to get sharper bounds for the global minimum.
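To make the Lipschitzian bound of item 1 concrete, here is a minimal C++ sketch. The function, its Lipschitz constant, and the box are illustrative choices of ours, not taken from this dissertation, and ordinary rounded arithmetic is used, so the enclosure is not rigorous in the interval arithmetic sense.

    #include <cmath>
    #include <cstdio>

    // Enclose the range of f over [lo, hi] from one sample f(c) and a
    // Lipschitz constant L: |f(x) - f(y)| <= L * |x - y| on the box.
    // Here f(x) = sin(x) + x/2, for which |f'(x)| <= 1.5 everywhere.
    int main() {
        double lo = 0.0, hi = 2.0;
        double c = 0.5 * (lo + hi);            // sample point
        double L = 1.5;                        // Lipschitz constant
        double d = std::fmax(c - lo, hi - c);  // max distance from c
        double fc = std::sin(c) + 0.5 * c;
        // [fc - L*d, fc + L*d] contains the range of f over [lo, hi]
        std::printf("range in [%g, %g]\n", fc - L * d, fc + L * d);
    }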

Since we are more interested in reliable computing, we will focus only on the bounding methods: optimization is so important that we must tackle it correctly, completely, and efficiently. Among bounding methods, we will only treat interval arithmetic and interval constraint methods. Even though they are implemented in several software packages (UniCalc, Filib++, Numerica, etc.), they still have certain aspects that can be improved to enhance their performance. This dissertation gives an overview of the deficiencies in these approaches, suggests improvements, and provides new approaches that can be used to tackle the global optimization problem in a more time-efficient way.

1.4 Layout of the dissertation

As we pointed out earlier, the bounding methods constitute the main topic of the dissertation, and they are presented in two parts.

The first part is titled "global optimization using interval arithmetic". In this part, we discuss interval arithmetic and its use to achieve global optimization. More precisely, interval arithmetic evaluation is used to implement the fathoming algorithm, the inner-approximation algorithm and the outer-approximation algorithm. These algorithms are derived from the Moore-Skelboe algorithm in a rational reconstruction manner similar to that of Lakatos [35].

The second part deals with global optimization using interval constraints. After reviewing the necessary notions of interval constraints, we present revised propagation algorithms. These algorithms are important in that they improve the time-efficiency of the usual propagation algorithm. As we will emphasize in this part, by a suitable choice of a "moving" constraint system, the global optimization problem can be solved by solving constraint satisfaction problems. In fact, as shown in [50], there is no difference between the global optimization problem and solving constraint satisfaction problems; they are even in the same complexity class.

Another important aspect of interval constraints is pruning. New approaches for pruning are discussed and compared to box-consistency and bound-consistency algorithms. We fully describe the idea behind the dynamic box-consistency and adaptive box-consistency algorithms. These algorithms are used to solve some constraint satisfaction problems as well as some global optimization problems.

In Chapter 8 we describe a new approach for tackling global optimization problems. We then conclude by presenting some experimental results, and by mentioning our contributions to the field of global optimization.

The rest of the dissertation is organized as follows:

- Part I: Global optimization using interval arithmetic. This part presents interval arithmetic and its use in global optimization. It is composed of the following chapters:

  1. Chapter 2: Interval arithmetic. This chapter serves to provide the necessary background and to establish the terminology used in interval arithmetic.
  2. Chapter 3: Unconstrained global optimization via interval arithmetic. Here, we discuss the Moore-Skelboe algorithm and its improvements. These improvements are implemented and used to solve several benchmarks.
  3. Chapter 4: Constrained global optimization. This chapter shows how interval arithmetic is used to solve constrained global optimization problems. As we will argue later, ensuring that the conditions of the global optimization problem are satisfied is difficult with interval arithmetic compared to interval constraints. This will lead us into interval constraints and their power.

- Part II: Global optimization using interval constraints. In this part we tackle global optimization problems using interval constraints.

  1. Chapter 5: Interval constraints. This chapter serves to provide the necessary background and to establish the terminology used in interval constraints.
  2. Chapter 6: Dual, clustered, deterministic, and selective propagation algorithms. In this chapter we discuss how to improve the propagation algorithm. Two algorithms are presented: dual propagation and propagation by selective initialization.
  3. Chapter 7: Dynamic box-consistency and adaptive box-consistency. In this chapter, we present two new algorithms for pruning, dynamic box-consistency and adaptive box-consistency, and contrast them with the existing box-consistency algorithms.
  4. Chapter 8: Global optimization via interval constraints. In this chapter, we show how interval constraints can solve both constrained and unconstrained global optimization problems. "Exploration" is used to improve the efficiency of interval constraints.

- Chapter 9: Experimental results. In this chapter, we present the experimental results of our implementation.

- Chapter 10: Conclusion. We conclude the dissertation with our main contributions to the field of global optimization as well as some suggestions for future work.

The flow chart in Figure 1.1 indicates the prerequisites of each chapter.

[Figure 1.1: A flow chart indicating chapters' prerequisites. Chapter 1 leads into Part I (Chapters 2, 3, and 4) and Part II (Chapters 5, 6, and 7), which feed into Chapter 8, followed by Chapter 9 (Experimental Results) and Chapter 10 (Conclusion).]

Chapter 2

Interval Arithmetic

In this chapter we briefly cover the main concepts of interval arithmetic and fix the terminological and notational conventions used in the next chapters. For a more comprehensive presentation of interval arithmetic, the reader may refer to [32, 46, 49, 20, 2].

2.1 From floating-point numbers to interval arithmetic

Computers can only manipulate a finite amount of information. This finite aspect prevents them from representing most of the real numbers. In fact, the set of real numbers is uncountable, while the set of numbers that can be processed by a computer is finite. To perform numerical computations involving real numbers, the IEEE 754 standard [1] defines floating-point numbers and describes the operations associated with them. More precisely, the standard specifies: number formats, basic operations, conversions, exceptional conditions, and rounding modes (round to nearest, round to zero, round to plus infinity, and round to minus infinity).

In conventional methods, a problem P involving real numbers is transformed, through a process called rounding, into a problem P′ involving floating-point numbers. The solutions of P′ are generally taken as a satisfactory approximation to the solutions of P. But, as proved in practice, this is not always the case. Because an operation over floating-point numbers can yield a real number that is not a floating-point number, there are many problems where the accumulation of repeated rounding errors affects the final results.
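The following small C++ program, ours and not part of the original text, illustrates the kind of error accumulation meant here: 0.1 has no exact binary floating-point representation, so every addition rounds, and the errors pile up instead of cancelling.

    #include <cstdio>

    int main() {
        // Each addition of 0.1 is rounded; after a million additions
        // the accumulated error is plainly visible.
        double sum = 0.0;
        for (int i = 0; i < 1000000; ++i) sum += 0.1;
        std::printf("%.17g\n", sum);  // slightly off from 100000
    }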

To overcome this, we enclose the problem P in a problem Q involving representable ranges of real numbers. Using suitable roundings, the solutions of Q are proved to contain those of P. An interesting consequence of this technique is that, instead of returning a floating-point number that more or less approximates an exact solution, we return a range containing it. Thus, numerical problems can be solved while bounding errors from all sources, including rounding errors, modeling errors and data uncertainty. This technique is interval arithmetic.

2.2 Advantages of interval arithmetic

Interval arithmetic was introduced to control errors, but that is not its only advantage; there are many more. Below we briefly present a few.

In interval arithmetic there is no need for exception handling, which is complicated in general. For instance, division by zero, which is considered an error in conventional methods, generates an unbounded interval such as [−∞, +∞] in interval arithmetic. Moreover, as we will show in the last chapter, the overflow that occurs in integer operations can be eliminated by means of interval arithmetic.

Interval arithmetic increases the opportunity for parallelism [37]. In fact, several algorithms based on interval arithmetic are parallel in nature, because they proceed by recursively decomposing regions into subregions and deleting the regions that are proved to contain no solution.

Interval arithmetic is also a very useful tool for sensitivity analysis. The beauty of computing with intervals is that it makes accuracy visible as the program progresses, thus increasing the chances of producing better models. More generally, interval arithmetic provides a simple framework for evaluating different mathematical models.

Another interesting property of algorithms that are based on interval arithmetic is that they are widely applicable. As opposed to conventional algorithms, which are developed for a restricted class of problems, interval arithmetic algorithms can be used for a wide class of problems.

A surprising implication of the use of interval arithmetic is the flexibility it offers in engineering systems. For instance, suppose that a system is specified by a certain number of unknown parameters (e.g., capacity, resistance, etc.) that satisfy certain conditions. A conventional method will only return a single value for each parameter. Because the values of these parameters must adhere to a certain standard, one of the computed values may not be available, thus making the system unrealizable. An interval method, on the other hand, returns an interval for each parameter. Obviously, the chances of finding practical values in such intervals are greater.

From our perspective, however, the most striking consequence of interval arithmetic is the concept of numerical provability. Using interval arithmetic, several equations can be proved to have either a solution or none. The graph of a function can be proved to be enclosed in a set of rectangles displayed on a computer screen [58, 40] (see the last chapter for an example). Safety simulations can be done correctly on finite-precision machines [40].

2.3 Basics of interval arithmetic

2.3.1 Intervals

In what follows, we briefly review some definitions on floating-point intervals. We refer the reader to [28] for more detail.

Let ℝ be the set of real numbers, and let 𝔽 be the set of floating-point numbers. Let M be the greatest finite positive floating-point number. For every finite floating-point number x in 𝔽, let x⁺ denote the smallest floating-point number greater than x, and x⁻ the greatest floating-point number less than x. By convention, if x = M, then x⁺ = +∞; if x = −M, then x⁻ = −∞; if x = +∞, then x⁻ = M; and if x = −∞, then x⁺ = −M.

Given a real number a, the greatest floating-point number that is smaller than or equal to a is denoted by ⌊a⌋, and the least floating-point number that is greater than or equal to a is denoted by ⌈a⌉.
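On IEEE 754 hardware, x⁺ and x⁻ can be computed directly. The following C++ fragment, ours and given only for illustration, does so with std::nextafter.

    #include <cmath>
    #include <cstdio>
    #include <limits>

    // x+ and x- from the text, via std::nextafter.
    int main() {
        double x = 1.0;
        double up = std::nextafter(x, std::numeric_limits<double>::infinity());
        double dn = std::nextafter(x, -std::numeric_limits<double>::infinity());
        std::printf("x- = %.17g\nx  = %.17g\nx+ = %.17g\n", dn, x, up);
    }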

A real interval is a closed connected subset of the reals. In other words, it is a subset of ℝ that has one of the following forms: ∅, [a, b], (−∞, b], [a, +∞), or (−∞, +∞), where the bounds a and b are real numbers. The set of real intervals is denoted by Iℝ. When the bounds a and b are restricted to the set of finite floating-point numbers, we obtain floating-point intervals. The set of floating-point intervals is written I𝔽.

A box is a Cartesian product of intervals. The elements of a box are called points. A point p of a box B = X₁ × ... × Xₙ is specified by n real numbers called the components of p. The i-th component of p must be an element of Xᵢ. The point p can thus be denoted by (p₁, ..., pₙ), where pᵢ is the i-th component of p. Given a box B = [a₁, b₁] × ... × [aₙ, bₙ], the points of the form (c₁, ..., cₙ), where cᵢ is either aᵢ or bᵢ for every i ∈ {1, ..., n}, are called the corners of the box. A non-corner point of B is a point in B that is not a corner.

From now on, when we write "interval" without qualification we mean a floating-point interval. For every interval [a, b], we use lb([a, b]) for the left bound a, rb([a, b]) for the right bound b, and w([a, b]) = b − a for the width of [a, b]. The intervals of the form [x, x⁺], where x is a floating-point number, are called atomic or canonical intervals. Any interval of the form [x, x] is said to be degenerate.

For a given subset S of ℝ, we denote by hullR(S) the smallest real interval in Iℝ containing S, and by hullF(S) the smallest floating-point interval in I𝔽 containing S.

A notion that will be used extensively in the next two chapters is the notion of cover. It is defined as follows. Let k ≥ 1 be an integer. If {S₁, ..., Sₖ} is a set of non-empty subsets of a set S, then the set {S₁, ..., Sₖ} is called a cover in S. If, in addition, the union of S₁, ..., Sₖ is S, then the set {S₁, ..., Sₖ} is a cover for S.

2.3.2 Interval extensions and Inclusion functions

Let ∘ be one of the binary operations +, −, ×, ÷. The operation ∘ could be extended to real intervals as follows:

    X ∘ Y = { x ∘ y : x ∈ X and y ∈ Y },

where X and Y are two real intervals. But because the right-hand side of the equation above may not be a single real interval (consider, for instance, dividing by an interval that contains 0), we instead use

    X ∘ Y = hullR({ x ∘ y : x ∈ X and y ∈ Y }).

This extension is called the real interval extension of ∘. As an example, the real interval extensions of + and − are expressed respectively as

    X + Y = [lb(X) + lb(Y), rb(X) + rb(Y)]

and

    X − Y = [lb(X) − rb(Y), rb(X) − lb(Y)]

for every pair of real intervals X and Y. The real interval extensions of the elementary functions can be defined similarly.

A floating-point interval extension of the binary operation ∘ is defined as

    X ∘ Y = hullF({ x ∘ y : x ∈ X and y ∈ Y }),

where X and Y are two floating-point intervals. As an example, the floating-point interval extensions of + and − are respectively given by

    X + Y = [⌊lb(X) + lb(Y)⌋, ⌈rb(X) + rb(Y)⌉]

and

    X − Y = [⌊lb(X) − rb(Y)⌋, ⌈rb(X) − lb(Y)⌉]

for every pair of floating-point intervals X and Y. An easy way to numerically compute the bounds of the interval X + Y above is to set the rounding mode to minus infinity just before performing lb(X) + lb(Y), and to plus infinity just before performing rb(X) + rb(Y). The same can be done when computing the bounds of X − Y. This way of setting the rounding modes is usually called outward rounding. The reader may refer to [28] for an implementation of the other operations.

The floating-point interval extensions of the elementary functions can be defined similarly. In what follows, "interval extension" will be used to mean floating-point interval extension.

Given a function f from ℝⁿ to ℝ, an inclusion function of f is a function F from I𝔽ⁿ to I𝔽 such that for every B ∈ I𝔽ⁿ we have

    f(B) ⊆ F(B),

where f(B) is the range of f over B, that is, f(B) = { f(x) : x ∈ B }. If, in addition, F(B) = hullF(f(B)) for every B ∈ I𝔽ⁿ, then F is said to be minimal.

For a large class of functions, interval extensions turn out to provide a simple framework for constructing their inclusion functions. Indeed, given an expression for a real-valued function f(x₁, ..., xₙ) from ℝⁿ to ℝ involving the elementary operations mentioned earlier, an inclusion function F(X₁, ..., Xₙ) can be constructed by replacing, in the expression of f, each elementary operation by its extension and each variable xᵢ by the interval Xᵢ. Moreover, if each of the variables x₁, ..., xₙ occurs at most once in the expression of f, then F is minimal [23, 32, 46].

By the interval arithmetic evaluation of f we mean the inclusion function F we just described. Note that the result of the interval arithmetic evaluation of a function f depends on the expression chosen for f. The fact that the interval arithmetic evaluation of f is an inclusion function of f is commonly known as the fundamental theorem of interval arithmetic. For a formal way of treating expressions, functions, and interval arithmetic evaluations, the reader can refer to [63].
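As an illustration of these definitions, here is a minimal C++ interval type, a sketch of ours rather than the dissertation's implementation, that realizes the floating-point extensions of + and − with outward rounding via std::fesetround. It assumes the compiler does not fold or reorder the arithmetic across the rounding-mode changes (e.g., GCC with -frounding-math).

    #include <cfenv>
    #include <cstdio>

    struct Interval { double lb, rb; };  // a floating-point interval

    // X + Y = [down(lb X + lb Y), up(rb X + rb Y)]
    Interval add(Interval x, Interval y) {
        std::fesetround(FE_DOWNWARD);
        double lo = x.lb + y.lb;
        std::fesetround(FE_UPWARD);
        double hi = x.rb + y.rb;
        std::fesetround(FE_TONEAREST);
        return {lo, hi};
    }

    // X - Y = [down(lb X - rb Y), up(rb X - lb Y)]
    Interval sub(Interval x, Interval y) {
        std::fesetround(FE_DOWNWARD);
        double lo = x.lb - y.rb;
        std::fesetround(FE_UPWARD);
        double hi = x.rb - y.lb;
        std::fesetround(FE_TONEAREST);
        return {lo, hi};
    }

    int main() {
        Interval x{0.0, 1.0};
        Interval d = sub(x, x);  // the natural extension of x - x
        // prints [-1, 1], not [0, 0]: a preview of the dependency
        // effect discussed at the end of this subsection
        std::printf("[%g, %g]\n", d.lb, d.rb);
    }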

To obtain an interval that better approximates the range of f, we often make use of monotonicity, defined below. Let g be a function from a set D of sets to a set E of sets. The function g is said to be monotonic iff for every X and Y in D such that X ⊆ Y, we have g(X) ⊆ g(Y).

As a special case, we have the following. Let f be a function from ℝ² to ℝ, and let F be an inclusion function for f. F is monotonic iff for all intervals X ⊆ X′ and Y ⊆ Y′, we have F(X, Y) ⊆ F(X′, Y′).

If f is a function with an expression involving elementary operations, then the interval arithmetic evaluation associated with the expression of f is monotonic. From now on, the notation f̂ will denote the interval arithmetic evaluation of f with respect to a given expression of f.

We call special attention to the fact that the interval arithmetic evaluation of a function f depends on the expression for f. Moreover, the interval arithmetic evaluation of a function is not minimal in general, because of the dependency effect, an effect that can be described as follows. The dependency effect arises when at least one of the variables occurs more than once in the expression of f. To illustrate this effect, consider the function of one variable defined by the expression f(x) = x − x. Based on this expression, the interval arithmetic evaluation of f is f̂(X) = X − X. For X = [0, 1] we have f̂(X) = [−1, 1], whereas the range of f over X is {0}. Thus, the evaluation f̂ is not minimal. It is easy to verify that the minimal interval extension of f is the constant function given by g(X) = [0, 0].

2.4 Solving inequalities via interval arithmetic

We are now in a position to analyze the relation between interval arithmetic and solving inequalities. Without loss of generality, let us consider the following generic inequality:

    f(x₁, ..., xₙ) ≤ 0,                                     (2.1)

where the variables x₁, ..., xₙ are respectively in the intervals X₁, ..., Xₙ, and where the function f has an expression involving the elementary operations mentioned in the previous section. Suppose that Y is the interval arithmetic evaluation of f over X₁ × ... × Xₙ, i.e., Y = f̂(X₁, ..., Xₙ), where f̂ is the interval arithmetic evaluation of f associated with the expression of f.

The most useful fact about interval arithmetic is that, with only this information, we can already state three important observations about Inequality (2.1):

1. If lb(Y) > 0, then Inequality (2.1) has no solution in the box X₁ × ... × Xₙ.
2. If rb(Y) ≤ 0, then Inequality (2.1) has the whole box X₁ × ... × Xₙ for solutions.
3. If lb(Y) ≤ 0 and rb(Y) > 0, then Inequality (2.1) may or may not have a solution in the box X₁ × ... × Xₙ.

These three cases are illustrated in the sketch below.
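The following self-contained C++ fragment, ours and purely illustrative, turns the three observations into a decision procedure for a constraint f(x) ≤ 0; the choice f(x) = x − 1 is an assumption made for the example.

    #include <cstdio>

    struct Interval { double lb, rb; };

    enum Answer { NO_SOLUTION, ALL_SOLUTIONS, UNDECIDED };

    // The three observations for f(x) <= 0, given Y = F(X).
    Answer classify(Interval y) {
        if (y.lb > 0.0) return NO_SOLUTION;     // observation 1
        if (y.rb <= 0.0) return ALL_SOLUTIONS;  // observation 2
        return UNDECIDED;                       // observation 3
    }

    // F(X) for the illustrative choice f(x) = x - 1 (exact in doubles
    // here, so no outward rounding is needed for this example).
    Interval F(Interval x) { return {x.lb - 1.0, x.rb - 1.0}; }

    int main() {
        std::printf("%d\n", classify(F({2.0, 3.0})));   // 0: no solution
        std::printf("%d\n", classify(F({-1.0, 0.0})));  // 1: all solutions
        std::printf("%d\n", classify(F({0.0, 2.0})));   // 2: undecided
    }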

The last case deserves further study. In fact, from our point of view, it is the most interesting case. Here is a complete list of instances that can happen in this case:

1. If the intervals X₁, ..., Xₙ are all degenerate, then, as far as interval arithmetic is concerned, we cannot decide whether Inequality (2.1) has a solution or not.
2. If at least one of the intervals X₁, ..., Xₙ is canonical and the others are canonical or degenerate, then, as far as interval arithmetic and the expression of f are concerned, we cannot decide whether a non-corner point in the box X₁ × ... × Xₙ is a solution or not. We emphasize that such a claim is based on a given expression for f. Using another expression may yield a different outcome. However, if f̂ is minimal, then, as far as interval arithmetic is concerned, there is no way to overcome such undecidability. The corners of the box can be checked separately.
3. If we have none of the above, then, because of monotonicity, we need to split the box X₁ × ... × Xₙ into smaller boxes and look for solutions in each one of them.

In this section, we addressed the question of finding solutions of an inequality using interval arithmetic. Since searching for all solutions often produces an exponential running time, an interesting alternative is to find, based on interval arithmetic, a small box that encloses the solutions of an inequality. This we do next.

2.5 Functional box consistency

Without loss of generality, let us consider Inequality (2.1). Let X₁, ..., Xₙ be intervals. We denote by Y the interval f̂(X₁, ..., Xₙ).

As we mentioned in the previous section, if rb(Y) ≤ 0, then the points in the box X₁ × ... × Xₙ are all solutions to Inequality (2.1). Hence, the box X₁ × ... × Xₙ is the smallest box containing the solutions to Inequality (2.1). If lb(Y) > 0, then there is no solution to Inequality (2.1) inside the box X₁ × ... × Xₙ.

If, however, lb(Y) ≤ 0 < rb(Y), then it may be possible to make the box X₁ × ... × Xₙ smaller by reducing one of the intervals while keeping the others fixed. In fact, assume that for some floating-point number c with c > lb(X₁), we have

    lb(f̂([lb(X₁), c], X₂, ..., Xₙ)) > 0.

In this case, the interval X₁ can be improved to [c, rb(X₁)]. This process of reducing intervals is called squashing, pruning, or box narrowing [31, 8, 18, 27, 7, 60]. In qualitative terms, squashing can be viewed as a reduction operator that shrinks a box by removing, from its sides, the parts that are proved not to contain any solution.

Before proceeding, we make a brief digression to define the notion of functional box consistency. Inequality (2.1) is said to be functionally box-consistent with respect to X₁ iff

    X₁ = hullF({ x₁ ∈ X₁ : lb(f̂([x₁, x₁], X₂, ..., Xₙ)) ≤ 0 }).

Inequality (2.1) is said to be functionally box-consistent with respect to the box X₁ × ... × Xₙ iff it is functionally box-consistent with respect to each Xᵢ, for every i ∈ {1, ..., n}.

The usual way to achieve functional box consistency is by shrinking the left and then the right bound of each interval. This process is repeated for each Xᵢ until no interval can be further pruned. The function that shrinks the left bound of the interval X₁ is given in Figure 2.1.

(59) 2.4 Solving inequalities via interval arithmetic. 17. expression of  . The most useful fact about interval arithmetic is that, with only this information, we can already state three important observations about Inequality (2.1). 1. If lb. )   ,. 2. If rb. ).

(60). ..  for. then Inequality (2.1) has no solution in the box . ,. then Inequality (2.1) has the whole box. . . . . . solutions. 3. If lb. ).

(61). the box . . and rb.    ,. . then Inequality (2.1) may or may not have a solution in.  .. The last case deserves further study. In fact, from our point of view, it is the most interesting case. Here is a complete list of instances that can happen with this case. 1. If the intervals  . . . are all degenerate, then, as far as interval arithmetic. is concerned, we cannot decide whether the Inequality (2.1) has a solution or not. 2. If at least one of the intervals  . . . is canonical and the others are canoni-. cal or degenerate , then, as far as interval arithmetic and the expression of  are concerned, we cannot decide whether a non-corner point in the box . . . is a solution or not. We emphasize that such a claim is based on a given expression for  . Using another expression may yield a different outcome. However, if. . is. minimal, then, as far as interval arithmetic is concerned, there is no way to overcome such an undecidability. The corners of the box can be checked separately. 3. If we have none of the above, then, because of monotonicity, we need to split the box . . . into smaller boxes and look for solutions in each one of them.. In this section, we addressed the question of finding solutions of an inequality using interval arithmetic. Since searching for all solutions often produces an exponential running time, an interesting alternative is to find, based on interval arithmetic, a small box that encloses the solutions of an inequality. This we do next..

2.5 Functional box consistency

Without loss of generality, let us consider Inequality (2.1). Let X₁, …, Xₙ be intervals. We denote by Y the interval F(X₁, …, Xₙ). As we mentioned in the previous section, if rb(Y) ≤ 0, then all the points in the box X₁ × ⋯ × Xₙ are solutions to Inequality (2.1). Hence, the box X₁ × ⋯ × Xₙ is the smallest box containing the solutions to Inequality (2.1). If lb(Y) > 0, then there is no solution for Inequality (2.1) inside the box X₁ × ⋯ × Xₙ.

If, however, lb(Y) ≤ 0 < rb(Y), then it may be possible to make the box X₁ × ⋯ × Xₙ smaller by reducing one of the intervals while keeping the others fixed. In fact, assume that for some floating-point number m with m > lb(X₁), we have

    lb(F([lb(X₁), m], X₂, …, Xₙ)) > 0.

In this case, the interval X₁ can be improved to [m, rb(X₁)]. This process of reducing intervals is called squashing, pruning, or box narrowing [31, 8, 18, 27, 7, 60]. In qualitative terms, squashing can be viewed as a reduction operator that shrinks a box by removing, from its sides, the parts that are proved not to contain any solution.

Before proceeding, we have to make a brief digression to define the notion of functional box consistency. Inequality (2.1) is said to be functionally box-consistent with respect to Xᵢ iff

    lb(F(X₁, …, Xᵢ₋₁, [lb(Xᵢ), lb(Xᵢ)⁺], Xᵢ₊₁, …, Xₙ)) ≤ 0  and
    lb(F(X₁, …, Xᵢ₋₁, [rb(Xᵢ)⁻, rb(Xᵢ)], Xᵢ₊₁, …, Xₙ)) ≤ 0,

where a⁺ denotes the smallest floating-point number greater than a, and a⁻ the greatest floating-point number smaller than a. Inequality (2.1) is said to be functionally box-consistent with respect to the box X₁ × ⋯ × Xₙ iff it is functionally box-consistent with respect to Xᵢ for every i ∈ {1, …, n}.
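Read operationally, the definition checks that neither of the two canonical end slices of Xᵢ can be rejected by a single interval evaluation. The following sketch is ours, not the dissertation's code; it represents a box as a list of (lb, rb) pairs, takes any interval evaluation F over such boxes, and forms the canonical end intervals with math.nextafter:

import math

def box_consistent_wrt(F, box, i):
    # functional box consistency of f <= 0 with respect to X_i
    a, b = box[i]
    left = list(box)
    left[i] = (a, math.nextafter(a, math.inf))    # canonical interval [a, a+]
    right = list(box)
    right[i] = (math.nextafter(b, -math.inf), b)  # canonical interval [b-, b]
    return F(left)[0] <= 0.0 and F(right)[0] <= 0.0

def box_consistent(F, box):
    # functional box consistency with respect to the whole box
    return all(box_consistent_wrt(F, box, i) for i in range(len(box)))

For instance, with F as in the earlier sketch for x² + y² − 1 and the box [(-1.0, 1.0), (-1.0, 1.0)], both end slices of each variable yield a left bound of at most 0, so the inequality is functionally box-consistent with respect to that box.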

The usual way to achieve functional box consistency is by shrinking the left and then the right bound of each interval. This process is repeated for each Xᵢ until no interval can be further pruned. The function that shrinks the left bound of an interval is given in Figure 2.1; a similar function, which we call squashRghtI, can be used to narrow the right bound. Both functions are used to squash the whole box, as shown in Figure 2.2.

Floating-point number function squashLftI([a, b])
    if (lb(F([a, b])) > 0) then return b;   //no solution anywhere in [a, b]
    if ([a, b] is canonical) return a;      //[a, b] cannot be narrowed further
    //[a, b] is not canonical, so has a midpoint
    m := midpoint of [a, b];
    m′ := squashLftI([a, m]);
    if (m′ < m) return m′;                  //m′ is the best left bound
    //the left bound was improved all the way up to m,
    //so maybe m can be improved some more
    return squashLftI([m, b]);

Figure 2.1. Left bound squashing algorithm using interval arithmetic. Here F([a, b]) stands for the interval arithmetic evaluation of f with the interval being squashed set to [a, b] and the remaining intervals kept fixed.

Because the process of pruning becomes slow as the interval [a, b] gets smaller, Granvilliers, Goualard and Benhamou proposed what they call weak box consistency [18]. This consistency is similar to the box consistency explained above, except that the squashing process stops when the width of [a, b] is smaller than a predetermined value ε. As we shall see later, both consistencies can be improved even further using the dynamic box-consistency and adaptive box-consistency algorithms.
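In Python, the recursion of Figure 2.1 might be rendered as follows. This is our sketch: "canonical" is approximated with math.nextafter, and F1 stands for the interval evaluation of f as a function of the interval being squashed, the remaining intervals being held fixed:

import math

def squash_left(F1, a, b):
    if F1((a, b))[0] > 0.0:
        return b                          # no solution anywhere in [a, b]
    if math.nextafter(a, math.inf) >= b:
        return a                          # [a, b] is canonical
    m = (a + b) / 2.0                     # [a, b] has a midpoint
    m1 = squash_left(F1, a, m)
    if m1 < m:
        return m1                         # m1 is the best left bound
    # the left bound was improved all the way up to m,
    # so maybe m can be improved some more
    return squash_left(F1, m, b)

def F1(x):
    # naive interval evaluation of f(x) = x*x - 1 over x = (a, b)
    a, b = x
    lo = 0.0 if a <= 0.0 <= b else min(a * a, b * b)
    return (lo - 1.0, max(a * a, b * b) - 1.0)

print(squash_left(F1, -2.0, 2.0))   # approximately -1: [-2, -1) holds no solution

Replacing the canonicity test by a width test (stop when b − a < ε) yields the weak box consistency variant described above.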

Box function squashBoxI(X₁, …, Xₙ)
    initialize the set S to {X₁, …, Xₙ};
    while (S is not empty)   //the box X₁ × ⋯ × Xₙ can perhaps be reduced further
        remove Xᵢ from S;
        lftDone = false; rghtDone = false;
        //squash the left and the right bounds of Xᵢ
        while (not lftDone or not rghtDone)
            if (not lftDone)
                lb(Xᵢ) := squashLftI(Xᵢ);
                lftDone = true;
                if (lb(Xᵢ) is reduced) rghtDone = false;
            if (not rghtDone)
                rb(Xᵢ) := squashRghtI(Xᵢ);
                rghtDone = true;
                if (rb(Xᵢ) is reduced) lftDone = false;
        if (Xᵢ is reduced)
            add every Xⱼ with j ≠ i to S;
    //the box X₁ × ⋯ × Xₙ cannot be reduced any further;
    //the functional box consistency is thus achieved
    return X₁ × ⋯ × Xₙ;

Figure 2.2. A squashing algorithm that uses interval arithmetic to obtain functional box consistency.
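The loop of Figure 2.2 can be sketched in the same style. The code below is our simplified illustration, not the dissertation's implementation: each bound is squashed to a fixpoint in a single recursive call, so the lftDone/rghtDone bookkeeping of the figure collapses into one outer loop that revisits the variables until nothing changes:

import math

def squash_left(G, a, b):
    # narrow the left bound of [a, b] for the constraint G(...) <= 0
    if G((a, b))[0] > 0.0:
        return b
    if math.nextafter(a, math.inf) >= b:
        return a
    m = (a + b) / 2.0
    m1 = squash_left(G, a, m)
    return m1 if m1 < m else squash_left(G, m, b)

def squash_right(G, a, b):
    # the mirror image of squash_left, narrowing the right bound
    if G((a, b))[0] > 0.0:
        return a
    if math.nextafter(a, math.inf) >= b:
        return b
    m = (a + b) / 2.0
    m1 = squash_right(G, m, b)
    return m1 if m1 > m else squash_right(G, a, m)

def squash_box(F, box):
    # F maps a list of (lb, rb) pairs to an enclosure of f over the box
    box = [tuple(iv) for iv in box]
    changed = True
    while changed:                        # until no interval can be pruned
        changed = False
        for i in range(len(box)):
            a, b = box[i]
            G = lambda xi, i=i: F(box[:i] + [xi] + box[i + 1:])
            a2 = squash_left(G, a, b)
            b2 = squash_right(G, a2, b) if a2 < b else a2
            if (a2, b2) != (a, b):
                box[i] = (a2, b2)
                changed = True
    return box

def isqr(x):
    a, b = x
    lo = 0.0 if a <= 0.0 <= b else min(a * a, b * b)
    return (lo, max(a * a, b * b))

def F(box):
    # interval evaluation of f(x, y) = x^2 + y^2 - 1
    x2, y2 = isqr(box[0]), isqr(box[1])
    return (x2[0] + y2[0] - 1.0, x2[1] + y2[1] - 1.0)

print(squash_box(F, [(-2.0, 2.0), (-2.0, 2.0)]))   # roughly [(-1, 1), (-1, 1)]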

Chapter 3

Unconstrained global optimization via interval arithmetic

We are now in a position to tackle the unconstrained global optimization problem in the light of interval arithmetic. After briefly presenting the Moore-Skelboe (1974), Ichida-Fujii (1979), Hansen (1980), and Hansen-Sengupta (1983) algorithms, and after revealing their weaknesses, we shall present new improvements of these algorithms. More precisely, we shall discuss how the fathoming algorithms settle the fathoming problem, and how the inner-approximation and the outer-approximation algorithms handle the localization problem. We shall also present theorems that support our algorithms.

Throughout this chapter, we consider the problem (1.1), where the objective function f can be evaluated in interval arithmetic, that is, F exists, and where the set X is a box in ℝⁿ.

3.1 Motivation

Interval arithmetic algorithms for global optimization are examples of a larger group of algorithms, referred to as branch and bound algorithms. Branch and bound applied to interval computations is a general search method that is best viewed as an iteration of two steps:

• The bounding step uses computed bounds to discard subboxes of X.

• The branching step generates a set of smaller subproblems by dividing boxes that were not discarded in the bounding step.

The subproblems created in the branching step are then tackled until all solutions are found.

Interval arithmetic algorithms for global optimization are based on the same idea. They use interval arithmetic to obtain a left and a right bound for f on a given box. Depending on the size of the intervals that we provide as arguments, we might end up with a wide interval, especially in the presence of the dependency effect. As a result, we subdivide the box X into small boxes and then evaluate f over each of these (this is the branching step). A list L is used to store the boxes and the bounds of f on each box. Since a right bound of f on a box is an upper bound for the global minimum, the least right bound f̃ can be used to discard boxes. In fact, any box B for which the left bound of F(B) is greater than f̃ cannot contain any point of the minimizer and, thus, can be discarded (this is the bounding step). At any time, the interval with the least left bound contains the global minimum, and the union of the boxes remaining in the list L encloses the minimizer.

Several algorithms [53, 30, 22, 51, 23] have been developed based on this core idea. The algorithm in [30], due to Ichida and Fujii, improves on the algorithm due to Skelboe [53] by using a value of the objective function inside a box to get a tighter upper bound for the global minimum. The two algorithms order the boxes in the list using the left bound of F on each box. The union of the boxes returned by both algorithms is in general too large: the boxes kept in the list may have subboxes that can be discarded. Hansen [22] proposed an algorithm that improves on both algorithms by using a different ordering of the boxes. More specifically, the boxes are ordered either by their widths or by their ages in the list. This allows the algorithm to discard every box that needs to be discarded and to converge to all the points in the minimizer [51].

A common feature of the three algorithms is that in the bounding step a box is either discarded or kept as a whole. Hansen and Sengupta suggested a further improvement that allows the remaining boxes to be reduced [23]. This gives rise to the branch and narrow technique. This technique prunes not only by eliminating the boxes for which the left bound of F is greater than the upper bound for the global minimum, but also by reducing the boxes not pruned. Hansen and Sengupta use the inequality f(x) ≤ f̃, where f̃ is the best known upper bound for the global minimum, to narrow the boxes not discarded by branch and bound.

The four algorithms, however, have the following drawbacks, which limit their capabilities:

• They all mix the fathoming problem and the localization problem. In fact, for some optimization problems we may know the global minimum and we only need to deal with the localization problem. For instance, an equation of the form g(x) = 0 can be viewed as a global optimization problem with the objective function g², for which the global minimum is equal to 0. For this particular example, the only concern is the localization problem, and not the fathoming problem.

• They need to be extended to include outer approximation and inner approximation for the ε-minimizer. In several cases the ε-minimizer is more interesting than the minimizer. In these cases, we need to find boxes that are contained within the ε-minimizer (inner approximation), and boxes that cover it (outer approximation). This is a useful tool for sensitivity analysis and optimal engineering system design.

• They do not say whether their results are the best results that one can compute using interval arithmetic, a fixed expression of f, and a specified arithmetic precision. In fact, the properties of the four algorithms are stated only in the limit for infinite running time, infinite memory, and infinite precision of the floating-point number system (see for example Ratschek and Rokne [51]). In this chapter we find properties that can be verified in actual executions of the algorithms mentioned above. Moreover, as we shall show later, our proposed improvements return the best results that can be computed with a given expression for f and a given hardware.

These drawbacks can be eliminated by incorporating the following improvements [62]:

• Separating the fathoming problem and the localization problem.

• Computing inner and outer approximations for the ε-minimizer.

Both of these are discussed in detail in the next sections.

3.2 The fathoming problem

The fathoming problem can be solved by answering the following two questions:

• First fathoming problem: what is the best possible lower bound for f*?

• Second fathoming problem: what is the best possible interval for f*?

By the "best possible" lower bound or interval we mean, respectively, the lower bound or the interval that cannot be improved using interval arithmetic, given an expression for f and an arithmetic precision. The second fathoming problem is more general than the first: the left bound of the best possible interval for f* is the best possible lower bound for f*.

In his essay "Proofs and Refutations" [35], Lakatos presented a rational reconstruction of the proof of Euler's conjecture about polyhedra. He gave, based on an imaginary dialogue between a teacher and his students, the evolution of an initial idea that stemmed from Cauchy to the final proof of the conjecture. Lakatos, in addition, presented the actual history of the evolution of the proof as footnotes. Using the same idea, we shall present a rational reconstruction of an algorithm that solves the second fathoming problem.

In what follows, we assume that we have a cover C = (B₁, …, Bₖ) of X, and that the cover C is ordered by non-decreasing left bounds of F over the boxes B₁, …, Bₖ. Initially, we consider the cover C = (X). Such a cover contains the minimizer. We shall consider algorithms that change a given cover containing the minimizer to one that has a smaller union and still contains the minimizer.

let C = (X);
//invariant: the cover C = (B₁, …, Bₖ) is in non-decreasing order of the
//left bounds of F, and C is a cover for X (thus, it is a cover for the minimizer)
while (not atomic(B₁))
    remove B₁ from the cover;
    split B₁ and insert the results of splitting into the cover in
        non-decreasing order according to the left bound of F;
output lb(F(B₁));

Figure 3.1. The algorithm MS₁. The function atomic(B) specifies whether its box argument B is atomic or not.

3.2.1 The first fathoming problem

In [53], Skelboe presented an algorithm that computes a lower bound for f*.¹ Essentially, the algorithm is as shown in Figure 3.1. The algorithm MS₁ stores the boxes to be processed in a cover C, sorted using the left bound of F over the boxes (Skelboe's heuristic). The algorithm MS₁ splits the first box B₁ of the sorted cover C until it becomes atomic.

¹The algorithm presented by Skelboe appeared in 1974, and was intended for computing tighter bounds for the range of a rational function. His algorithm first searched for a lower bound and then an upper bound for the range of f over a box. The termination criterion of the algorithm was that the relative error between the current and the previous lower bounds has to be less than a predetermined ε. This algorithm was then improved by Moore in 1976 [45]. Moore mainly used Skelboe's heuristic of ordering the boxes in the list, as well as monotonicity tests, the centered form, and Krawczyk's version of Newton's method. Today, the original Skelboe algorithm is often referred to as the Moore-Skelboe algorithm. Note that the phrase "global optimization" was not used in [53] and [45]. In the context of interval arithmetic, the phrase "global optimization" first appeared in Hansen's paper [21], in Skelboe's paper [54], and in Ichida and Fujii's paper [30].

The properties of this algorithm are stated in Theorem 1.

Theorem 1. Algorithm MS₁ terminates and outputs the best possible lower bound for f*.

Proof. At each step of the algorithm MS₁, the list C is a cover for X. This implies that f* has to be in F(B) for a certain box B in the cover C. Since the boxes in the cover C are ordered in non-decreasing order of the left bounds of F, it follows that lb(F(B₁)) ≤ f*, where B₁ is the first box in the cover C. The termination of the algorithm MS₁ results from the set of floating-point numbers being finite: after finitely many splits the first box becomes atomic. The output is the best possible lower bound because B₁ is atomic, lb(F(B₁)) is a lower bound for f*, and f* can be equal to lb(F(B₁)).

In addition to returning the best lower bound for f*, the algorithm MS₁ subdivides the boxes in an adaptive way, as explained below. Let us consider the operation "split" in the algorithm MS₁. It assumes that the results are nonempty, are both proper subsets, and have a union that is equal to the box that was split. The box to be split is a Cartesian product of n intervals, so the split can be done in up to n different ways. The number of boxes in the cover created by the algorithm typically becomes so large that the cover cannot be stored, so one should be careful what to split. It is desirable to split a box most likely to contain the minimizer. The heuristic chosen by the algorithm MS₁ is to split the box B for which F(B) has the least left bound. The subdivision resulting from the splits in this algorithm is adaptive: boxes far away from the global minimum tend not to be split. This should go some way towards avoiding covers with too many boxes.

Although the algorithm MS₁ has an efficient heuristic for choosing the box to split, it only partially solves the fathoming problem.² In practice, it is often the case that an interval containing f* is desired. As we shall see, by modifying the termination criterion and the output of the algorithm MS₁, we obtain an algorithm that solves the second fathoming problem.

²The Moore-Skelboe algorithm is an example of algorithms that solve only the first fathoming problem. Such algorithms, however, are not sufficient for global optimization; returning a lower bound does not give much information about the location of f*.
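The figures leave "split" abstract. One common concrete choice, given here as our own illustration, is bisection along a widest side of the box; it satisfies the three properties stated above and picks one of the up to n possible splitting directions:

def split(box):
    # box: a non-atomic list of (lb, rb) pairs; returns two subboxes,
    # both proper subsets of box, whose union equals box
    i = max(range(len(box)), key=lambda j: box[j][1] - box[j][0])
    a, b = box[i]
    m = (a + b) / 2.0
    return (box[:i] + [(a, m)] + box[i + 1:],
            box[:i] + [(m, b)] + box[i + 1:])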

let C = (X);
//invariant: the cover C = (B₁, …, Bₖ) is in non-decreasing order of the
//left bounds of F, and C is a cover for X (thus, it is a cover for the minimizer)
while (w(F(B₁)) > ε)
    remove B₁ from the cover;
    split B₁ and insert the results of splitting into the cover in
        non-decreasing order according to the left bound of F;
output lb(F(B₁)) and rb(F(B₁));

Figure 3.2. The algorithm MS₂. The function w has as value the width of its interval argument. The tolerance ε is assumed to be sufficiently large, as termination of the algorithm is not assured otherwise.

3.2.2 The second fathoming problem

Instead of directly computing the best possible interval for f*, we deal with finding an interval for f* of width at most ε, for some real ε ≥ 0. As we shall see later, the best possible interval can be obtained by setting ε to zero.

From the algorithm MS₁, we know that f* is in F(B₁). A reasonable criterion for termination is therefore to stop when the width of F(B₁) is less than ε. See the algorithm MS₂ shown in Figure 3.2.

The algorithm MS₂ has some positive features. In the first place, it may find a sufficiently narrow interval for f*. Second, it does this by subdividing X in an adaptive way, as in the case of the algorithm MS₁.
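As a concrete rendering of both algorithms, here is our Python sketch, not the dissertation's code, of the common loop. A heap ordered by lb(F(B)) plays the role of the sorted cover; the width test of Figure 3.2 is the termination criterion, and a tolerance near zero approximates the atomicity test of MS₁. Echoing the caption of Figure 3.2, eps is assumed large enough for the loop to terminate:

import heapq

def moore_skelboe(F, X, eps):
    # F maps a box, a tuple of (lb, rb) pairs, to an enclosure of f over it;
    # X is the initial box; the heap is ordered by the left bound of F
    counter = 0                            # tie-breaker: boxes are never compared
    heap = [(F(X)[0], counter, X)]
    while True:
        _, _, B = heap[0]                  # first box: least left bound of F
        lo, hi = F(B)
        if hi - lo <= eps:                 # width criterion of MS2
            return (lo, hi)                # an interval enclosing f*
        heapq.heappop(heap)
        # split B along a widest side and insert both halves back
        i = max(range(len(B)), key=lambda j: B[j][1] - B[j][0])
        a, b = B[i]
        m = (a + b) / 2.0
        for half in ((a, m), (m, b)):
            counter += 1
            sub = B[:i] + (half,) + B[i + 1:]
            heapq.heappush(heap, (F(sub)[0], counter, sub))

def F(box):
    # naive interval evaluation of f(x) = x*x over a one-dimensional box
    a, b = box[0]
    lo = 0.0 if a <= 0.0 <= b else min(a * a, b * b)
    return (lo, max(a * a, b * b))

print(moore_skelboe(F, ((-1.0, 2.0),), 1e-6))   # a narrow interval around f* = 0

The heap discipline realizes Skelboe's heuristic: the box with the least left bound of F is always processed first, so boxes far from the global minimum tend never to be split.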
