Academic year: 2021
Improving the Cooling Schedule of Simulated Annealing

Redouane Dahmani

Institute for Informatics, University of Amsterdam, Science Park 904, 1098XH Amsterdam, The Netherlands

redouane1997@hotmail.com

Abstract. In an earlier study, nature-inspired iterative optimization algorithms were used to reconstruct paintings from polygons. One of those algorithms was simulated annealing, which unfortunately performed poorly. In this study, the simulated annealing algorithm is adjusted to try to achieve better performance by making two kinds of changes to its cooling schedule. First, the parameter settings of the original cooling schedule are changed. Second, new cooling schedules are introduced: geometric, linear, cosine, linear reheat, sigmoid and staircase. All cooling schedules used in this study are shown to perform better than the cooling schedule of the earlier study.

Keywords: Paintings · Polygons · Simulated annealing · Cooling schedules

1 Introduction

Paauw and Van den Berg [11] researched the usage of three nature-inspired iterative optimization algorithms. The goal of their study was to approximate images of paintings by constructing them from stacked semi-transparent colored polygons. To achieve this goal, features of these polygons are optimized, such as the locations of the vertices, the color and the opacity of each polygon. The three algorithms used to optimize these features are the stochastic hillclimber, simulated annealing and the plant propagation algorithm. Simulated annealing is the center of attention in this study.

The simulated annealing procedure of Paauw and Van den Berg works as follows: initially, a single random polygon constellation is created by randomly generating polygons and placing them on a black canvas. After the initialisation, one polygon is mutated at every iteration. The mutation is chosen randomly from: move a vertex, transfer a vertex from one polygon to another, change the color of a polygon, and change the drawing index of a polygon. After every iteration the mean squared error is calculated over each pixel's color values. The calculated error value is compared with the error value of the previous state. If the error value improves,


the new state will be accepted. If the error value worsens, the new state can still be accepted, but with a probability that decreases as the difference in error value increases or as more iterations pass. The exact probability is calculated with a function that relies on the cooling schedule of Geman & Geman [6].

This research takes up a challenge stated in the study of Paauw and Van den Berg: finding better results by changing parameter settings in the simulated annealing algorithm. Their simulated annealing led to poor results in comparison to the hillclimber, and the believed cause of these poor results is the cooling schedule their simulated annealing used.

In this research, eight different new cooling schedules will be tested to try to improve the results found in the original paper. The new cooling schedules are found in two ways. First, literature about the simulated annealing algorithm and studies in which it has been used is researched; this should surface other frequently used cooling schedules. Second, the cooling schedule used in the original research and the ones found in the studied literature are analysed and adjusted; these adjustments yield new cooling schedules as well. All of these cooling schedules are implemented in the simulated annealing algorithm to try to find better results. This first part leads to the research question: which parameter settings or cooling schedules will improve the results of the simulated annealing algorithm?

In the second part of this research, statistics of the final polygon constellations are investigated. Two statistics are considered. The first is the frequency of the number of vertices per polygon, i.e. the number of triangles, squares, pentagons, etc. The second is the alpha value of the polygons per layer, i.e. the opacity of the polygons per drawing index. These results on the number of vertices per polygon and on the alpha values may contain interesting information for future studies.

All the programmatic resources and algorithms that are used (programmed in Python 3.6.3) are publicly available [5].

2 Related Work

Simulated annealing was introduced independently by Kirkpatrick, Gelatt & Vecchi in 1983 [7] and Černý in 1985 [4]. This nature-inspired algorithm derives from the research field of statistical mechanics, the central discipline of condensed matter physics. The algorithm originates from the simulation of an annealing process. The goal of annealing is to obtain a solid in a low energy state, which usually means a highly ordered state. First, the material is heated so that the atoms are free to rearrange. Then the material is cooled slowly until it freezes into a highly ordered state. Simulated annealing imitates this method of slowly decreasing the temperature [10]. In simulated annealing the temperature determines the probability that a mutation that worsens the overall


objective value will still be accepted. Obtaining a perfect cooling schedule is a burdensome process, as decreasing the temperature too rapidly might lead to local minima [12].

Simulated annealing has shown its worth in science, as various combinatorial optimization problems have been solved with this algorithm. Problems such as the travelling salesman problem [4], the chip routing problem [7], the scheduling problem [2] and the job shop scheduling problem [9] have been solved with good results.

A great variety of cooling schedules have been used in different studies, varying from a linear temperature decrease to schedules that also allow reheating. Yet the only cooling schedule that has been theoretically proven to find a global minimum as the number of iterations goes to infinity is the one defined by Geman & Geman [6]. But other cooling schedules might work better under certain constraints, such as a limited time frame or a maximum amount of computing power. For example, a few cooling schedules introduced by Abramson [2] are fairly simple, but they do have practical advantages over the Geman & Geman cooling schedule.

Even though the simulated annealing analogy is derived from a complex physical subject, the algorithm is fairly simple to implement. The basics of the algorithm are simple, but there is plenty of room for adjustment within it; for example, using a different cooling schedule might produce totally different results. There is a large number of cooling schedules, with great variety among them. All of this makes simulated annealing a very versatile algorithm.

3 Paintings from Polygons

For this study nine famous paintings are used, represented as bitmaps. All of them are converted to a size of 240x180 pixels, either in landscape or portrait orientation. The used paintings (Fig. 1) are: Mona Lisa (1503) by Leonardo da Vinci, The Starry Night (1889) by Vincent van Gogh, The Kiss (1908) by Gustav Klimt, Composition with Red, Yellow and Blue (1930) by Piet Mondriaan, The Persistence of Memory (1931) by Salvador Dali, Convergence (1952) by Jackson Pollock, a portrait of Johann Sebastian Bach (1746) by Elias Gottlieb Haussman, Lady with an Ermine (around 1489–1490) by Leonardo da Vinci and Salvator Mundi (1500) by Leonardo da Vinci. Most of these paintings were also used in the previous study of Paauw and Van den Berg (Salvator Mundi and Lady with an Ermine were added in this study), because together they span a wide range of ages, countries and artistic styles.

All polygons have properties that influence the final constellation constructed from them. First of all, every polygon has a color represented by four byte-sized RGBA values: a red, green and blue channel, and an alpha channel for the opacity of the polygon. Each of these channels has a value between 0 and 255. Second, every polygon holds coordinates for each of its vertices. These coordinate values range between 0 and the maximum dimension


Fig. 1. The paintings that are used in this study. From left to right and top to bottom: The Starry Night (1889) by Vincent van Gogh, a portrait of Johann Sebastian Bach (1746) by Elias Gottlieb Haussman, Salvator Mundi (1500) by Leonardo da Vinci, Lady with an Ermine (around 1489–1490) by Leonardo da Vinci, Composition with Red, Yellow and Blue (1930) by Piet Mondriaan, Convergence (1952) by Jackson Pollock, The Persistence of Memory (1931) by Salvador Dali, The Kiss (1908) by Gustav Klimt and Mona Lisa (1503) by Leonardo da Vinci.

of the painting, which is either 180 or 240. The total numbers of vertices and polygons are predetermined: in this study a total of v = 1000 vertices is used, and the number of polygons is v/4, so 250. Every polygon has at least 3 vertices and at most v/4 + 3 vertices (this happens when all but one polygon are triangles; the one non-triangle polygon then has 250 + 3 = 253 vertices). Lastly, all polygons have an index number that places them in an order determining the drawing order of the constellation. Polygons with a lower index number can be overlapped by polygons with a higher index number, but not the other way around. This index number ranges from 0 to 250, the total number of polygons.

After a constellation of polygons is rendered to a bitmap, the correctness of this bitmap can be calculated using the mean squared error (MSE), following this formula:

$$\mathrm{MSE} = \frac{\sum_{i=1}^{180 \cdot 240 \cdot 3} (\mathit{rendered}_i - \mathit{objective}_i)^2}{180 \cdot 240} \qquad (1)$$

Where $\mathit{rendered}_i$ is the red, green or blue value of a pixel in the rendered bitmap, ranging from 0 to 255, and $\mathit{objective}_i$ is the corresponding value of the original painting's bitmap. From this it follows that the best possible MSE is found when each pixel in the rendered bitmap contains exactly the same color values as the objective bitmap, resulting in an MSE of 0. The worst possible MSE occurs when the whole objective bitmap consists only of extreme color values, either 0 or 255, and the rendered bitmap consists of the exact opposite values, resulting in an MSE of 255² · 3 = 195075.
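Eq. (1) can be computed directly on the two bitmaps; a minimal NumPy sketch (the array names are illustrative, not from the original code):

```python
import numpy as np

def mse(rendered: np.ndarray, objective: np.ndarray) -> float:
    """Mean squared error of Eq. (1): sum of squared differences over all
    180*240*3 color values, divided by the number of pixels (180*240)."""
    diff = rendered.astype(np.int64) - objective.astype(np.int64)
    return float((diff ** 2).sum() / (180 * 240))

# Worst case: objective all 0, rendered all 255 -> 255^2 * 3 = 195075
worst = mse(np.full((180, 240, 3), 255, dtype=np.uint8),
            np.zeros((180, 240, 3), dtype=np.uint8))
```

The cast to a wider integer type avoids uint8 overflow when subtracting pixel values.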

As shown in the study of Paauw and Van den Berg [11] it is possible to set useful bounds with the formula:

$$S = \alpha \cdot (240 \cdot 180)^{v} \cdot (256^{4})^{v/4} \cdot \left(\tfrac{v}{4}\right)! \qquad (2)$$

Where the variable α represents the number of ways the vertices can be distributed over the different polygons, $(240 \cdot 180)^{v}$ represents the positioning of every vertex, $(256^{4})^{v/4}$ represents every color combination each polygon can have, and $(v/4)!$ represents all possible drawing orders of the polygons.

Also shown in the study of Paauw and Van den Berg is the size of the state space of the objective bitmap, which can be calculated as:

$$256^{3 \cdot 240 \cdot 180} \approx 7.93 \cdot 10^{312107} \qquad (3)$$

This formula neglects rotations and symmetry. Using the equation above, one can calculate that 39328 vertices could create approximately $1.81 \cdot 10^{312109}$ constellations; this number of vertices can be used as a lower bound. As an upper bound, one can use 4 vertices per pixel to create every single bitmap constellation, which would require 240 · 180 · 4 = 172800 vertices.

Even though the state space and the minimum required number of vertices are very large, Paauw and Van den Berg used three heuristic algorithms to come as close to the goal as possible. However, their simulated annealing did not perform well in comparison to their hillclimber and plant propagation algorithm. But a few things in their simulated annealing can be changed to improve the results, which will be explained later.

4 Simulated Annealing

Every run starts with a random constellation of polygons placed on a black canvas. Every polygon is assigned 3 vertices; the remaining vertices are randomly distributed over the polygons. The x and y coordinates of every vertex are random values within the dimensions of the painting. Finally, every polygon's RGBA values are randomized between 0 and 255.


At every iteration a mutation is performed on a random polygon. There are 4 different mutation types; which one is used at an iteration is randomly decided. The different mutation types are:

1. Change Color: randomly chooses one of the RGBA values and changes it into a random value between 0 and 255.

2. Move Vertex: randomly chooses one vertex and changes its x and y coordinate values into random values between 0 and 180 or 0 and 240, depending on the corresponding dimension size.

3. Transfer Vertex: randomly chooses two polygons, p1 and p2, of which p1 is not a triangle. From p1 a random vertex gets chosen and deleted. The deleted vertex will be placed in between two random neighboring vertices of p2.

4. Change Drawing Order: randomly chooses a polygon and randomly changes its drawing index value to a value between 0 and the number of polygons, in this case 250. If the new index value is lower than the current index value, all index values of the polygons between the new and current index value are increased by 1. If the new index value is higher than the current index value, all index values of the polygons between the current and new index value are decreased by 1.
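As an illustration of the index shifting in mutation 4, Change Drawing Order can be sketched with a Python list held in drawing order; removing and reinserting the polygon shifts the intermediate indices by one, as described above (function and variable names are illustrative, not from the original code):

```python
import random

def change_drawing_order(polygons: list) -> None:
    """Move a random polygon to a random drawing index, in place.
    pop() and insert() shift every polygon between the old and new
    position by exactly one index."""
    old = random.randrange(len(polygons))
    new = random.randrange(len(polygons))
    poly = polygons.pop(old)
    polygons.insert(new, poly)

# demo: the drawing order stays a permutation after the mutation
order = list(range(250))
change_drawing_order(order)
```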

A simulated annealing run starts with the random initialization described above; from then on it mutates the polygons. After every mutation a new constellation is made and its MSE is calculated. If this new constellation has a better objective value than the previous constellation, i.e. a lower MSE, the mutation is accepted and this new constellation is the starting point for the next iteration. Mutations that lead to a constellation with a worse objective value than the previous constellation are not always accepted; whether they are depends on the acceptance probability, calculated as $e^{-\Delta MSE / T}$, where ΔMSE is the difference in MSE between the current and the new constellation, and T is the temperature determined by the cooling schedule that is used. One can see that the acceptance probability increases as the MSE difference becomes smaller or the temperature becomes higher. Which cooling schedules are used will be discussed later on. This equation originates from condensed matter physics, where it is known as the Boltzmann factor.
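The accept/reject decision can be sketched as follows (a minimal sketch; the function name is illustrative):

```python
import math
import random

def accept(delta_mse: float, T: float) -> bool:
    """Metropolis acceptance rule: always accept improvements; accept
    worsenings with probability exp(-delta_mse / T), the Boltzmann factor."""
    if delta_mse <= 0:      # lower MSE is better, so always accept
        return True
    if T <= 0:              # frozen: reject every worsening mutation
        return False
    return random.random() < math.exp(-delta_mse / T)
```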

In the previous study of Paauw and Van den Berg a cooling schedule from Geman & Geman [6] was used: $T = \frac{c}{\log(i+1)}$, where c is a constant corresponding to the highest possible energy barrier to overcome. As calculated earlier by Paauw and Van den Berg, this highest possible energy barrier is 195075, so they used this value for the constant c.


5 Cooling Schedules

5.1 Geman & Geman

For this study, the original cooling schedule of Paauw and Van den Berg is used as the baseline to which all other cooling schedules are compared. So the first cooling schedule is the Geman & Geman cooling schedule with c = 195075:

$$T_i = \frac{c}{\log(i + 1)} \qquad (4)$$

Figure 2 shows that the starting temperature is about 650000 and the final temperature, at 1 million iterations, is about 32500. This means that virtually every mutation will be accepted throughout the run, even if the MSE worsens a lot, because the probability of accepting a mutation that worsens the MSE stays very close to 1. One could say that this cooling schedule behaves much like a random walk, where every mutation is accepted regardless of the difference in MSE. This was also found by Geman & Geman [6] and Aarts [1]. But instead of discarding the Geman & Geman cooling schedule for this study, the c value has been changed to create new cooling schedules that might produce better results.

Changing the c parameter to a lower value lowers the temperature throughout the whole run (Fig. 2). Experimenting with different values of c yielded two interesting values to use instead of the original. First, a c value of 1 was used. This results in a beginning temperature of about 3 and an end temperature of about 0.15, a big difference with the c = 195075 temperatures. In this case the chance of accepting mutations that worsen the MSE is very low, and it becomes smaller and smaller throughout the run. For this c value, the algorithm behaves much like a hillclimber, which accepts no mutations that worsen the MSE at all. Second, a c value of 50 was used. The beginning temperature is then around 165 and the end temperature around 8. As a result, mutations with a small MSE worsening are still accepted with a fairly high probability throughout the run, while the acceptance probability for larger MSE worsenings drops much more.
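A sketch of this schedule and its c variants; the base-10 logarithm is assumed here, since it reproduces the reported starting temperatures:

```python
import math

def geman_geman(i: int, c: float = 195075.0) -> float:
    """Geman & Geman cooling schedule T_i = c / log10(i + 1), Eq. (4).
    c = 195075 is the original setting; c = 1 and c = 50 are the
    variants studied here."""
    return c / math.log10(i + 1)
```

For example, `geman_geman(1)` gives roughly 648000 for c = 195075, about 166 for c = 50, and about 3.3 for c = 1, matching the temperatures described above.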

5.2 Geometric & Linear

To put the cooling schedules of this study to the test, some cooling schedules that are frequently used and perform well in other research [13] have to be included. The geometric and linear cooling schedules are mathematically very simple.

The geometric cooling schedule uses the function:

$$T_{i+1} = \alpha \cdot T_i \qquad (5)$$

Where α is 0.99999 and the starting temperature $T_0$ is chosen as 1000. At this temperature the material is "molten": it accepts virtually every worsening of the MSE due to high acceptance probabilities for every value


Fig. 2. The Geman & Geman cooling schedule, with different values for the c variable.

of MSE difference. This seems to be a good starting point for simulated annealing, as found by Kirkpatrick [8]. For this reason, every cooling schedule from now on has its $T_0$ set to 1000.

The linear cooling schedule uses the function:

$$T_i = T_0 - \frac{T_0}{i_{max}} \cdot i \qquad (6)$$

Where $i_{max}$ is the maximum number of iterations, $10^6$ in this study. This function results in a linear drop of temperature throughout the iterations, as seen in figure 3.
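Both schedules are one-liners; a sketch with the parameter values used in this study ($T_0 = 1000$, α = 0.99999, $i_{max} = 10^6$):

```python
T0 = 1000.0
ALPHA = 0.99999
I_MAX = 10**6

def geometric(i: int) -> float:
    """Geometric schedule T_{i+1} = alpha * T_i, i.e. T_i = T0 * alpha^i."""
    return T0 * ALPHA ** i

def linear(i: int) -> float:
    """Linear schedule, Eq. (6): drops from T0 to 0 over i_max iterations."""
    return T0 - (T0 / I_MAX) * i
```

With these values the geometric schedule ends the run at roughly T ≈ 0.045, while the linear schedule ends exactly at 0.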


5.3 Reheating cooling schedules

It is also possible to use cooling schedules that do not monotonically decrease over time; increasing the temperature during the run might result in better performance than only decreasing it. Boese and Kahng [3] studied the difference between "best-so-far" algorithms, which return the best result found, and "where-you-are" algorithms, which return the last result found. They found that for "best-so-far" algorithms the best cooling schedules are not always monotonically decreasing. As this study uses a "best-so-far" algorithm as well, a reheating cooling schedule might result in better performance.

The first reheating cooling schedule is the cosine cooling schedule. It follows a simple cosine function, with parameters such that $T_0 = 1000$, the final temperature is zero and the function completes 9.5 cycles before ending:

$$T_i = 500 \cdot \cos\!\left(\frac{i}{16753}\right) + 500 \qquad (7)$$

The second reheating cooling schedule is the "linear reheat" function. This function lowers its temperature linearly to 0 like the linear cooling schedule, but reheats to half the previous beginning temperature after reaching 0, as seen in Fig. 4. This is done every $10^5$ iterations, resulting in 9 reheats.

Fig. 4. The cosine and the linear reheat cooling schedules.
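The two reheating schedules can be sketched as follows; `linear_reheat` interprets "reheat to half the previous beginning temperature" literally, which is an assumption about the exact implementation:

```python
import math

T0 = 1000.0

def cosine(i: int) -> float:
    """Cosine schedule, Eq. (7): starts at 1000, completes 9.5 cycles
    over 10^6 iterations and ends at (approximately) 0."""
    return 500.0 * math.cos(i / 16753.0) + 500.0

def linear_reheat(i: int) -> float:
    """Drops linearly to 0 within each 10^5-iteration segment, then
    restarts at half the previous segment's starting temperature."""
    segment, pos = divmod(i, 10**5)
    start = T0 / 2 ** segment       # 1000, 500, 250, ...
    return start * (1 - pos / 10**5)
```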

5.4 Other cooling schedules

The sigmoid cooling schedule follows a sigmoid function given by:

$$T_i = \frac{1000}{1 + e^{\,k (i - i_{max}/2)}} \qquad (8)$$

where k is a steepness constant. This cooling schedule decreases slowly in temperature during the run, with its inflection point at the middle of the run.

The staircase cooling schedule keeps the temperature constant during every 100000 iterations, after which it drops by $T_0/9 \approx 11.11\%$ of $T_0$. This way the temperature is 0 for the final 100000 iterations. The idea for a staircase cooling schedule is inspired by Kirkpatrick, Gelatt and Vecchi [7] and Strobl [14]. They stated that the temperature around a specific heat has to be kept steady long enough to reach thermal equilibrium, as a phase transition could happen below that specific heat. Such a transition makes the material solidify partly, locking in all imperfections in the solidified part.

Fig. 5. The sigmoid and the staircase cooling schedules.
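A sketch of both schedules; the sigmoid steepness `K` is an illustrative assumption (chosen so the curve starts near 1000 and ends near 0), while the staircase follows the description above:

```python
import math

T0 = 1000.0
I_MAX = 10**6
K = 2e-5  # assumed steepness; not stated in the text

def sigmoid(i: int) -> float:
    """Sigmoid schedule, Eq. (8), with its inflection point at the
    middle of the run."""
    return T0 / (1.0 + math.exp(K * (i - I_MAX / 2)))

def staircase(i: int) -> float:
    """Constant for each 10^5-iteration step, dropping by T0/9 per step,
    so the final 10^5 iterations run at temperature 0."""
    step = min(i // 10**5, 9)
    return T0 - step * (T0 / 9.0)
```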

6 Experiments and Results

All the cooling schedules were run 5 times for every painting, with $10^6$ iterations per run. The numbers of polygons and vertices are 250 and 1000, respectively. The results of all runs per painting are averaged and normalized to obtain the final MSE results per painting, shown in figure 6. The normalization is done as follows:

$$x_{normalized} = \frac{x - x_{min}}{x_{max} - x_{min}} \qquad (9)$$

The results in figure 7 are, per painting, the run with the lowest final MSE out of the five runs.
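The min-max normalization of Eq. (9) can be sketched as:

```python
def normalize(values):
    """Min-max normalization, Eq. (9): maps the lowest value to 0 and
    the highest to 1."""
    lo, hi = min(values), max(values)
    return [(x - lo) / (hi - lo) for x in values]

# e.g. normalize([200.0, 400.0, 300.0]) -> [0.0, 1.0, 0.5]
```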

Observing the results, a repeating pattern in the average performance of the cooling schedules can be seen. The Geman & Geman cooling schedule with a c value of 1 performs best on all paintings. It is followed by the geometric cooling schedule in second place, the linear reheat cooling


Fig. 6. Mean results over five runs for all nine cooling schedules on all nine paintings. Notably, the same performance ranking between the cooling schedules can be seen for all paintings.

schedule in third place and the Geman & Geman cooling schedule with a c value of 50 in fourth place. These top four cooling schedules lie relatively close to each other in comparison to the other cooling schedules. In fifth, sixth and seventh place are the sigmoid, linear and cosine cooling schedules, respectively; these three also lie relatively close together. Above these lies the staircase cooling schedule, which performed relatively badly. Finally, the cooling schedule used in the study of Paauw and Van den Berg is the worst performing one for all paintings.


Fig. 7. Best results of five runs for all nine cooling schedules on all nine paintings. Per cooling schedule, some repeating characteristics in the MSE decrease can be seen between the different paintings.

A remarkable observation is the distribution of the vertices per polygon (Fig. 8). Comparing the beginning and final constellations reveals a repeating characteristic for all cooling schedules: the number of triangles in the end constellation is higher than in the beginning constellation, and the number of all other polygon types is lower.

The mean alpha values do not seem to show a repeating pattern between the cooling schedules (Fig. 9). However, when comparing the top three performing cooling schedules, a very obvious characteristic can be seen. The results show that the higher layers have a lower alpha value in comparison to the lower


Fig. 8. Mean frequency of vertices per polygon over five runs, i.e. the mean number of triangles, squares, pentagons, etc. in a constellation. A repeating pattern between the begin and end constellations can be seen.

layers. This characteristic is not as apparent for the other cooling schedules, so it might only be apparent for better performing cooling schedules.

7 Discussion and Future Work

In this study an attempt was made to improve the performance of the simulated annealing algorithm used in the study of Paauw and Van den Berg. This was done by changing parameter settings of the cooling schedule used in their study, and


Fig. 9. Mean alpha value of polygons per layer over five runs. Remarkably, the top three performing cooling schedules show the same characteristic of higher layers having lower alpha values, especially when comparing the end constellations with their begin constellations.

by implementing new cooling schedules. This goal has been achieved, though with varying performance per cooling schedule. The fact that the cooling schedules "Geman & Geman with a c value of 1", "geometric", "linear reheat" and "Geman & Geman with a c value of 50" outperformed all other cooling schedules might be due to being in a colder state for longer times.

There are some unexplained characteristics observable in figure 7. One of these is the rapid decrease of the MSE for the linear cooling schedule in roughly the final 100000 to 150000 iterations. This rapid decrease seems to happen for every


Fig. 10. The paintings after simulated annealing runs, each with a different cooling schedule. From left to right and top to bottom: The Starry Night (cooling schedule: Linear, MSE: 4307.8), a portrait of Johann Sebastian Bach (cooling schedule: Geometric, MSE: 585.6), Salvator Mundi (cooling schedule: Sigmoid, MSE: 2193.2), Lady with an Ermine (cooling schedule: Geman & Geman with c = 50, MSE: 2168.7), Composition with Red, Yellow and Blue (cooling schedule: Geman & Geman, MSE: 97.6), Convergence (cooling schedule: Geman & Geman with c = 195075, MSE: 30079.7), The Persistence of Memory (cooling schedule: Cosine, MSE: 6675.6), The Kiss (cooling schedule: Staircase, MSE: 9948.8) and Mona Lisa (cooling schedule: Linear reheat, MSE: 1102.1).

painting used in this study. It is still unclear what causes this effect, but it might be caused by some thermal threshold: beyond some point, the painting cannot be optimized further at temperatures above that threshold. This could be an interesting investigation for future studies. The cooling schedules used in this study are only a fraction of all possible cooling schedules. It might be interesting for future research to study the performance of other cooling schedules, or to adjust the ones from this study, to achieve even better performance.

The cost function used in this study might not be the most suitable one. For example, when a state is reached in which every pixel has the exact opposite color values of the objective, the MSE evaluation yields the worst


objective value possible, whilst to human perception this inverted picture may look better than another constellation with a lower MSE that consists only of random shapes. This topic might be studied in future research, perhaps by using human perception to evaluate an achieved constellation.

References

1. Aarts, E., Korst, J., & Michiels, W. (1997). Simulated annealing. In Search methodologies (pp. 187-210). Springer, Boston, MA.

2. Abramson, D., Krishnamoorthy, M., & Dang, H. (1999). Simulated annealing cooling schedules for the school timetabling problem. Asia Pacific Journal of Operational Research, 16, 1-22.

3. Boese, K. D., & Kahng, A. B. (1994). Best-so-far vs. where-you-are: implications for optimal finite-time annealing. Systems & control letters, 22(1), 71-78.

4. Černý, V. (1985). Thermodynamical approach to the traveling salesman problem: An efficient simulation algorithm. Journal of Optimization Theory and Applications, 45(1), 41-51.

5. Dahmani, R. (2020). Paintings From Polygons Simulated Annealing. GitHub repository, https://github.com/RedRedouane/PaintingsFromPolygonsSA/releases/latest

6. Geman, S., & Geman, D. (1984). Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, (6), 721-741.

7. Kirkpatrick, S., Gelatt, C. D., & Vecchi, M. P. (1983). Optimization by simulated annealing. science, 220(4598), 671-680.

8. Kirkpatrick, S. (1984). Optimization by simulated annealing: Quantitative studies. Journal of statistical physics, 34(5-6), 975-986.

9. Van Laarhoven, P. J., Aarts, E. H., & Lenstra, J. K. (1992). Job shop scheduling by simulated annealing. Operations research, 40(1), 113-125.

10. Rutenbar, R. A. (1989). Simulated annealing algorithms: An overview. IEEE Circuits and Devices Magazine, 5(1), 19-26.

11. Paauw, M., & van den Berg, D. (2019). Paintings, Polygons and Plant Propagation. In: Ekárt, A., Liapis, A., Castro Pena, M. (eds) Computational Intelligence in Music, Sound, Art and Design. EvoMUSART 2019. Lecture Notes in Computer Science, vol 11453. Springer, Cham.

12. Skiścim, C. C., & Golden, B. L. (1983, December). Optimization by simulated annealing: A preliminary computational study for the TSP. In Proceedings of the 15th Conference on Winter Simulation - Volume 2 (pp. 523-535). IEEE Press.

13. Strenski, P. N., & Kirkpatrick, S. (1991). Analysis of finite length annealing schedules. Algorithmica, 6(1-6), 346-366.

14. Strobl, M. A., & Barker, D. (2016). On simulated annealing phase transitions in phylogeny reconstruction. Molecular phylogenetics and evolution, 101, 46-55.
