
Emergent behaviour: When to follow rules and when to decide for yourself

Mathieu G.G. Bartels, 11329521

Bachelor thesis, Credits: 18 EC

Bachelor Opleiding Kunstmatige Intelligentie, University of Amsterdam

Faculty of Science, Science Park 904, 1098 XH Amsterdam

Supervisor: drs. A. (Toto) van Inge

Informatics Institute, Faculty of Science, University of Amsterdam, Science Park 904, 1098 XH Amsterdam

June 29th, 2018


CREATING EMERGENT BEHAVIOUR BY USING ARTIFICIAL PHYSICS IN SWARM ROBOTICS

A PREPRINT

Mathieu G.G. Bartels

Bachelor Opleiding Kunstmatige Intelligentie, University of Amsterdam

Science Park 904, 1098 XH Amsterdam

drs. A. (Toto) van Inge∗, Informatics Institute, University of Amsterdam, Science Park 904, 1098 XH Amsterdam

June 29, 2019

ABSTRACT

This research looks for a method to fly a swarm of drones in formation. The method explored uses physicomimetics to steer the drones into a formation in a decentralised way. Force-fields are generated by a set of empirically chosen functions, among which the Gaussian and funnel functions. Each of these functions serves its own purpose, and combined they produce the emergent behaviour of flying in formation. Both a naive controller and a PID controller are used to navigate the resulting fields.

Keywords Emergence · Artificial Physics · Swarm · Robotics · Physicomimetics

1 Introduction

A drone is a remote-controlled pilotless aircraft or missile. With batteries becoming increasingly lighter and more efficient, drones can fly longer and with more computational power, which in turn leads to a broader range of applications and usages. Drones are multifunctional machines because of their unique set of abilities: a multirotor drone can fly, move swiftly, stay steady and carry a small load. This freedom of movement makes a drone well suited for completing tasks that are hard for humans to reach or complete. In general, multiple drones can do more than a single drone. However, multiple drones are harder to control than a single drone. Controlling multiple drones is a coordinative challenge, but autonomy offers a solution: if drones can fly autonomously, it is no longer necessary to control each drone individually, only the swarm as a whole. Controlling a swarm is a hard challenge in the field of robotics. For possible solutions, scientists have looked to nature for inspiration, drawing on the solutions evolution has created for this complex task.

The first solution is found in biology. Biology shows that complex behaviour can arise from simple interactions between large numbers of relatively unintelligent agents [1]. This so-called emergence arises when a group of individuals shows behaviour resulting from uncoordinated interactions; moreover, none of the individuals can show this behaviour on their own. This phenomenon gives the insight that simple interactions can become complex when combined.

The other solution uses physicomimetics, a field between biomimetics, artificial physics and physics. Physicomimetics uses physics-based fields to compute movements for swarms. A big advantage of the physics-based method over traditional movement regulators is that it is uncomplicated and requires very little computational power, especially when used in a multi-agent system. Spears and Spears [2] give multiple reasons for a physics-based approach to controlling swarms and forming formations in swarm robotics. The most important reasons are:


• In physics, interactions between small elements create larger behaviours.

• Physics uses a reductionist approach, which allows macroscopic behaviour to be expressed clearly.

• Physical systems are naturally robust.

• Low-level physical systems can also self-organise and self-repair.

This research focuses on controlling a swarm consisting of drones. To control a swarm, it is necessary to form a formation and to complete task-driven challenges. The central question of this research is: How to control a swarm of drones? Included are two sub-questions: How to manoeuvre through a rectangle-shaped hole?, and: How to autonomously form formations? The hypothesis is that artificial physics, with the correct set of formulas, makes it possible to control the swarm and to set goals for it to reach.

1.1 Relevance of research

Scientific relevance

This research expands the scientific knowledge about controlling drones autonomously. There is plenty of research about both swarms and drone flight; this research intends to fill the gap that exists between these two subjects.

Social relevance

A self-controlled swarm is useful for a wide range of real-life applications: from search and rescue tasks to exploration tasks, from warfare to autonomous supply drops. Take, for example, the drone developed in previous research at the UvA that can localise and count animals in wildlife reserves [3]. The efficiency of these drones can be greatly increased if multiple drones are used that automatically fly in a formation, with the implied wider combined field of view.

2 Background

Since the scientific and social relevance of formation forming with drones has been widely recognised, plenty of research about it exists. Bayındır (2016) presents an overview of research done in the field of swarm robotics [4]. Most importantly for the present research, it shows the different methods of aggregation: methods for gathering several autonomous individuals in a common place. Bayındır discusses different methods for controlling a swarm. According to his research, the most used methods are the application of virtual forces (artificial physics), control of robot behaviour based on a probabilistic approach, and artificial evolution. All these approaches have their advantages and disadvantages.

For artificial physics, the significant advantage is the ability to choose the most useful features from physics and computational science. Physics is an uncomplicated method for determining movement and is computationally non-intensive, while rule sets can turn physical forces on and off at will. The disadvantage of artificial physics is that a practical implementation requires a range of local sensors, which are often not cost-effective [4].

For the probabilistic method, the advantage is the simplicity of the algorithm. This method requires a finite state machine with two states: the walk state and the wait state. A probabilistic formula, together with some local cues, decides which state of the finite state machine the drone should be in. These states correspond to different drone behaviours. The big disadvantage of this method is that it is nondeterministic because of the randomness factor. This creates a problem when unwanted behaviour shows up, because adjusting this behaviour can be a difficult task.

The last method, artificial evolution, uses algorithms like genetic algorithms and q-tournament to create an algorithm with the optimal scoring result. A scoring function judges the current state. When designing such a scoring function, one needs to award high scores to preferred states. A scoring function can be applied to any desired behaviour, for example a scoring function which scores how well a swarm ends up in a formation. The advantage of this method is that the preferred behaviour is found naturally, and with enough iterations of learning a neural net will converge to it. However, the disadvantage is that the swarm learns only a single behaviour. This makes the method non-adaptive, although a new preferred behaviour is learnable with a cost function that rewards it. This approach is explored in research by Wessels [5]. Wessels uses deep reinforcement learning to fly a swarm through a rectangle-shaped hole. His approach does not implement any form of formation forming; using reinforcement learning, the drones learn how to find a path to the other side of the door without colliding.
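As an illustration only, the walk/wait mechanism described above can be sketched in a few lines; the transition probabilities and the local cue below are placeholders rather than the scheme used in the cited work:

```python
import random

def next_state(state, local_cue, p_stop=0.3, p_go=0.3):
    """Two-state (walk/wait) probabilistic controller sketch.
    `local_cue` is a value in [0, 1], e.g. the local neighbour density;
    the probabilities are illustrative placeholders only."""
    if state == "walk" and random.random() < p_stop * local_cue:
        return "wait"   # crowded enough: stop and wait
    if state == "wait" and random.random() < p_go * (1.0 - local_cue):
        return "walk"   # too sparse: start walking again
    return state
```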

The present research chooses the artificial physics approach, mainly because of its adaptive, decentralised, and homogeneous properties. Adaptive behaviour is needed to navigate through different kinds of environments without offline retraining. Decentralisation helps reduce data traffic between drones, because drones make decisions individually. Because the drones are homogeneous – meaning they have the same hardware and software – it is straightforward to add and remove drones. The hurdle artificial physics has, namely needing a range of sensors on the drone, can be overcome by using Beep-Beep, a localisation method that uses sound [6]. This method requires a speaker, a microphone and a network connection. In his paper, Kersten concludes that this method is viable for short-range localisation, with an update rate between the sub-second and the three-second range. Kersten states: "The proposed methods in this thesis should be adequate for drone swarm applications, as long as only relative positional knowledge of in range drones is necessary." This research indeed only needs relative positional knowledge.

Artificial physics is not the first solution for aggregation to be inspired by nature. Already in 1987, Reynolds [7] studied birds and found a set of rules that describes their flocking behaviour. When he translated those rules into lines of code and ran them in a simulation, he found the same flocking behaviour he saw in nature. These so-called Reynolds flocking rules can still be used today as a naive solution for distributed swarm control.
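For illustration, the three rules Reynolds describes (separation, alignment and cohesion) can be sketched in a few lines of Python. The weights, radii and data layout below are illustrative assumptions, not the values from the original paper:

```python
import numpy as np

def reynolds_step(positions, velocities, r_neighbour=5.0, r_separation=1.5,
                  w_sep=1.5, w_ali=1.0, w_coh=1.0):
    """One flocking update in the spirit of Reynolds' rules: separation,
    alignment and cohesion. `positions` and `velocities` are (N, dim) float
    arrays; all radii and weights are illustrative placeholders."""
    accelerations = np.zeros_like(velocities)
    for i, p in enumerate(positions):
        dists = np.linalg.norm(positions - p, axis=1)
        neighbours = (dists > 0) & (dists < r_neighbour)
        if not neighbours.any():
            continue
        # Cohesion: steer towards the average position of the neighbours.
        cohesion = positions[neighbours].mean(axis=0) - p
        # Alignment: steer towards the average velocity of the neighbours.
        alignment = velocities[neighbours].mean(axis=0) - velocities[i]
        # Separation: steer away from neighbours that are too close.
        close = (dists > 0) & (dists < r_separation)
        separation = (p - positions[close]).sum(axis=0) if close.any() else 0.0
        accelerations[i] = w_sep * separation + w_ali * alignment + w_coh * cohesion
    return accelerations
```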

Cheng, Cheng and Nagpal (2005) researched an artificial physics solution and found a decentralised method for formation forming [8]. They used a contained-gas model to determine movements for small robots. In the present research, a 3D artificial physics model is used.

Once a possible formula or function has been established, the next step is to create a method to carry out the movement. A method that fits this project's case well is a proportional–integral–derivative (PID) regulator, because it is designed to create smooth movements. A PID controller continually calculates the error between the current value and the goal value; using a correction based on the proportional, integral, and derivative terms, this error is minimised. Formulas determine the force-fields. A PID regulator can use these formulas to get the information it needs to calculate the direction of movement; this information is the proportional error, the integral, and the derivative. The usage of this information is shown in figure 1.

Figure 1: A block diagram of the PID controller. The PID controller starts with r(t), the goal value, and subtracts y(t), the current value. The result is e(t), the error. The error is used by each of the blocks to determine u(t), the action that is taken. After the action, the current value is re-evaluated and the PID loop starts from the beginning.
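A minimal PID loop matching the block diagram of figure 1 could look as follows; the gains are placeholders that would need tuning for an actual drone:

```python
class PID:
    """Minimal PID controller following figure 1: e(t) = r(t) - y(t) feeds the
    proportional, integral and derivative blocks, whose sum is the action u(t)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd   # gains: assumed, to be tuned
        self.integral = 0.0
        self.prev_error = None

    def update(self, goal_value, current_value, dt):
        error = goal_value - current_value           # e(t)
        self.integral += error * dt                  # running integral of e(t)
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        # u(t): the control action applied to the drone's actuator.
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```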

3 Method

To test the hypothesis, a test environment was created. In this environment, different types of objects exist with different kinds of force-fields. The different types of objects are: the drone, the obstacle, the goal, and the drone centre. The drone centre is the geometric middle point of the drones. To find the correct function to describe a force-field, one needs to think about the purpose of that field. For example, the drone force-field needs two parts: one part needs to bring the drones closer; the other part needs to keep a safe distance. So the function needs an attractive and a repelling part. For this objective, the second-order derivative of the Gaussian function (1) fits the description well; its shape matches the goal, as seen in figure 2a.

\vec{F} = -\frac{\sigma^2 - x^2}{\sigma^5 \sqrt{2\pi}} \, e^{-\frac{x^2}{2\sigma^2}} \qquad (1)
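As a sketch, formula (1) translates directly into code; the value of sigma, which sets the preferred spacing between drones, is an assumed placeholder:

```python
import numpy as np

def drone_force(x, sigma=1.0):
    """Formula (1): the second-order derivative of the Gaussian, used as the
    drone force-field. `x` is the (signed) distance to the drone; `sigma` is
    an assumed placeholder that determines the preferred spacing."""
    return -(sigma**2 - x**2) / (sigma**5 * np.sqrt(2 * np.pi)) * np.exp(-x**2 / (2 * sigma**2))
```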

The next step is to look at the third drone, the one in the middle. When determining its movement, it will measure all forces that act on it. The force-field looks like figure 2a, excluding the force upon itself, as seen in figure 2b. The possible moves this drone has are: stay in its position, move left, or move right. Because the other drones are too far away, this drone stays in its position. A centric force can solve this problem. The centric force should be a wide, slowly increasing function with its minimum on the position it is set on, the centre. This position is set in the middle of the swarm. The formula that fits this description is a normal distribution (2). The sigma in this formula needs to be tuned so the force still acts on the drone furthest from the centre.

\vec{F} = \frac{1}{\sigma \sqrt{2\pi}} \, e^{-\frac{x^2}{2\sigma^2}} \qquad (2)
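Formula (2) can be sketched the same way; the wide sigma is again an assumed placeholder, to be tuned so the force still reaches the drone furthest from the centre:

```python
import numpy as np

def centric_force(x, centre, sigma=10.0):
    """Formula (2): a wide Gaussian placed on the swarm centre. `sigma` is an
    assumed placeholder; it should be wide enough to reach the outermost drone."""
    d = x - centre
    return 1.0 / (sigma * np.sqrt(2 * np.pi)) * np.exp(-d**2 / (2 * sigma**2))
```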

As a result, the force-field looks like figure 3a, where it is visible that the drones all have a valley near the centre-point. The third drone, which is selected to look at its movement, will now have a valley on its left side. This is shown in figure 3b.


Figure 2: Display of the combined force-fields of 5 drones. The black triangle represents a drone. The green triangle means this drone is selected to make a move. When a drone makes a move, its own force-field is turned off so it does not interfere with itself. A drone will always move to the nearest valley. The positions of the drones on the location axis are chosen so they space out somewhat evenly.

(a) Five drones in a 1D position. The drones show their force-fields in the y direction. The value in the y direction determines the movement of the drone along the x axis. A drone always moves to the closest valley.

(b) Five drones in a 1D position. The drones show their force-fields in the y direction. The value in the y direction determines the movement of the drone along the x axis. A drone always moves to the closest valley. The green drone is selected to make a movement, which is why its force-field is turned off. However, none of the other force-fields has an effect on this drone. Because no force acts upon the drone, it will not move.

The drone will move to the nearest valley. Important here is that the centric force does not outweigh the repelling forces of the drones. This makes sure the drones will not collapse onto one another. If the centric force is bigger than the repelling force of the drones, all the drones want to reach the centre point, regardless of another drone already being in that position.

Figure 3: Display of the combined force-fields of 5 drones and a centric force. The black triangle represents a drone. The green triangle means this drone is selected to make a move. When a drone makes a move, its own force-field is turned off so it does not interfere with itself. A drone will always move to the nearest valley.

(a) Five drones in a 1D position. The drones show their force-fields in the y direction. The drone force-field is combined with the centric force. The value in the y direction determines the movement of the drone on the location axis. A drone always moves to the closest valley of the force-field. In this example every drone will move in the direction of the centre.

(b) Five drones in a 1D position. The drones show their force-fields in the y direction. The drone force-field is combined with the centric force. The value in the y direction determines the movement of the drone on the location axis. A drone always moves to the closest valley of the force-field. The green drone is selected to make a movement, which is why its force-field is turned off. Because of the centric force there is a nearby valley for the selected drone to move to. The nearest valley is to its left; this is where the drone will move.

For formation forming, this is all that is needed. However, for task-driven challenges, a goal and obstacles need to be added. The goal force-field is described by a funnel function. The goal of this funnel is to smoothly alter the shape of the drone formation so that the formation fits through a rectangle-shaped hole. The force and its application are visualised in figure 4, and the formula describing it is formula (3).

z = \frac{1}{2\alpha} \ln(x^2 + y^2) \qquad (3)

Figure 4: Using the funnel formula from Wolfram to create a goal force-field. Using this goal force-field, the 5 drones ignore the centric force-field and try to reach the goal. The centric force-field is ignored because the goal force-field is a lot stronger, so it outweighs the centric force. The yellow circle in figure 4b is the goal, the black triangle is the drone and the red star is the centre. In a continuous space the funnel force goes to an infinitely negative number as the drone gets closer to the goal, so once a drone gets close enough to the goal, the goal force should be turned off.

(a) Funnel force in 3D

(b) Five drones, a drone centre and a goal. All the force-fields from these objects are combined to create this force-field. All the drones will move to the drone centre. Once a drone gets close enough to the goal it should ignore the goal force-field, or it could collide with the other drones, because the goal force-field is stronger than the drones' repelling force-fields.

For the obstacles, a repelling force is needed: one that rises quickly as one of the drones gets closer to it, but has no effect when far away. The repelling force increases slightly as a drone gets closer, so a movement regulator can adjust the speed in advance. A drone should not be affected by the obstacle if it is far away, because there is no reason to move away from the wall when there is no chance of colliding. Formula (4) is such a function and is therefore used as the repelling force. Adding an obstacle to our environment makes the force-field look like figure 5. As shown in figure 5, the two leftmost drones want to move to the drone centre but are stopped by the obstacle because of its repelling force. The two left drones will form a formation on the left side of the wall, while the other three will form a formation on the right side of the wall. The drones on the right are not bothered by the obstacle.

Figure 5: Five drones, a drone centre, a goal, and an obstacle. All the force-fields of these objects are combined into this force-field. On the location axis the drones, the obstacle and the goal are placed arbitrarily. The drone centre location follows from the drone locations. All the drones will move towards the goal; however, the two leftmost drones will first move towards the goal and then get stuck behind the obstacle. The peak of the obstacle force is too large to move through. These two leftmost drones will form a formation behind the obstacle.


y = \frac{\sqrt{\alpha}}{x} \qquad (4)
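Formulas (3) and (4) can be sketched in the same way; the α parameters are assumed placeholders, and the coordinates are taken relative to the goal and the obstacle respectively:

```python
import numpy as np

def funnel_force(x, y, alpha=1.0):
    """Formula (3): the funnel-shaped goal field z = ln(x^2 + y^2) / (2*alpha),
    with (x, y) relative to the goal. The value diverges to minus infinity at
    the goal itself, which is why the goal force is switched off close to it."""
    return np.log(x**2 + y**2) / (2.0 * alpha)

def obstacle_force(distance, alpha=1.0):
    """Formula (4): a repelling field y = sqrt(alpha) / distance that rises
    quickly near the obstacle and fades with distance (alpha assumed)."""
    return np.sqrt(alpha) / distance
```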

3.1 How to tune the parameters of the force functions?

The four functions that are used in this research have a variable range. The difference in range can be problematic when combining them without pre-processing. With parameter tuning and scalar multiplications, this pre-processing is achieved. In formula (1), containing the second-order derivative of the Gaussian function, there is a sigma that needs tuning. The sigma determines the width of the function, and therefore the optimal distance between two drones. Formula (2) also has a sigma that determines the width of the function. With the correct tuning of sigma, the function has a wide enough range to affect all the drones. Formulas (3) and (4) have an α parameter that can be set to determine the range of the functions.

One way of solving this combining problem is finding the correct parameters, so all the force functions match in range and show the right behaviour. Another solution is possible. Artificial physics does not need to comply with traditional physics rules. In artificial physics, the physics can be switched on and off on demand, and it is possible to evaluate the force-fields separately. Each of the force-fields determines a directional force. Together with local variables, these vectors can be combined to get the eventual movement direction. An example of a force getting shut off is the goal force-field. The goal force-field is stronger than the repelling force if a drone gets too close to the goal, which is why it should be turned off. A possible local cue is the drone centre being at the same location as the goal; in that case the goal is reached and does not need to attract the drones anymore. Another local cue to shut off the goal force-field can be the distance of a drone to the goal. In this case the force-field needs to be turned off for a drone close to the goal, so the other drones will still reach the goal.
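A sketch of such local cues for the goal field, with an assumed switch-off radius, could look as follows:

```python
import numpy as np

def goal_force_active(drone_pos, centre_pos, goal_pos, switch_off_radius=1.0):
    """Decide, from local cues only, whether this drone should still feel the
    goal force-field. The switch-off radius is an assumed placeholder."""
    if np.linalg.norm(np.asarray(centre_pos) - np.asarray(goal_pos)) < switch_off_radius:
        return False  # the swarm centre sits on the goal: the goal is reached
    if np.linalg.norm(np.asarray(drone_pos) - np.asarray(goal_pos)) < switch_off_radius:
        return False  # this drone is close enough; let the others still be attracted
    return True
```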

3.2 How to use these 1D functions in a 3D world?

Using these functions in a 3D Euclidean space is trivial. Applying the 1D functions to each of the axes x, y and z, and combining the results into one force, creates a force-field in the fourth dimension. Showing these forces in a 4D plot is not understandable for humans, which is why the explanatory plots are in 2D: a 1D Euclidean position space, with the force-field in the second dimension.
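One way to read this, reusing the drone_force sketch from above, is to evaluate the 1D function per axis and stack the three results into one force vector:

```python
import numpy as np

def force_vector_3d(drone_pos, other_pos, force_1d):
    """Apply a 1D force function per axis (x, y and z) and combine the three
    components into a single 3D force vector, e.g. force_1d = drone_force."""
    delta = np.asarray(other_pos, dtype=float) - np.asarray(drone_pos, dtype=float)
    return np.array([force_1d(d) for d in delta])
```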

3.3 How to navigate through these force-fields?

Using the PID controller, the drones can navigate through the force-fields. To do so, the drones need some information: the controller needs an aimed value, the current value, and an actuator. The actuator is needed to close the gap between the current and the aimed value. The aimed value is the lowest possible value of the combined force-fields, the goal of the drone. The current value is determined by the location of the drone in the force-field. The actuator is the acceleration of the drone. The direction of the acceleration is determined by the direction of the gradient of the force-field. For a single force this is easy to calculate, because it is the derivative of the force function; however, with combined force-fields the local gradient needs to be calculated.
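For a combined field, the local gradient can be estimated numerically; a central-difference sketch, with an assumed step size, is shown below:

```python
import numpy as np

def local_gradient(field, pos, eps=1e-3):
    """Central-difference estimate of the local gradient of a combined
    force-field. `field` maps a 3D position to a scalar field value; `eps`
    is an assumed step size."""
    pos = np.asarray(pos, dtype=float)
    grad = np.zeros(3)
    for axis in range(3):
        step = np.zeros(3)
        step[axis] = eps
        grad[axis] = (field(pos + step) - field(pos - step)) / (2.0 * eps)
    return grad
```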

In the present research, a naive controller is used most of the time. This controller looks at the possible moves of the drone and selects the one with the lowest force value. This approach was chosen because it is pragmatic: the naive navigator is straightforward to implement and still shows that the force-fields are constructed correctly.
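A sketch of this naive controller, with an assumed step size, is given below: it evaluates the combined field at the candidate positions one step away along each axis (plus staying put) and picks the one with the lowest field value:

```python
import numpy as np

def naive_step(field, pos, step_size=0.1):
    """Naive controller sketch: move to whichever candidate position (stay,
    or one step along +/-x, +/-y, +/-z) has the lowest combined field value.
    `field` maps a 3D position to a scalar; `step_size` is assumed."""
    pos = np.asarray(pos, dtype=float)
    candidates = [pos]
    for axis in range(3):
        for sign in (-1.0, 1.0):
            candidate = pos.copy()
            candidate[axis] += sign * step_size
            candidates.append(candidate)
    return min(candidates, key=field)
```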

4 Results

In the results, a few cases will be discussed. Each case shows a different task being executed successfully, followed by a short analysis of the behaviour. In the descriptions of the figures, the number of iterations it takes for the behaviour to arise is mentioned. In the context of this research, an iteration is one step for all the drones. An iteration gives an indication of the time it will take to reach a state. The actual time in practical use depends on the calculation speed of the drone, its flying speed, and the refresh rate of the drone locations. The controller used for these results is the naive controller.


The first case, forming a formation with 3 drones.

This case shows three drones randomly placed in Euclidean space, trying to form a formation. This process is visible in figure 6. The drones initially come closer because of the centric force; however, once they get close enough, the drone forces start acting and the drones move to a local minimum. The nearest local minimum is the local valley enforced by the second-order derivative. The combination of these forces makes the formation shown in figure 6c. The formation that is formed depends on the starting position of the drones. This is the case because the valley a drone reaches depends on how it approaches another drone.

Figure 6: The process of forming a formation with 3 drones

(a) In this figure three drones are shown, they are marked with a black triangle. Marked with a red star is the centre of the drones. The drones are placed randomly in this grid and intend to form a formation around the centre using the method of this research.

(b) In this figure three drones are shown, marked with a black triangle. Marked with a red star is the centre of the drones. The drones came from a random position; this is their location after 10 iterations. The drone centre has also moved relative to the previous figure, due to the movements of the drones.

(c) In this figure three drones are shown, they are marked with a black triangle. Marked with a red star is the centre of the drones. The drones came from a random position. This is their location after 20 iterations. The drones have now formed a formation.


The second case, forming a formation with 5 drones.

The second case shows that, with the correct parameter tweaking, aggregation is also successful for five drones, as shown in figure 7. What is striking, however, is that the drones do not aggregate on a plane; their formation looks like a zigzag pattern. This is due to the limited number of spots where a drone reaches a valley.

Figure 7: Five drones forming a formation around their centre.

(a) In this figure five drones are shown, they are marked with a black triangle. Marked with a red star is the centre of the drones. The drones are placed randomly in this grid and intend to form a formation around the centre using the method of this research.

(b) In this figure five drones are shown, marked with a black triangle. Marked with a red star is the centre of the drones. This is their location after 10 iterations. The drones are drawn closer to the centre by the centric force.

(c) In this figure five drones are shown, marked with a black triangle. Marked with a red star is the centre of the drones. This is their location after 20 iterations. The drones are getting very close. Some of the drones are already in a formation with the other drones, while some drones are still approaching the centre.

(d) In this figure five drones are shown, marked with a black triangle. Marked with a red star is the centre of the drones. This is their location after 30 iterations. The drones have formed a formation.


The third case, forming a formation, and re-forming after a drone falls out.

This case is shown in figure 8. It starts the same as case two: five drones are randomly placed and fly into a formation. Once this formation is reached, one of the drones falls out and the remaining drones self-repair. The drone falling out at first only affects its direct neighbours; however, once they move, their movement ripples through the swarm and results in a new formation.

Figure 8: Five drones forming a formation, as they get into a formation, one of the drones falls out. The drones that are left need to self-repair the formation.

(a) Five drones starting in a random position and trying to form a formation.

(b) After some iterations a formation is formed. This figure shows the formation of five drones.

(c) In this figure one of the drones is removed, and the drones that were left created this formation.

The fourth case, forming a formation, and afterwards moving to a goal behind an obstacle.

As shown in figure 9, the drones aggregate in front of the opening, and as the goal force is added, the drones manage to find a way through the opening. When the goal is reached, the drones re-form and the task is completed.


Figure 9: Three drones making their way through a passage in the wall to their goal.

(a) In this figure three drones, the centre of the drone swarm and a wall with a passage are shown. Here the drones are placed in a random position. They will first form a formation around the centre-point.

(b) In this figure three drones, the centre of the drone swarm and a wall with a passage are shown. The drones have formed a formation in front of the passage.

(c) In this figure three drones, the centre of the drone swarm, a wall with a passage and a goal are shown. The goal has a funnel attraction force. This force will make sure that the drones, now in formation, lose this formation because it does not fit through the passage.

(d) In this figure three drones, the centre of the drone swarm, a wall with a passage and a goal are shown. The goal has a funnel attraction force. Because of the goal force-field the drones rearranged themselves to fit through the passage.

(e) In this final figure the drones have reformed a formation behind the wall. The goal force is gone because the centre of the swarm aligns with the goal. The task is completed.


The fifth case, using a PID controller to reach a goal.

In figure 10 a drone is shown reaching a goal using the PID controller. In three oscillations the goal is reached. The drone moves in a straight line, because the direction of the gradient always points directly to the centre. In real life, however, some error is to be expected and the drone will make more of an elliptical movement.

Figure 10: In this figure a drone is shown with a goal force in 2D. The colour shows the strength of the force, where red is repulsive and green is attractive. The controller used to determine the movement of the drone is the PID controller.

(a) In this figure the drone starts in an arbitrary position. Because the goal is far away, the drone fully accelerates in the direction of the centre and slowly builds up speed.

(b) In this figure the drone is approaching the centre; however, its speed is too large to brake at the centre, so it will pass it. How far it overshoots the centre depends on the tuning of the PID controller.

(c) In this figure the drone reaches the first turning point. The drone is reversing and will reach a lower top speed as it reaches the centre; the oscillation is dying out.

(d) In this figure the drone has reached the centre. The drone needed three oscillations to come to a standstill.

5 Conclusion

The results show that, theoretically, using artificial physics works for formation forming and for completing task-driven challenges. These tasks are completed without globally known coordinates, using only relative distances. This means artificial physics works in a decentralised manner. In the results, coordinated behaviour emerges from simple rules, without a central coordinator, and with individual decisions. This is soft emergent behaviour.

The force-field that is created to determine movement can be used in different ways, because the force-field represents the world differently. The force-field representation of an obstacle is a repelling force. One could imagine a situation where the drones follow a valley down to their goal while, in reality, they are moving around a big repelling-force mountain. A more sophisticated search algorithm could calculate beforehand whether climbing this force-field mountain to the goal is more cost-efficient than following the path down a valley.


The proposed method for navigating through the force-field is the PID regulator, chosen for its property of smooth movement. The PID regulator will probably work well because the force-field has a smooth shape. The smooth shape allows for faster movements when no obstacles are nearby, and smooth deceleration when obstacles are close. Differently shaped formulas could also be conducive to a PID regulator.

6 Future research

Artificial physics appears to be very promising for controlling a swarm of drones. The argument that too many sensors are needed does not seem to hold up given the Beep-Beep localisation method. This opens the way for more research in the field of artificial physics.

Moving a swarm of drones

In this research, a drone centre was used to centralise the swarm. The drones can agree on this location because it is in the same position for every drone. However, it is not possible to use it as an orientation point, because the orientation of the axes differs per drone. If future research manages to find a way to agree on an orientation, this can be used to steer the swarm: the drone centre can be shifted along the x, y and z axes to move the swarm in that direction. Using artificial physics, the drone swarm will move in the direction of the shifted centre.


References

[1] Scott Camazine, Jean-Louis Deneubourg, Nigel R. Franks, James Sneyd, Eric Bonabeau, and Guy Theraulaz. Self-Organization in Biological Systems, volume 7. Princeton University Press, 2003.

[2] William Spears and Diana Spears. Physicomimetics: Physics-Based Swarm Intelligence. 2012.

[3] Jan C. van Gemert, Camiel R. Verschoor, Pascal Mettes, Kitso Epema, Lian Pin Koh, and Serge Wich. Nature conservation drones for automatic localization and counting of animals. In Lourdes Agapito, Michael M. Bronstein, and Carsten Rother, editors, Computer Vision – ECCV 2014 Workshops, pages 255–270, Cham, 2015. Springer International Publishing.

[4] Levent Bayındır. A review of swarm robotics tasks. Neurocomputing, 172:292–321, 2016.

[5] David Wessels. Deep multi-agent reinforcement learning in swarm robotics. 2019.

[6] Tom Kersten. Sound-based relative position detection in drone swarms. 2019.

[7] Craig W. Reynolds. Flocks, herds and schools: A distributed behavioral model. ACM SIGGRAPH Computer Graphics, 21(4):25–34, 1987.

[8] Jimming Cheng, Winston Cheng, and Radhika Nagpal. Robust and self-repairing formation control for swarms of mobile agents. 2005.
