

Modeling Distributed Cybernetic Management for Resource Based Economies

A simulation approach to Stafford Beer's 1971 CyberSyn Project

Janosch Haber 10400192

Bachelor thesis Credits: 18 EC

Bachelor's Programme in Artificial Intelligence (Bachelor Opleiding Kunstmatige Intelligentie)

University of Amsterdam
Faculty of Science
Science Park 904
1098 XH Amsterdam

Supervisor: Dr. R. Valenti
Intelligent Systems Lab Amsterdam
Faculty of Science
University of Amsterdam
Science Park 904
1098 XH Amsterdam


Keywords

Agent-Based Simulation (ABS), Multi-Agent Simulation (MAS), Distributed Intelligence (DI), Artificial Social Simulation (ASS), Embedded Agents, Emergent Behaviour

Contents

1 Introduction
   1.1 Historical Overview
   1.2 Resource Based Economy
   1.3 A New Approach to Intelligence
2 Theoretical Context
   2.1 Multi-Agent Simulation
   2.2 Multi-Agent Resource Allocation
3 Method
   3.1 Representation
      3.1.1 Distributed Control
      3.1.2 Agent inter-dependency
   3.2 Modeling
      3.2.1 Single-layer Approach
      3.2.2 Static Agents
      3.2.3 Limited Scope
      3.2.4 Discrete, non-sharable Resources
      3.2.5 Production Deadlines
      3.2.6 Static Intelligence Heuristics
   3.3 Simulation
      3.3.1 Implementation Choices
      3.3.2 Execution flow
   3.4 Application
   3.5 Evaluation
      3.5.1 Measures of Interest
      3.5.2 Data Evaluation Process
      3.5.3 Validation of Simulation Coverage
4 Results
   4.1 Production Deadlines
   4.2 Resource Availability
   4.3 Agent Network Size
5 Conclusion
   5.1 System Performance
   5.2 Research Question
6 Discussion
   6.1 Future Work
References


Abstract

At the end of 1971, a group of early AI researchers around British management cybernetician Stafford Beer started working on a cybernetic approach to automatize the administration of the entire Chilean economy. Although it showed first successes, this project, called CyberSyn, was shut down only two years later after a military putsch, and the approach was never brought to a conclusion.

In this research we aim to investigate whether the simple cybernetic approach proposed by Beer and his colleagues would have been enough to manage something as complex as the Chilean economy. We do so by modeling a simplified economic setting governed by CyberSyn's management principles and analyzing the model's performance under a range of different parameter settings. The results of these experiments suggest that the model indeed exhibits emergent self-sustainability and lead to the conclusion that CyberSyn's approach appears to be feasible in principle.

1 Introduction

Clearly, if it is possible to have a self-regulating system that implicitly arranges its own stability, then this is of the keenest management interest.

- Stafford Beer

The Pinochet military putsch of September 1973 marked the end of one of the most interesting projects of early AI autonomous systems research: for the previous two years, a team of international scientists had been working on a government-funded project to automatize the administration of the Chilean economy by developing a cybernetic approach to management. The new military regime, however, ordered the termination of the politically controversial project, dismantled the research facilities - and even imprisoned some of the involved researchers - and the project called CyberSyn was buried in oblivion.

Looking back on more than forty years full of ground-breaking discoveries and developments in the field of AI, the emergence of supercomputers and the big data hype, it may be worthwhile to revive some of the initial AI approaches and re-evaluate their potential. Within this context, Eden Medina recently published a book and several articles about early AI research in South American countries, especially focusing on the developments in 1970s Chile and the CyberSyn project. Interviewing some of the researchers and scientists involved (some of whom later became major advocates - or opponents - of AI), she managed to draft a reasonably clear picture not only of the central ideas that were driving the CyberSyn project, but also of the impact this project had on the Chilean economy and society - even in its ever unfinished state.

Based on her accounts, the research described in this paper aims to investigate whether a simple cybernetic approach, as was proposed by the CyberSyn project, would have been enough to manage something as complex as the Chilean economy. To do so, we create a model of the original approach used by the research group back in the 1970s and simulate its performance under a range of different parameter settings. Through this process, we gain insight into possible system behavior and may find enough evidence to either confirm the approach's validity or expose its shortcomings.

In the remainder of this paper, we first give a brief overview of the history of project CyberSyn as well as the literature relevant to the research described in this paper. We then elaborate on the applied methodology, present the most important results obtained from the performed simulations and use them to draw conclusions about the approach's performance. We conclude this paper by discussing our findings and pointing out interesting issues for future research.

1.1 Historical Overview

On the 12th of November 1971, British theorist and management consultant Stafford Beer had scheduled a one-off appointment; he was waiting in La Moneda, the presidential palace in Santiago de Chile, to meet President Salvador Allende and discuss nothing less than the automatization of the nation's economic administration.

Allende had been elected as Chile's first socialist president one year previously. During his election campaign, he had advocated a transition from capitalism to socialism in order to counter the country's declining economic strength. Consequently - once in office - Allende started off the promised transition by nationalizing large parts of the Chilean economy and putting their control in the hands of a newly founded state development agency called CORFO (Corporación de Fomento de la Producción). As General Technical Manager of CORFO, Allende appointed Fernando Flores, a young scientist devoted to the study of operations research and a scholar of Beer's work on the subject of management cybernetics. Charged with the supervision of an extensive network of industrial facilities, Flores saw the opportunity of putting the results of his study into practice: cybernetics, described as the 'science concerned with the study of systems of any nature which are capable of receiving, storing and processing information so as to use it for control' (Kondratov, 1969) or, in the words of Beer (1979), the 'science of effective organization,' appeared to provide a sound framework for organizing economic control in the given problem domain.

So by the end of 1971, Flores had assembled a team of Chilean scientists and even won over Beer, the 'father of management cybernetics' [1] himself, to map out the fundamental concepts of what later would be called project CyberSyn [2]. The project's primary goal was to transform the collection of nationalized industry sectors into a single, unified system that - through the communication of production performance data - would be able to self-regulate and develop an optimal usage of available resources. As a means of communication, the different facilities within the system were proposed to be joined through a network of telex machines, linked to a central mainframe computer in Chile's capital. At this central location, performance data would be automatically collected, reviewed, evaluated and used to calculate dynamic action plans meant to react to changes in the system's environment. These action plans would then be transformed into intelligible graphs, figures and other forms of visualization to be presented to the highest planning committee in the system's Opsroom, from where production parameters could be controlled centrally.

[1] Beer was christened so by cybernetics pioneer Norbert Wiener himself.
[2] A combination of its two main concepts, Cybernetics and Synergy.

But although Beer's meeting with President Allende was a great success and the team received the president's blessing to continue, project CyberSyn could never be fully realized: in September 1973, Allende's government was overthrown by a military coup d'état. The new government under General Augusto Pinochet had no interest in continuing the politically controversial project, dissolved the research group and ordered the destruction of what had been achieved thus far.

The point of interest now is to determine whether Beer's approach actually would have been sufficient to efficiently control something as complex as the Chilean economy. And since today we have at our disposal far more computational means than the CyberSyn research group could ever have hoped for, we do not need to actually take control of the Chilean economy to test it, but can make use of computer simulations.

In our simulation, we attempt to model the assumptions and implications of the 1971 approach as faithfully as possible, but also integrate some more recent approaches that we reckon go hand in hand with the original management model. One of these approaches is the concept of a Resource Based Economy introduced by Jacque Fresco, the other the so-called New Approach to Intelligence developed by Wissner-Gross and Freer.

1.2 Resource Based Economy

In the nationalized economy setting which formed the basis of the CyberSyn project, it can be assumed that competition between different economic actors no longer plays a significant role: as all companies now belong to the government, their common goal should be to produce the requested amount of goods as efficiently as possible, saving the nation's resources. So, in a way, this can be seen as an economy where the request for - and availability of - resources takes a central position in determining management and cooperation strategies. This also is the case in the prototypical Resource Based Economy proposed by American inventor and visionary Jacque Fresco in 1994. It is founded on the basic idea that the earth's resources belong to no-one in particular and should be managed by humanity as a whole. In such a setting, the value of resources, products and services is determined solely in relation to their availability, allowing their allocation 'without the use of money, credits, barter or any other system of debt or servitude' (The Venus Project, 2015), which we find an intriguing approach to rebuild in our CyberSyn simulation model.

1.3 A New Approach to Intelligence

Cybernetic decision-making implies the use of environment feedback loops to adapt action strategies. We propose that the best way to recreate these feedback loops in our model is by using Machine Learning (ML) techniques: when the different actors within the collective can relate performed actions to their consequences for the overall system condition, they are able to 'learn' action-impact chains. During subsequent decision-making processes they may then adapt by using this insight to re-evaluate certain actions' utilities differently than before. For the project described in this paper, we base the actors' decision-making mechanisms on the so-called New Approach to Intelligence developed by Wissner-Gross and Freer (2013).

This 'new approach' is derived from a hypothesized resemblance between physical systems and (artificial) intelligent ones: according to Wissner-Gross and Freer, the manner in which physical systems strive for entropy maximization can be compared to intelligent systems trying to determine the best possible action to increase their future potential. Interpreting intelligence as a physical force F, the measure of intelligence exhibited by any kind of system can be assessed by determining this force through the simple relation F = T∇S_τ, where F is equated to the intensity T with which the system maximizes the diversity ∇S of possible accessible futures up to a given time horizon τ. Intelligence thus becomes the capability of 'maximizing future freedom of action'. [3] We reckon that this is a sound way of assessing an embedded actor's options based on its current condition and the environment it is placed in.
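Written out as a display equation (this simply restates the relation described above, with the same symbols):

```latex
% Intelligence interpreted as a physical force, as described by Wissner-Gross and Freer:
%   F             -- the force (the "intelligence") exhibited by the system
%   T             -- the intensity with which the system acts
%   \nabla S_\tau -- the diversity of accessible future states S up to the time horizon \tau
F = T \, \nabla S_{\tau}
```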

Taking these fundamental concepts of cybernetics and symbiosis as a basis, we decided to name our project CyberSym and propose the hypothesis that in a resource-based economic setting, the best possible satisfaction of the individuals' needs will be achieved when actors respond optimally to the collective's requirements. In order to investigate this notion, we perform Multi-Agent Simulation experiments guided by our central research question:

Under what conditions can a simplified resource-based economic system - building on the management cybernetics approach of Stafford Beer - develop a balance in its internal resource distribution that allows for system sustainability?

2 Theoretical Context

In science, models should ideally be as simple as possible, and predict as much as possible.

- Carlos Gershenson

In this section we present a short introduction to the field of Multi-Agent Simulations (MAS) and especially its sub-field of Multi-Agent Resource Allocation (MARA), give reasons why these are the most promising approaches to answering our research question, and finally place our work in the context of previous research within these domains.

[3] Transcript of a presentation recorded in November 2013 at TEDxBeaconStreet: https://www.ted.com/talks/alex_wissner_gross_a_new_equation_for_intelligence/transcript?language=en#t-286121

2.1 Multi-Agent Simulation

Modeling is the process of making explicit one's knowledge and assumptions about a certain system through the generation of a representative replication. By using this replication instead of the actual system, a model provides researchers with a synthetic environment that can be used to experiment without implications for the original system (Drogoul & Ferber, 1994). This is especially important if experimenting in the actual system is impossible or to be avoided for ethical, methodological or practical reasons (Goldspink, 2002). But modeling also makes it possible to investigate the workings of systems that do not actually exist in the modelled state. Creating models may therefore even allow for predictions about possible system performance (Epstein, 2008; Bandini, Manzoni, & Vizzari, 2009).

Investigating the effects of modeling choices in the replicated system is called simulation. Unlike analytical models, simulations are not solved but run, and 'the changes of system states can be observed at any point in time. This provides an insight into system dynamics rather than just predicting the output of a system based on specific inputs' (Siebers & Aickelin, 2008, p. 1). During the last 30 years, computer simulations have taken over this field as they are capable of producing the calculations for far more complex systems than analytical models ever could.

Especially in the domain of social and economic simulations, research in computer simulation focuses on the use of models based on autonomous agents. Although the field of Agent-Based Modeling (ABM) was defined by the work of Wooldridge and Jennings (1995), the exact meaning of the term agent is still somewhat controversial. For this project, we adopt the rough definition proposed by Siebers and Aickelin (2008, p. 10) that an 'intelligent agent can be described as a discrete autonomous entity with its own goals and behaviors and the capability to interact, and adapt and modify its behaviors.' Agent-Based Models represent real-world systems using a bottom-up approach, providing the individual agents with a set of initial goals and acting possibilities. This micro-level design is then found to ignite the emergence of observable macro-level structures when the simulation is run, which here means that agents are repeatedly triggered to determine and perform actions to fulfill their goals (Drogoul & Ferber, 1994; Tesfatsion, 2006; Chevaleyre et al., 2006; Bandini et al., 2009).

It lies in the nature of Agent-Based Models that the systems to be simulated often tend to demand the representation of rather complex relations and agent inter-dependencies. Complexity here is meant in the sense that 'a system is complex if it consists of several interacting elements, so that the behavior of the system will be difficult to deduce from the behavior of the parts' (Gershenson, 2005, p. 1). This especially holds for emergent, self-organizing systems where behavior 'is achieved autonomously as the elements interact with one another' (Ibid., p. 3). Research in this field studies complex Multi-Agent Simulations (MAS) or Complex Adaptive Systems (CAS) that have been used extensively to model equally complex real-world systems in economics (e.g. Conte, Gilbert, & Sichman, 1998; Siebers & Aickelin, 2008; Dawid, Gemkow, Harting, Van der Hoog, & Neugart, 2012) or the social sciences (e.g. Hartshorn, Kaznatcheev, & Shultz, 2013).

2.2 Multi-Agent Resource Allocation

One of the disciplines of Multi-Agent Simulations that is of special interest to our research is the relatively young domain of so-called Multi-Agent Resource Allocation (MARA) problems. Using the definition of Chevaleyre et al. (2006, p. 3-4), Multi-Agent Resource Allocation 'is the process of distributing a number of items amongst a number of agents. [...] The objective of a resource allocation procedure is either to find an allocation that is feasible (e.g. to find any allocation of tasks to production units such that all tasks will get completed in time); or to find an allocation that is optimal.'

Since it inherits the agent definitions of MAS, we will here focus on the remaining concepts of resource and allocation: in MARA literature, resources often tend to be indivisible items that may or may not be shared by agents (for example network access as opposed to production tasks), but in some cases may also represent divisible items such as electricity, which can be distributed in fractions (Chevaleyre et al., 2006).

The task of allocating these resources may be managed centrally or be distributed among the individual agents, each option having its merits and drawbacks: if control is managed centrally, agents lose a large part of their autonomy. In this scenario, the central control instance has to take over the largest part of the system management, as it needs to make decisions for the agents. But by introducing this higher-level control instance, the system may obtain sufficient knowledge to optimally solve the allocation problem. If decision capability is distributed among the individual agents instead, knowledge will most likely be limited and optimality can only be achieved locally. This means that while global optimality cannot be guaranteed, autonomous micro-level decisions may lead to the emergence of an efficient macro-level resource distribution without the need for complex calculation mechanisms (Tesfatsion, 2006; Chevaleyre et al., 2006; Lewis, Marrow, & Yao, 2010).

Research on MARA problems has long been divided into the investigation of cooperative systems (systems comprised of benevolent agents) and the study of systems containing predominantly self-interested agents (Durfee & Rosenschein, 1994). Investigating these systems of self-interested agents is primarily based on game theory, whereas modeling cooperative systems - a subject that we are highly interested in - mostly draws on techniques such as negotiation, knowledge-based search and the use of heuristic functions (Briola & Mascardi, 2011). So, knowing all this, how was the CyberSyn model actually realized?

3 Method

In 2005, Gershenson published 'A General Methodology for Designing Self-Organizing Systems', in which he proposes a five-stage layout for approaching simulation projects (see Figure 1). The five stages are representation, modeling, simulation, application and evaluation, each following logically from the previous one. The traversing order, however, is not necessarily linear; backtracking and repetition of certain stages is highly likely to be necessary throughout the implementation process, especially when problems are encountered and the model needs to be revised.

Figure 1: A diagram relating the different stages of approaching a simulation project. The traversing order is not necessarily linear; backtracking and repetition of certain stages is highly likely to be necessary during the implementation of the simulation. Image taken from Gershenson (2005).

In this section we will follow Gershenson's list and elaborate on the implementation of the different steps within the CyberSym project presented in this paper.

3.1 Representation

The representation phase is the first step in approaching a simulation project. In this stage, the different components of the system to be modelled are specified and analyzed for relevance with regard to the simulation objective. Investigating the CyberSyn project, we concluded that there are two major aspects that form the basis of the original project approach and thus need to be represented in our simulation: distributed control and agent inter-dependency.

3.1.1 Distributed Control

One of the most fundamental concepts used for project CyberSyn was Stafford Beer's viable system model. It represents the basic idea that any kind of system - whether physical, biological, social or economic - needs to be capable of adapting to its environment if it is to be self-sustainable (or, in the words of Beer, to remain 'viable'). The model describes a partially recursive, autonomous five-tier structure explaining the inner workings of these adaptive systems (Medina, 2006).

In Figure 2, the Chilean state development agency CORFO is depicted as such a viable system: at the base level of the graph, the different production plants interact with the environment and with each other to fulfill their goals (system 1). Their performance is measured and communicated through the telex network (system 2) to the supervising CORFO management (system 3) on a day-to-day basis. On this level, small adjustments may be issued in order to respond to any higher-level situation changes that may have gone undetected by the sector committees in system 1. Larger scale planning is handled by even higher level control instances (systems 4 and 5), which analyze the overall Chilean economic situation and might order shifts in overall management guidelines in order to act on any global changes observed.

Following the partially recursive design of each sub-system, it is implied that elements on each layer are largely autonomous, self-sustaining items, and that the inheritance of these attributes onto the next higher level emerges through them. Applying this concept to the control of economic management in the Chilean industry sector, a system as displayed in Figure 3 develops. In this system, each collection of autonomous elements on a given level forms one new autonomous element on the next stage of increasing complexity. So, starting with the most fundamental item of this schema, this means that a number of (autonomous) workers can form a crew in order to increase their potential impact. Any number of crews can in turn combine to make up the different departments of a factory. And the constellation and cooperation of these departments finally determines the factory's management. This process is repeated until the highest level of control is reached, which in this case is the administration of the nation of Chile as a whole.

If we aim to model the CyberSyn project approach in a representative way, we thus need to recreate this recursive structure of autonomous elements that interact with each other and form higher order systems as a collective. This also implies that each element needs to have the ability to determine its own actions, but still might be asked to adjust its decision process in accordance with higher order requests.

3.1.2 Agent inter-dependency

Besides the implementation of primarily autonomous agents, two other major implications for our model can be derived from the CyberSyn approach: firstly, the different sectors of Chile's nationalized economy - and within the sectors the various companies and production plants - are heavily interdependent: when the complexity of requested goods increases, the number of necessary product parts and working hours also increases vastly. This means that, at a certain point, a single production plant of a given size will not be able to produce all the necessary elements by itself. This shortage can, however, be overcome if the plant, for example, receives some of the necessary parts pre-assembled by another plant. In fact, a complex economic system can only be viable if some amount of this 'task-sharing' can be applied.

In capitalistic, market-driven settings, 'task-sharing' is achieved by the purchase of services and goods. In the socialist system to be modelled, on the other hand - and that is implication number two - all actors are part of a bigger collective, which means they ultimately pursue a common goal. In a setting like this, cooperation does not need to draw on the individual monetary profit of providing a requested service or good; instead, providing other actors with requested products as efficiently as possible already serves the collective's interest.

For our model to represent the first of these two attributes, we thus need to implement the demand for some form of multi-part products which are likely to be too complex for a single actor to produce. This generates a setting of actor inter-dependency. The concept of a common or collective goal, on the other hand, needs to influence the decision-making strategies of the actors, as they do not have to strive for profit but rather for higher system efficiency.

Figure 2: CORFO drawn as a Viable System. At the base level of the system, different production plants of the various sectors interact with the environment and with each other to fulfill their goals. Performance is measured and communicated through the Telex Network (system 2) and supervised by the CORFO management on a day-to-day basis. Larger scale planning and exceptional cases are passed further on to the higher level control instances (systems 4 and 5). Source: Medina (2006, p. 21)

Figure 3: The recursive scale of management levels in the Chilean economy: each collection of autonomous elements on a given level forms a new autonomous element on the next stage of increasing complexity. Source: Medina (2006, p. 28)

3.2 Modeling

In the modeling step, the system to be represented needs to be abstracted to a degree which is feasible for simulation. In concrete terms, this means that the defining system attributes that were specified in the previous step are translated into concrete design choices. With regard to the simulation task at hand, we decided on the following list of central design features.

3.2.1 Single-layer Approach

As identified in the representation step, Beer's management cybernetics approach heavily draws on the recursive design of the different layers. This means that a model devised for one of these layers is applicable to all other levels with only minor adjustments. So, with regard to the time frame of the CyberSym project and in order to limit model complexity, it was decided to restrict the simulation approach to the representation of a single layer of the Chilean economy management system. This implies that the basic resources used by the actors of the modelled layer should be provided by the environment (they are actually the output of a lower-order system) and that the individual agents' demands are governed by a higher-order, out-of-system mechanism (normally determined by the next layer's demands).

3.2.2 Static Agents

After deciding to simulate only one of the recursive layers, a subsequent issue was to choose which one: referring back to Figure 3, the CyberSyn project approach offers at least 12 distinct management levels. In order to retain simulation clarity, it was decided to model the fifth layer, representing firms or factories. On this level, products are assembled that both have a certain degree of complexity and are still comprehensible to the programmers and system evaluators. But choosing factories as the actors of the simulated layer also brought with it some implications. The most prominent is their inability to move: while Multi-Agent Simulations often aim to represent systems of moving and interacting elements, modeling the communication of different factories demands the exact opposite: all interaction should be performed from static, fixed locations within the environment.

3.2.3 Limited Scope

When agents are bound to a fixed location, decision-making should be highly influenced by their direct environment. In the case of simulating firms and factories from the Chilean economy, the agents' neighborhood might for example influence their decisions with regard to resource availability, the distribution of local demand or their location at an important transportation route. In order to model this direct neighborhood dependence, we decided to provide the agents with a limited scope of knowledge about the overall system: agents can only 'see' and access environment resources within a given range around their location; the same applies to their communication with other agents.

In order to still retain the concept of an interconnected multi-agent network, factories can share their demands with their neighbors. Their neighbors in turn pass the received requests on to their own neighbors, creating a request network in a wave-like manner. This can be compared to the telex network provided to the factories during the CyberSyn project, enabling them to exchange production data.
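To make this wave-like propagation concrete, the following sketch shows one way the neighbor-to-neighbor forwarding (including the validation call to the original requester introduced later in Section 3.3.1) could be implemented. All class, field and method names are illustrative assumptions, not the actual CyberSym code.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of wave-like request propagation between neighboring agents.
// All names are hypothetical; the actual CyberSym classes may differ.
class Request {
    final String product;      // e.g. a word or partial word such as "cake"
    final Agent requester;     // the agent that originally issued the request
    int hops;                  // number of links travelled so far
    int waitCounter;           // ticks the request has remained unfulfilled

    Request(String product, Agent requester) {
        this.product = product;
        this.requester = requester;
    }
}

class Agent {
    final List<Agent> neighbors = new ArrayList<>();   // agents within scope
    final List<Request> jobList = new ArrayList<>();   // known open requests

    // Called once per tick: pass every still-active request on to all neighbors.
    void forwardRequests() {
        for (Request r : jobList) {
            // Direct validation call to the original requester, regardless of
            // distance, so that only active requests keep spreading.
            if (!r.requester.stillWants(r.product)) continue;
            for (Agent n : neighbors) {
                if (!n.knows(r)) {
                    Request copy = new Request(r.product, r.requester);
                    copy.hops = r.hops + 1;            // one link further from the requester
                    copy.waitCounter = r.waitCounter;
                    n.jobList.add(copy);
                }
            }
        }
    }

    boolean stillWants(String product) { return true; }  // placeholder: check own wish list
    boolean knows(Request r) { return false; }            // placeholder: avoid duplicates
}
```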

3.2.4 Discrete, non-sharable Resources

As an abstraction for the different resources available in the environment and the products requested by the agents, we decided to use the letters of the English alphabet and English words, respectively. On the one hand, using letters and words has the great advantage of being a very simple approach to modeling a limited set of discrete resources that can be combined in certain regulated ways to create an infinite set of possible products. On the other hand, the 'construction plan' for every product is already inherent in its description. For example, creating the word 'pancake' has a clear-cut set of possible partial products and an order in which they need to be assembled (parts 'c' and 'ake' can be combined into 'cake'; 'pan' needs to be added to the front, etc.). Trying to actually produce a pancake would imply the use of an external recipe that specifically states what ingredients are needed and what the order of 'assembly' would have to be. The letter/word approach thus provides a fast way to create an ontology holding a large number of possible products with a limited set of resources that is still intelligible to the evaluators (Bandini et al., 2009). For increased clarity, we also decided to make the letter resources non-sharable, which means that every resource extracted from the environment is represented by a unique instance that can only be in one place at a time and as such be processed and transported.
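As a small illustration of why the letter/word ontology carries its own 'construction plan', the sketch below enumerates the possible two-part assemblies of a requested word; the class is hypothetical helper code, not part of the thesis implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: every contiguous split of a word is a valid assembly step,
// so the "recipe" of a product is implicit in its spelling.
public class ProductPlan {

    // Returns all two-part assemblies producing the requested word,
    // e.g. "pancake" -> [p|ancake, pa|ncake, pan|cake, panc|ake, panca|ke, pancak|e].
    static List<String[]> splits(String product) {
        List<String[]> parts = new ArrayList<>();
        for (int i = 1; i < product.length(); i++) {
            parts.add(new String[] { product.substring(0, i), product.substring(i) });
        }
        return parts;
    }

    public static void main(String[] args) {
        for (String[] p : splits("pancake")) {
            System.out.println(p[0] + " + " + p[1]);   // e.g. "pan + cake"
        }
    }
}
```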

3.2.5 Production Deadlines

With regard to the measurement of time used in our model, we decided to stay as close to reality as possible. Following this approach, we agreed upon a time scale measuring hours, where - as can be expected - 24 hours represent a day. During such a day, agents can work a pre-defined amount of time (by default 12 hours) in order to fulfill their demands. And, in order to give them an incentive to actually do so, agents are issued a certain deadline period within which they have to assemble the requested products; if they cannot do so, they are deactivated. This deadline period is also measured in hours and should be related to the average production times of items if it is to fully serve its function.

3.2.6 Static Intelligence Heuristics

As a last point of attention we need to consider the agents' intelligence: Beer's viable system model proposes largely autonomous agents that may be influenced in their decision-making by higher order levels. To realize this within the CyberSym model, we propose the use of distributed cybernetic decision-making.


Since each agent only has a limited view of the entire system layer, it needs to base its decisions on its own partial, local data. This means that the overall layer management needs to emerge from the individuals' choices. Organizing control in this way guarantees high agent autonomy and relatively simple layer management. On the other hand, it cannot be guaranteed that decisions will be optimal.

To control the agents' action decision mechanism, we propose to bring in the 'new approach to intelligence' described above. When using this approach, agents that try to always retain the highest number of possible future actions will implicitly also have to assess their environment and their neighbors to do so: if one of an agent's neighbors is deactivated, this agent loses one of its contacts. A lost contact in turn means that it has fewer options to spread its requests to and obtain its necessary resources from. By losing a neighbor, the agent thus essentially loses a number of possible futures.

Derived from this concept, distribution network maintenance was chosen as the primary driving force behind the agents' intelligence. In more detail, in order to realize this form of intelligence in a computationally efficient way, it was decided to use static, heuristic derivations of Wissner-Gross's formula for the action decision processes of the agents. Their exact implementation is explained in the next section.
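Purely as an illustration of how 'maximizing future freedom of action' can be reduced to something computable, the sketch below counts the distinct resource types an agent can still reach, either from sources in its own scope or through still-active neighbors; losing a neighbor shrinks this count. This is a hypothetical proxy, not the heuristic set actually used (those follow in the next section).

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative proxy for "future freedom of action": the number of distinct
// resource types an agent can still obtain. Losing a neighbor removes every
// option that was only reachable through that contact.
final class FutureFreedomProxy {

    // ownSources: resource types available from sources within the agent's scope.
    // viaNeighbors: for each still-active neighbor, the types reachable through it.
    static int accessibleFutures(Set<Character> ownSources, List<Set<Character>> viaNeighbors) {
        Set<Character> reachable = new HashSet<>(ownSources);
        for (Set<Character> viaNeighbor : viaNeighbors) {
            reachable.addAll(viaNeighbor);   // options contributed by this contact
        }
        return reachable.size();             // crude count of distinct accessible futures
    }
}
```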

3.3 Simulation

Once the investigated system is sufficiently represented and modelled, the next step is to implement it in a computer simulation. In order to implement the CyberSym model, we chose the open source, Java-based simulation framework Repast Simphony [4] (North et al., 2013). Repast is integrated into the Eclipse Java IDE and provides a range of useful simulation tools (e.g. system visualization, file sinks, model export and parameter sweeps), besides being fully adjustable through its Java source code.

Following the idea of Gershenson's methodology schema, backtracking is sometimes needed to revise decisions taken earlier in order to adapt to unforeseen implementation issues and unexpected system behavior. This also happened in the CyberSym project, where - while already implementing the simulation - some issues arose that demanded additional attention. Consequently, in this section the most important concrete implementation choices are explained first, followed by a depiction of an exemplary simulation run.

3.3.1 Implementation Choices

While implementing a developed model in Repast Simphony is for the largest part simply Java programming, there are a number of implications that using the Repast framework has on programming style. The two most prominent ones to be described here are Repast's tick-based approach to time and its environment representation.

Figure 4: Screenshot of Repast's environment display. The parameters for this simulation were 5 agents on a 20x20 grid with 26 resources (one for each letter of the alphabet). Red circles are agents with an indication of their scope (all agents are within each other's scope, so the centrality rating is 100% for all); blue diamonds are sources with an indication of the resource they produce as well as a rating of their availability. Positive ratings mean that they currently regenerate faster than they are mined.

Repast demands that all agents taking an active role in the simulation process be added to the so-called simulation context, from which, at every iteration of the simulation, agents are activated using a @ScheduledMethod. Correctly scheduling these methods can become a demanding task for Repast newcomers. Concerning the environment, Repast uses two different forms of environment representation in parallel: a rasterized Grid and a continuous Space. While the Grid can be used to determine all other agents within a certain (rectangular) range, the Space coordinates of agents can be used to calculate actual distances (which can for example be used for calculating traversal costs). The environment can either be chosen to end at its borders or to 'wrap around,' which results in the environment behaving like a globe. For project CyberSym we decided on the latter, as in the globe representation the different locations within the environment are exposed to more uniform conditions (see Figure 4).
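A minimal sketch of what such an agent can look like in Repast Simphony is shown below. The @ScheduledMethod annotation and the Grid/ContinuousSpace types are standard Repast Simphony API; the CyberSym-specific field names and method bodies are assumptions made for illustration.

```java
import repast.simphony.engine.schedule.ScheduledMethod;
import repast.simphony.space.continuous.ContinuousSpace;
import repast.simphony.space.grid.Grid;

// Minimal sketch of a CyberSym-style agent in Repast Simphony.
public class CyberSymAgent {

    private final Grid<Object> grid;              // rasterized view: neighborhood queries
    private final ContinuousSpace<Object> space;  // continuous view: actual distances
    private int workingHoursLeft = 12;            // reset at the start of each simulated day

    public CyberSymAgent(Grid<Object> grid, ContinuousSpace<Object> space) {
        this.grid = grid;
        this.space = space;
    }

    // Called by Repast once per tick (one simulated hour), starting at tick 1.
    @ScheduledMethod(start = 1, interval = 1)
    public void step() {
        if (workingHoursLeft <= 0) return;   // no working time left today: skip this turn
        updateEnvironmentRepresentation();   // scan scope for agents and sources
        performBestRatedAction();            // heuristics of Section 3.3.1
        workingHoursLeft--;                  // actions cost one hour in the standard setting
    }

    private void updateEnvironmentRepresentation() { /* omitted in this sketch */ }
    private void performBestRatedAction() { /* omitted in this sketch */ }

    // Example of using the continuous space for real distances (e.g. to a requester).
    private double distanceTo(Object other) {
        return space.getDistance(space.getLocation(this), space.getLocation(other));
    }
}
```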

Besides these Repast-related implementation choices, there are a range of specific design choices that were made during the programming process, all of which have implications for the model behavior:

1. Possible Product List. In order to provide the system with a number of products that can be used to generate agent demand, we generated a list of 26 nouns, each starting with a different letter of the alphabet. This way it can be guaranteed that every letter might be needed while still using a very limited list of possible products. Besides this feature, choosing some random English words (with a length of 2 to 9 letters) also has the effect of generating a natural distribution of letter demand (as seen in Figure 5) that can be used to generate different levels of demand and thus (for equal numbers of resources) results in a natural distribution of resource scarcity.

Figure 5: Letter distribution of the simulation's standard list of possible products. It can be observed that the distribution roughly follows the typical letter distribution of the English language, resulting (for equal numbers of resources) in a natural distribution of resource scarcity.

2. Multiple parallel demands. It was decided to introduce the possibility of having multiple demands in parallel (called agent 'wishes'), since this helps to overcome temporary scarcity bottlenecks: whenever a certain resource momentarily becomes more scarce due to increased demand, agents can switch to other wishes that do not contain that particular resource in the meantime.

In the system's standard setting, agents are granted one additional wish for every X hours of work, where X can be set in the simulation GUI (normally 12). Furthermore, all wishes unfulfilled for a period of Y hours are replaced by a new wish to enable system sustainability even in settings with missing resources (Y can be set in the simulation GUI, too, and is proposed to be the deadline length plus 48 hours).

3. Limited communication. Besides implementing the wave-like neighbor-to-neighbor distribution of requests and products in the agent network, it was concluded that one additional form of agent communication was needed: if agents could only contact their neighbors, determining whether a given request was still active would be rendered impossible. To overcome this problem, agents can validate requests by calling the original requester, no matter the distance. By doing so, only active requests are passed on to neighbors and action planning can rely on up-to-date information.

Figure 6: Plot of a root function (blue) and a negative double-scaled root function (red). In the CyberSym simulation, the different heuristics governing an agent's intelligence draw on the root function's property of steady increase, with a fast ascent at the beginning and a subsequent decrease in intensity. Graph generated with https://graphsketch.com/

4. Three-part heuristic action rating. Each tick (generated by the @ScheduledMethod), all agents need to determine their possible actions and rate them using a heuristic rating function. In the standard system setting used for the experiments described in this paper, each action is rated based on a combination of three (of five) current agent and environment properties. These rating features are request urgency, requester distance, overall resource request, resource availability and requester utility. They are derived from the 'new approach to intelligence' described before and aim to contribute to upholding the maximum number of possible futures. The ratings are all designed to fall in a range between -100 and 100, which is to enable mutual compatibility, and make use of the practical features of root functions (they exhibit a steady increase with a fast ascent at the beginning and a subsequent decrease in intensity - see Figure 6). Here we discuss them in more detail; a compact code sketch of the first two ratings follows this list:

a) Request urgency is calculated through

   \sqrt{(T + O_R) \bmod D} \times \frac{100}{\sqrt{D}} \times C        (1)

where T is the current tick, O_R the random deadline offset of the requesting agent R (in ticks), D is the system's deadline period (in ticks) and C is the Boolean charged. The function's first factor is responsible for determining a request's urgency, as the value under the root increases linearly as the deadline approaches. The second factor is responsible for stretching the function graph: it is designed in such a way that the resulting rating reaches 100 when the current deadline period has reached its end. Factor C, finally, is a Boolean indicating whether the requester has already had one wish fulfilled during the current deadline period. If this is the case, C is set to zero and thus the whole rating becomes zero, too. If the requester has not had one of its wishes fulfilled yet, C is set to one and the rating is used as calculated.


b) Requester distance is rated through the function

   \sqrt{\frac{W}{L}} \times \frac{100}{\sqrt{D}}        (2)

where W is a given request's waitCounter in ticks, L is the number of links or hops between the current agent and the requester, and the second factor is equal to the one in the previous formula, stretching the root function. The first part of the formula takes into account the amount of time a certain request has been unfulfilled in relation to the distance to the requester: when an agent in the network first receives a given request, the request's waitCounter equals the current number of links and the ratio becomes one. If during the following tick the request is still active, the ratio will increase, as the waitCounter of the request increases while the number of links remains constant. The intensity of this increase in rating, however, is determined by the distance to the requester: at, for example, the fifth tick after generating a request, the ratio for an agent at distance two will be 2.5, while for an agent at distance five it will still be 1.

c) The overall resource request is assessed by the simple ratio

   \frac{R(x)}{R} \times 100        (3)

where R(x) is the request for a given resource or product x and R is the current total resource request registered by the agent. Using this ratio, an agent can estimate the overall request for a given resource and focus on delivering items that appear to be more scarce first.

d) If a requested resource is not in the agent's inventory but may be extracted from a source in its neighborhood, this particular action is rated with regard to resource availability. A resource's availability is calculated by a two-step process: step one determines the regeneration rating for the sources available to the agent. This regeneration rating is determined through

   \frac{R_S(x, D) - E_S(x, D)}{Q_S(x) + E_S(x, D) - R_S(x, D)} \times 100        (4)

with R_S(x, D) the number of regenerated resources of type x in all reachable sources S within the last D ticks, E_S(x, D) the number of resources of type x extracted from sources S, and Q_S(x) the currently available quantity of resources of type x in sources S. This rating is positive if during the last D ticks more resources of type x were regenerated than were extracted. If the opposite is true, the rating is negative.

Step two of the rating process combines this local availability rating with the distance measurement described in b). If the availability rating in equation 4 returned a positive result, it is adopted as the final resource availability rating; otherwise it is adjusted by subtracting

   A - \left( -\sqrt{\frac{W}{L}} \times \frac{2 \times 100}{\sqrt{D}} + 100 \right)        (5)

where A is the (negative) availability rating from equation 4, and the entire second part is a translated, inverted and double-scaled version of the root function used before. Implemented in this way, the formula calculates the difference between resource availability and requester distance, which will still turn out positive if the request's waitCounter is large enough to overpower the resource scarcity warning discount calculated in equation 4. For a visualization of the negative root function, please refer to the red graph in Figure 6.

e) As a last rating, receiver utility is assessed for actions concerning

the assembly of products. To calculate it, another conditional function is used: if the receiver of the assembled product is the assembling agent itself, the score is determined by

   \operatorname{argmax}\left( \frac{\operatorname{size}(A)}{\operatorname{size}(W_A)} \times 100 \right)        (6)

where size(A) is the size of the assembled product and size(W_A) is the size of any wish containing the assembled part. Determining the argmax of this function effectively returns the best ratio of wish fulfillment that can be achieved with the assembled part.

If the receiver of the assembled product is not the assembling agent itself, the distance function of equation 2 is used since in that case the receiver’s wish list cannot be accessed.
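The sketch below (referred to above) implements ratings a) and b) directly from equations (1) and (2); the variable names mirror the symbols in the text, while the surrounding class and method signatures are assumptions made for illustration.

```java
// Illustrative implementation of the first two heuristic ratings, following
// equations (1) and (2). Symbols: tick = T, oR = requester's random deadline
// offset, deadlinePeriod = D, alreadyFulfilled = C, waitCounter = W, links = L.
final class ActionRatings {

    // (a) Request urgency: sqrt((T + O_R) mod D) * 100/sqrt(D) * C
    static double requestUrgency(long tick, long oR, long deadlinePeriod, boolean alreadyFulfilled) {
        double c = alreadyFulfilled ? 0.0 : 1.0;        // zero once a wish was fulfilled this period
        double elapsed = (tick + oR) % deadlinePeriod;  // progress within the current deadline period
        return Math.sqrt(elapsed) * (100.0 / Math.sqrt(deadlinePeriod)) * c;
    }

    // (b) Requester distance: sqrt(W / L) * 100/sqrt(D)
    static double requesterDistance(int waitCounter, int links, long deadlinePeriod) {
        double ratio = (double) waitCounter / links;    // grows while the request stays open
        return Math.sqrt(ratio) * (100.0 / Math.sqrt(deadlinePeriod));
    }
}
```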

3.3.2 Execution flow

Now that all fundamental system functionality is known, we will briefly have a look at the program flow of a complete simulation run (a condensed code sketch follows the list):

1. Set-up. During initialization, agents are passed the values of the simulation's GUI parameters, such as scope and maximum working hours per day, and are assigned a first wish, randomly selected from the possible product file. Then the agents are randomly placed in the environment and the letter sources are distributed (in alphabetical order). Then the simulation is started.

2. Update environment representation. An agent's first task during each tick is updating the list of reachable agents within its communication scope. If it cannot reach any agents, the maximum range is extended until at least one other agent is in its scope - or until the entire environment is (this is to guarantee the existence of an agent network). As a next step, all sources within the agent's scope are evaluated and resource availability is determined. When that is done, the agent's partial product demand is re-generated from its wish list (containing all requested final products) and its inventory (containing all parts it already has). With this up-to-date knowledge of missing parts, an internal jobList is created, which is merged into the agent's external jobList generated from all its neighbors' requests.


Figure 7: Screen shot of the Repast simulation GUI while running one of the CyberSym experiments.

3. Action selection. Now we enter the conditional part of an agent's routine: if the called agent has no more working hours left, the rest of its turn is skipped and the next agent is called. If it still has some time left, however, the agent determines what - given its current condition - is currently the best available action. This is done by rating all possible actions as described in the last section and returning the action which received the highest rating. The chosen action is then performed, meaning that either a requested product is 'consumed' (fulfilling the demand), a (partial) product is assembled from items in the inventory, a resource or product from the inventory is delivered to a neighboring agent, or a resource is extracted from one of the reachable sources.

In the current model setting, every action - except fulfilling an agent's complete product wish - has a cost of one hour [5]. After an action is performed, the next agent is called.

[5] This was decided since all actions lie within the agent's scope. For possible future adaptations please refer to the future work section.

4. Performance registration. When all agents have been called, a static evaluation interface collects statistics about system performance and writes them to file. If additional visualization is enabled, these statistics are also displayed in Repast's simulation GUI (see Figure 7). Then the next tick begins and all agents are called again. This process is repeated until either a pre-set maximum number of iterations is reached or until there are no more active agents left.
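The condensed code sketch announced above summarizes steps 2 and 3 of an agent's turn; every identifier is a hypothetical stand-in rather than the actual CyberSym implementation.

```java
import java.util.Comparator;
import java.util.List;

// Condensed, illustrative sketch of one agent's turn within a tick (steps 2 and 3 above).
class TurnSketch {
    interface Action { void perform(); double rating(); }

    int workingHoursLeft = 12;

    void takeTurn(List<Action> possibleActions) {
        // Step 2: update environment representation (reachable agents and sources,
        // partial-product demand, merged internal/external job list) -- omitted here.

        // Step 3: action selection.
        if (workingHoursLeft <= 0) return;            // out of time: turn is skipped
        Action best = possibleActions.stream()
                .max(Comparator.comparingDouble(Action::rating))
                .orElse(null);                        // highest-rated action wins
        if (best != null) {
            best.perform();                           // consume, assemble, deliver or extract
            workingHoursLeft--;                       // one hour per action in this sketch
                                                      // (the model exempts complete wish fulfillment)
        }
    }
}
```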

3.4 Application

Once the model is fully implemented and the simulation program is up and running, the actual experiments to validate assumptions and test system performance can be conducted. In Gershenson's schema, this phase is called Application. And since Repast is a Java framework especially developed for running Multi-Agent Simulations, it has some custom-tailored features for supporting this phase, too:

1. File sinks. File sinks are Repast's built-in functionality to periodically log selected system data. They can obtain data from every getter function returning Java primitive types as well as a range of other system performance features. The evaluation intervals can be adjusted in the simulation GUI, but for this project we opted for the most detailed data set, saving system data after every tick.

2. Parameter sweeps. When simulating a developed model, one is often interested in system performance under a range of different parameter settings. Especially for this task, Repast Simphony introduced batch runs that allow the user to run a whole series of simulations without the use of the visualization interface, saving masses of computation time.

When generating the instruction set for a batch run, one can also choose to set certain parameters to a range of values instead of one constant value. In this case, the batch executes X iterations of every given parameter combination and logs their performance. This is called a parameter sweep, a useful tool for generating rich and informative data sets. One needs to keep in mind, however, that every additional parameter in such a sweep adds a large number of extra runs that need to be executed - and executing a single simulation run on the final version of the CyberSym model takes, depending on system endurance and the machine it is performed on, about two minutes. Sweeping three parameters over five values each with ten iterations per combination, for example, already amounts to 1,250 runs, or roughly 42 hours of computation at two minutes per run.

3.5 Evaluation

As a last step of the project approach schema, the results obtained from the simulation runs are analyzed and evaluated in order to gain insight into the system's performance. So in this section we will explain the different performance measures used in the CyberSym project and evaluate their expressiveness. Furthermore, we will give a short overview of the data evaluation process and raise the subject of simulation coverage validation.

3.5.1 Measures of Interest

As mentioned before, Repast's File Sinks can register any primitive result returned from getter functions implemented in the simulation's Java code, so it is up to the user to determine which of these values are actually significant for accurately measuring system performance. For the CyberSym project, we propose to use these three features:

1. System sustainability. System sustainability expresses the agent network's ability to fulfill individual agents' requests within the given deadline period and therefore displays the system's capability to effectively react to different environment conditions. System sustainability can be assessed by comparing the number of ticks passed until the last agent is removed, or by the number of agents still active after a given period of time has passed. Considering these features, we propose that it provides a good primary indication of overall system performance.

2. Agent happiness. Agent 'happiness' gives a more detailed indication of the system behavior, as it not only assesses the number of active agents but also how many wishes they can fulfill per deadline period. To do so, agent happiness may either be calculated by simply dividing the amount of wishes fulfilled by the number of active agents - or by introducing some more complex rating system where the amount of 'happiness' gained by an individual agent decreases with every wish it has had fulfilled before. Calculating agent happiness in this way, a more equal distribution of fulfilled wishes is favored.

3. Production Cost Ratio. While system sustainability is a purely qualitative assessment of system performance, agent happiness already can provide a more quantitative expression of system efficiency. An even more detailed quantitative performance benchmark we propose to use with regard to system efficiency is the system's production cost ratio.

In the current system setting, production cost is measured in hours of work, where every step of the process costs exactly one hour. This means that the minimal production cost of a product can be calculated as 2n - 1, where n is the product's size; a seven-letter product such as 'pancake', for example, has a minimal cost of 13 hours. So whenever a product is 'consumed', marking the fulfillment of a given request, the actual production cost can be compared to the optimal production cost, expressed in the production cost ratio cost(p)/cost*(p). The smaller this ratio, the more efficiently the system is able to manage the individual agents' requests [6].

[6] As a side note: optimal production cost can only be achieved if all necessary resources are within the requesting agent's scope and thus no transportation is needed. However, since the aim of this project is to investigate the behavior of an agent network that needs to share environment resources, it is reasonable to assume that in most cases this cannot be achieved (and it is not expected either).

3.5.2 Data Evaluation Process

Simulations of the CyberSym model in turn return two data sets each, one listing agent performance and one containing the overall system performance measurements. This means that in order to recombine the data generated by a single run, four files need to be assessed and their data linked to one another. To simplify this task, a basic MySQL server was installed that manages the output data through SQL queries and some php processing in order to directly feed the cleaned and sorted data to JavaScript data visualization plugin Highcharts.7

3.5.3 Validation of Simulation Coverage

As mentioned before, the number of parameters used in a parameter sweep directly affects the run time of the simulation batch. So as to maximize the utility of a simulation run, one wants to know in advance how many iterations of any individual setting are needed to produce consistent, reliable performance data. In order to analyze this for the CyberSym project, we will here compare the standard error of the mean (SEM) for different numbers of iterations.

To calculate the standard error of the mean, we first need to assess the standard deviation of the data set. The standard deviation of a random variable X with mean value µ is given by

   \sigma = \sqrt{E[(X - \mu)^2]}        (7)

which is the root of the variance of X. If, as in our case, X takes values from a set of discrete values x_1, x_2, ..., x_N, this expression can be reformulated as

   \sigma = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (x_i - \mu)^2},        (8)

where N is the total size of the data set.

From the standard deviation of the sample, one gains the SEM by dividing it by the square root of the sample size:

   \mathrm{SEM} = \frac{\sigma}{\sqrt{n}}        (9)
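For illustration, the sketch below computes equations (8) and (9) for a set of per-run results; the input values in the example are made up, and the class is generic helper code, not part of the thesis implementation.

```java
// Illustrative computation of equations (8) and (9) for a set of per-run results.
final class SemCalculator {

    static double standardDeviation(double[] x) {
        double mu = 0.0;
        for (double v : x) mu += v;
        mu /= x.length;                                  // mean of the sample
        double sumSq = 0.0;
        for (double v : x) sumSq += (v - mu) * (v - mu);
        return Math.sqrt(sumSq / x.length);              // equation (8)
    }

    static double standardErrorOfMean(double[] x) {
        return standardDeviation(x) / Math.sqrt(x.length);   // equation (9)
    }

    public static void main(String[] args) {
        // e.g. "wishes fulfilled" from n identically configured simulation runs (made-up numbers)
        double[] runs = { 41, 37, 44, 39, 42, 40, 38, 43, 41, 40 };
        System.out.printf("sigma = %.3f, SEM = %.3f%n",
                standardDeviation(runs), standardErrorOfMean(runs));
    }
}
```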

To investigate this standard error of the mean for different numbers of simulation iterations, we generated a data set of fifty simulation runs with identical system settings. For this set we calculated the standard deviation and determined the SEMs for different combinations of zero to ten iterations by comparing their joint mean to the data set's total mean. The results of this experiment are displayed in Figure 9 and visualized in Figure 8. Based on these results we conclude that ten iterations for each setting should provide us with reliable data for effectively and efficiently evaluating system performance.

A second limiting factor for simulation runs is the model scale. Naturally, when investigating the Chilean economy, one would want to have a simulation of thousands of agents interacting, performing actions for a number of ticks that is equivalent to multiple years in the real world. One of the aspects of modeling however is the abstraction of the problem domain to both reasonable and efficient dimensions.

In order to control the number of agents used in a simulation run and the size of the environment they are placed in, we introduced measurements for agent centrality and resource accessibility. By keeping track of the number of other agents and sources within an individual agent's scope, one can generate settings with equal properties independent of the actual simulation size: 20 agents with a scope of 10 placed in a 50x50 environment with 200 sources will have centrality and accessibility ratings comparable to those of 80 agents with scope 40 in a 100x100 environment with 800 sources. This means that by logging agent centrality and source accessibility, even small-scale simulation runs can generate data representative of larger scenarios.
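A rough sketch of how such ratings could be computed from agent and source positions is given below; the square (Chebyshev) scope distance and all names are assumptions on our part, chosen to mirror a square neighborhood scope:

from collections import namedtuple

Point = namedtuple("Point", "x y")

def in_scope(a, b, scope):
    """True if b lies within a's square scope neighborhood."""
    return max(abs(a.x - b.x), abs(a.y - b.y)) <= scope

def centrality_and_accessibility(agents, sources, scope):
    """Average number of agents in scope (centrality) and of sources in
    scope (accessibility) per agent, comparable across grid sizes."""
    n = len(agents)
    centrality = sum(
        sum(1 for other in agents if other is not a and in_scope(a, other, scope))
        for a in agents
    ) / n
    accessibility = sum(
        sum(1 for s in sources if in_scope(a, s, scope))
        for a in agents
    ) / n
    return centrality, accessibility

# Example: three agents and four sources on a small grid, scope 10
agents = [Point(5, 5), Point(12, 8), Point(30, 30)]
sources = [Point(6, 6), Point(10, 10), Point(28, 31), Point(40, 2)]
print(centrality_and_accessibility(agents, sources, 10))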


Figure 8: Differences in standard error of the mean (SEM) for a range of varying iteration numbers.

Figure 9: Standard error of the mean (SEM) of different iteration numbers (rows) for selected system performance measures (columns). A stands for Agent Count, C for Agent Centrality, W for Wishes Fulfilled and P for Production Cost. The color indicates their relative distance to the mean calculated over all 50 runs.


4 Results

The final version of CyberSym's simulation GUI turned out to hold 25 distinctive system parameters. Since in the limited scope of this research project not all of them could be evaluated, we selected the three most central parameters - deadline period length, resource availability and agent network size - and ran simulation batches for different configurations thereof. The results of these parameter sweep batch runs are presented here and will be discussed and evaluated in the Conclusion section.

4.1 Production Deadlines

The first system parameter which we expect to have a direct influence on system performance is the agents' production deadline period. Issuing production deadlines may generate the agents' drive to actually take action, but if the deadline periods are too short, it becomes impossible to produce certain products (especially the large ones) - and if they are too long, the agents might lose the incentive to act efficiently.

To test this, we simulated the system performance for a network of 10 agents in a 35x35 grid with 130 sources. The scope of the agents was set to a 10x10 direct neighborhood, but since the aim of this project is to model a multi-agent network, the agent scope could be increased dynamically in case an agent cannot reach another one within its scope. In this setting, we issued production deadline periods ranging from 24 hours to 84 hours, and let the simulation cycle through ten iterations for every condition.
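For reference, this sweep can be written down as a simple grid of settings. The parameter names below are illustrative rather than the model's actual ones, and the intermediate deadline values (36h and 48h) are an assumption, since only 24, 60, 72 and 84 hours are named explicitly in the text:

# Illustrative sweep definition; names and intermediate deadline values are assumptions.
base_setting = {
    "agent_count": 10,
    "grid_width": 35,
    "grid_height": 35,
    "source_count": 130,
    "agent_scope": 10,
    "ticks": 2000,
}

sweep = [
    dict(base_setting, deadline_period_hours=hours, iteration=i)
    for hours in (24, 36, 48, 60, 72, 84)
    for i in range(10)  # ten iterations per condition
]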

In Figure 10, we visualized the effects that different production deadline periods have on the system's ability to sustain itself. Starting with the shortest (realistic) deadline period, which is 24 hours, we see that the initial agent network size of ten directly drops to an average of about 1.25 active agents for all succeeding intervals. This means that only a small fraction of the agents in the initial network were able to assemble the requested products in time. Agents who did not succeed in doing so were removed from the simulation after one of the first deadline checks.

The average number of agents remaining active after the first few deadline periods appears to increase with every step of extension in deadline period length, where all runs display a comparable behavior of first reducing the number of active agents and then remaining almost constant until the end of the simulation run at 2000 ticks. The intensity of the reduction in agent network size however decreases with every extension, until at 72h agents were deactivated in only a part of the ten iterations - and none whatsoever for the 84h deadline. This also comes back in Figure 11, where the average number of wishes fulfilled per agent per deadline period is depicted: In the 24h deadline setting only about 0.25 wishes can be fulfilled, whereas in the case of 60, 72 or 84 hours this approaches or exceeds 1.

Keeping that in mind, we will now have a look at the production cost and production cost ratios for different deadline period lengths. The results of this experiment are shown in Figure 12 and Figure 13. To correctly interpret these graphs, it is important to note that an agent's maximum workload is twelve


Figure 10: Average number of active agents during a given period of ticks for different deadline period lengths. Agents get a head start by being granted a first double-length deadline period plus individual offset. It can be seen that an extension of the deadline period results in an increase in size of sustainable networks.

Figure 11: Average number of wishes fulfilled per agent per deadline period for different deadline period lengths.


Figure 12: Average production cost for products consumed during a given interval of ticks. An extension in deadline period apparently also increases the average cost of assembled products.

Figure 13: Development of the average production cost ratio for a number of different deadline period lengths.


hours per day. That means that in a 24h deadline period, the agents will each only have 12 hours to work on the fulfillment of their wishes. In Figure 12 we consequently see that the average production cost in the 24h setting closes in on about nine to ten hours: The only products completed by the agents are relatively small ones8. When deadline period length increases, the average cost

of consumed products also increases. This has to do with two main factors: The first one is the simple fact that the agents have more time to complete actions, so larger products can actually be made. The second factor has to do with the agent population size: When comparing these results with the number of active agents seen in Figure 10, it becomes clear that for the 24h deadline there are actually just about 2 active agents left after the first few deadline evaluation points. This means that there cannot be any larger distribution network where the repeated delivery of parts increases a final product's cost. This development can be seen in the visualization of the system's production cost ratio in Figure 13: for the 24h deadline, the production cost ratio approaches 100%, which means that all - or at least the very largest part of - the actions necessary to create the requested products are performed by the requesters themselves. The increase in production cost ratio then again is linked to two major factors: One being the network size just mentioned - there will be more transport involved in fulfilling the more complex requests - the other being the re-use of older parts: Sometimes agents extract resources or assemble parts that become redundant as the request gets fulfilled by some other agent before they reach their destination. This means that the agent collective accumulates a number of spare parts that can be re-used for an incidental wish. But as these parts were already shipped back and forth a number of times before they are eventually integrated in a consumed product, their cost ratio becomes sub-optimal.

A last point of attention concerning the effects of different deadline period lengths is their implication for overall system behavior. In Figures 14 and 15, the division of agent actions into extractions, assemblies and deliveries is displayed for a 24 hour deadline and an 84 hour deadline. Besides the fact that the total number of actions is far lower for the 24h setting (as there are only about two active agents left), it becomes clear that there is far more transportation involved in fulfilling the requests of the larger network (also see Figure 16).

4.2 Resource Availability

While deadline period length is a system parameter that directly affects system behavior and performance, we will now take a look at resource availability, which is an environment parameter. Combined with the model's use of deadlines, scarce environments will potentially be a heavy burden on system sustainability - but with an eye on the resource-based economy approach, this is exactly what the system should be able to represent. On the other hand, if resources are abundantly available, the agent network should be capable of

8 The production cost for a single product can accumulate to more than 12 hours in the first interval of the graph, since the visualization does not show the production cost per deadline period but summarizes a static 100 ticks per interval. This means that for a 24h deadline, five evaluation points have already been passed during the first depicted interval. And if a product's assembly was started within the first deadline period and continued during the second or the third one, its cost can increase above the 12h limit.


Figure 14: Division of agent actions over time for a system with 24 hour deadline period.

Figure 15: Division of agent actions over time for a system with an 84 hour deadline period.


Figure 16: Total division of agent actions for a complete simulation run with 24h or 84h deadline period, respectively. It can be observed that the larger agent network in the 84h setting utilizes much more transportation to fulfill the system’s requests.

using and distributing these resources as efficiently as possible.

Figure 17 depicts a visualization of the number of active agents after a certain amount of time when using different numbers of sources dispersed in the environment. Again, starting with the light blue line that represents the smallest possible amount (which here is 26, meaning one source per resource), we can see that in such an environment the number of agents who are able to fulfill their requests in time is quite low: The line showing the number of active agents quickly drops until it steadies at about 2.5. When doubling the number of sources to 52, the system appears to be able to retain about 7-8 agents throughout the entire simulation run, and for all other settings this closes in on the initial ten. This effect also comes back in the graph that shows the number of wishes fulfilled per agent per deadline (Figure 18), where the 78+ lines fluctuate around or just below one wish per agent per deadline, whereas the 26 and 52 lines stay distinctly clear of that mark.

When analyzing the production cost ratio graph in Figure 19, which is generated from the data of this specific experiment, two properties stand out: One is that all lines converge to just two different ratio percentages, one at about 225% (for 52 and 78) and one at about 200% (for all other settings). The other outstanding attribute is the development of the production cost ratio for the 26 setting: While the average production efficiency decreases in all other settings, it actually increases in the scarce setting.


Figure 17: Average number of active agents during a given period of ticks for different amounts of available resources. It can be seen that the first increase in available resources drastically increases the number of agents that remain active; after that, the effect becomes a little less powerful with each step.

Figure 18: Average number of wishes fulfilled per agent per deadline period for different amounts of available resources.


Figure 19: Development of the average production cost ratio for different numbers of sources provided by the environment.

A final observation is that the amount of resources available in the environment apparently does not affect the agents' overall behavior: Figure 20 shows the total division of agent actions for all six settings, and it can be concluded that there is no major difference between these segmentations.

4.3 Agent Network Size

The last parameter we want to investigate in this report is the agent network size, or total agent population. This again is a system parameter like the deadline period length, but it highly depends on the environment, too: Without enough resources provided by the environment, a larger collective of agents presumably is not capable of self-sustainability. This is due to the fact that too many resources are missing or too scarce to fulfill all the system's requests within the given deadline period. On the other hand, if the environment is too large, a small group of agents - when dispersed throughout the entire area - will probably have communication problems because of their limited scope, and might not succeed in efficient interaction either. Through the experiment presented in this section we aim to survey the first of these two options by analyzing the performance of differently sized agent networks in a scarce environment as determined in the previous section.

Let us again begin with the visualization of the development in agent population (Figure 21). From this graph it becomes obvious that - independent of initial network size - the number of active agents converges to about two to four agents during the course of the simulation run. More precisely: The larger the initial agent network, the faster its size decreases during the following deadline periods. This specific behavior can also be observed in the graph showing the average number of neighbors (agents in scope) per agent (Figure 22):


Figure 20: Total division of agent actions for a complete simulation run in environments with 26 to 156 letter sources. It can be seen that there appears to be no notable difference in agent behavior.


Figure 21: Average number of active agents after a given period of ticks for different amounts of agents at simulation start. It appears that in the scarce environment all initial sizes are eventually reduced to about two to four agents.

Figure 22: Average number of neighbors per agent. It can be observed that independent of initial network size, the number of neighbors per agent closes in at about two to three.


Figure 23: Average number of wishes fulfilled per agent per deadline period for different initial agent network sizes.

Independent of initial network size, the number of neighbors per agent closes in at about two to three.

Since in the scarce environment setting the maximum size of self-sustainable agent networks appears to converge to a quite small number - independent of initial network size - the number of wishes fulfilled per agent per deadline will in this specific case converge as well. Exactly this behavior can be observed in Figure 23.

When analyzing the graph of the agent networks' production efficiency in Figure 24, it stands out that all lines first increase to a certain level, then level off and remain nearly constant for the largest part of the simulation run. This behavior is striking, as the numbers of active agents all converge to a very small value independently of their original size. If one however takes a look at the system behavior for the different initial agent counts, this becomes a little more understandable: In Figure 25 we see the action ratio over time for a network starting off with just two agents; in Figure 26 we see the graph for 22 agents, which is the other extreme in our experiment. When comparing them, it might seem that they show very different data, but actually they are quite similar: As the graphs show the total number of actions performed during a given period of time, the 22 agent network graph naturally decreases, since agents are continuously deactivated there. This means that if one were to divide the number of actions performed by the number of agents active during that period, the result would be a graph similar to the one in Figure 25. That system behavior is indeed quite similar also becomes apparent when looking at the total action division charts in Figure 27, which again show no major difference in overall system behavior for the different initial network sizes.
