
Automation
in Crisis management

An explorative research

APPENDICES

Author: Ing. Joost Olieroock
Student number: 1240978

Education: RUG, Technische Bedrijfswetenschappen
1st supervisor: Dr. T.W. de Boer

2nd supervisor: Prof. dr. R.J.J.M. Jorna

Company: TNO, Human Factors

Company unit: Informatieverwerking
Company supervisor: Dr. J.M.C. Schraagen

“The author is responsible for the content of the paper; the author holds full copyright on this paper.”


Appendix I, The complete Fitts list

Property: Speed
  Machine: Much superior.
  Human: Lag of about one second.

Property: Power
  Machine: Consistency at any level; large, constant standard forces and power available.
  Human: 2 HP for about 10 seconds; 0.5 HP for a few minutes; 0.2 HP for continuous work over a day.

Property: Consistency
  Machine: Ideal for routine, repetitive or precision tasks.
  Human: Not reliable, should be monitored; subject to learning and fatigue.

Property: Complex activities
  Machine: Multi-channel.
  Human: Single-channel; low information throughput.

Property: Memory
  Machine: Best for literal reproduction and short-term storage.
  Human: Large store with multiple access; better for principles and strategies.

Property: Reasoning
  Machine: Good deductive power; tedious to re-program.
  Human: Good inductive power; easy to reprogram.

Property: Computation
  Machine: Fast and accurate; poor error correction.
  Human: Slow and subject to error; good error correction.

Property: Input (sensing)
  Machine: Senses some stimuli outside the human range (e.g. radioactivity); insensitive to extraneous variables; poor pattern recognition.
  Human: Wide range (10^12) and variety of stimuli dealt with by one unit; affected by heat, cold, noise and vibration; good pattern recognition; detection of low signals; good signal discrimination in high noise levels.

Property: Overload reliability
  Machine: Sudden breakdown.
  Human: Graceful degradation.

Property: Intelligence
  Machine: None; incapable of goal switching or strategy switching without specific directions.
  Human: Can deal with the unpredicted; can anticipate; can adapt.

Property: Manipulative abilities
  Machine: Task-specific.
  Human: Great versatility and mobility.

(source: Backman & Digby, 1998)


Appendix II, levels of automation by Billings

The following overview lists the seven levels of automation that Billings describes for the automation of a pilot’s tasks.

Management mode: Autonomous operation
  Automation functions: Fully autonomous operation; pilot not usually informed; system may or may not be capable of being disabled.
  Human functions: Pilot generally has no role in the operation; monitoring is limited to fault detection; goals are self-defined; pilot normally has no reason to intervene.

Management mode: Management by exception
  Automation functions: Essentially autonomous operation; automatic reconfiguration; system informs pilot and monitors responses.
  Human functions: Pilot informed of system intent; must consent to critical decisions; may intervene by reverting to a lower level of management.

Management mode: Management by consent
  Automation functions: Fully automatic control of aircraft and flight; intent, diagnostic and prompting functions provided.
  Human functions: Pilot must consent to state changes, checklist execution and anomaly resolution; manual execution of critical actions.

Management mode: Management by delegation
  Automation functions: Autopilot and autothrottle control of flight path; automatic communications and nav following.
  Human functions: Pilot commands heading, altitude and speed; manual or coupled navigation; commands system operations, checklists and communications functions.

Management mode: Shared control
  Automation functions: Enhanced control and guidance; smart advisory systems; potential flight path and other predictor displays.
  Human functions: Pilot in control through CWS or envelope protection systems; may utilize advisory systems; system management is manual.

Management mode: Assisted manual control
  Automation functions: Flight director, FMS, nav modules; datalink with manual messages; monitoring of flight path control and aircraft systems.
  Human functions: Direct authority over all systems; manual control, aided by F/D and enhanced navigation displays; FMS is available; trend info on request.

Management mode: Direct manual control
  Automation functions: Normal warnings and alerts; voice communication with ATC; routine ACARS communications performed automatically.
  Human functions: Direct authority over all systems; manual control utilizing raw data; unaided decision making; manual communications.

Abbreviations:
  CWS    Control Wheel Steering
  F/D    Flight Director
  FMS    Flight Management System
  ATC    Air Traffic Control
  ACARS  Aeronautical Radio, Inc. Communications and Address Reporting System

Appendix III, Agent technology

As the Combined System project attempts to develop an intelligent agent system to execute several tasks in the crisis management domain, this appendix aims to give an insight into the theory of agent technology. It explains basic agent theory and its implications for human-machine interaction.

It is not intended as a full course in agent technology, but as an introduction to its possibilities. First, a definition of an individual agent is given. To show what an agent system offers compared to earlier computer systems, the differences between agent-oriented and object-oriented software are explained, and agents are also compared to expert systems. Some structures of agent systems, and their communication and coordination, are described next. The final part of this appendix explains agent-based decision support.

Although in psychology and philosophy the concept of an agent is frequently used to describe a human decision-maker, this research treats the agent as a software agent.

Agent definition

Agent technology is considered one of the most promising developments in the area of modular software. In recent years agents have found practical application in various areas, for example on the Internet, where they are used to perform searches on behalf of a human user.

Although agents are in widespread use nowadays, interpretations differ and formal definitions are lacking. A non-formal definition of an agent is given by Weiss (2001):

“An agent is a computer system that is situated in some environment, and that is capable of autonomous action in this environment in order to meet its design objectives”

Weiss qualifies this definition on a few points: it does not explain what makes an agent intelligent, it does not specify what sort of environment the agent is situated in, and it leaves autonomy undefined.

Autonomy, as Weiss describes it, is the notion that an agent has control over its internal state and its behaviour. An important characteristic of an agent is that it has several actions it can take to influence its environment. Cues are extracted from the environment and used in selecting an action to influence that environment. Just as human agents, software agents have sensors to pick up cues from the environment: for a physical agent these are situated in the physical world, whereas for software agents the sensors are implemented in software. It is especially the property of autonomy that distinguishes agents from conventional object-oriented software technology. As a similarity, both agents and objects encapsulate an internal state, as described in a later paragraph.
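
To make this sense-act cycle concrete, the following minimal sketch in Python shows an agent that reads cues through software sensors, consults its own internal state, and selects an action itself. All names and cues are hypothetical illustrations of the autonomy property, not part of the Combined System design.

from dataclasses import dataclass, field

@dataclass
class Agent:
    # Internal state under the agent's own control.
    state: dict = field(default_factory=dict)

    def sense(self, environment: dict) -> dict:
        # Software "sensors": pick up a subset of cues from the environment.
        return {k: environment[k] for k in ("alarm", "load") if k in environment}

    def select_action(self, cues: dict) -> str:
        # Autonomy: the agent itself, not a caller, decides which action to take.
        if cues.get("alarm"):
            self.state["alert_level"] = self.state.get("alert_level", 0) + 1
            return "raise_alert"
        return "idle"

environment = {"alarm": True, "load": 0.4}
agent = Agent()
print(agent.select_action(agent.sense(environment)))  # prints: raise_alert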

Pro-active, goal-directed behaviour comes with a few problems, problems also identified in human behaviour. An agent executes an action on a pre-condition: the set of cues that triggers the action. While the action is being executed, the pre-condition can become false, with the result that the action no longer has any meaning. Another problem, derived from human goal-directed behaviour, may also occur: when the goal does not remain valid, highly goal-oriented behaviour makes adaptation to another goal difficult. The balance between goal-directed (pro-active) and reactive behaviour is therefore essential in building agents.

Agents vs. Objects

Compared to object-oriented programming, agent technology is mostly seen as the next step in modular software development, which evolved from functions and procedures to object-oriented programming. Software objects are defined as computational entities that encapsulate some state, are able to perform actions, or methods, on this state, and communicate by message passing. As this definition implies, objects are to some extent capable of influencing their internal state. Although the definition of an object resembles that of an agent, there are a few differences.

The first difference between an agent and an object is the level of autonomy. Although objects have some influence on their internal state, simple objects cannot influence their own behaviour: when a method of object A is invoked by another object B, object A has no influence on the execution of that method. An agent, on the other hand, decides for itself whether to execute an action at the request of another agent. In agent terminology one therefore speaks not of “invoking” actions but of “requesting” actions. An additional difference between agents and objects is the flexibility of the agent’s behaviour: agents can behave reactively or pro-actively, and socially.
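
The distinction between invoking a method and requesting an action can be illustrated with a small sketch; the class and method names are hypothetical. The object executes unconditionally, while the agent first checks the request against its own state.

class LogObject:
    def write(self, message: str) -> None:
        # An invoked method always runs; the caller is in control.
        print(f"log: {message}")

class LogAgent:
    def __init__(self) -> None:
        self.busy = False  # internal state only the agent controls

    def request_write(self, message: str) -> bool:
        # The agent decides for itself whether to honour the request.
        if self.busy:
            return False  # request declined
        print(f"log: {message}")
        return True

LogObject().write("invoked: always executes")
print(LogAgent().request_write("requested: the agent may decline"))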

Agents vs. expert systems

An expert system is a system capable of solving problems or giving advice in some knowledge domain. In this definition one can recognise some of the properties of an agent system, for instance the problem-solving capability. There is nevertheless a difference between an agent-based system and an expert system, although from a user perspective it is difficult to discern. As explained above, an agent interacts directly with its environment: it extracts information from the environment itself, and it is capable of directing actions at that environment. In the traditional view, an expert system has no influence on its environment. The difference becomes slimmer as real-time expert systems, for instance in process control, start to extract information from the environment themselves.

Concrete architectures of agents

There are several agent architectures that are used for modelling and implementing an agent system.

In logic-based agents, decision making is realised through logical deduction. Reactive agents use stimulus-response decision-making, in which an action is implemented as a direct response to a situation. The third sort of agent is the belief-desire-intention (BDI) agent. Such an agent is specified as an intentional entity: it holds a representation of the state of the world (its environment), called its beliefs; its desires describe the states of affairs it wants to bring about; and its intentions are the desires it has committed itself to pursuing. To accomplish its goals, the agent has to take action.
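
As a rough sketch of how a BDI agent’s control loop is usually described, the outline below shows a generic, textbook-style cycle; the beliefs, desires and actions are invented for illustration and are not the Combined System implementation.

# Generic BDI deliberation cycle, reduced to a bare outline.
beliefs = {"fire_reported": True, "units_available": 2}   # state of the world
desires = ["extinguish_fire", "write_report"]             # goals the agent may adopt

def deliberate(beliefs, desires):
    # Commit to one desire; the chosen desire becomes the intention.
    if beliefs.get("fire_reported") and "extinguish_fire" in desires:
        return "extinguish_fire"
    return desires[0] if desires else None

def plan(intention, beliefs):
    # Derive a sequence of actions that should achieve the intention.
    if intention == "extinguish_fire" and beliefs.get("units_available", 0) > 0:
        return ["dispatch_unit", "monitor_progress"]
    return []

intention = deliberate(beliefs, desires)
for action in plan(intention, beliefs):
    print("executing:", action)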


Agent communication

The ability of an agent to communicate with other agents consists partly of perception (receiving messages) and partly of action (sending messages). To communicate, an agent has to maintain a model of the other agents, which requires a degree of sociability. Communication depends on the type of agents and on the type and characteristics of the system: when agents are self-interested (competitive), the communication is called negotiation; when they share a common goal (a goal-oriented agent system), it is called co-operation. To communicate, the agents need a common language. A commonly used language is KQML.
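
For a flavour of what such a common language looks like, the sketch below renders a KQML-style message as a plain string. The performative (ask-one) and the :sender/:receiver/:language/:content parameters follow the usual KQML conventions, while the agent names and the query content are made up.

def kqml(performative, **params):
    # Render parameters as ":key value" pairs, as KQML conventions prescribe.
    fields = " ".join(f":{key.replace('_', '-')} {value}" for key, value in params.items())
    return f"({performative} {fields})"

message = kqml(
    "ask-one",
    sender="coordinator",
    receiver="field-agent-1",
    language="Prolog",
    content='"status(sector_b, ?s)"',
)
print(message)
# (ask-one :sender coordinator :receiver field-agent-1 :language Prolog :content "status(sector_b, ?s)")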

Agent Decision Support

Although multi-agent systems are used in a wide range of applications, their key advantage remains the distillation of large amounts of digital information from diverse sources. The potential for application in the domain of crisis management is therefore high. Crisis management was described in the first part as an open, complex and dynamic domain, in which it is difficult to build the situation awareness, a state of the world, needed to make a decision. Computer support in the form of an agent-based system can offer a solution for building an adequate state of the world, thereby increasing confidence in the resulting solution. Various stages of human information processing can be taken over by the machine.

Wickens (2001) describes four stages of automation: information acquisition, information analysis, decision making and action. By automating the rule-based tasks in the decision space, cognitive workload is minimised.
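
As an illustration of what automating such a rule-based task might look like, the sketch below encodes fixed decision rules that a support system could apply in the information-analysis and decision-making stages; the rules, variables and thresholds are entirely hypothetical.

# Hypothetical rule-based support: fixed rules take over routine analysis
# and propose a decision, relieving the operator's cognitive workload.
RULES = [
    # (condition on the analysed situation, recommended decision)
    (lambda s: s["water_level_m"] > 4.0 and s["dike_state"] == "weak", "evacuate_area"),
    (lambda s: s["water_level_m"] > 3.0, "raise_alert"),
]

def decide(situation):
    for condition, decision in RULES:
        if condition(situation):
            return decision
    return "monitor"

print(decide({"water_level_m": 4.2, "dike_state": "weak"}))  # prints: evacuate_area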

In a complex environment people tend to act in a stimulus-driven way instead of showing goal-driven behaviour. Stimulus-driven behaviour often requires some kind of analytical reasoning, which in a complex domain is effortful and, due to limited cognitive capabilities, error-prone. Such analytical reasoning can be supported by an agent-based system: agents can take over tasks in building situation awareness. One way of analysing this support is by using a collaborative agent model. From the comparison made above between agents and other software components, it follows that agents are suitable for serving in an adaptive support system; their flexibility makes the agent more comparable to a human operator.

Their modularity, decentralisation, changeability and capability to operate in an ill-structured and complex environment give a multi-agent interface system the right properties to make a system adaptive (Alty, 2003). The knowledge used for reasoning is implemented in a task model, in which a classification task first classifies incoming input with respect to its desirability. Input received from all sensors in the problem domain is called a percept. There is a clear difference between percepts and events: whereas percepts are raw data, events have a distinct meaning. The outcome of the classification process is a set of problematic features. In the diagnosis task these problematic features (the output of the classification task) are diagnosed: the causes of the problematic features are identified. Diagnosis is done to build relevant awareness of the task environment; the outcome is a clear understanding of the system state and the discrepancy between that state and the desired goal (Dekker, 2003). Because the decision that is eventually made rests on this diagnosis, it is important that the diagnosis is sound. The next step is to predict the future state of the system in the prediction task, which evaluates, positively or negatively, what will happen when the system moves from its current state s to a next state s' given certain values of the control variables. The option-generating task then generates a set of plans considered adequate to overcome the problems identified previously; this set is a selection of the actions that came out of the prediction task. Finally, the action-selection task selects the most appropriate plan to give an optimal outcome of the process. The model described above can be compared with the four functions of decision making described by Parasuraman et al. (2000), namely information acquisition, information analysis, decision and action selection, and action implementation.
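
Read as a pipeline, the task model chains these five tasks, with the output of each feeding the next. The sketch below only illustrates that chaining; the percepts, thresholds and stub function bodies are invented for the example.

# Skeleton of the task model as a pipeline: classification -> diagnosis ->
# prediction -> option generation -> action selection. All data are invented.
def classify(percepts):
    # Raw percepts in, problematic features out.
    return [p for p in percepts if p["value"] > p["threshold"]]

def diagnose(features):
    # Identify a cause for each problematic feature.
    return [{"feature": f["name"], "cause": "unknown_source"} for f in features]

def predict(causes):
    # Evaluate candidate actions for the diagnosed causes.
    return [{"action": "contain", "effect": "positive"} for _ in causes]

def generate_options(candidates):
    # Keep only the positively evaluated actions as plans.
    return [c for c in candidates if c["effect"] == "positive"]

def select_action(options):
    # Choose the most appropriate plan, here simply the first one.
    return options[0]["action"] if options else "wait"

percepts = [{"name": "smoke_density", "value": 0.9, "threshold": 0.5}]
print(select_action(generate_options(predict(diagnose(classify(percepts))))))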
