
Model Conflicts:

The use and misuse of Game Theory in Intelligence Analysis and Security Planning

Aron A. van der Beek

S 1783661

Crisis and Security Management, Universiteit Leiden

First Reader: Dr. Jelle van Buuren
Second Reader: Dr. Guillaume de Valk
Word Count: 31014 (total)

MSc Thesis


“As a poet and as a mathematician, he would reason well; as a mere mathematician, he could not have reasoned at all.”


Preface

“I am a fragment of rock thrown into space” - Napoleon Bonaparte1

The use of formal methods in the social sciences – let alone in the study of security – is not wholly uncontroversial. I am sure the reader is not at all unacquainted with well-meant but naïve attempts to reduce complex problems fraught with conflict and emotion to simple, cardboard schemes. If there is fault to be found in such cases, however, it can be found on two sides.

It may be that the constructor of such a model has not grasped that it is only possible to effectively model a small part of the world at a time, and then only at one level at a time (Kydd 2004, 2015). It may, however, also be that the reader, enthusiastic to apply a model in real life, has misunderstood what kind of limited knowledge or understanding a model has been created to represent. It should be clear that, without giving in to the temptation to exemplify, such misunderstandings have led to vicious circles of ignorance in the past (and still do) at all levels and in all domains of society.

In my opinion, the misuse and faulty construction of models has its basis in a common misunderstanding about how ‘strong’ scientific arguments come about. In times such as ours, where a tendency towards the ‘scholastic’ method dominates science, truth is attained by ‘equalization’, that is to say: by copying well-respected arguments, weighing them against one another, placing them opposite new empirical data, and deriving new arguments from what is plausible (where experts and empirics agree). While there is some truth to this method in that the opinions and findings of past and active experts should not be neglected, it is faulty to think that it is even possible for novel scientific insight to arise wholly from the basis of these factors – that science can be wholly inductive.

This is a mistake, because, to quote the eminent logician C. S. Peirce: "[…] there is no necessity for supposing that the process of thought, as it takes place in the mind, is always cut up into distinct arguments. A man goes through a process of thought. Who shall say what the nature of that process was? He cannot; for during the process he was occupied with the object about which he was thinking, not with himself nor with his motions."


Rather, after determining what conclusion he has intuitively drawn for himself:

"He next asks himself how he is justified in being so confident of it; and he proceeds to cast about for a sentence expressed in words which shall strike him as resembling some previous attitude of his thought, and which at the same time shall be logically related to the sentence representing his conclusion, in such a way that if the premise-proposition be true, the

conclusion-proposition necessarily or naturally would be true."2

Old-fashioned though this language may perhaps sound, it expresses exactly how models are constructed, how much they depend on intellectual honesty, and also that models are naturally limited by their initial purpose – not to be carried too far into strange territory before they are examined or tested.

I have tried throughout this thesis to abide by these principles, and urge my reader to do so also. In this realm of models, it is easy to attribute too much to abstract results. Please keep in mind that this thesis’s approach is experimental and not meant to argue for a definite interpretation of the Structured Analytic Techniques that will be its main focus.

I would like to thank my thesis supervisor, Jelle van Buuren, for his critical and constructive commentary and his continual friendly availability for advice. My thanks also go out to Guillaume de Valk for his warm support in the early, conceptual stages of this project.

Aron A. van der Beek
June 8, 2017


Contents

Preface

PART I: Introduction
1. Introductory Chapter
   1.1 Security in the 21st Century
   1.2 Intelligence Professionalization and its Critics
   1.3 Knowledge Gap and Research Question
   1.4 Context and Limitations of the Study
   1.5 Selecting Cases of Structured Analytic Techniques
2. Theoretical Chapter
   2.1 Defining 'Intelligence': Common and Scientific Methods
      2.1.1 Definitions
      2.1.2 Situating 'Intelligence' as a Discipline
   2.2 Formal Representation of Intel. Science by means of Game Theory
      2.2.1 Requirements of the Theoretical Framework
      2.2.2 Game Theory
      2.2.3 Game Theory's 'Level of Analysis'
   2.3 Operationalization
      2.3.1 Translating Theoretical Structures into One Another's Terms
      2.3.2 Operational Model

PART II: Results
3. Static, Inductive Techniques
   3.1 Context of the Techniques
   3.2 Underlying Assumptions and Axioms
      3.2.1 Assumptions
      3.2.2 Axioms
   3.3 Lexicon, Semantics, and Grammar
      3.3.1 Matrix-form
4. Dynamic, Inductive Techniques
   4.1 Context of the Technique
   4.2 Underlying Assumptions and Axioms
      4.2.1 Assumptions
      4.2.2 Axioms
   4.3 Lexicon, Grammar, and Semantics
5. Static, Deductive Techniques
   5.1 Context of the Technique
   5.2 Underlying Assumptions and Axioms
      5.2.1 Assumptions
      5.2.2 Axioms
   5.3 Lexicon, Grammar, and Semantics
      5.3.1 Three-Dimensional Ranking
      5.3.2 Caveats
6. Dynamic, Deductive Techniques
   6.1 Context of the Technique
   6.2 Underlying Assumptions and Axioms
      6.2.1 Assumptions
      6.2.2 Axioms
   6.3 Lexicon, Grammar, and Semantics
      6.3.1 Simple Scenarios
      6.3.2 Cone of Plausibility
      6.3.3 Alternative Futures and Multiple Scenarios
      6.3.4 Indicators
      6.3.5 In Sum
7. Static, Abductive Techniques
   7.1 Context of the Technique
   7.2 Underlying Assumptions and Axioms
      7.2.1 Multiple Hypothesis Generator
      7.2.2 ACH
   7.3 Lexicon, Grammar, and Semantics
      7.3.1 Multiple Hypothesis Generator
8. Dynamic, Reflective Techniques
   8.1 Context of the Techniques
   8.2 Underlying Assumptions and Axioms
      8.2.1 Assumptions
      8.2.2 Axioms
   8.3 Lexicon, Grammar, and Semantics
      8.3.1 Red Teaming
      8.3.2 Decision Trees
9. Game Theory in the SATs
   9.1 Comparison of the Assumptions underlying the SATs and Game Theory
      9.1.1 Inductive, Static Techniques (Social Network, Link, and Phone Analysis)
      9.1.2 Inductive, Dynamic Techniques (Chronologies, Timelines, Gantt Charts)
      9.1.3 Deductive, Static Techniques (Weighing Data Reliability, Deception Detection)
      9.1.4 Deductive, Dynamic Techniques (Scenarios Analysis, Scenarios and Indicators)
      9.1.5 Abductive, Static Techniques (Hypothesis Generation, ACH)
      9.1.6 Abductive, Dynamic Techniques (Red Teaming, Decision Trees)
   9.2 Synthesis
      9.2.1 Presence of Game Theoretical Axioms in the SATs
      9.2.2 Complementarity between Game Theory and the SATs

PART III: Conclusions
10. Concluding Chapter
   10.1 Recap
   10.2 Do the SATs Make Use of Game Theoretic Principles?
   10.3 Speculations
   10.4 Back to the Societal Context: Broader Implications

Bibliography

PART I: Introduction

1. Introductory Chapter

1.1 Security in the 21st Century

Profound societal changes in the last few decades make it more important than ever to approach security scientifically. With 66 percent of the world's population projected to live in cities by 2050 (UN 2014) and the paradigm shift heralded by the digital revolution (Dupont 2006), 'public space', which is partially moving to the digital realm, increasingly turns into a tool by which control can be exerted. It is a tool that is guided by rational deliberation; the omnipresence of both digital and physical infrastructures in everyday life (CCTV and ANPR cameras3, 'smart borders'4, widespread use of digital devices that can be used to instantaneously alert emergency services, and online 'trackers') theoretically allows security policy to become ingrained in the environment, but also brings with it a challenge to the human intellect to construct such policies in a rational manner. In order to create the 'perfect public space' and rid the environment of unwanted developments, the number of factors that would have to be included in the monitoring, detection, and response procedures would be too vast for any existing computer, let alone any collection of fallible human minds (Rittel & Webber 1973). Not only would such calculative power be impossible to attain; it would even be impossible to design the infallible algorithms the supercomputer would need to make judgements about all that is happening, and predict all that will happen, within its domain without error. The 'natural laws' to which such algorithms would have to correspond are still too little understood. To top it all off, the required complexity of the algorithm would grow with the increasing complexity of the environment, eventually making the model exactly as complex as the universe itself. In the famous parable of Borges, in one Empire:

"... the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it."

3 'Closed-Circuit Television' (in many major metropoles) / 'Automatic Number Plate Recognition' (on some advanced countries' highways)


To make a map, or a model, useful, it needs to be limited to the essentials of what it aims to represent (Kydd 2004); but however balanced a picture of reality such an 'essential' representation is able to provide, because it relies on the scoring out of less relevant data, it will necessarily be somewhat reductionistic and allow for error at certain levels of detail. In the case of security policy, the implication of this could be that certain parties are structurally treated unjustly. For this reason, development towards further 'rationalization' of security policy is looked upon with suspicion by some security scholars and social scientists (Rose 2000; Zedner 2003; Amoore 2006).

These conceptual and philosophical issues are of great importance to the contemporary study of security; all attempts to formulate effective policy in this domain are subject to them, and should at least take note of these difficulties, if not resolve them as much as possible.

This thesis examines one part of the security domain that embodies these dilemmas particularly well: that of security intelligence. The domain of intelligence is particularly interesting in the light of the developments previously described, because: (1) it is particularly concerned with informational issues and critical judgement; (2) it is immediately affected by social developments like the 'hybridization' of warfare, increased dominance of networks, and the growing importance of non-state actors, and by technological developments like the shift to the 'cyber' domain (in e.g. cyberwarfare, Big Data, and the Internet of Things) and developments in the weapons industry (Charap 2015; Benbow 2014; Kaufman & Schroefl 2014; Dupont 2006; O'Toole 1997; Krahmann 2005); (3) most importantly: in some sense it embodies security-policy formulation in real time, and, like all security policy, is concerned with particular cases rather than general laws (Shulsky 1991; Kent 1949).

These characteristics of security intelligence allow us to study the dynamics of the security-policy formulation process directly through it, and by extension to discover something about issues of security and control more broadly.

1.2 Intelligence Professionalization and its Critics

With increasing emphasis over the past decades, the idea of 'professionalizing' the Intelligence-Analytic trade into possibly even a full-fledged science has resurfaced in the globally leading (Smith 2016) American Intelligence Community (AIC) (Lillbacka 2013; Lahneman 2010; Gentry 2016; Kreuzer 2016; Manjikian 2013; Marrin 2012; 2016; Herbert 2013; Prunckun 2015; Richards 2016; Zohar 2013; Barnes 2016). The idea goes back at least to Sherman Kent, the 'father of intelligence analysis', who first advocated a systematic reflection on intelligence work in his 1949 book Strategic Intelligence for American World Policy. Since that time, calls for further integration of 'rationalistic' methods have continually resurfaced, mainly as the result of various strategic surprises for the U.S., ranging from Pearl Harbor to 9/11 and beyond (Dahl 2013; Hastedt 1991).

Those generally in favor of the idea to model intelligence-analysis procedures more closely on those of the sciences are classified in the so-called 'revisionist' camp. The contrasting view, which Shulsky (1991) calls the 'traditional' view, is highly skeptical of the idea that intelligence analysis should attempt to emulate science and incorporate scientific methodology, because "…the fact that an adversary is trying to keep vital information secret is the very essence of the matter," instead of this being only an added complexity to the overall process of attaining truth. Of course, few intelligence scholars are so one-dimensional as to fall unambiguously in either camp. Gentry (2016), for example, is highly skeptical of the 'sciencefication' of the intelligence process, but he adds that it is not scientific methodology in general that he sees as a bane to the profession, but rather the replacement of 'traditional' scientific expertise (in a domain such as geography, economics, or extensive knowledge of a specific area's culture and politics) with 'analytic' scientific expertise (such as represented by Heuer and Pherson's (2015) 'Structured Analytic Techniques'), which according to Gentry mainly functions to oil the 'current intelligence' newspaper-machine and introduce new recruits to the rudiments of the intelligence profession.

Despite the nuances in the writings of separate scholars, a strong argument can be made for upholding the general distinction between 'revisionist' and 'traditionalist' tendencies, regardless of the particular origins of either stance in separate articles, which may be held with regard to one very specific issue only and thus not be generalizable to the author's view on all intelligence matters. The reason why the distinction holds is that any argument about the way in which the IC should develop invariably agrees with either of the epistemological positions held by the revisionists or the traditionalists (Honig 2008). Revisionists argue that continued failure to prevent significant surprises suggests that the community's way of working may benefit from increased use of formally validated methods, since failure to forecast correctly may indicate, as it may in science, that the current "scientific" paradigm is either wholly faulty or not detailed enough (Dahl 2013; Heuer & Pherson 2015). A traditionalist may counter that intelligence failure more often than not has its origin in a failure to share information between departments, or a failure of a policy maker to take sufficient action on the basis of analysis (Marrin 2011), or it may simply be argued that a certain level of surprise is inevitable given a limited amount of time and resources, and constraints on what an agency can legally do (Hedley 2007).

Intelligence failures cause the debate to flare up, but so do unprecedented developments in data technology, which create a host of new opportunities for analysis that are already being exploited in the private sector (Degaut 2016; Bracken & Bremmer (eds) 2008; Jones & Silberzahn 2013). This development has put some pressure on the community to consider closely how it too could benefit from the integration of more structured, scientifically grounded techniques in its work sphere (9/11 Commission Report 2004; Friedman & Zeckhauser 2016). Close links with the private sector make the environment competitive and volatile, with many attempts at cross-fertilization between both sectors – with 'security-intelligence techniques for the business world' and 'business intelligence for state-security services' being proliferated globally by consultancy firms and the like (Jones & Silberzahn 2013; Degaut 2016; Landon-Murray 2016; McGonagle & Misner-Elias 2016).

The governmental intelligence community has much more difficulty developing such a scientifically oriented branch for the problems it faces than the private sector or even the criminal justice sector does, for several reasons that have been noted by Spielmann (2016), Ben-Haim (2016), Coghill & Hare (2016), Sandler & Arce (2003), and Marrin (2012; 2016), among others. The challenges the IC faces in this regard are the result of some peculiarities that come with the particular context of its work. First, there exists a pervading need for secrecy, a 'need to know' culture, and compartmentalization within these types of organization, constraining free information flow and preventing large-scale analytics from taking place (Coulthart 2017). Second, the intelligence profession lacks a predefined playing field, since 'wildcard' (or 'black swan') risks and surprises are arguably the biggest threat to its organizational goals, and those can by definition not be contextually fixed (Taleb 2010; Bracken & Bremmer 2008; Heuer & Pherson 2015). Third, the field ultimately lacks a macro-theory as a guide for action, or even a method to construct such a macro-theory5, because adversarial tactics are case-specific and will adapt to technological developments and counterintelligence efforts over time (De Valk 20146). The intelligence/counterintelligence game easily leads to increasingly inventive ways wherein intelligence agencies try to best each other's systems, and procedures can thus hardly ever be 'normalized'. This process is expressed summarily by 'Goodhart's Law' (1975), from the field of economics: "When a measure becomes a target, it ceases to be a good measure," meaning that "…if an observed statistical regularity in economic data is exploited for policy purposes, it tends to collapse" (Tao 2012); i.e., by extension, if there exists some structural weakness in the defending player's strategy, the opponent will try to exploit it. This in turn impels the defensive side to prioritize the defense of that weakness, so that it becomes less profitable for the attacker to keep using the same method, although technological constraints, lagging information flow, and the (in)ability to adopt new methods quickly limit with what speed and to what extent the defender can adapt. If the attacker is intelligent, and conscious of these processes, he will always try to find weak points in the defensive policy that cannot be easily adjusted, and exploit those until it becomes more profitable to shift to another plan of attack (this is colloquially known as the 'waterbed effect').

5 A 'macro-theory' is a theory that sets the stage for a scientific discipline as a whole. It is the most fundamental form of 'paradigm', e.g. the theory of evolution in biology, or the theory of quantum mechanics in physics.
6 Derived from a lecture series titled 'Secret Affairs 1', University of Amsterdam.
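To make the adaptation loop concrete, the following is a minimal sketch in Python of the dynamic just described; the avenue names and numeric 'defence strengths' are invented for illustration only, and the sketch is a toy model of the logic, not a claim about how any agency models this. The attacker probes the weakest avenue, the defender reinforces it while other defences slowly erode, and the attack migrates.

```python
# Minimal sketch of the 'waterbed effect': the attacker exploits the weakest
# avenue, the defender reinforces it, and the attack migrates elsewhere.
# Avenue names and numeric strengths are invented for illustration.

defences = {"border": 5.0, "cyber": 2.0, "finance": 3.0}

def attacker_move(defences):
    """The attacker targets the least-defended avenue."""
    return min(defences, key=defences.get)

def defender_move(defences, attacked, budget=2.0, decay=0.5):
    """The defender shifts resources towards the attacked avenue;
    neglected defences slowly decay (adaptation is slow and finite)."""
    for avenue in defences:
        defences[avenue] = max(0.0, defences[avenue] - decay)
    defences[attacked] += budget

for step in range(6):
    target = attacker_move(defences)
    print(f"step {step}: attack on {target!r}, defences = {defences}")
    defender_move(defences, target)
```

Run long enough, the attack keeps migrating to whichever avenue the defender most recently neglected – Goodhart's Law in miniature: a fixed defensive 'measure' stops being informative the moment an adaptive opponent treats it as a target.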

Further research into this domain will help discover where further exactitude in procedures will be most beneficial to the overarching effort, and will at the same time show where revisionist aims are less well placed and should give way to less scientific, more 'hands-on' solutions.

1.3 Knowledge Gap and Research Question

The professionalization of intelligence analysis has been studied in a myriad of ways (Fischoff et al. 2013; Andrew 2010). However, as is the case more broadly within security studies, many avenues of inquiry into methodological effectiveness are closed because of the need for secrecy in the circles where scientific analyses would be most elucidating. Publicly available studies may therefore be classified into a small number of classes.

1. There are studies that examine intelligence failures and successes. These are historical, descriptive case studies meant to inspire the construction of new theories about intelligence-effectiveness (e.g.: Dahl 2007, 2010, 2013; Fitzgerald 2007; Goodman 2006; Marrin 2011; Kahana 2007; Fischer 2016).

2. There are publications that attempt to extensively describe the facets of intelligence work – within certain limits. These are 'textbooks' or, if somewhat shorter, could function as textbook chapters (e.g.: Bar-Joseph & Sheaffer 1998; Breakspear 2012; Hastedt 1991; Shulsky 1991).

3. There are studies that attempt to demonstrate the use of methods or insights from other scientific disciplines for intelligence purposes (e.g.: Barnes 2015; Fischoff et al. 2013; Marrin & Clemente 2006; Oliver 2007; Duke & Van Puyvelde 2017).

4. There are studies that explore 'intelligence' theoretically, arguing that certain conceptual mistakes are often made, that certain biases are ingrained in the community of scholars, or that a novel theoretical framework may be the source of improvement in some areas. These studies are often 'critical' of current practices (e.g.: Andrew 2010; Bean 2012; Fischer 2016; Lillbacka 2013; Manjikian 2013; Chang & Tetlock 2016; Kreuzer 2016; Herbert 2006).

5. There are studies that examine contemporary intelligence analysis ‘policies’ and their effects empirically (e.g.: Gentry 2015; Coulthart 2015, 2016, 2017; Regens et al. 2016)7.

There is a relatively small number of studies devoted to the last class, and those that do exist attempt to measure the 'success' of methodological policies by means of factors such as 'how often these techniques are used' or 'what experts think of them' (e.g., see Gentry's and Coulthart's studies). Given the previously mentioned problematic nature of any study concerning the IC itself, it is understandable that scholars have made use of such indirect methods to evaluate intelligence methodologies, or have written wholly theoretical treatises on the issue (see the studies of point (4)). As a result, however, it has not been possible to say anything conclusive about the empirical value of current analytic practices, nor to discuss how particular epistemological problems can be solved. Theoretically, too, the epistemological validity of intelligence analysis techniques has remained largely unaddressed – in the public domain at least – save for a few preliminary exceptions (De Valk & Goldbach 2013; Ben-Israel 1989, 2001; Rodgers 2006; Spielman 2012, 2014, 2016)8, and of course, these studies lack empirical validation.

In this context, Heuer and Pherson's Structured Analytic Techniques (SATs) (2015) have both been heralded as the first step in professionalizing intelligence and scorned as a sign of deterioration. They have, however, not been examined empirically with an eye to their value for the 'sciencefication' of Intelligence Analysis. This is a clear gap in the scholarly literature, because attempts to 'professionalize' depend heavily on techniques like the SATs, which claim to be able to address the domain's particular epistemological difficulties that have thus far barred it from wholesale entry onto the scientific stage. This thesis aims to partially fill this lacuna.

7 Note that these classes represent ‘case studies’, ‘descriptive studies’, ‘explorative studies’, ‘theoretical reflection’, and ‘evaluation studies’ respectively.

8 The epistemological value of the domain of intelligence as a whole, however, has been subject to further theoretical reflection (Rønn & Høffding 2013; Herbert 2006). The connection with concrete methods is lacking in these studies however.


The following research question is used as a central reference point:

To what extent have the Structured Intelligence Analysis Techniques incorporated Game Theoretic principles in their epistemological procedures and context?

The argument for focusing particularly on Game Theoretical principles will be set out in the theoretical chapter; the choice for Structured Analytic Techniques is explained later in this chapter.

It is a descriptive research question, which is extensive enough given that it concerns an area where little prior research has been conducted. More interesting, of course, would eventually be, if possible, to draw up an answer to the more practical question: To what extent should the SATs make use of Game Theoretical principles, if at all, and if so: how?

This second question cannot be answered by means of this small study alone, but it can be answered partially from its perspective; and because this is still such a novel track of inquiry, there is still ample room to speculate.

1.4 Context and Limitations of the Study

Like all research, this thesis has to be limited in scope and subject-matter in order to effectively say anything about a part of the issues outlined above. This section outlines which issues that are important to the overarching effort of intelligence professionalization will not be addressed in this thesis.

First, because the study examines the SATs by themselves – rather than in organizational context – it is likely that many of the practical (external) constraints that have influenced the form these SATs currently have will be missed in this thesis' analysis. It may for example be the case that a theoretically optimal technique would be highly complex and difficult to handle, resulting in a high human error rate in practice, making a less rigid but easier to manage technique more effective overall (Heuer & Pherson 2015; Waltz 2014; Kydd 2004). Similarly, issues regarding the management of intelligence, psychological and sociological bias, and 'general' scientific expertise within the IC (e.g. in domains such as economics, law, or geography) are not addressed, even though we acknowledge the probability that these factors are of significant importance for the form analytic methods take in practice. It is justifiable to proceed with our inquiry given these omissions, however, because, quite frankly, the scientific justification for techniques is not at all affected by these organizational choices. E.g.: it may very well be that funds to maintain a certain level of analytical precision are lacking, but that does not affect the normative question of how analytic precision is achieved. Rather, even if organizational constraints force the IC to deviate from the 'ideal type' form of scientific reasoning, it still benefits from understanding where it deviates from it, to what extent it does so, and what possible effects that may have. The study is therefore strictly logically normative, and is both limited by, and benefits from, that design.

Second, the research focuses only on a select number of SATs, and not on other methods of inquiry used within the IC. Other methods include conventional scientific methods (without a particular strategic element or risk of denial and deception) and digital data-mining. Both of these other forms are well-established areas of research that are not particular to the IC. While they are highly relevant for the analytic process as a whole and are likely to be used in combination with the SATs in practice, they are thus momentarily laid aside here in order to focus on the peculiar methodological difficulties specific to the intelligence domain.

Third, it must be stressed that the particular theoretical framework used in this thesis has a profound impact on its overall conduct, and therefore also on its results. Theory of course always shades research, but in this project it does so more than usual. The reason for this is that we aim to represent the epistemological challenges of the intelligence domain by means of the theoretical framework, which is then mapped onto the methodological practices (the SATs). If we were to conclude that a disjunction between certain epistemological practices and the theoretical framework indicates that these practices can be said to be in some way epistemologically faulty, that argument would only hold in so far as the theoretical framework we here present does indeed represent those core elements of the intelligence discipline. In recognition of this reliance on the theoretical framework, it is made as explicit as possible what choices are made in the construction of that framework. It should be emphasized again that the author by no means claims that the theoretical framework encompasses all of the difficulties of intelligence in practice, but that it aims to describe one crucial part of the discipline regarding the epistemological questions connected to certain traits peculiar to the domain.


1.5 Selecting Cases of Structured Analytic Techniques

This study equates Structured Analytic Techniques (SATs) with ‘epistemological practices’ within the intelligence community. There are good reasons for doing so. These techniques are the embodiment of concerns for the quality of analytic products, particularly with regards to ‘common failures of judgement’ (including psychological bias, groupthink, the inductive fallacy, naïve use of statistics, or the failure to see through deception). They are the only explicitly normative procedures that reflect theoretical discussions about how surprise can be prevented in a strategic setting.

Another reason for doing so is that these are the only techniques of this kind (addressing the specific epistemological questions) that are publicly available. It must be admitted that the possibility exists that an additional set of techniques exists within government intelligence communities worldwide that addresses the same problems as the SATs do in a different manner. Even so, we judge it to be generally more likely that, aside from techniques like the SATs, the ICs primarily make use of 'intuitive' methods (such as analyst experience) and 'regular' scientific methods, than that they use a significantly distinct, separate set of epistemological methods. There are two main reasons for this. First, in academic journals and books on intelligence techniques, the 'experience and intuition' of analysts is mentioned as a 'traditionalist' stronghold opposed to supposedly scientific techniques (Shulsky 1991). Second, though this is an intuitive judgement, we think it unlikely that civilian and military intelligence agencies could stay ahead of the private sector: businesses and universities are more likely to develop the state-of-the-art information-management ideas useful for addressing strategic intelligence challenges than the public sector is (one indication of this may be that it is not uncommon to see private companies get hired to introduce such analytic techniques to government agencies (e.g.: leidos.com, pherson.org, lowlandssolutions.com, fox-it.com)).

As there are ‘more than 160’ SATs used in Intelligence context today (Coulthart 2017), it is necessary to make a narrow, critical selection of techniques to explore further in this thesis. The sample should represent the whole of the body of SATs relatively well, so that it in turn may represent the whole of epistemological ideas currently active within the IC.

As has been mentioned before, a first selection must be based on the extent to which analytic techniques are available in the public domain. The list can therefore not claim to be complete or comprehensive, but it can be said with reasonable force of argument to at least represent the most important techniques and the general scope of types of techniques. As has been mentioned as well, our initial selection furthermore excludes techniques that are not used to address the domain's epistemological issues, such as those that are concerned with idea generation or conflict management.

Main sources for SATs are: Heuer & Pherson (2015); Waltz (2014); USGov (2009); Voulon (2010); and UNODC (2011). Several of these provide quite extensive lists of such techniques, as well as added descriptions and commentaries. Heuer & Pherson's book is considered to be the most important ambassador for these kinds of techniques and will therefore be the primary source.

Figure 1: Taxonomy of 'Families of Structured Analytic Techniques' (Heuer and Pherson 2015)

From this initial list a second selection of techniques is made that represents the whole body of SATs. In order to meet this criterion, the techniques selected should be concerned with all the ‘parts’ of the analytic process, and represent all ‘types’ of techniques. To have each type represented in the selection, a method for taxonomizing the techniques is necessary. A taxonomy is a structural classification of all the elements of a domain according to distinctive, relevant traits. In the case of the SATs a taxonomy would have to consider at least the following characteristics:


1. What is the technique’s aim or purpose?

2. In what way does it relate to other techniques? Does it use information generated elsewhere or provide information for the benefit of another technique?

3. What is the internal structure of the technique? In what way does it manipulate data?

Heuer and Pherson (2015) provide a taxonomy of SATs that integrates all of these questions relatively well (see figure [1]).

We may rephrase the nomenclature of this model somewhat in order to expose the hidden – essentially scientific – principles underlying it.

Clockwise, from 'Decomposition and Visualization' up to 'Decision Support', the Eight Families form a somewhat unorthodox but still recognizable version of the scientific process. In these eight steps, focus on empirics and theoretical reflection can be seen to dominate alternately. With each two steps, the empirical model is refined and eventually translated into a conclusive report. The empirical step continuously functions to integrate the novel theoretical conceptions generated in the previous step into a model wherein they connect to testable facts. The theoretical step has either a creative or a critical function. I.e.: Decomposition and Visualization comprises techniques to 'classify' empirical data in a simple manner. The next step, 'Idea Generation', functions to introduce a new set of theoretical perspectives that may be applied to the empirical data. 'Scenarios' can then be devised from the combination of both, and these can subsequently be scrutinized by the 'hypothesizing' techniques from the fourth step, and so on. This motion can be followed clockwise as an inclusive outline for the creation of a long-term intelligence product, but techniques from these classes can also be used separately or in any combination.

While we agree that Heuer & Pherson's classification has significant merit, the multiplicity of classes focusing both on empirical refinement and theoretical reflection does create some epistemological overlap between them. For example, the techniques described in 'Idea Generation' and 'Assessment of Cause and Effect' essentially go back to the technique of 'Structured Brainstorming', which in turn can be seen as a group version of 'Hypothesis Generation'. The core epistemological procedure underlying all these techniques is thus essentially the same. At the same time, some techniques within one class are so markedly different from one another that they warrant an independent analysis. An example of the latter is the pair 'social network analyses' and 'timelines', both of which fall within the same octant of 'Decomposition and Visualization', but are structurally very different (actors and their relations to one another, in contrast with the relation of events in time).


This taxonomy is therefore not the most useful to base this thesis’s sample on. Instead, we propose a taxonomy that is less concerned with the relation between techniques and more with their epistemological structure, the central theme of this thesis.

A taxonomy based on epistemological functions can be formed by making a distinction between the three ‘logical’ steps in scientific arguments, which are: Deduction, Induction, and Abduction. These three complementary, but fundamentally different, mental operations are each needed in order to formulate a ‘whole’ scientific argument.

The difference between these three operations is the following:

Deduction: P → Q, Q → S; therefore P → S (example of a deductive law: transitivity)

Explanation: Given a complete and logically ordered dataset, deductive operations allow one to apply, in a rigorous manner, general rules that are accepted as 'true' within that system to particular instances within that system.

Deductive reasoning is wholly ‘formal’ and self-contained; it has no autonomous relation to the empirical realm.

Induction: P1 → Q, P2 → Q, …; therefore P → Q (a general law derived from particular instances)

Explanation: Given repeated particular empirical instances of correlation or perceived causality between factors, the existence of general (theoretical) laws is inferred. Induction is the 'movement' from empirics to theory.

Abduction: Q; P and Q are not contradictory; therefore P → Q (a hypothesis that does not immediately contradict the empirical experience is formulated)

Explanation: Abduction is the method of hypothesizing about possible explanations for empirical phenomena. The hypothesized explanations can and should be empirically tested for validation. Abduction is the 'movement' from theory to structural empirical observation. (The conclusion of a deduction may serve as a source for abductive claims, whereas the conclusion of an abduction leads one to perform an inductive operation in a particular way.)
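As a rough illustration of how the three operations differ, the toy Python sketch below mechanizes each schema over invented propositions; the rules, observations, and hypotheses are made up purely for the example and carry no intelligence content.

```python
# Toy mechanization of the three inference patterns (invented propositions).

def deduce(implications):
    """Deduction: close a set of implications under transitivity
    (P -> Q and Q -> S together yield P -> S)."""
    closed = set(implications)
    added = True
    while added:
        added = False
        for a, b in list(closed):
            for c, d in list(closed):
                if b == c and (a, d) not in closed:
                    closed.add((a, d))
                    added = True
    return closed

def induce(observations):
    """Induction: if every observed instance of P was accompanied by Q,
    infer the general law P -> Q."""
    laws = set()
    for p, q in observations:
        if all(q2 == q for p2, q2 in observations if p2 == p):
            laws.add((p, q))
    return laws

def abduce(observation, candidate_laws):
    """Abduction: propose every hypothesis P that is consistent with the
    observed Q under some candidate law P -> Q."""
    return [p for p, q in candidate_laws if q == observation]

print(deduce({("P", "Q"), ("Q", "S")}))                          # adds ("P", "S")
print(induce([("raid", "arrests"), ("raid", "arrests")]))        # {("raid", "arrests")}
print(abduce("arrests", {("raid", "arrests"), ("protest", "arrests")}))
```

Note that only the deductive step is truth-preserving; induction and abduction yield conjectures that remain to be tested.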

A second distinction will be made between 'types' of SATs on the basis of their level of dynamism. This generates two classes: 1. Static techniques, which are concerned with minute treatment of the current situation, and 2. Dynamic techniques, which are concerned with possibilities and the future. This distinction is nontrivial enough to be able to function as a reasonable element in the classification, because both types of techniques differ fundamentally in their understanding of the relation between mental constructs and empirical reality. Static techniques are led by empirical information and take as their prime focus coming to understand the meaning of empirical factors for strategic agents. Dynamic techniques, on the other hand, are led by rationalistic concerns about what possibilities may come to exist for some actor that may wish to exploit them. These techniques rely less on the constraints of empirical factors. The combination of the two criteria results in the following 2 x 3 matrix (Table 1). Each of the segments has been attributed at least one SAT. The SATs are selected on the basis of:

- Extensiveness: the size and complexity of the technique, where larger and more complex techniques are considered to be more interesting. More extensive techniques often 'include' operations that can also be found in other techniques; analyzing the most extensive techniques is therefore in many cases most economical.

- Particularity: selected techniques should contain as many unique elements as possible in order to avoid redundancy.

- General importance: Since some techniques are indicated by various authors to be of particular importance for the IC (Heuer and Pherson 2015; USGov 2009), these techniques are included in the selection, if they also fulfill the previous two criteria. The full scope of techniques from which the selection was chosen can be found in the Appendix [1].


Table 1: Taxonomy of SAT-Types

'Static' Techniques:
- Deductive: Weighing Data Reliability
- Inductive: Social Network Analysis; Link Analysis; Phone Analysis
- Abductive: Hypothesis Generation; ACH

'Dynamic' Techniques:
- Deductive: Scenarios Analysis; Scenarios and Indicators
- Inductive: Chronologies and Timelines; Gantt Charts
- Abductive: Decision Trees; Red Teaming

The 'static' part considers, first, the technique of the Weighing of Data Reliability, which reflects on the reliability of data by means of various kinds of metadata. Second, the techniques Social Network Analysis, Link Analysis, and Phone Analysis, which are used to 'map' other players' empirical constraints. The third selection, Hypothesis Generation and ACH, is concerned with the estimative process, consisting of a step of hypothesis construction and a step of evaluation.

The 'dynamic' part treats, first, Scenarios Analysis and Scenarios and Indicators, which are concerned (a) with the proper way of constructing scenario cases, and (b) with establishing corresponding indicators of change. The second element, Chronologies and Timelines, and Gantt Charts, consists of methods that deal with the 'puzzle' of reconstructing a past course of events. Third, Decision Trees and Red Teaming are both concerned with mapping the adversary's basic current options.
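As a preview of the abductive selection, the following minimal Python sketch shows the scoring step at the heart of ACH: evidence is scored against each hypothesis, and hypotheses are ranked by how much evidence argues against them rather than by how much confirms them. The hypotheses, evidence items, and scores below are invented purely for illustration.

```python
# Minimal, hypothetical sketch of the ACH scoring step. Hypotheses,
# evidence items, and consistency scores are invented for illustration.
# Scores: -1 = evidence inconsistent with hypothesis, 0 = neutral, +1 = consistent.

matrix = {
    "H1: military exercise": {"troop movement": +1, "radio silence": -1, "leave cancelled": -1},
    "H2: imminent attack":   {"troop movement": +1, "radio silence": +1, "leave cancelled": +1},
    "H3: political bluff":   {"troop movement": +1, "radio silence": -1, "leave cancelled": +1},
}

def inconsistency(scores):
    """ACH ranks hypotheses by how much evidence argues AGAINST them."""
    return sum(1 for s in scores.values() if s < 0)

for hypothesis in sorted(matrix, key=lambda h: inconsistency(matrix[h])):
    print(f"{hypothesis}: {inconsistency(matrix[hypothesis])} inconsistent item(s)")
```

Only the scoring core is shown here; the full procedure is discussed in Chapter 7.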


2. Theoretical Chapter

2.1 Defining ‘Intelligence’: Common and Scientific Methods

2.1.1 Definitions

Before going on to define 'intelligence' as a practice and a discipline, it is of value to briefly consider how any scientific definition can be properly grounded in the first place, and what characteristics make a definition valuable in a scientific context.

In general, concepts that aim to describe particularly nuanced phenomena are defined by means of an extensive set of relatively simple distinctions about which classes they do, and do not, belong to. To communicate the understanding of such a concept, the speaker may provide the listeners with case examples that allow those who have similarly experienced the case-situation to understand exactly what the concept entails. Similarly, the audience may have experienced something analogous, and may by means of analogical reasoning be able to acquire an approximate understanding of what is meant by a concept (Hofstadter and Sander 2013; Wittgenstein 1967).

By itself, this 'general' method through which concepts acquire their semantic value is insufficient, however, for scientific purposes. In science, the central aim of coming to understand the fundamental 'laws' of a domain of inquiry functions as a final criterion by which concepts are judged to be fit: if a concept only describes some vague conjunction of qualitative facets, it may empower its users to share a particular, complex experience, but it will not yet be a strong tool through which a higher, systematic understanding of an area of knowledge can be achieved. To meet this criterion, a concept must take its place in a larger systematic taxonomy of concepts that represents the scientific understanding dominating a field (a paradigm). This is particularly relevant for concepts that themselves define a scientific domain (such as 'biology', 'sociology', or 'intelligence studies'), because their definition should make clear in what way their domain connects to the larger scientific enterprise: what is the domain's own unique set of 'laws' and research projects through which it contributes to, and stands in relation with, other scientific disciplines9?

9 These ideas are part of the discussion surrounding the 'unity of science' thesis, which has been both extensively argued for and against (Feyerabend 2010; Fodor 1974; Von Bertalanffy 1951). The argument here, however, does not concern the question of whether the Laws of Nature can be fundamentally brought back to those of a single, all-encompassing paradigm (e.g. physics), nor whether it is possible for the sciences to achieve such a high level of understanding about the world, but rather only that, in so far as a domain can be said to be part of the scientific endeavor, it must meet the requirement of being able to position itself in relation to the other domains of science.


The concept of intelligence, at least in so far as it concerns its effort to claim a scientific domain, should likewise be constructed in a manner conscious of the subject-matter of other scientific domains and concepts. It then aspires to fence off an area of study and practice where its unique set of principles is needed to explain and exploit the situation.

With these things in mind, we may examine some of the more prominent attempts to define ‘intelligence’.

The famous 1949 book Strategic Intelligence for American World Policy by Sherman Kent, 'the father of intelligence analysis', set the stage for much of what has subsequently been written about intelligence and its purportedly scientific nature. Having little in the sense of predecessors, Kent wrote: "…The whole book is an elaborate definition [of intelligence]". The basic distinction made in the book therefore also functions as the basis for a major conceptual distinction within the field of intelligence studies; it states that there are three general ways in which the concept of intelligence is properly approached:

- As Knowledge (e.g. an 'intelligence product');
- As Organization (e.g. an 'intelligence service');
- As Activity (or 'process') (e.g. an 'action for intelligence purposes').

These parts of intelligence are, of course, interrelated and describe different dimensions of the concept rather than mutually exclusive parts or steps. What unifies these dimensions is, according to Shulsky (1991), that, as knowledge, "intelligence refers to information relevant to a government's formulating and implementing policy to further its national security interests and to deal with threats to those interests from actual or potential adversaries." As activity, "intelligence comprises a wide range of activities, [...] [that all] however, have to do with obtaining or denying information. Therefore, intelligence as an activity may be defined as that component of the struggle between adversaries that deals primarily with information." Concluding that, as organization, "One of the most notable characteristics […] is the secrecy with which their activities must be conducted."

The common thread in these definitions seems to be a particular purpose for which information is used and collected, rather than any particular content of that information itself. This is summarized explicitly by De Valk et al. (2014):

“Intelligence is information collected, ordered and analyzed for the benefit of policymakers, military personnel and other involved parties.”



These conceptual reflections indicate what the purpose of the discipline has been in practice thus far. Following our earlier standards about the requirements for concepts demarcating a scientific domain, it cannot be said that 'intelligence' constitutes a unique area of study. Rather, according to these standards, it is strictly a discipline of practice that may benefit from the scientific knowledge of any field that happens to be (temporarily or structurally) relevant for one of its purposes. This is indeed how 'intelligence' has generally been approached.

2.1.2 Situating ‘Intelligence’ as a Discipline

As a field of study, Intelligence seems at first sight to constitute a particular branch within the domain of public administration (or perhaps international relations), which in turn can be said to be a branch of management science, which in turn can be said to be a branch of social science. In what respects there is a need for an Intelligence-Science domain in contrast with these neighboring domains has, however, not yet been made explicit.

One possible avenue to approach this dilemma is by examining what scholars have written about the particular problems Intelligence Science aims to cover, and what unifies these problems and the proposed methods to solve them.

According to De Valk et al. (2014), the aims of intelligence services are:

- To avoid strategic and tactical surprise;
- To provide long-term expertise;
- To support the formulation of policy.

This list exposes three of the domain's elements that are mentioned by scholars across the whole field: 1. intelligence is action-oriented (Warner 2002), meant to be used for practical purposes; 2. it is meant to provide knowledge and understanding (Herbert 2006; Hendrickson 2008), and in time to grow in breadth of understanding, become more economical10, and become more detailed in its reports; and 3. its particular focus lies on avoiding analytic mistakes and blind spots (Marrin 2010; Ben-Haim 2016).

We first elaborate on these elements before considering another meaningful factor in this typology: that the type of surprise that Intelligence Agencies are concerned with is of the 'strategic and tactical' variant.


1. The fact that Intelligence, to a degree, aims to produce "information designed for action" (Warner 2002) sets the field apart from 'regular' scientific inquiry. An intelligence analyst often does not have the option to narrow his research objectives when the question at hand is too difficult, or to increase the amount of time he will spend on it. Often, answers are requested "with limited time" and failure has "potentially critical consequences" (Bueno de Mesquita 2013, in Fischoff et al.). Because of such constraints, approximate conclusions must often be sufficient to inform the policy-maker in question. The need to be able to settle for 'approximate' answers introduces an intellectual challenge that is not often seen in scientific disciplines, the question being: how can tentative ideas about a subject-matter be represented in an analysis report in such a way as to be as objective as possible? Furthermore, the actionable nature of intelligence may have shaped the initial intelligence question as well, since a policymaker could prefer to be presented with analyses that elucidate, in comparison, the pros and cons of the various options available to him; in some cases, therefore, intelligence questions will be designed exactly for the purpose of making the choice between preset options easier (e.g. to take either of two roads through enemy territory). This 'applied' face of intelligence science does not have the benefit of being able to conduct experiments, an important element of almost all other applied sciences.

2. The element 'to provide long-term expertise' differs from its scientific equivalent in an obvious way: in science generally, it would not be profession-oriented 'expertise' that provides the researcher with a structural, long-term oriented understanding of his subject-matter, but knowledge of the laws governing his scientific field. In contrast, 'expertise' seems to indicate either relevant understanding of concepts and empirical considerations from various fields surrounding a practical subject, or real-world experience with a subject, or both. That 'expertise' is needed is acknowledged, but what it entails, if not knowledge of a scientific paradigm, remains, scientifically speaking, ambiguous.

3. The focus on avoiding surprise once again emphasizes the practical, action-oriented nature of intelligence. Where other sciences – even closely related ones like International Relations – would consider 'surprise' just one of many indicators of a lack of scientific understanding of the mechanics governing a particular case, the 'Science' of Intelligence elevates it to one of its core principles.

The definition further mentions that the kind of surprise relevant to it is of a 'tactical and strategic' nature. This is a focus that is uncommon in science. Disciplines that come to mind are those that study conflict and management in various settings (international relations, communications science, marketing, change management, risk management, operations research) and certain formal sciences (decision theory, game theory). The 'strategic or tactical' element in all these sciences consists of the challenge for a rational decision maker to evaluate his options (possibly on the basis of various external criteria) and choose the best course of action given a predefined set of goals. Since these options often come with considerable factors of uncertainty (be they negative or positive), analysis and decision making become, as is the case in Intelligence Science, a complex process of risk assessment.

These sciences set themselves apart from Intelligence Science in some ways too, however. First, they are often able to standardize various types of the strategic problems they face. Such standardization may lead to compartmentalization of 'central questions' within the discipline and the development of specialized techniques for each of these compartments. Even if the uncertain cases remain uncertain, at the very least the practitioners will know that 'this level of accuracy is the best that our scientific discipline has to offer at the moment'. Intelligence science can standardize only very little. The main reason for this is that the 'threatening adversary' will in many cases actively resist the counter-policies that have been inspired by intelligence reports. The analyst therefore "…faces a particularly unique challenge: Often, the subject of inquiry (or the intelligence target) is actively evading discovery and understanding, employing means of denial to minimize observations and deception to introduce observation errors that distort the reasoning of the analyst" (Waltz 2014). In other words: uncertainties are exploited and created by hostile parties11, and can therefore hardly ever be said to be pinned down or solved.

In sum, we argue, Intelligence Science can be defined by its possession of the four principles stated in Table 2.


Table 2: Principles of the Domain of Intelligence Science

- Strategic / Tactical: Concerns the challenge of decision-making in complex, interactive environments. What 'options' exist, and what the best choice between them is, is only limitedly prescribed by theory. (Other domains with this principle: decision theory, game theory, international relations, economics.)
- Information-centric: The focus lies on the methods used to acquire, represent, manipulate, and interpret information. (Other domains with this principle: information science, semiotics, logic, mathematics.)
- Scientific: Aims to acquire knowledge and understanding of fundamental laws underlying the empirical questions it addresses. (Other domains with this principle: all empirical science.)
- Empirically not strictly demarcated: Does not have a 'fixed' set of phenomena to study and cannot rely solely on previous cases to explain new ones. (Other domains with this principle: humanities, philosophy, certain professional practices.)

Intelligence science is strategic at its core, information-centric, cannot be strictly demarcated empirically, and at least aspires to be as scientific as possible. No other domain addresses the questions that the combination of these factors conjures up; these questions thus delineate the discipline's particular area of research.

2.2 Formal Representation of Intel. Science by means of Game Theory

2.2.1 Requirements of the Theoretical Framework

The primary purpose of our theoretical framework is to evaluate the effectiveness of the SATs, not on an individual level or solely on their own terms, but with an eye on the issue of intelligence professionalization and what that ideal would imply for the techniques. The theoretical framework should then express the particularities of Intelligence Science in order for a comparison between it and the SATs to represent a comparison between the SATs and the intelligence-science ideal as well. Of the four factors mentioned in Table 2, agent-based intent should have primacy over the other factors, hard as it may be to solidify. The reason for this is that, as we have seen, it is this element first and foremost that sets Intelligence Studies apart from general science epistemologically, and it is therefore also the source of the particular challenges the field must address on its road to becoming a full-fledged science. Its empirical information-centrism and 'unfixedness' are easily added on to this prior requirement, showing merely the secondary importance of inquiry focused on generals (pure science), and the primary importance of what is relevant for the strategic purposes of the actor for whom it is carried out.



‘Strategic’ inquiry stands apart from nonstrategic science due to its inclusion of agency as a dominant factor.

In comparison with ‘regular’ rigorous models of an empirical situation (where events develop by means of a concrete set of modeled laws), the inclusion of an agent supposes a decision maker apart from the surface-level laws of that model, who can exert limited, but seemingly independent influence on it. Although the actor is seen to stand outside the model in some sense, it is also still connected to it, because, as is the case for actors in real life, the influence that it can exert on the empirical world is limited. Furthermore, its connection to the empirical model – its ‘body’, let’s say – determines its ‘information position’ and also its ‘thought processes’ themselves, to an unknown degree (as, in ‘real life’, a person’s biology would influence his train of thought, and formal agents would come with limitations ‘programmed in’).

In a situation where more than one actor is present, they begin to stand in a ‘strategic relationship’ with one another when they start anticipating each other’s moves. In other words: the ‘strategic situation’ can only be present where one must anticipate another actor’s actions and his anticipation of one’s own actions. If the element of uncertainty or the element of mutual anticipation were missing, the situation would not be called ‘strategic’. If only uncertainty were present, there would simply be a natural law that had not been understood yet. If only mutual anticipation were present, it would be meaningless, since all future events would already be deterministically understood as the result of natural laws. Both factors are thus necessary conditions for a strategic situation.

The particular focus on agents in a strategic setting suggests a connection to the tradition of Game-Theoretic modelling12. Lim (2016) describes what the relation between such agent-grounded factors and the correlational approach to uncertainty practiced throughout science more generally could be. A slightly adjusted model can be found below (Figure [2]).

12 ‘Agent-based’ modeling of strategic situations immediately suggests Game Theory, and nothing else, as a valid theoretical approach, because, despite its name, Game Theory is not so much a ‘theory’ as, more generally, the practice of formally representing strategic situations between decision makers.


Figure 2 Function of Game Theory for Strategic Intelligence (Based on Lim (2016))

The author makes a distinction between ‘correlative’ analytics of empirical factors (which form the subtext) and the construction of an idea of causal relationships underlying those correlations by ‘experts’ knowledgeable in the field. These experts are thus able, on the basis of developed theory on the subject matter, to acquire a higher understanding of what the observed correlations may be an indication of. Since Lim’s article is concerned with ‘Big Data’ in strategic settings, a primary concern of his is the potential for the scope of analytic results to become too large to be manageable. If policy were to be based on this assessment alone, however, the number of both false positives and false negatives would exceed all bounds, since a theoretical focus on possibilities allows almost anything to mean almost anything. Lim therefore proposes that a ‘meta’ theoretical framework can provide constraints to the initial broad set of possible interpretations of empirical data. According to Lim, Game Theory is most suited to fulfill this purpose because it addresses rational choice and takes heed of animate players that react to their experiences. An application of this theory would limit the initial set of suspected causal relationships deemed plausible to a more critical set.


2.2.2 Game Theory

‘Game Theory’ is not a theory as is common in social science, but a branch of mathematics specialized in representing ‘agents’ as factors in equations. Maschler, Solan and Zamir’s definition (2013) is worth quoting at length:

"Game theory is the name given to the methodology of using mathematical tools to model and analyze situations of interactive decision making. These are situations involving several decision makers (called ‘players’) with different goals, in which the decision of each affects the outcome for all decision makers. This interactivity distinguishes game theory from standard decision theory, which involves a single decision maker, and it is its main focus. Game theory tries to predict the behavior of the players and sometimes also provides decision makers with suggestions regarding ways in which they can achieve their goals."

This tradition of mathematical representation has developed extensively over the last 70 years. In this time, Game Theorists have come up with ever novel ways to represent and solve strategic situations formally. This is important because it shows that the theory has no fixed scope: game theorists make use of all the mathematical tools at their disposal to represent situations such as are indicated by Maschler et al. (above), and will change their exact method as it suits the occasion. Being a distinctive field with distinct purposes, however, models in Game Theory do share some essential characteristics that make the modeled situations ‘strategic’. The elements that are necessarily present in all Game Theoretic descriptions we will call its ‘necessary epistemological axioms’13, of which there are three:

1. There are multiple players (intelligent agents14) involved in the strategic interaction;

2. The players each have a limited set of discrete or continuous options; a choice of a certain option, or set of options, is called a ‘strategy’;

3. Players have utility values connected to variables included in the model. When they are rational, they will attempt to maximize their payoff utility value by choosing the option that is most likely to provide that maximum outcome, given the likely actions of other players.

13 In contrast with ‘mathematical axioms’, of which Game Theory makes use as well. These axioms are less important for our current purposes, since they are primarily concerned with ‘proper functioning’ of the mathematical models – to leave them without ‘bugs’, to use an ICT-analogy.

14 An autonomous entity observing the environment and acting upon it to achieve goals (Russell & Norvig 2003).
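These three axioms can be made concrete with a small computational sketch. The following is a minimal illustration of ours in Python (not a method proposed by any of the cited authors; the function names are our own), using the Prisoner’s Dilemma payoffs of Figure 3 below. It represents the players (axiom 1), their option sets (axiom 2), and their utility values (axiom 3), and finds the strategy profiles from which no rational player would unilaterally deviate:

```python
from itertools import product

OPTIONS = ["D", "C"]  # axiom 2: each player's limited set of options (defect / cooperate)

# Axiom 3: utility values, indexed as PAYOFF[(row_move, col_move)] = (u_row, u_col).
# These are the Prisoner's Dilemma payoffs shown in Figure 3.
PAYOFF = {
    ("D", "D"): (1, 1),
    ("D", "C"): (5, 0),
    ("C", "D"): (0, 5),
    ("C", "C"): (4, 4),
}

def pure_nash_equilibria(payoff, options):
    """Profiles from which neither rational player gains by deviating alone."""
    equilibria = []
    for s1, s2 in product(options, repeat=2):  # axiom 1: two players
        u1, u2 = payoff[(s1, s2)]
        best_for_1 = all(payoff[(a, s2)][0] <= u1 for a in options)
        best_for_2 = all(payoff[(s1, a)][1] <= u2 for a in options)
        if best_for_1 and best_for_2:
            equilibria.append((s1, s2))
    return equilibria

print(pure_nash_equilibria(PAYOFF, OPTIONS))  # prints [('D', 'D')]
```

Note how the ‘rationality’ of the players is not a property written into the code, but a rule applied to the utility values: mutual defection is the only profile that survives it, even though mutual cooperation would leave both players better off.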


Common additions to those initial axioms allow for a formulation of more complex games with different strategic focal points:

4. The distribution of information between players, concerning the values of operators included in the model or other aspects of the game, is unequal;

5. There is a ‘Nature’ (or ‘Chance’) player present, making moves that are ‘unpredictable’ to both players;

6. The game continues for several sessions. Such a game is called an ‘extensive form’ game. The number of sessions may be finite or infinite. The game may change over time or remain the same;

7. Players can be rationally bounded, which is represented by a ‘fixation’ to a certain strategy. In such cases (in evolutionary game theory), the ‘players’ are not modeled as having a choice of action. Rather, it is estimated how well a certain ‘type’ of (strategy-fixed) player competes against other types in a given environment15 (see the sketch after this list);

8. Players can ‘communicate’ with (a limited set or all of the) other players (= exchange messages, true or false) without any immediate costs or benefits in payoff, generating an additional level of strategic interaction;

9. Players can form ‘binding contracts’ which either limits their options forcibly, or attaches a cost to the breaking of that contract.
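To illustrate axioms 6 and 7 together, the following minimal sketch (again ours, with illustrative strategies and the Figure 3 payoffs) plays a repeated Prisoner’s Dilemma between two strategy-fixed player ‘types’: one playing tit-for-tat, one always defecting:

```python
# Figure 3 payoffs again: (row player's utility, column player's utility).
PAYOFF = {("D", "D"): (1, 1), ("D", "C"): (5, 0),
          ("C", "D"): (0, 5), ("C", "C"): (4, 4)}

def tit_for_tat(opponent_history):
    """Axiom 7: a strategy-fixed type. Cooperate first, then mirror the opponent."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """Another fixed type: defect unconditionally."""
    return "D"

def play_repeated(strategy1, strategy2, sessions):
    """Axiom 6: the same stage game is played for a (here finite) number of sessions."""
    history1, history2 = [], []
    total1 = total2 = 0
    for _ in range(sessions):
        move1 = strategy1(history2)  # each type reacts to the opponent's past moves
        move2 = strategy2(history1)
        u1, u2 = PAYOFF[(move1, move2)]
        total1, total2 = total1 + u1, total2 + u2
        history1.append(move1)
        history2.append(move2)
    return total1, total2

print(play_repeated(tit_for_tat, always_defect, 10))  # prints (9, 14)
```

In an evolutionary setting, such totals would determine how well each type ‘competes’ in a population of types; neither ‘player’ here makes any choice at all, which is precisely what the fixation of axiom 7 expresses.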

The two most common forms of Game Theoretic modelling show the function of each of these factors (Figures [3] and [4]).

15 ‘Rationality’, as commonly understood in Game Theory, assumes that all actors have the ability to deductively determine in depth what they should rationally do, given what they believe. This is, as Kydd (2015) writes, "clearly false, as a descriptive matter." The assumption seems necessary to Game Theorists because if one actor is allowed to act irrationally, then his adversaries too can no longer ‘rationally decide’ what the best response to the opponent’s strategy is, because he has no clear strategy. This of course would render the whole model, or even the method, invalid. The problem does however not lie in this axiom per se, but rather in its combination with a supposedly ‘true’ level of detail within the model. It is perfectly possible to model an irrational actor as an agent in a Game Theoretic model, but in that case that actor’s options should perhaps be represented as being very limited, or contrarily very broad; his utility functions may all have to be leveled (making him indifferent over outcomes) or his utility values may not be transitive; etc. The point is that ‘rationality’ – when connected to the empirical realm – is not only something that describes an actor’s response to a predefined model, but also determines what form the model takes in the first place. If the model represents an actor wrongly, his supposed ‘rationality’ is faulty as well. In cases that are correctly modeled, the ‘rationality’ axiom functions merely to ascertain mathematical coherence. Thus ‘irrational’ actors are not problematic because they do not keep to the model we use to describe them, but because they are extremely difficult to properly model in the first place.


Figure 3 Prisoner’s Dilemma in ‘matrix’ form

                        Player II
                     D           C
    Player I    D    1, 1        5, 0
                C    0, 5        4, 4

(In each cell, the first value is Player I’s payoff and the second Player II’s. The annotations in the original figure mark the three necessary axioms: 1. Players; 2. Options; 3. Utility Values.)

There are several ways in which a player’s understanding of the impact of uncertainty can be modeled. It can, for example, be determined by a ‘Bayesian’ method, where an ‘idea’ of the impact of uncertainty provides the prior values. Or it could arise by induction from previous experience with a certain opponent’s behavior. It could also be connected to a combination of the opponent’s ‘cheap talk’ and empirical validation or falsification of his comments, which could indicate what his true goals are and how reliable he is. There are many more ways to model uncertainty, and factors of uncertainty can be included in almost all aspects of the model. A player could, for example, be uncertain about the exact number of other players involved in the strategic interaction, about the extensiveness of the game, about his own costs and utilities, about the opponent’s costs and utilities, etc.
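The ‘Bayesian’ option can be made concrete with a small sketch of ours (the type labels and likelihoods are invented for illustration): a player holds a prior belief about which ‘type’ his opponent is, and updates that belief after every observed move:

```python
def bayes_update(prior_hostile, p_obs_if_hostile, p_obs_if_benign):
    """Posterior probability of the 'hostile' type after one observed move."""
    joint_hostile = p_obs_if_hostile * prior_hostile
    evidence = joint_hostile + p_obs_if_benign * (1.0 - prior_hostile)
    return joint_hostile / evidence

belief = 0.5  # prior: both opponent types deemed equally likely
# Suppose a 'hostile' type defects with probability 0.9 and a 'benign' type
# with probability 0.2, and we observe three defections in a row:
for _ in range(3):
    belief = bayes_update(belief, 0.9, 0.2)
print(round(belief, 3))  # prints 0.989
```

The same mechanism extends to uncertainty about any other aspect of the game mentioned above, at the price of a larger hypothesis space.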

Figure 4 Strategic Situation in ‘Tree’ form16 (Maschler, Solan and Zamir 2013)

16 The ‘ovals’ in the Tree-form game represent the uncertainty of a player about the situation that he finds himself in. In this model, for example, player II would have to make a decision (t1 or b1) without knowing at which of the nodes inside the oval he actually stands, that is, without knowing which move player I has made before him.
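Finally, the ‘tree’ form lends itself to a sketch as well. The following minimal example (ours; the tree, payoffs and option labels are invented and simpler than the game in Figure 4, since it contains no information set) solves a small perfect-information game by backward induction:

```python
# A game tree: either a decision node ("node", player, {option: subtree})
# or a leaf holding the terminal utility values (u1, u2).
TREE = ("node", 1, {
    "T": ("node", 2, {"t1": (3, 1), "b1": (0, 0)}),
    "B": ("node", 2, {"t1": (1, 2), "b1": (2, 3)}),
})

def backward_induction(tree):
    """Utilities reached when each player, at his own nodes, maximizes his own payoff."""
    if tree[0] == "node":
        _, player, branches = tree
        outcomes = {opt: backward_induction(sub) for opt, sub in branches.items()}
        best = max(outcomes, key=lambda opt: outcomes[opt][player - 1])
        return outcomes[best]
    return tree  # a leaf: the terminal utility values

print(backward_induction(TREE))  # prints (3, 1)
```

Once an oval (information set) is added, as in Figure 4, this simple recursion no longer suffices: player II can then no longer condition his choice on the node he is at, which is exactly where the strategic character of the game resides.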

