
Master Thesis

University of Amsterdam
Graduate School of Law
Amsterdam

LL.M. Public International Law

The Human-Machine Interface in Targeting Law:

A Legal Analysis of Meaningful Human Control in the Use of Lethal

Autonomous Weapon Systems and Artificial Intelligence in the Context of

Targeting Law

Master Thesis
Wiep Sophia te Velde
10991972

July 2020

Under the supervision of Professor T.D. Gill
University of Amsterdam


TABLE OF CONTENTS

I. ABSTRACT
II. LIST OF ABBREVIATIONS
CHAPTER 1. INTRODUCTION
1.1. Research question
1.2. Methodology
2. LETHAL AUTONOMOUS WEAPON SYSTEMS AND ARTIFICIAL INTELLIGENCE
2.1. Introduction
2.2. Definitions and state of debate
2.2.1. Artificial Intelligence
2.2.2. Lethal Autonomous Weapon Systems
2.2.3. The state of the debate
2.3. Concluding remarks
3. TARGETING LAW: WHAT IS THE INTERPLAY BETWEEN HUMANS AND LAWS?
3.1. Introduction
3.2. Basic principles of targeting
3.2.1. The principle of distinction
3.2.2. The principle of proportionality
3.2.3. The principle of precaution
3.3. Concluding remarks
4. MEANINGFUL HUMAN CONTROL
4.1. Introduction
4.2. The issue of definition
4.3. The elements of meaningful human control
4.3.1. The context element
4.3.2. Understanding the weapon system element
4.3.3. Predictability and reliability element
4.3.4. Understanding the environment element
4.3.5. Human supervision and ability to intervene element
4.3.6. Accountability element
4.3.7. Ethical considerations element
4.4. The control measures
4.4.1. Control over the parameters of use of the LAWS
4.4.2. Control over the environment
4.5. The military reality and operational perspectives on requirements for meaningful human control
4.6. Concluding remarks
5. CONCLUSION
6. BIBLIOGRAPHY
6.1. Primary sources
6.2. Secondary sources

I. ABSTRACT

Throughout history, emerging technologies have had a significant impact on the conduct of hostilities. Often, as new technologies emerge, intense debates and calls for their ban or for significant legal restraints quickly follow. This was no different for the emerging AI technologies that made autonomy in weapon systems possible. A fully autonomous lethal weapon system is a weapon system that, as soon as it is activated by a human, can select and attack targets without further human intervention. Such weapon systems do not yet exist, but a growing number of voices are calling for immediate attention to their development. Many States, scholars and NGOs propose that there should be meaningful human control over the weapon system in order to comply with international humanitarian law (IHL). Autonomy will undoubtedly have an enormous impact on the conduct of hostilities. In order for targeting in an armed conflict to be considered lawful under IHL, the deployment of LAWS must conform to the targeting principles. The development of LAWS, however, comes with great uncertainty about how IHL applies to autonomy. It is clear that IHL applies to LAWS and that it provides an effective normative framework to ensure the lawful deployment of these weapon systems. Yet, not all LAWS are the same. Some might be lawfully deployed in one scenario, but no longer when deployed in a slightly different scenario.

As of today, it is still unclear whether LAWS can be constructed in a way that allows them to perceive the information necessary to comply with IHL. For the time being, it may be wise to limit the deployment of LAWS to situations in which they will undoubtedly comply with IHL and its targeting principles. There are considerable challenges in relation to the application of the targeting principles to LAWS. Most of these challenges boil down to the question of whether targeting assessments require subjective or objective assessments of the facts.

Humans are the addressees of IHL. Hence, when exploring the limits of LAWS, a fundamental issue that has to be addressed is that of meaningful human control. There is still no clear-cut consensus on the definition of meaningful human control, but it is widely accepted that humans must retain and exercise responsibility for the use of autonomous weapon systems. A central issue of debate remains how humans should retain and exercise that responsibility. A few key elements of meaningful human control can nevertheless be identified: control over the context, understanding the weapon system, understanding the environment, taking account of the predictability and reliability of the LAWS, human supervision and the ability to intervene, accountability and, finally, the ethical element. These elements serve as different ways to limit autonomy and increase meaningful human control. Moreover, there are three practical measures through which a human operator can exercise meaningful human control over the operation. First, the human operator should have control over the parameters of use of the LAWS. Second, the human operator should have control over the environment, and third, the human operator should have control through human-machine interaction.


II. LIST OF ABBREVIATIONS

AI Artificial Intelligence

AP I Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts

AP II Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of Non-International Armed Conflicts

CCW Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, as amended on 21 December 2001

HRW Human Rights Watch

ICRAC International Committee for Robot Arms Control

ICRC International Committee of the Red Cross

ICTY International Criminal Tribunal for the Former Yugoslavia

IHL International Humanitarian Law

LAWS Lethal Autonomous Weapon Systems


CHAPTER 1. INTRODUCTION

The Artificial Intelligence revolution is on its way.1 Technological developments in the military move at an increasingly rapid pace. Over the past decade the use of automatic systems has become increasingly prevalent in modern-day armed conflicts. Currently, most robots and unmanned aerial systems are remotely operated by humans, and for the most part a human being still makes the decision on whether or not to fire a weapon. Humans will, however, become increasingly remote from the targeting process. States including the USA, China, Israel and Russia are already developing and, in some cases, even deploying predecessors of fully autonomous lethal weapon systems (LAWS).2 Proponents argue that there are potential advantages to the deployment of LAWS, such as a reduction in manpower, better protection of one's own forces, lower costs and greater precision in targeting. Yet this development also raises many questions about the law of war and the military ethics of such autonomous weapon systems.

Throughout history, emerging new technologies have played an important role and have had a significant impact on the conduct of hostilities. Advances in the development of different types of weapon systems and tactics can make the difference between victory and defeat. Generally, when new weapon systems or technologies emerge, calls for their ban or for meaningful legal restrictions on their development quickly follow.3 The discussion about the legality of LAWS has been a heated one for years. Opponents, such as Elon Musk, Stephen Hawking and Steve Wozniak, stated in an open letter: "Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control."4 The issue of autonomous weapons flows from the concern that humans will lose all control over the weapons they use and, therefore, will no longer decide who lives or dies. As a consequence, most States, non-governmental organisations (NGOs) such as

1 Paul Scharre and Michael Horowitz, ‘Artificial Intelligence: What Every Policy Maker Needs to Know’ (2018), Report by Center for a New American Security, June 2018, 3.

2 Mary Wareham, ‘As Killer Robots Loom, A Push to Keep Humans in Control of Use of Force’ (HRW 2 January 2020) <https://www.hrw.org/news/2020/01/02/killer-robots-loom-push-keep-humans-control-use-force> accessed 1 June 2020.

3 Eric Talbot Jensen, ‘The (Erroneous) Requirement for Human Judgment (and Error) in the Law of Armed Conflict’ (2020) 96 International Law Studies 26, 30.

4 Brian Fung, ‘Elon Musk and Stephen Hawking think we should ban killer robots’ (The Washington Post 28 July 2015) <


Human Rights Watch and PAX5, and academics agree that LAWS require a level of meaningful human control. There is, however, no consensus on what the concept of meaningful human control means, let alone on what is subject to this form of control. Is it the weapon itself, its critical functions, or each individual attack?6

1.1. Research question

As mentioned above, there are ongoing heated debates in international and national fora about LAWS and whether or not they can comply with international (humanitarian) law. To the extent that there is any consensus among States, NGOs and academics regarding the regulation of LAWS, it is grounded in the idea that LAWS should be subject to meaningful human control.7 The broad support from many State and non-State actors comes, however, at a cost: there is no consensus on what meaningful human control means. This thesis will therefore attempt to shed light on the current debates on LAWS in relation to the targeting principles. Moreover, it will try to identify the key elements of meaningful human control and analyse where a human operator of an autonomous weapon system should have control for it to be meaningful. This leads to the research question: "What factors need to be considered to ensure that human involvement in the targeting process is 'meaningful' in the deployment of Lethal Autonomous Weapons?"

In order to answer this question, this thesis will begin by providing an overview of the definitions of and frameworks on AI and LAWS and of how AI software is applied to LAWS. Moreover, the first chapter aims to set out the current state of the debate. The second part of the thesis continues by analysing the deployment of LAWS in relation to the targeting principles of IHL. This includes the principles of distinction, proportionality and precaution and an analysis of the challenges that these principles might pose to the lawful deployment of LAWS. Finally, this thesis will elaborate on the concept of meaningful human control in relation to the use of LAWS in the targeting process. This chapter will attempt to give a definition of

5 See also Human Rights Watch and Harvard Law School International Human Rights Clinic, ‘Heed the Call: A Moral and Legal Imperative to Ban Killer Robots (Report HRW/IHRC 18 August 2018); PAX, ‘Slippery Slope: The Arms Industry and Increasingly Autonomous Weapons’, Report prepared by PAX, Utrecht, November 2019

<https://www.paxforpeace.nl/publications/all-publications/slippery-slope> accessed 20 January 2020.

6 Merel Ekelhof, ‘Lifting the Fog of Targeting: “Autonomous Weapons” and Human Control through the Lens of Military Targeting’ (2018) 71 Naval War College Review 3, 61.

7 Rebecca Crootof, ‘A Meaningful Floor for ‘Meaningful Human Control’ (2016) Temple International & Comparative Law Journal 53, 53.


meaningful human control and will identify seven interrelated elements of meaningful human control. Moreover, this chapter attempts to answer where a human operator should exercise control and, finally, it briefly analyses meaningful human control in relation to the military reality.

1.2. Methodology

As mentioned before, fully autonomous lethal weapon systems do not yet exist. As a result, in comparison to other weapon systems, relatively little is known about them. Regardless of this fact, this thesis is a piece of classical legal research. Its general purpose is to clarify the current debates on LAWS in relation to targeting law, to clarify the problematic concept of meaningful human control and to identify what factors need to be considered to ensure that human involvement in the targeting process is meaningful in the deployment of LAWS. The methodology is partly descriptive and partly analytical. This thesis departs from the premise that IHL is set out in treaties and custom based on certain fundamental principles, and that these rules are applicable to any weapon system, including those of the future. The analysis is restricted to the IHL targeting rules and is based on openly available legal literature.


2. LETHAL AUTONOMOUS WEAPON SYSTEMS AND ARTIFICIAL INTELLIGENCE

2.1. Introduction

Before turning to the law regulating the use of weapons, it is important to define LAWS and AI and the definitional issues surrounding them. To this day, there is no international consensus on the meaning of 'artificial intelligence' or 'autonomy'.8 As Jenks puts it: "The international community cannot even agree about what they disagree about."9 Yet, autonomous weapons are commonly referred to as weapons that, once they have been activated, can select and attack targets without human intervention through the use of Artificial Intelligence (AI).10 The International Committee of the Red Cross (ICRC) refers to LAWS as follows:

Any weapons system with autonomy in its critical functions – that is, a weapon system that can select (search for, detect, identify, track or select) and attack (use force against, neutralize, damage or destroy) targets without human intervention.11

2.2. Definitions and state of debate

2.2.1. Artificial Intelligence

LAWS do not exist without AI. AI, a multidisciplinary field that emerged from computer science, has undergone rapid progress in recent years. Scientists and researchers have created code and software programmes that can process data, carry out commands and even mimic human intelligence.12 One area of AI is machine learning, which makes it possible to build special-purpose machines that perform cognitive tasks, sometimes even better than humans are

8 Chris Jenks, 'False Rubicons, Moral Panic, and Conceptual Cul-De-Sacs: Critiquing & Reframing the Call to Ban Lethal Autonomous Weapons' (2016) 44 Pepperdine Law Review 1, 13; Chris Ford and Chris Jenks, 'The International Discussion Continues: 2016 CCW Experts Meeting on Lethal Autonomous Weapons' (Just Security, 20 April 2016) <https://www.justsecurity.org/30682/2016-ccw-experts-meeting-laws/> accessed on 2 May 2020.

9 Jenks (n 8) 13.

10 Jeroen van den Boogaard, ‘Proportionality and Autonomous Weapons Systems’ ACIL Research Paper 2016-07, <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2748997#> accessed 2 February 2020, 2.

11 ICRC, 'Views of the ICRC on Autonomous Weapon Systems' (paper submitted to the Convention on Certain Conventional Weapons Meeting of Experts on Lethal Autonomous Weapons (LAWS), 11 April 2016) 1 <https://www.icrc.org/en/document/views-icrc-autonomous-weapon-system> accessed on 2 February 2020.

12 Eugenio V. Garcia, 'Artificial Intelligence, Peace and Security: Challenges for International Humanitarian Law' (2019) unpublished paper, 2 <https://www.researchgate.net/publication/335787943_Artificial_Intelligence_Peace_and_Security_Challenges_f


capable of.13 Recent AI advances allow for highly sophisticated systems. Machine learning enables algorithms to learn from data and develop solutions to problems.14 These machines can be used for a wide range of purposes. They can analyse data to find patterns and anomalies, automate tasks, predict trends and provide the 'brains' for autonomous robotic systems.15 Large amounts of data are the fuel of successful machine learning.

A large hurdle for AI systems is that they are rather limited in transferring learning from one task to another, related task.16 This distinguishes machine learning from humans. Humans can learn one skill and then, building on what they have learned, use this knowledge to quickly obtain more knowledge. AI systems, on the other hand, suffer from "catastrophic forgetting".17 The system loses old knowledge when it focusses on learning a new task. This limitation restricts AI systems to performing very particular actions within predictable and well-structured parameters. In other words, current AI systems are "narrow".18 However, AI scientists are making progress. In 2018 a machine learned how to perform up to 30 tasks within a simulated environment without forgetting its "old" knowledge while learning new tasks.19 Recent breakthroughs in big data, advanced algorithms and computational power have led to considerable improvements in AI capabilities. Accordingly, more advanced AI systems have since left the lab and entered the real world. In some capabilities, such as image recognition, AI systems have already proven to be better than humans.20 Finally, AI also allows for the production of machines with the autonomy to perform tasks on their own. This autonomy brings many advantages, but also limitations, which will be discussed in the third chapter. The next section will discuss AI autonomy in relation to weapon systems.
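To make the notion of catastrophic forgetting concrete, consider the following minimal sketch (a purely illustrative toy, not drawn from the thesis or its sources; all data is synthetic): a single logistic classifier is trained on one task and then on a second, incompatible task, after which its accuracy on the first collapses because the same weights have been overwritten.

# Toy illustration of "catastrophic forgetting" (hypothetical example).
# A single linear classifier is trained on task A, then on task B;
# because the same weights are reused, performance on task A collapses.
import numpy as np

rng = np.random.default_rng(0)

def make_task(center):
    # Two Gaussian blobs labelled 0 and 1, mirrored around the origin.
    X0 = rng.normal(loc=center, scale=0.5, size=(200, 2))
    X1 = rng.normal(loc=-center, scale=0.5, size=(200, 2))
    return np.vstack([X0, X1]), np.array([0] * 200 + [1] * 200)

def train(w, b, X, y, epochs=200, lr=0.1):
    # Plain gradient descent on the logistic loss.
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == y)

X_a, y_a = make_task(np.array([2.0, 2.0]))    # task A
X_b, y_b = make_task(np.array([-2.0, 2.0]))   # task B, incompatible boundary

w, b = np.zeros(2), 0.0
w, b = train(w, b, X_a, y_a)
print("Task A accuracy after training on A:", accuracy(w, b, X_a, y_a))

w, b = train(w, b, X_b, y_b)                  # sequential training on task B
print("Task A accuracy after training on B:", accuracy(w, b, X_a, y_a))

On a typical run, accuracy on task A falls from near 1.0 to roughly chance level after the classifier has been retrained on task B, which is the behaviour the term describes.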

13 Scharre & Horowitz (n 1) 4.

14 ibid.

15 Scharre & Horowitz (n 1) 4.

16 ibid.

17 ibid.

18 Garcia (n 12) 2.

19 Hubert Soyer et al., 'Scalable Agent Architecture for Distributed Training' (DeepMind 5 February 2018) <https://deepmind.com/blog/article/impala-scalable-distributed-deeprl-dmlab-30> accessed 2 May 2020.

20 Alex Hern, ‘Computers now better than humans at recognising and sorting images’ (The Guardian 13 May 2015) <


2.2.2. Lethal Autonomous Weapon Systems

So far, most AI research has been done in areas not explicitly related to weapons or the military.21 However, AI is already a military reality. While the international debate about the legality of autonomous weapons is relatively recent, intelligent weapons and autonomous technologies have historically been part of military arsenals for decades.22 Anti-personnel mines and undersea mines can be seen as autonomous in the sense that they detonate on physical contact once activated.23 A more sophisticated example can be found in the air defence system used by the US and developed in the sixties: the Patriot.24 This system is able to autonomously detect, track and engage targets in order to defend certain areas from manoeuvrable threats.

The AI system used in the Patriot can be described as an automated system. Such a system is rule-based: the computer program simply follows a set of predefined instructions on how to function in a particular situation.25 This type of technology has proven to be highly beneficial for militaries. It is important to distinguish between automated weapon systems, as mentioned above, and autonomous weapon systems. Automated weapons are designed to fire automatically when certain predetermined parameters are met in a limited environment.26 Their main characteristic is that, as soon as they are deployed, there is no separate moment of active decision-making to attack. Today automated weapon systems are used throughout the armed forces. For example, unmanned air systems are highly automated in different functions, such as take-off, navigation and landing.27

Recent developments in the field of AI and intelligent weapon systems, however, go much further, with a much broader range of possible sophisticated applications. Even though at the moment of writing this thesis there are no fully autonomous lethal weapon systems in use by armed forces, it seems likely that they will be deployed in the future. Today, algorithms are

21 Kenneth Payne, ‘Artificial Intelligence: A Revolution in Strategic Affairs?’ (2018) 60 Survival, Global Politics and Strategy, 7.

22 Merel Ekelhof, 'The Distributed Conduct of War: Reframing Debates on Autonomous Weapons, Human Control and Legal Compliance in Targeting' (PhD Dissertation, Vrije Universiteit Amsterdam 2019), 17.

23 Kenneth Anderson et al., 'Adapting the Law of Armed Conflict to Autonomous Weapon Systems' (2014) 90 International Law Studies, 388.

24 Missile Threat, Center for Strategic and International Studies, 'Missile Defense Project, "Patriot"', June 14, 2018, last modified November 4, 2019 <https://missilethreat.csis.org/system/patriot/> accessed on 2 May 2020.

25 Scharre & Horowitz (n 1) 4.

26 Van den Boogaard (n 10) 6.

27 ICRC, ‘Autonomous Weapon Systems: Technical, Military, Legal and Humanitarian Aspects’ (Expert Meeting, Geneva, Switzerland, 26-28 March 2014), 60.


already used to identify patterns in large datasets, and weapon-guidance systems can carry out pre-programmed functions and identify potential targets.28 In some applications, such as the US Patriot, the weapon system can automatically engage targets for which it has been programmed. This is, however, a far cry from fully autonomous lethal weapons.

As mentioned in the introduction of this chapter, lethal autonomous weapon systems are likely to be highly sophisticated weapon systems that, as soon as they are activated by a human, can select and attack targets without any further human intervention. The defining characteristic of LAWS is the use of AI software that provides them with a level of autonomy.29 With further developments in technology, some authors even argue that these weapon systems can make targeting decisions autonomously in an open, unstructured and dynamic environment based on the information they receive from sensors.30 This gives LAWS the ability either to attack or to refrain from attacking when a potential military target emerges. Automated weapons, on the other hand, do not have this ability and must thus be differentiated from LAWS. In other words, automated weapon systems are only able to start an attack when their sensors detect a target, whereas LAWS can autonomously select a target.31

LAWS are best understood as being composed of different software and hardware elements that work together. Sensors, algorithmic targeting and decision-making mechanisms and the weapon itself cooperate in order to identify or attack a target.32 LAWS use sensors that provide the information needed to distinguish legitimate targets on the one hand from civilians and civilian objects on the other. The information on the identification of a possible target is then processed by the AI software, which decides how the system should respond.33 Present-day LAWS do not yet possess technology that can select targets without human intervention in an open and dynamic environment, but this technology is rapidly developing.34
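The division of labour described here can be sketched in a minimal, entirely hypothetical form (every class name, label and threshold below is invented for illustration and does not describe any real system): sensors produce a track, a classifier labels it, and a separate decision mechanism determines the response, deferring to a human whenever the label is ambiguous.

# Hypothetical sketch of the sensor -> classification -> decision pipeline.
from dataclasses import dataclass
from enum import Enum, auto

class Decision(Enum):
    ENGAGE = auto()
    REFRAIN = auto()
    DEFER_TO_HUMAN = auto()

@dataclass
class SensorTrack:
    object_id: str
    label: str          # output of an (assumed) classifier, e.g. "tank"
    confidence: float   # classifier confidence between 0.0 and 1.0

def decide(track: SensorTrack, min_confidence: float = 0.95) -> Decision:
    # Decision mechanism: responds only to what sensors and classifier report.
    if track.label == "tank" and track.confidence >= min_confidence:
        return Decision.ENGAGE
    if track.label in ("civilian vehicle", "person"):
        return Decision.REFRAIN
    # Anything ambiguous is pushed back to a human operator.
    return Decision.DEFER_TO_HUMAN

print(decide(SensorTrack("t1", "tank", 0.98)))     # Decision.ENGAGE
print(decide(SensorTrack("t2", "unknown", 0.60)))  # Decision.DEFER_TO_HUMAN

The contrast with an automated weapon lies in where the label comes from: in an automated system it is fixed in advance as a sensor signature, whereas in an autonomous system it is produced by the software itself.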

28 Payne (n 21) 9.

29 ibid 4.

30 ibid.

31 ibid 7.

32 Article36, ‘Structuring Debate on Autonomous Weapons’ (2013), 1.

33 Christof Heyns, ‘Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions’ (2013) UN Doc A/HRC/23/47. Para 39.

34 Marco Sassòli, International Humanitarian Law: Rules, Controversies, and Solutions to Problems Arising in Warfare (Edward Elgar 2019).


The definition of LAWS as given in the introduction of this chapter mentions different components. Before turning to the current debate about LAWS, one component of this definition must be further scrutinised: the "human involvement" component, which is not entirely clear. This component requires that the actions of the weapon system occur without human intervention or involvement. The degree of human participation is, however, not always clear.35 As already mentioned, the difference between automaticity and autonomy is important, yet sometimes rather ambiguous. As the ICRC has noted, both types of system "have the capacity to independently select and attack targets within the bound of their human determined programming."36 The question is therefore what degree of freedom in a system is sufficient for it to be considered as operating without human interference. Autonomy can be measured on a scale ranging from 'adaptive' to 'direct human control'.37 The level of autonomy of a weapon system depends, according to Jeroen van den Boogaard, on four different factors: (1) the frequency at which the weapon system has to communicate with a human; (2) its tolerance for an uncertain environment; (3) its ability to change plans and to execute its task without any human involvement; and lastly (4) its ability to learn from its own actions through AI, adding new knowledge which can be applied in the future.38 According to Christopher Ford, a fully autonomous weapon system is a system that performs all four of the following functions without human input:

(1) Observe. A computer gathers data without human direction. It does not provide humans with information, and what the computer observes is unpredictable.

(2) Orient. A computer analyses data without human input and does not report to a human. The analysis is unpredictable and unseen.

(3) Decide. A computer establishes targets and determines by itself when and where to engage. This process is unpredictable and unseen.

(4) Act. A computer determines when and where to execute. The actions are unpredictable in time and space.39

2.2.3. The state of the debate

In recent years, a growing number of voices have called for immediate attention to weapon systems with increasing autonomy. One of the earliest reports discussing the illegality of LAWS was published by Human Rights Watch (HRW) in collaboration with the Harvard Law

35 Christopher M. Ford, ‘Autonomous Weapons and International Law’ (2017) 69 University of South Carolina Law Review 413, 419.

36 ICRC (n 27) 64.

37 Van den Boogaard (n 10) 4.

38 ibid 5.


School International Human Rights Clinic (IHRC) at the end of 2012.40 The report called for a ban on "Killer Robots", as its authors argued that autonomous weapons did not meet the legal standards. Moreover, the media described the development of LAWS as the third revolution in warfare41 or portrayed LAWS as smart killer robots that would wipe out the entire human race. It is therefore no surprise that debates about LAWS intensified quickly.

The debate primarily focusses on the power of an autonomous weapon system to independently select and attack targets without human intervention. More specifically, the debate revolves around the critical functions of these weapon systems: the function that determines which target will be selected and the function that results in the use of force by the weapon system.42 Opponents of LAWS claim that the weapon systems should be banned, or at least restricted, for many different reasons, such as ethical, legal and security concerns.43 Even if LAWS were to comply with international humanitarian law (IHL), opponents would rather see the weapon systems banned. They claim that it would be inherently unethical to delegate decisions of life and death to a machine. Opponents thus fear that humans will no longer control technology, but that technology will instead control humans.44

2.3. Concluding remarks

Although humans will remain on the loop for the time being, it seems certain that autonomous weapon systems herald some of the most fundamental changes in modern warfare. The rise of autonomous technologies has sparked a heated international discussion revolving around one key question: can lethal autonomous weapon systems comply with international humanitarian law without any further human intervention? The next chapter will explain targeting law and analyse how LAWS relate to the three targeting principles.

40 Human Rights Watch and Harvard Law School International Human Rights Clinic, ‘Losing Humanity: The Case against Killer Robots’ (Report HRW/IHRC 12 November 2012).

41 Future of Life Institute, ‘Autonomous Weapons: an Open Letter from AI & Robotics Researchers’ (FLI 2015) available at <https://futureoflife.org/open-letter-autonomous-weapons> accessed 7 May 2020.

42 Merel A.C. Ekelhof, ‘Complications of a Common Language: Why is it so Hard to Talk about Autonomous Weapons’ (2017) 22 Journal of Conflict and Security Law 311, 313.

43 PAX, '10 Reasons to Ban Killer Robots' <https://www.paxforpeace.nl/media/files/pax-ten-reasons-to-ban-killer-robots.pdf> accessed 4 July 2020.


3. TARGETING LAW: WHAT IS THE INTERPLAY BETWEEN HUMANS AND LAWS?

3.1. Introduction

In 2005, a member of the US Joint Forces Command summed up the benefits of LAWS: "They don't get hungry. They're not afraid. They don't forget orders. They don't care if the guy next to them has just been shot. Will they do a better job than humans? Yes."45 Besides, LAWS are considered to be cheaper to operate than human-operated weapon systems and are able to perform continuously, without rest. Additionally, fewer humans are required for the operation of LAWS.46

As good as this sounds, the deployment of any weapon system as a means of warfare during an armed conflict is governed by customary International Humanitarian Law and a number of treaty obligations, particularly the rules on the principles of distinction, proportionality and precaution. This means that, first, the lethal autonomous weapon system should be able to verify whether the target is a military target; second, the LAWS should be able to assess on a case-by-case basis whether the attack would be proportionate; and third, the LAWS needs to fulfil the requirement of taking precautionary measures to minimize potential harm to civilians and civilian objects.

In order for the taking of a human life in an armed conflict to be considered legal, the deployment of LAWS must conform to these three principles.47 The IHL rules create obligations for human combatants in the use of weapons to carry out attacks, and the combatants will ultimately be responsible for respecting the rules.48 The application of these principles to LAWS, however, raises some questions. For example, can a machine distinguish a civilian object from a military object, or a civilian from a combatant? How can a machine weigh certain subjective values against anticipated military advantage during a proportionality analysis? Some scholars argue

45 Tim Weiner, 'New Model Army Soldier Rolls Closer to Battle' (New York Times 16 February 2005) <https://www.nytimes.com/2005/02/16/technology/new-model-army-soldierrolls-closer-to-battle.html> accessed on 1 June 2020.

46 Major Michael A. Guetlein, 'Lethal Autonomous Weapons – Ethical and Doctrinal Implications' (2005) Naval War College <https://apps.dtic.mil/dtic/tr/fulltext/u2/a464896.pdf> accessed on 1 June 2020.

47 Peter Asaro, 'On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making' (2012) 94 International Review of the Red Cross 687, 696.

48 Neil Davidson, ‘A Legal Perspective: Autonomous Weapon Systems Under International Law’ (2017) 30 UNODA Occasional Papers 7.


that these questions are impossible to solve, as LAWS have no human judgement and the technology is not (yet) sophisticated enough. Other authors argue the contrary.

This chapter will explain and define the targeting principles of distinction, proportionality and precaution. Furthermore, it will assess the deployment of LAWS against these principles and explain how LAWS are not a homogeneous category that either complies with the law or does not.49

3.2. Basic principles of targeting

3.2.1. The principle of distinction

The International Court of Justice, in its Nuclear Weapons Advisory Opinion, described distinction as one of the two cardinal principles that constitute the cornerstone of international humanitarian law (the other being the prohibition of unnecessary suffering).50 The Court described these as intransgressible principles of international customary law.51 The principle requires that an operator or commander must distinguish between military objectives on the one hand and civilian objects, civilians and other protected persons on the other. In other words, distinction requires a person conducting an attack to distinguish between lawful and unlawful targets.52 Lawful targets are civilians taking direct part in hostilities, combatants and military objectives; unlawful targets are civilian objects, civilians, those hors de combat and other protected persons and objects. The principle of distinction is considered to be part of international customary law and is codified in articles 48, 51 and 52 of Additional Protocol I (AP I).53 Article 48 AP I provides a general definition:

In order to ensure respect for and protection of the civilian population and civilian objects, the Parties to the conflict shall at all times distinguish between the civilian population and combatants and between civilian objects and military objectives and accordingly shall direct their operations against military objectives.

49 Ford (n 35) 16th page.

50 Legality of the Threat or Use of Nuclear Weapons, Advisory Opinion, 1996 ICJ Rep. 226 (8 July 1996), par. 78.

51 ibid, par. 79; Nils Melzer, 'The Principle of Distinction between Civilians and Combatants', in Andrew Clapham & Paola Gaeta (eds), The Oxford Handbook of International Law in Armed Conflict (OUP 2014) 298.

52 Ford (n 35) 433.

53 Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts, opened for signature 12 December 1977, 1125 UNTS 3 (entered into force 7 December 1979) [hereinafter API].


Moreover, the principle also entails the prohibition of indiscriminate attacks as well as of attacks in which civilians or civilian objects are deliberately made the object of attack.54 Article 51(4) characterizes indiscriminate attacks as:

(a) those which are not directed at a specific military objective

(b) those which employ a method or means of combat which cannot be directed at a specific military objective

(c) those which employ a method or means of combat the effect of which cannot be limited as required by this Protocol; and consequently, in each such case, are of a nature to strike military objectives and civilians or civilian objects without distinction.55

For the application of the principle of distinction it is relevant to distinguish between civilians and civilian objects on the one hand and combatants and military objectives on the other.56 Article 52(1) AP I describes civilian objects as "all objects which are not military objectives." It is therefore necessary first to define what military objectives are. Article 52(2) AP I defines them as "those objects which by their nature, location, purpose or use make an effective contribution to military action and whose total or partial destruction, capture or neutralization, in the circumstances ruling at the time, offers a definite military advantage." This definition provides two cumulative criteria. First, an object must make an effective contribution to military action in order to be considered a military objective. Second, its total or partial destruction, capture or neutralization must offer a definite military advantage. Whether an object can be considered a military objective should be assessed on a case-by-case basis in view of the circumstances at the time.57

Civilians are understood as all persons who are not members of the armed forces and are therefore protected under targeting law.58 Civilians are only protected for such time as they do not take a direct part in hostilities.59 The ICRC published the Interpretive

54 Legality of the Threat or Use of Nuclear Weapons (n 50) par. 78. 55 Art. 51(4) AP I.

56 Ekelhof, ‘The Distributed Conduct of War (n 22) 90. 57 ibid 91.

58 Art. 50(2) AP I.

59 Art. 51(3) AP I; art. 13(3) of Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of Non-International Armed Conflicts, opened for signature 8 June 1977, 1125 UNTS 609 [hereinafter AP II].


Guidance on the Notion of Direct Participation in Hostilities, according to which acts constitute direct participation in hostilities when three cumulative criteria are met.60 First, "the act must be likely to adversely affect the military operations or military capacity of a party to an armed conflict or, alternatively, to inflict death, injury, or destruction on persons or objects protected against direct attack."61 Second, "there must be a direct causal link between the act and the harm likely to result either from that act, or from a coordinated military operation of which that act constitutes an integral part."62 Third, "the act must be specifically designed to directly cause the required threshold of harm in support of a party to the conflict and to the detriment of another."63 It is important to note that, in case of doubt, a person shall be presumed to be a civilian.64

Members of the armed forces are "all organized armed forces, groups and units which are under a command responsible to that party for the conduct of its subordinates."65 Furthermore, article 43(2) provides that all members of the armed forces of a party to the conflict are combatants, except medical and religious personnel. Civilians must be distinguished from combatants. Put simply, combatants are those who are members of the armed forces of a State, members of a militia or volunteer group belonging to a State, or members of a levée en masse.66 These persons may be attacked unless they are hors de combat: anyone who has surrendered, has been captured, enjoys a protected status such as religious or medical personnel, or has parachuted from a disabled aircraft.67

3.2.1.1. Application to LAWS

Whether the use of LAWS complies with the principle of distinction is, according to Merel Ekelhof, dependent on three factors: (1) the weapon's capabilities, (2) the circumstances of its use, and (3) its relationship with the human operator (for example, the parameters within which the weapon system is allowed to perform autonomously).68 A weapon system that can only distinguish tanks from other objects and persons will most likely not be able to comply with

60 Nils Melzer, 'Interpretive Guidance on the Notion of Direct Participation in Hostilities under International Humanitarian Law' (ICRC 2009) 58.

61 ibid 47.

62 ibid 51.

63 ibid 58.

64 Art. 50(1) AP I.

65 Art. 43 AP I.

66 Art. 4(2) Geneva Convention (III) Relative to the Treatment of Prisoners of War, 12 August 1949, 75 UNTS 135 [hereinafter GC III].

67 Arts. 41 and 42 AP I.

the distinction principle. This does not, however, mean that the weapon system is unlawful per se, but it would make it unsuitable for many situations.

Most existing technologies and those currently under development are based on narrow AI.69 Their knowledge is thus limited to a single domain. In a dynamic environment, where situations can change quickly, it might therefore be complicated to deploy LAWS. Besides, it is particularly difficult to program AI software that can distinguish between different persons: for example, identifying a civilian taking direct part in hostilities or a person who has clearly expressed his intention to surrender. While this particular distinction is difficult for LAWS, it is important to keep in mind that humans find it just as challenging.70

As for objects, the definition of a military objective depends on a number of factors, as mentioned above. These requirements seem to imply that there needs to be awareness of the plans and the overall military operation.71 This implies that there needs to be some sort of constant update on those elements, making the weapon system not fully autonomous. Another difficulty in relation to distinction is that LAWS must be able to recognize when a legitimate target surrenders.72 It is highly unlikely that today's AI technology is sophisticated enough to autonomously make determinations as to the lawfulness of targeting all persons and objects. But, as Sassòli argues, "I simply do not see any reason of principle why a machine could never become better at fulfilling this task than a human being."73 Yet, as long as the current technology is not sophisticated enough to apply the principle of distinction, human operators must be involved in its application.

3.2.2. The principle of proportionality

Targets are often found in situations fraught with potential collateral damage to civilian objects and civilians. This is inherent to a dynamic environment. It makes the decision-making process highly complex and uncertain, and therefore IHL requires a proportionality assessment. The principle of proportionality is widely considered a norm of customary international law, but it is also codified in AP I, which prohibits:

69 Scharre & Horowitz (n 1) 4.

70 Marco Sassòli, ‘Autonomous Weapons and IHL: Advantages, Questions and Issues’ (2014) 90 International Law Studies, 327.

71 ibid 328.

72 William H Boothby, Weapons and the Law of Armed Conflict (OUP 2016) 233.

73 Sassòli, 'Autonomous Weapons and IHL' (n 70) 328.


Launching an attack which may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated.74

The principle of proportionality is widely regarded as a difficult component of the law of targeting, as the proportionality analysis requires many factors to be taken into account. The principle is the last check that needs to be conducted by a military commander before an operation can be executed.75 Moreover, a commander is obliged to continue to apply and monitor the principle while an attack is ongoing. As soon as an attack fails to comply with the principle, the commander is obliged to cancel or suspend it.76

The principle comprises two different categories: the expected civilian harm and the anticipated military advantage. Civilian harm is, according to the definition, related to incidental loss of civilian life, injury to civilians and damage to civilian objects. Some scholars claim that only direct harm constitutes civilian harm, whereas others claim that indirect effects should also be taken into account.77 This thesis will not go into this discussion, as the precise scope of civilian harm remains unresolved. Nonetheless, it is generally accepted that civilian harm only extends to what can reasonably be expected. The International Criminal Tribunal for the former Yugoslavia (ICTY) defined the legal standard as whether "a reasonably well-informed person in the circumstances of the actual perpetrator, making reasonable use of the information available to him or her, could have expected excessive civilian casualties to result from the attack."78 Hence, the ICTY held that the principle of proportionality does not refer to the actual damage caused. The same standard applies to the determination of military advantage.79 It is usually acknowledged that this category consists of weakening enemy forces or ground gained.80 The advantage must be "substantial and relatively close, and the advantages which are hardly perceptible and those which would only

74 Art. 51(5)(b) AP I and art. 57 AP I.

75 Van den Boogaard (n 10) 14.

76 ibid.

77 Ekelhof, ‘The Distributed Conduct of War’ (n 22) 98.

78 Prosecutor v Stanislav Galic (ICTY, Judgement, 5 December 2003) par. 58.

79 Ekelhof, 'The Distributed Conduct of War' (n 22) 98.

80 Y. Sandoz, C. Swinarski and B. Zimmermann (eds), The Commentary on the Additional Protocols of 8 June 1987 (ICRC 1987) 658, para 2218.


appear in the long term should be disregarded."81 Unfortunately, the exact legal meaning of the factors that have to be taken into account in the proportionality analysis remains unsettled.

3.2.2.1. Application to LAWS

As is clear from the above section, neither the incidental civilian harm (loss of civilian life, injury to civilians and damage to civilian objects) nor the anticipated military advantage can easily be quantified.82 On the other hand, the law does not require perfect awareness of the consequences of an attack.83 The assessment must, however, be reasonable. Hence, the question arises what constitutes a "reasonably well-informed person".84

The proportionality analysis falls into two different types of assessment: (1) a quantitative analysis of collateral damage, and (2) a qualitative analysis balancing the value of the incidental civilian harm against the military advantage anticipated.85 The first assessment can relatively easily be done by LAWS, as it is a mere quantitative analysis. The second assessment, however, poses far more difficulties for LAWS, due to the inevitable subjectivity of these value judgements.86 Consequently, Sassòli raises the question: "But why should a civilian be better protected under the law from incidental effects arising from an attack by one soldier than by another soldier? Why should the soldier's youth, education, values, religion or ethics matter at all? Should not the only consideration be the military advantage to be gained and the incidental effect upon civilians?"87 If the answer to the latter question is affirmative, and the principle of proportionality can be translated into AI software, that could potentially mean that LAWS improve the objectivity of the proportionality analysis.88 With the help of humanitarian experts and the military, indicators and criteria could be identified to make the qualitative judgement more objective. Such suggestions have so far been rejected with the argument that this type of judgement depends on the circumstances of a particular situation and the good faith of the

81 ibid, para 2209.

82 Vincent Boulanin et al., 'Limits on Autonomy in Weapon Systems: Identifying Practical Elements of Human Control' (Report prepared by SIPRI & ICRC, June 2020).

83 Prosecutor v Galic (n 78) par. 58.

84 Ford (n 35) 444.

85 Ekelhof, ‘The Distributed Conduct of War’ (n 22) 99.

86 Bradan T. Thomas, ‘Autonomous Weapon Systems: The Anatomy of Autonomy and the Legality of Lethality’ (2015) 37 Houston Journal of International Law 235, 269.

87 Sassòli, 'Autonomous Weapons and IHL' (n 70) 335.

88 ibid 335.


military commander.89 LAWS, however, need explicit formulas and criteria in order to apply the principle of proportionality.
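This dual structure can be made concrete with a minimal, purely illustrative sketch (all functions, numbers and the weighing factor below are invented and carry no doctrinal weight): the quantitative half reduces to an expected-value estimate that software can compute, while the qualitative half collapses into a weighing parameter that the software cannot derive and a human must supply.

# Hypothetical sketch of the dual proportionality assessment.

def expected_civilian_harm(civilians_nearby: int, p_harm_per_person: float) -> float:
    # Quantitative part: a simple expected-value collateral damage estimate.
    return civilians_nearby * p_harm_per_person

def attack_is_proportionate(expected_harm: float,
                            military_advantage: float,
                            human_set_weight: float) -> bool:
    # Qualitative part: the comparison only works once someone has decided
    # how much anticipated advantage "outweighs" a unit of civilian harm.
    # That weight is precisely the subjective value judgement at issue.
    return expected_harm < human_set_weight * military_advantage

harm = expected_civilian_harm(civilians_nearby=12, p_harm_per_person=0.25)
print(attack_is_proportionate(harm, military_advantage=5.0, human_set_weight=0.4))

The point of the sketch is that the first function is unproblematic for a machine, while everything contested in the debate is hidden inside the choice of human_set_weight.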

Considering the current state of technology and the highly complex value judgements that are part of the proportionality analysis, it seems rather unlikely that in the foreseeable future LAWS could be programmed in a manner that allows them to conduct the proportionality analysis without human intervention. This is not so striking, considering that determining the anticipated military advantage and the expected collateral damage, and weighing these dissimilar values against each other, is difficult for even the most skilled human, let alone an autonomous weapon system. Furthermore, military advantage is determined by looking at the circumstances ruling at the time.90 This determination is therefore a dynamic and contextual analysis, not easily translatable into software language. Hence, Sassòli has argued that LAWS would have to be "constantly updated about military operations and plans" in order to apply the proportionality analysis.91 Ford counters this by stating that LAWS do not necessarily need constant updates: even on the dynamic modern-day battlefield, the military advantage of certain targets, such as the enemy's headquarters, remains fairly static.92

Regardless of who makes the most accurate predictions about future technological capabilities in making the proportionality analysis, it seems most desirable to combine human and AI qualities.93 This is also emphasized in the Convention on Certain Conventional Weapons (CCW) meetings, where one of the core issues has been human-machine interaction. This collaboration between humans on the one hand and machines on the other seems most beneficial in the application of the principle of proportionality, as this principle consists of a dual assessment: qualitative and quantitative.

3.2.3. The principle of precaution

Persons conducting attacks must take feasible precautions to reduce the risk of harm to civilians and other protected persons and objects.94 This requirement is stipulated in article 57(1) AP I, which requires that "in the conduct of military operations, constant care shall be taken to spare

89 Sassòli, 'Autonomous Weapons and IHL' (n 70) 331.

90 Art. 52(2) AP I.

91 Sassòli, 'Autonomous Weapons and IHL' (n 70) 332.

92 Ford (n 35) 446.

93 Ekelhof, 'The Distributed Conduct of War' (n 22) 101.

94 Ford (n 35) 446.


the civilian population, civilians and civilian objects.” Article 57(2)(a) further specifies a threefold obligation before taking the decision to attack:95

(i) do everything feasible to verify that the objectives to be attacked are neither civilians nor civilian objects and are not subject to special protection but are military objectives within the meaning of paragraph 2 of Article 52 and that it is not prohibited by the provisions of the Protocol to attack them;

(ii) take all feasible precautions in the choice of means and methods of attack with a view to avoiding, and in any event to minimizing, incidental loss of civilian life, injury to civilians and damage to civilian objects

(iii) refrain from deciding to launch any attack which may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated;

Additionally, the requirement applies "to those who plan or decide upon an attack."96 This includes the high command at the national level, which decides to deploy a weapon system, and the commander who plans a specific attack, alongside the individuals who carry out the use of the weapon system.97 The deployment of a weapon system is a process involving various military and civilian actors who decide on the development, procurement and deployment of a weapon system. Operational- and tactical-level commanders plan the use of a weapon against a specific target, and personnel under their command carry out the attack. These personnel are also obliged to use their own judgement as to whether an attack can be carried out as planned or whether it should be postponed or cancelled in accordance with Article 57 AP I. The ICRC has noted that the high commander of an army is under the obligation to instruct personnel "adequately so that the latter, even if in low rank, can act correctly in the situations envisaged."98

3.2.3.1. Application to LAWS

By looking at the definition of precaution as stipulated in article 57 AP I, the obligation to take precautionary measures seems to be addressed to specific persons. According to Sassòli, there

95 Ekelhof, 'The Distributed Conduct of War' (n 22) 103.

96 Art. 57(2)(a) AP I.

97 Ford (n 35) 446.


is, however, no legal objection to LAWS, instead of humans, taking precautionary measures, as long as the system has the capability to do so and the commander deploying the LAWS is certain that the system will comply with the obligation to take feasible precautions in attack.99 Moreover, Boothby adds that it does not matter who takes precautionary measures, as long as they are taken.100 As of today, however, it is not certain that machines will in the future be able to take feasible precautionary measures.

Since the principle of precaution is applied continuously during the targeting process, and it is still uncertain whether LAWS will be able to apply the principle, the question arises who should take precautionary measures. The discussion among scholars is not settled. As mentioned in the section on proportionality above, an often-heard argument is that LAWS are not capable of subjective reasoning. It remains unsettled whether such subjective reasoning ought to be part of the targeting principles.

A considerable problem in relation to precautions is the obligation to cancel or suspend an attack if "it becomes apparent that the objective is not a military one or is subject to special protection or that the attack may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated."101 Does this mean that, because LAWS are used and no human identifies the change in circumstances, the rule cannot be violated? According to Sassòli, the obligation for commanders to take precautionary measures in order to make sure the attack is lawful implies that LAWS must be designed in a manner that makes such verification possible.102 Such human override comes with problems, such as the possibility that the enemy disrupts communication between the LAWS and the human operator. Therefore, Sassòli argues that when LAWS are able to make this distinction, they will equally be aware of changes in the situational context and can therefore cancel an attack.103 If it becomes apparent that LAWS are not able to make such a distinction, the deployment of an autonomous weapon will be deemed inconsistent with IHL.
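The override problem described here can be sketched as a minimal, hypothetical monitoring loop (nothing below reflects an actual system; the callbacks and the link flag are invented): the cancellation duty of article 57(2)(b) AP I becomes a recurring check, and a disrupted communications link silently removes the human-override path while the onboard check remains the only safeguard.

# Hypothetical sketch of a supervisory loop with a human-override path.
import time

def run_engagement(target_still_military, human_abort_requested, link_alive,
                   checks: int = 5, interval_s: float = 0.1) -> str:
    for _ in range(checks):
        if not target_still_military():
            return "aborted: objective no longer military"
        if link_alive() and human_abort_requested():
            return "aborted: human override"
        # If the link is down, only the onboard check above can stop the attack.
        time.sleep(interval_s)
    return "engagement completed"

# Toy run: the human tries to abort, but the link has been jammed.
print(run_engagement(target_still_military=lambda: True,
                     human_abort_requested=lambda: True,
                     link_alive=lambda: False))

The printed result, "engagement completed", illustrates Sassòli's point: if verification depends on a link the enemy can disrupt, the system itself must be able to detect the change in circumstances.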

99 Sassòli, 'Autonomous Weapons and IHL' (n 70) 336.

100 Boothby (n 72) 120.

101 Art. 57(2)(b) AP I.

102 Sassòli, 'Autonomous Weapons and IHL' (n 70) 337.

103 ibid.


It can be assumed that LAWS will not be able to make the highly complex value judgements required on any modern battlefield. The question is then whether these kinds of weapon systems can, in such circumstances, be used in accordance with the principle of precaution. Many scholars believe that this is possible, arguing that humans will still control the targeting process to ensure that the LAWS fully comply with the legal requirements.104

3.3. Concluding remarks

This chapter sought to answer the question of the legality and limitations of lethal autonomous weapon systems in relation to targeting law. There seems to be no reason to assume that LAWS are unlawful per se. Whether LAWS may be used in compliance with targeting law must be assessed on a case-by-case basis. The outcome of this assessment does not only depend on the technological capability of the weapon system to comply with the targeting principles (how sophisticated is the system?), but also on the circumstances of its deployment and the role of the human operator of the system.105

Compliance with the principle of distinction will depend largely on the complexity of the environment. While there is little doubt that LAWS can comply with this principle where targets are easily distinguishable, difficulties arise when LAWS operate in dynamic circumstances or in situations where contextual decisions are required.106 In those cases, deploying LAWS would be unlawful. The principle of proportionality remains a potential challenge for the lawful use of LAWS. This is particularly the case when LAWS are deployed in dynamic circumstances, when the weapon system is used for longer periods of time or when the military advantage is likely to change. Finally, in applying the principle of precaution, autonomy creates additional complexities in that the LAWS may not possess the capability to conduct the feasibility analysis.107

104 Jeffrey S Thurnher, 'Means and Methods of the Future: Autonomous Systems', in Ducheine, Schmitt and Osinga (eds), Targeting: The Challenges of Modern Warfare (Asser Press 2016).

105 Ekelhof, 'The Distributed Conduct of War' (n 22) 124.

106 Ford (n 35) 442.


4. MEANINGFUL HUMAN CONTROL

4.1. Introduction

Since 2014, the challenges posed by autonomy in weapon systems have been the focal point of intergovernmental discussions within the framework of the CCW.108 The ICRC, many IHL experts, critics of LAWS and non-governmental organizations share a comparable understanding of the challenges that autonomy in weapon systems poses. An often-heard viewpoint is that, when exploring the limits on LAWS, a fundamental issue that has to be addressed is that of meaningful human control. There seems to be an emerging consensus among States that autonomy in weapon systems cannot be unlimited. Human beings must therefore "retain and exercise responsibility for the use of weapon systems."109 The CCW GGE (Group of Governmental Experts) affirmed that:

“Human-machine interaction, which may take various forms and be implemented at various stages of the life cycle of a weapon, should ensure that the potential use of weapons systems based on emerging technologies in the area of lethal autonomous weapons systems is in compliance with applicable international law, in particular IHL. In determining the quality and extent of human-machine interaction, a range of factors should be considered including the operational context, and the characteristics and capabilities of the weapons system as a whole.”

A central issue of debate remains how humans should retain and exercise responsibility, since the system itself cannot be held responsible. Moreover, the CCW provides little guidance on how human-machine interaction should be understood. Many actors, such as States, NGOs, scholars and experts, have made proposals in which they all recognize that humans need to exercise some form of control over weapons. Ford even describes control over weapon systems as “the essence of a military”.110 However, there is as yet no generally agreed standard on how and when humans should exercise this control in operational contexts.111

108 Boulanin et al. (n 82) v.

109 CCW GGE (Group of Governmental Experts of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed Excessively Injurious or to Have Indiscriminate Effects) ‘Draft Report of the 2019 Session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems’ (Report CCW GGE 21 August 2019) CCW/GGE.1/2019/CRP.1/Rev.2.

110 Ford (n 34) 450.
111 Boulanin et al. (n 82) 2.


Furthermore, there appears to be confusion about the requirement of control: is it in fact required by international law, or is it a policy imperative? A core problem in the deployment of LAWS is that they are triggered by their environment. The user of the weapon system therefore does not know, let alone choose, the specific target, location or timing of the application of force. This unpredictability raises serious concerns about the consequences of deploying LAWS: risks for civilians, ethical concerns about the role of machines in the taking of human life, challenges for military control and, lastly, challenges for compliance with IHL.

This chapter addresses the issues of the current debate as set out above and attempts to shed light on why meaningful human control is needed and what concrete elements of meaningful human control can be identified. It also attempts to shed light on the question of where a human operator should exercise control and, finally, it looks at meaningful human control in relation to the military reality.

4.2. The issue of definition

As described in chapter 3, the use of any type of weapon as a means of warfare during an armed conflict is governed by IHL, notably the targeting principles. Humans are subject to IHL; they are the ones who are responsible for applying the law and who can be held accountable for violations, not the weapon system.112 Rather, the legal requirements must be fulfilled by those who plan, decide upon and carry out attacks.113 For this reason, there seems to be consensus among States, NGOs and academics that weapon systems should be subject to some form of human involvement. This objective has been captured by many names, such as meaningful human control. Whatever the name, these objectives are formulated in response to a shared concern: the removal of human presence from the targeting cycle.114 The ambiguity around the definition of meaningful human control is due to the fact that such definitions are typically not driven by one shared motivation (moral, operational, legal, political). Moreover, the concept is often related to different actors, such as operators, engineers or commanders.

112 Davidson (n 48) 8.
113 ibid.

114 Merel Ekelhof, ‘Moving Beyond Semantics on Autonomous Weapons: Meaningful Human Control in Operation’ (2019) Global Policy 343.


This ambiguity, however, does not prevent States from using the concept to steer global discussions.115 Although there seems to be no single definition of meaningful human control, it is still possible to identify elements that are often mentioned in the discussion of the concept. These elements of meaningful human control will be discussed in the next section.

4.3. The elements of meaningful human control

This section aims to identify the key elements of meaningful human control from a legal perspective. The following elements will be discussed: context, understanding the weapon system, predictability and reliability, understanding the environment, human supervision, accountability and, lastly, ethical considerations.116

4.3.1. The context element

The first key element of meaningful human control is context. For the lawful use of LAWS, IHL requires a context-specific legal judgement before and during the use of force. As explained by the ICRC, all autonomous weapon systems will have some level of unpredictability, since they interact with an unpredictable environment.117 A way to increase meaningful human control is therefore to limit the unpredictability of the environment through operational constraints. For example, to comply with the principle of distinction, one must make an assessment based on knowledge and context, and users must be able to adapt to changing circumstances, as attacking an object whose destruction no longer offers a military advantage is not lawful.118 The requirement to make judgements based on context poses challenges to the deployment of LAWS. The human operator of LAWS does not specifically know the context, such as the surroundings, location and timing, in which the LAWS operates. That is because an autonomous weapon system is programmed in advance. Yet, for LAWS to comply with IHL, the human operator must also take into account factors that may vary between the moment the system is programmed and the moment it selects and engages a target. The context challenge thus raises the question of what controls on the context are required for users of LAWS to safely rely on their planning assumptions.119

115 ibid 344.

116 See also Amanda Eklund, ‘Meaningful Human Control of Autonomous Weapon Systems: Institutional Definitions in the Light of International Humanitarian Law and International Human Rights Law’ (LL.M Thesis Umeå Universitet 2020) 31-39.

117 ICRC Expert Meeting (n 27) 7.
118 Boulanin et al. (n 82) 7.
119 ibid.


4.3.2. Understanding the weapon system element

Many scholars and organisations write about the element of “understanding the weapon system”.120 This element prescribes that the user of LAWS understands the weapon. What exactly a human operator of LAWS must understand is still ambiguous. The ICRC describes that a human operator must understand the capabilities and limitations of the weapon system.121 Article36 describes that the user must understand what the weapon might identify as a target.122

4.3.3. Predictability and reliability element

Predictability and reliability are often-heard elements of meaningful human control and are central to compliance with IHL.123 A LAWS operator or commander needs a high level of confidence that, once the system is activated, it will operate predictably. This demands a high degree of predictability of the environment, of the technical performance and of the interaction between the two.124 Predictability and reliability form more of an overarching element, in the sense that the predictability and reliability of LAWS increase when, for example, the human operator sufficiently understands the weapon system and the context in which it operates. As mentioned above, the ICRC states that all autonomous weapon systems will have some level of unpredictability, since they operate in an unpredictable environment. It is therefore important that all LAWS are verified through testing in a realistic environment.125 According to Neil Davidson, however, predicting the outcome of using LAWS will become increasingly difficult as such systems become highly sophisticated in their functioning.126 This is even more so when LAWS use AI machine learning and adapt their own knowledge and functioning each time they are used. The targeting principles require that users of weapon systems are capable of limiting the effects of the system they use. This is only possible when the user can reasonably foresee how a weapon functions in any possible circumstance and

120 See for example Article36, ‘Killing by Machine: Key Issues for Understanding Meaningful Human Control’ (2015) and Merel Ekelhof, ‘Moving Beyond Semantics on Autonomous Weapons: Meaningful Human Control in Operation’ (2019) Global Policy 343, 344.

121 Statement ICRC, ‘Agenda item 5(a) – An exploration of the potential challenges posed by emerging technologies in the area of lethal autonomous weapon systems to international humanitarian law’ (CCW GGE 25-29 March 2019).

122 Article36, ‘Killing by Machine: Key Issues for Understanding Meaningful Human Control’ (2015).

123 See for example Article36 (n 120), ICRC (n 121) and ICRAC, ‘What Makes Human Control over Weapon Systems “Meaningful”?’ (ICRAC Report submitted to CCW GGE, August 2019); Davidson (n 48) 14.

124 Davidson (n 48) 15.
125 ibid 12.
