Autonomous Weapon Systems under International Humanitarian Law

Name: Annabeth van der Ende
Supervisor: prof. Terry D. Gill
Date: 11 December 2018


Abstract

Autonomous weapon systems (AWS) are weapon systems with autonomy in their critical functions: they can select and attack targets without human intervention. Although fully autonomous weapon systems are not yet in use, rapid technological development has led to a global debate on the lawfulness of these weapon systems. AWS empowered with artificial intelligence (AI) would be capable of autonomously adapting their actions to changing circumstances and unanticipated events. On the one hand, AWS could offer more precision and accuracy than conventional weapon systems. On the other hand, AWS bring risks, as such advanced technology might be too complex for humans to understand. This could make the results and effects of AWS unpredictable. This research analyzes whether the use of AWS in armed conflict can comply with international humanitarian law (IHL), especially the principles of distinction, proportionality and precautions in attack. These principles contain specific obligations and limitations for the use of weapons in armed conflict. This is classical legal research, as it describes the existing law, IHL, and how it should be applied to the use of AWS.

The conclusion is that the principles of distinction, proportionality and precautions in attack require human intervention before an attack is launched, if civilians are likely to be affected by the attack. Firstly, because the distinction between combatants and non-combatants is difficult to make, especially during guerilla warfare, or when civilians directly participate in hostilities and thereby lose their protection. For now, it seems impossible that AWS would be capable of making this distinction. Secondly, the subjective element of the proportionality principle, striking a balance between the expected military advantage and civilian casualties, requires human intervention. Thirdly, both the principle of proportionality and that of precautions in attack require situational awareness and should be taken into consideration during the whole military operation. Although it can be assumed that AWS will be capable of processing and categorizing large amounts of incoming data, making subjective instant decisions when quickly changing circumstances so require seems to be impossible for machines. Yet, this does not mean that AWS are by definition inherently indiscriminate and disproportionate. AWS can be used, as long as there is a human in- or on-the-loop. The necessary level of human control would be decided during the targeting cycle, which is conducted before an attack. During the targeting cycle, the capabilities, predictability and technical performance of weapon systems are analyzed. Thus, means and methods would be chosen that are best suited to achieve the goals, with the necessary level of human control and in compliance with the core principles of international humanitarian law.

Table of contents

1. Introduction
1.1 Methodology
2. Defining Autonomous Weapon Systems
2.1 Autonomous Weapon Systems and Automation in Weapon Systems
2.2 Artificial Intelligence in Autonomous Weapon Systems
3. The Principles of International Humanitarian Law and Autonomous Weapon Systems
3.1 The Principle of Distinction
3.2 The Principle of Proportionality
3.3 The Principle of Precautions in Attack
4. Possible Uses of Autonomous Weapon Systems
4.1 Meaningful Human Control
4.1.1 Predictability of Autonomous Weapon Systems
4.1.2 Possibility for Corrective Action
4.2 The Targeting Cycle
4.3 Possible Uses of Autonomous Weapon Systems
5. Conclusion


1. Introduction

Over the past decades, there has been rapid progress in the development of artificial

intelligence (AI), robotics and machine learning. Examples range from autonomous vacuum cleaners to self-driving cars and underwater and flying robots.1 The military shows an

increasing interest in autonomy of weapon systems, as these weapons might enable greater precision, and may help fulfil the duty of militaries to protect their soldiers.2 Already,

weapons with varying degrees of autonomy are used by some high-tech militaries, including the militaries of the United States, Russia, the United Kingdom, Israel, South Korea and China.3 For instance, in January 2017, the U.S. Department of Defense released a video

showing an autonomous drone swarm of 103 individual robots. Nobody controlled the individual drones; their flight paths were choreographed in real time by an advanced algorithm.4

Moreover, South Korea’s DoDAAM Systems5 manufactures a semi-autonomous combat

robot, capable of detecting targets up to three kilometers away. According to company executives, there are self-imposed restrictions on the robot that require a human to deliver a lethal attack.6 Although fully autonomous weapon systems (AWS) do not yet exist, the rapid

technological development has led to a global debate on the legality and the risks of these weapons. The physicist Stephen Hawking warned of the risks of AI in his speech during the Web Summit Tech Conference: ‘Unless we learn how to prepare for, and avoid the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many.’7 In 2017,

experts warned that AI technology had reached a point where the deployment of AWS would be feasible within years.8

Autonomous weapon systems are defined by the International Committee of the Red Cross (ICRC) as any weapon system with autonomy in its critical functions. That is, a weapon

1 ICRC, ‘Autonomous Weapon Systems: Technical, Military, Legal and Humanitarian Aspects’ (2014) Expert Meeting, 25-27.

2 ICRC, ‘Ethics and Autonomous Weapon Systems: An Ethical Basis for Human Control?’ (2018), 8.
3 Billy Perrigo, ‘A Global Arms Race for Killer Robots Is Transforming the Battlefield’ Time (9 April 2018) <http://time.com/5230567/killer-robots/> accessed 24 August 2018.

4 Ibid, accessed 24 August 2018.

5 DoDAAM Systems Co., LTD. develops and sells electronic weapon training systems for simulation and military industry.

6 Benjamin Haas, ‘’Killer robots’: AI Experts Call for Boycott over Lab at South Korea University’ The Guardian (5 April 2018) < https://www.theguardian.com/technology/2018/apr/05/killer-robots-south-korea-university-boycott-artifical-intelligence-hanwha> accessed 25 August 2018.

7 Stephen Hawking, Speech Web Summit Tech Conference (2017) Lisbon, Portugal.

8 Samuel Gibbs, ‘Elon Musk Leads 116 Experts Calling for Outright Ban of Killer Robots’ The Guardian (20 August 2017) < https://www.theguardian.com/technology/2017/aug/20/elon-musk-killer-robots-experts-outright-ban-lethal-autonomous-weapons-war> accessed 25 August 2018.


system that can select (detect, identify, track) and attack targets without human intervention.9

These weapon systems must be capable of being used in accordance with international humanitarian law (IHL). The rules of IHL seek to impose limits on the destruction and suffering caused by armed conflicts. The principles of distinction, proportionality and precautions in attack, especially, contain specific obligations and limitations for the use of weapons in armed conflict that may be difficult for AWS to comply with.10

Considering the challenges the IHL principles pose for the use of AWS, research and discussion are necessary to determine whether, or to what extent, AWS are capable of complying with the rules of IHL. This leads to the research question:

Can the use of autonomous weapon systems in an armed conflict be in compliance with the principles of distinction, proportionality and precautions in attack, as laid down in

international humanitarian law?

The sub-questions are, respectively: (1) What is the definition of autonomous weapon systems and how do we differentiate them from other weapon systems? (2) What is the definition of artificial intelligence and how is it applied to autonomous weapon systems? (3) What do the principles of distinction, proportionality and precautions in attack entail? (4) What are the main challenges the principles of distinction, proportionality and precautions in attack pose for the use of autonomous weapon systems? (5) How does the targeting cycle include the principles of international humanitarian law and how does it include meaningful human control when using autonomous weapon systems? and (6) What are the possible uses of autonomous weapon systems?

After the introduction, the second chapter describes the definition of AWS. This includes an explanation of the distinction between AWS and automated weapons, and of the use of AI technology. The third chapter studies the three principles of IHL and analyzes the challenges the principles pose when they are applied to the use of AWS. The fourth chapter discusses the concept of ‘meaningful human control’, the targeting cycle and the possible uses of AWS in compliance with the IHL principles. Finally, the fifth chapter comprises the conclusion.

9 ICRC, ‘Views of the International Committee of the Red Cross (ICRC) on Autonomous Weapon Systems’ (2016) Meeting of Experts on Lethal Autonomous Weapon Systems, 1.

10 Jeroen van den Boogaard, ‘Proportionality and Autonomous Weapon Systems’ (2016-2017) Amsterdam Center for International Law Research Paper, 12.


1.1 Methodology

This research is classical legal research, as it describes the existing law, IHL, and how it should be applied to the use of AWS. AWS are not specifically regulated by IHL treaties. However, it is unquestionable that AWS must be capable of being used, and must be used, in accordance with IHL.11 Moreover, IHL does not cover only traditional weapons. According to

article 36 of the Protocol Additional to the Geneva Conventions, relating to the Protection of Victims of International Armed Conflicts (API), new technologies and methods of warfare have to be evaluated in the light of IHL. For this research, the rules laid down in API are leading, as they contain obligations for the conduct of hostilities. The primary rules on the conduct of hostilities under IHL are the rules of distinction, proportionality and precautions in attack, codified respectively in articles 48, 51(2), 51(4), 52(2), 51(5)(b) and 57(1) API. This research is conducted from an internal perspective, recognizing the normative meaning of the IHL principles of distinction, proportionality and precautions in attack. The objective of this research is to prescribe how the rules of IHL should be interpreted when applied to the use of AWS. Thus, the results in the conclusion are prescriptive, as they explain how and in which situations AWS are capable of complying with the rules of IHL.

The ICRC report on the rules of customary IHL applicable in armed conflicts is used for the study of the IHL principles. Customary rules are the rules that apply to all states irrespective of their participation in particular treaties. As laid down in the Statute of the International Court of Justice, customary law consists of the rules that come from general practice, reflecting a legal conviction.12 Furthermore, documentation of the United Nations, books and

articles in scientific and law journals were consulted. These sources were helpful for describing the IHL framework, analyzing AWS, AI, robotics and defining the law of targeting.

11 Neil Davidson, ‘A Legal Perspective: Autonomous Weapon Systems under International Humanitarian Law’ (2017) UNODA Occasional Papers, 7.


2. Defining Autonomous Weapon Systems

The ICRC explains the definition of AWS as any weapon system with autonomy in its critical functions. A weapon system might have a variety of different autonomous functions, for example take-off and landing, navigation, flying or driving, and control of its sensors. These are examples of non-critical functions, as these autonomous functions do not directly determine the ability of the system to independently use force by selecting and attacking a target.13 Autonomy in the critical functions is made possible with the use of AI. This differentiates AWS from other weapon systems, because it is the machine, using its sensors, programming and onboard weapons, that takes on tasks ordinarily carried out by humans.14 In this chapter, the difference between AWS and automated weapons will be clarified, including examples of these weapons. Furthermore, a paragraph focuses on the definition of AI, its application to AWS and the concerns that exist about the use of AWS empowered with AI.

2.1 Autonomous Weapon Systems and Automation in Weapon Systems

AWS are weapon systems that can operate independently and are capable of using their weapons autonomously, by selecting and attacking targets and making an independent decision to attack once they have been activated.15 The term ‘autonomous’ is used by engineers to define systems that ‘can operate without direct human control in dynamic, unstructured, open environments based on feedback information from a variety of sensors’.16 AWS are best understood as ‘being composed of different soft- and hardware elements that work together, including sensors, algorithmic targeting and decision-making mechanisms, and the weapon itself’.17

It is necessary to differentiate autonomous weapons from automated weapons, which have been used on a large scale in militaries for some time now. Automated weapons are designed to fire automatically at a target when detecting predetermined parameters in a limited

environment. Once the parameters are met, the outcome is predictable.18 Therefore, automated

weapons are able to fire at targets when their sensors detect the pre-selected target, whereas

13 Neil Davidson, ‘Characteristics of Autonomous Weapon Systems’ (2015) CCW Meeting of Experts, 3.
14 Ibid, 1.

15 Van den Boogaard (n 10) 5.

16 Article 36, ‘Structuring Debate on Autonomous Weapon Systems’ (2013) Memorandum for Delegates to the Convention on Certain Conventional Weapons (CCW), 1 <http://www.article36.org/wp-content/uploads/2013/11/Autonomous-weapons-memo-for-CCW.pdf>.
17 Ibid.


AWS are able to select their targets independently.19 Automated weapons are primarily

point-defense weapons, such as missile and rocket point-defense weapons. These weapons incorporate a radar to detect incoming projectiles, and a computer-controlled ‘fire-control system’ to aim and fire a weapon. The system selects an incoming projectile, estimates its course, and then fires missiles or bullets to destroy it.20 An example is the Goalkeeper Close-in Weapon System, a ship-based gun system that automatically performs the process from surveillance and detection to destruction.21

As mentioned in the introduction, fully autonomous weapon systems do not exist yet. However, there are semi-autonomous weapon systems, which still require human authorization but can identify targets autonomously. An example is the DoDAAM Systems combat robot, the Super aEgis I and II. This weapon uses various optical, thermal and infrared sensors to select human targets.22 It is an anti-personnel sentry weapon, for use at specific sites, that can select targets autonomously but requires remote authorization from a human operator to attack.23 Furthermore, loitering munitions are highly automated, but still require human interaction. These weapon systems can select and attack targets over a designated area and period, using on-board sensors and pre-programmed target signatures.24 An example is the Harop, a development that is part unmanned aerial vehicle and part missile, in which the entire aircraft becomes an attack weapon upon spotting a target of opportunity. It can loiter in a given area, survey enemy movements, and hunt for critical targets.25 After launch, it autonomously flies over an area to find targets on its radar that fit pre-determined criteria and then unleashes an aerial strike.26 For now, a human is still an integral part of the system of lethal decision-making, and the Harop has an abort-attack capability.27

In short, the distinction between automated weapons and AWS is that automated

19 Van den Boogaard (n 10) 7.

20 ICRC, ‘Autonomous Weapon Systems, Implications of Increasing Autonomy in the Critical Functions of Weapons’ (2016) Expert Meeting, 72.

21 Thales, Goalkeeper Close-in Weapon Systems, < https://www.thalesgroup.com/en/goalkeeper-close-weapon-system> accessed 22 November 2018.

22 DoDAAM, Super aEgis II, <http://www.dodaam.com/eng/sub2/menu2_1_4.php> accessed 29 August 2018.
23 ICRC, ‘AWS, Implications of Increasing Autonomy in the Critical Functions of Weapons’ (n 20) 73.
24 Ibid, 75.

25 Military Factory, ‘IAI Harop’ <https://www.militaryfactory.com/aircraft/detail.asp?aircraft_id=1288> accessed 7 September 2018.

26 Chase Winter, ‘’Killer Robots’: autonomous weapons pose moral dilemma’ DW (14 November 2017) <https://www.dw.com/en/killer-robots-autonomous-weapons-pose-moral-dilemma/a-41342616> accessed 7 September 2018.

27 Israel Aerospace Industries, ‘Harop’ < http://www.iai.co.il/2013/36694-46079-en/Business_Areas_Land.aspx> accessed 7 September 2018.


weapons react to pre-determined criteria, whereas AWS are able to move independently, to handle unexpected events, to set their own goals, and to learn and adapt their functioning.28
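
To make the contrast concrete, the following minimal sketch in Python illustrates the fixed engagement logic of an automated point-defense weapon of the kind described above. All field names and threshold values are hypothetical illustrations, not the logic of any real system: the point is only that the parameters are pre-determined, so the outcome is predictable once they are met.

from dataclasses import dataclass

@dataclass
class Track:
    speed_m_s: float   # measured by the radar
    range_m: float     # distance to the ship
    closing: bool      # projectile is approaching

def automated_engage(track: Track) -> bool:
    # Pre-determined parameters, fixed at design time; the weapon
    # cannot set its own goals or adapt this rule in the field.
    return track.closing and track.speed_m_s > 250.0 and track.range_m < 2000.0

# An incoming projectile meeting all parameters triggers fire.
print(automated_engage(Track(speed_m_s=300.0, range_m=1500.0, closing=True)))  # True

An AWS, by contrast, would have to generate and revise such conditions itself in the light of unexpected events, which is precisely what autonomy in the critical functions consists of.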

2.2 Artificial Intelligence in Autonomous Weapon Systems

In 2017, the Future of Life Institute released a short film showing a future with killer drones powered by AI. The film starts with a representative of a defense contractor advertising an explosive-filled drone with an AI implant that allows it to find, target and kill people.

Subsequently, the drones are shown to fall into the wrong hands and kill innocent people all around the world.29 Although this short film seems over the top, it shows the rapid

technological development of AI and existing concerns of AI-pioneers. As Stephen Hawking puts it: ‘Success in creating effective AI could be the biggest event in the history of our civilization, or the worst’.30 Contrary to these concerns, other experts highlight the benefits of

using AI for AWS, because of the greater precision AWS would offer compared to weapon systems controlled directly by humans.31 This paragraph explains how AI works, in general

and for military uses, and explains the concerns regarding the use of AI for AWS.

In general, the field of AI attempts to understand intelligent entities and strives to build them. AI has produced many significant and impressive products at this early stage in its development, and it is clear that computers with human-level intelligence (or better) would have a huge impact on the future course of civilization.32 AI can be described

as: ‘the formless capacity embedded in software and hardware architecture which enables the latter to exhibit traits that resemble human reasoning, problem-solving, learning, planning, and/or knowledge’.33 Most ongoing AI research is in sectors that are not explicitly military

related. For example, in healthcare, civilian-vehicle manufacture or internet search34 (AI’s

footprints are all over social media, from neural networks learning to tag, to facial recognition making it easier to find friends35). However, according to Kenneth Payne, a researcher in

28 ICRC, ‘Autonomous Weapon Systems, Implications of Increasing Autonomy in the Critical Functions of Weapons’ (n 20) 23.

29 Future of Life Institute, ‘Slaughterbots’ <https://www.youtube.com/watch?v=HipTO_7mUOw> accessed on 7 September 2018.

30 Hawking (n 7).

31 ICRC, ‘Ethics and Autonomous Weapon Systems’ (n 2) 8.

32 Stuart J. Russell, Peter Norvig, Artificial Intelligence: A Modern Approach (3rd edn, Prentice Hall 2009), 1.
33 Regina Surber, ‘Artificial Intelligence: Autonomous Technology (AT), Lethal Autonomous Weapon Systems (LAWS) and Peace Time Threats’ (2018) ICT4Peace Foundation, 4 <https://ict4peace.org/wp-content/uploads/2018/06/2018_RSurber_AI-AT-LAWS-Peace-Time-Threats_final.pdf> accessed on 7 September 2018.

34 Kenneth Payne, ‘Artificial Intelligence: A Revolution in Strategic Affairs?’ (2018) 60 Survival: Global Politics and Strategy, 7-8.

35 Internet Society, ‘Internet Society Global Internet Report: Paths to Our Digital Future’ (2017)

< https://future.internetsociety.org/wp-content/uploads/2017/09/2017-Internet-Society-Global-Internet-Report-Paths-to-Our-Digital-Future.pdf > accessed on 20 September 2018.


strategic studies and AI, the broad capabilities of AI which are under development – the ability to flexibly categorize information and use this as a basis for decision-making – have clear military strategic utility. Payne maintains that in the near future AI systems will allow autonomous decision-making by networked computer systems, enabling extremely rapid sequential action even in uncertain operating environments.36 AWS would be able to quickly

match a current situation with previous scenarios that were uploaded into them, or that they have already faced on previous missions.37 Thus, the use of AI in AWS enables the systems to adapt their actions to changing and unanticipated events, without human intervention.38

The concerns of the ICRC regarding AWS empowered with AI mostly relate to the risks that arise when no human is involved in the process of selecting and attacking a target. The ICRC claims that ‘when AWS selects and attacks targets independently, there is uncertainty as to exactly when, where and/or why the resulting attack will take place. The difference between a human and AWS is that the former involves a human choosing a specific target to be attacked, connecting their legal responsibility to the specific consequences of their actions.’39 Moreover, Payne

thinks that the technological change will force humans out of the picture: ‘it actually will be happening just too quickly for a human to interfere’.40

Much progress has been made by AI systems utilizing ‘deep learning’ methods that model the neural processes of human brains. Already, algorithms are demonstrating an increasing capacity to learn without supervision and to cope with unclear information.41 Although the

development of AI might be rapid, fully autonomous weapon systems have not been developed yet, and AI still has flaws, as other fields where AI is used show. Tesla’s self-driving cars, for example, have been involved in several accidents, including deadly crashes, when humans were not paying attention. The company said its Autopilot feature ‘can keep speed, change lanes and self-park, but requires drivers to keep their eyes on the road and hands on the wheel, in order to be able to take control and avoid accidents.’42 It seems that more progress in AI is

36 Payne (n 34) 9.

37 Van den Boogaard (n 10) 20.
38 Ibid, 5.

39 ICRC, ‘Ethics and Autonomous Weapon Systems’ (n 2) 12.

40 Nieuwsuur, ‘A Bright New World: de Impact van Artificial Intelligence’ (2018)

<https://nos.nl/nieuwsuur/artikel/2252501-a-bright-new-world-de-impact-van-artificial-intelligence.html> accessed 1 October 2018.

41 Payne (n 34) 8.

42 Guardian, ‘Tesla Car that Crashed and Killed Driver was Running on Autopilot, Firm Says’ The Guardian (31 March 2018) < https://www.theguardian.com/technology/2018/mar/31/tesla-car-crash-autopilot-mountain-view> accessed 2 October 2018.


necessary before machines can be used fully autonomously, and that humans must stay alert when using machines empowered with AI.

Although much progress has been made in the field of AI, the idea of ‘killer robots’, as several critics frame it,43 seems to be overstated. Michael Schmitt and Jeffrey Thurner argue in their article ‘‘Out of the Loop’: Autonomous Weapon Systems and the Law of Armed Conflict’ that the idea of ‘killer robots or drones’ is premature, as no such system exists yet.44 However, AWS are on their way. That is why it is necessary to analyze whether it

is possible for AWS to be used in compliance with the IHL principles.

43 Human Rights Watch, ‘Losing Humanity, The Case Against Killer Robots’ (2012), 1.

44 Michael N. Schmitt, Jeffrey S. Thurner, ‘‘Out of the Loop’: Autonomous Weapon Systems and the Law of Armed Conflict’ (2013) 4 Harvard National Security Journal, 237.


3. The Principles of International Humanitarian Law and Autonomous Weapon Systems

In 2014, the first meeting to discuss the questions related to emerging technologies in the area of AWS was organized by the High Contracting Parties to the Convention on Certain Conventional Weapons (CCW) and joined by experts on AWS.45 The session on the legal aspects examined the question of compatibility and

compliance of AWS with international law, in particular the IHL principles of distinction, proportionality and precautions in attack, as well as the Martens Clause and customary law.46

During this meeting, the necessity for any development and use of AWS to be in compliance with IHL was reaffirmed.47

The Martens Clause holds that in cases which are not covered by international agreements, civilians and combatants remain protected by customary IHL, the principles of humanity and the dictates of public conscience.48 The Martens Clause was first inserted in the preamble of the 1899 Hague Convention II, and was restated in the 1907 Hague Convention on the same matter. It is argued that the Martens Clause has been a significant turning point in the history of IHL: it would represent the first time that international legal rules embodying humanitarian concerns were considered just as binding as other international legal rules motivated by other concerns.49 However, it remains ambiguous what exactly is meant by the principles of

humanity and the dictates of public conscience, especially when applied in armed conflicts. When it was established, there were no international courts with compulsory jurisdiction. It was left to each belligerent to decide for itself whether or not it had behaved humanely while attacking the enemy.50 Now, it is argued that the principles of humanity as stated in the

Martens Clause are included in customary IHL.51

Customary law consists of rules that come from a general practice, reflecting a legal conviction, and is accepted as a primary source of legal obligations. Customary IHL is of

45 CCW/MSP/2014/3, 3.
46 Ibid, 4.

47 Ibid, 5.

48 Dieter Fleck, The Handbook of International Humanitarian Law (3rd edn, Oxford University Press 2013), 33.

49 Antonio Cassese, Paola Gaeta, Salvatore Zappalà, The Human Dimension of International Law: Selected Papers of Antonio Cassese (Oxford University Press 2008), 39.

50 Ibid, 40.

51 Theodor Meron, ‘The Martens Clause, Principles of Humanity, and Dictates of Public Conscience’ (2000) 94 The American Journal of International Law 78, 87-88, see also Cassese, Gaeta, Zappalà (n 49) 51, see also Legality of the Threat or Use of Nuclear Weapons, Advisory Opinion, International Court of Justice, paras 78, 84.


importance in armed conflicts, because it binds all parties to an armed conflict.52 Even if states

have not ratified important treaty law, they remain bound by rules of customary law.53

Moreover, the legal framework governing non-international armed conflict is more detailed under customary international law than under treaty law. Since most armed conflicts today are non-international, this is of importance.54

This research focuses on the legality of the use of AWS in the conduct of hostilities, hence the decision to limit its scope to the IHL rules of distinction, proportionality and precautions in attack (which, in the view of most experts, are underlined by customary IHL). First, each principle is explained; subsequently, the main concerns regarding the compliance of the use of AWS with these principles are discussed.

3.1 The Principle of Distinction

The principle of distinction is laid down in article 48 of API and obligates states, in order to ensure respect for and protection of the civilian population and civilian objects, to distinguish between military objectives and civilian objects, between combatants and civilians, and between active combatants and those hors de combat, and to determine whether an attack may be expected to cause incidental civilian casualties and damage to civilian objects.55 Article 41(1) of API prohibits attacks against persons who are hors de combat. A person is hors de combat if he is in the power of an adverse party, he clearly expresses an intention to surrender, or he has been rendered unconscious or incapacitated by wounds or sickness, and is therefore incapable of defending himself.56

Attacks should not be directed against civilian objects or the civilian population. This is laid down in article 52(2) of API: ‘attacks shall be limited strictly to military objectives’. In practice, this means that the commander may need to change means and methods to achieve the desired military purpose in a discriminating way.57 Article 51(2) of API operationalizes the principle with respect to persons: ‘the civilian population as such, as well as individual civilians, shall not be the object of attack’.

Furthermore, Article 51(4) of API prohibits indiscriminate attacks. These include: (a) attacks

52 ICRC, ‘Customary Law’ <https://www.icrc.org/en/war-and-law/treaties-customary-law/customary-law> Accessed 10 September 2018.

53 ICRC, ‘Customary International Humanitarian Law’ (2010) < https://www.icrc.org/en/document/customary-international-humanitarian-law-0> accessed 22 November 2018.

54 Ibid.

55 Fleck (n 48) 36, see also Davidson (n 11) 7, see also Article 48 Additional Protocol to the 1949 Geneva Convention of 1977 (API).

56 Article 41(2)(a)(b)(c) API.


which are not directed at a specific military objective, (b) those which employ a method or means of combat which cannot be directed at a specific military objective, or (c) those which employ a method or means of combat the effects of which cannot be limited as required by the Protocol.58 An example of an attack which would breach article 51(4)(a) of API would be the unaimed, or blind, firing of a rifle. The rule laid down in article 51(4)(b) and (c) of API is breached when, for example, certain biological weapons are used that release diseases the spread of which cannot be controlled.59 Article 52(2) of API states that attacks shall be limited strictly to military objectives. Two tests must be satisfied for an object to become a military objective: (1) it must contribute effectively to military action, and (2) its destruction or capture must offer a definite military advantage.60 The term ‘military advantage’ will be

further discussed in the next paragraph on the principle of proportionality.

In an international armed conflict, the term combatants refers to three types of individuals. First, members of the regular armed forces qualify as combatants.61 Second, members of

militias or volunteer corps that belong to a party to the conflict also qualify as combatants when they: (1) wear a distinctive sign, (2) carry their arms openly, (3) abide by the laws of war and (4) operate under responsible command.62 Third, the category of combatants includes

members of a levée en masse. A levée en masse consists of inhabitants of a non-occupied territory who, on the approach of the enemy, spontaneously take up arms to resist the invading forces, without having had time to form themselves into regular armed units, provided they carry arms openly and respect the laws and customs of war.63 Civilians can be very close to

combat actions. Consequently, the situation on battlefields is blurred and the implementation of the principle of distinction can cause difficulties. During their direct participation in hostilities, civilians lose their protection as civilians and the rules concerning combatants temporarily apply to them, for as long as the participation in hostilities takes place.64 Thus,

civilians lose their protection from attack when they ‘directly participate in hostilities’, for such time as they participate.65 The ICRC published the ‘Interpretive Guidance on the Notion

of Direct Participation in Hostilities under IHL’. It suggested that acts of direct participation

58 Article 51(4)(a)(b)(c) API.
59 Boothby (n 57) 92.
60 Ibid, 100.

61 Article 13 Geneva Convention I of 1949 (GC I); Article 4(a)(1) Geneva Convention III of 1949 (GC III).
62 Article 4(a)(2) GC III.

63 Article 4(a)(6) GC III, see also M.N Schmitt, E. Widmar, The Law of Targeting’ in Frans P. B. Osinga, Michael N. Schmitt, Paul A.L. Ducheine (eds), Targeting the Challenges of Modern Warfare (T.M.C. Asser Press 2016), 125.

64 Fleck (n 48) 130.


must meet the following criteria: ‘(1) the act must be likely to adversely affect the military operations of a party to an armed conflict, (2) there must be a direct causal link between the act and the harm likely to result, or the act must be an integral part of a coordinated military operation that causes the harm, (3) the act must be specifically designed to directly cause the required threshold of harm in support of a party to the conflict and to the detriment of

another’.66

The distinction between combatants and non-combatants is different when applied to an international armed conflict or a non-international armed conflict. This should be taken into consideration, because the distinction in non-international armed conflict is often less clear. The Interpretive Guidance explains that, ‘for the purpose of the principle of distinction, in international armed conflict, all persons who are neither members of the armed forces of a party to the conflict nor participants in a levée en masse are civilians, and, therefore, entitled to protection against direct attack unless and for such time as they take a direct part in hostilities’.67 A non-international armed conflict is a situation of armed violence

between governmental authorities and organized armed groups or between such groups within a state.68 In non-international armed conflict, all persons who are not members of state armed

forces or organized armed groups of a party to the conflict are civilians. Therefore, they are entitled to protection against direct attack, unless and for such time as they take a direct part in hostilities.69 Moreover, in non-international armed conflict, organized armed groups constitute the armed forces of a non-state party to the conflict, and consist only of individuals whose continuous function is to take a direct part in hostilities.70 Recognizing this ‘continuous combat function’ can be challenging. Consequently, in non-international armed conflicts, the distinction between combatants and civilians is often more difficult to make.

It can be assumed that it will be possible to upload identifying factors of military objects into the computer of an AWS, for example the shape, size or maximum speed of armored combat vehicles.71 The Harop can already automatically search, detect and accurately attack mobile or

static targets.72 With the Harop, the operator can specify the geographic region and the targeting criteria, which could be ‘looks like a tank’. The munition then flies around the region until it finds

66 ICRC, ‘Interpretive Guidance on the Notion of Direct Participation in Hostilities under International Humanitarian Law’ (2009), 58.

67 Ibid, 20.
68 Fleck (n 48) 49.

69 ICRC, ‘Interpretive Guidance on the Notion of Direct Participation in Hostilities under IHL’ (n 66) 27.
70 Ibid.

71 Van den Boogaard (n 10) 15.

72 Air Force Technology, ‘Harop Loitering Munitions UCAV System’ < https://www.airforce-technology.com/projects/haroploiteringmuniti/> accessed 6 October 2018.


something that meets the targeting criteria and destroys it.73 As long as the target criteria are

accurately programmed in the machine and the geographic area is specified, it seems possible for AWS to make a distinction between military objectives and civilian objects in some situations. This would mean that AWS are capable of being directed at a specific military objective, thereby complying with the rule in article 51(4) of API, especially when AWS would be deployed with precision guided munitions.74 Precision guided munitions are guided

with sensors to detect the target and guide the munition during flight. The primary goal of a sensor system in a precision guided munition is to improve its effectiveness and reduce collateral damage.75
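
As an illustration of this kind of pre-programmed target selection, the sketch below shows how targeting criteria and a specified geographic area might jointly gate a candidate target. It is a hypothetical sketch, not a description of the Harop’s actual software; the labels, bounding box and confidence threshold are invented for illustration.

from dataclasses import dataclass

@dataclass
class Contact:
    lat: float
    lon: float
    label: str         # e.g. output of an onboard classifier: 'tank', 'bus', ...
    confidence: float  # classifier confidence in [0, 1]

# Hypothetical designated area (a simple bounding box) and target criteria.
AREA = {"lat_min": 32.00, "lat_max": 32.20, "lon_min": 35.00, "lon_max": 35.30}
TARGET_LABELS = {"tank", "armored_vehicle"}
MIN_CONFIDENCE = 0.95

def inside_area(c: Contact) -> bool:
    return (AREA["lat_min"] <= c.lat <= AREA["lat_max"]
            and AREA["lon_min"] <= c.lon <= AREA["lon_max"])

def matches_criteria(c: Contact) -> bool:
    # Both the geographic constraint and the pre-programmed target
    # criteria must hold before the contact is even a candidate.
    return inside_area(c) and c.label in TARGET_LABELS and c.confidence >= MIN_CONFIDENCE

print(matches_criteria(Contact(lat=32.10, lon=35.10, label="tank", confidence=0.97)))  # True

The legal adequacy of such logic stands or falls with the accuracy of the programmed criteria and of the underlying classifier: the code can only be as discriminate as the categories it is given.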

There are complex situations during an armed conflict in which the distinction between combatants and non-combatants is very challenging to make. Yet, this is the same for humans. Especially in guerilla warfare, the counter-insurgent often finds it difficult to

distinguish guerillas from the population at large.76 A distinction can be made between

status-based targeting (targeting members of an armed group) and conduct-based targeting. It seems impossible that AWS would be capable of status-based targeting in guerilla warfare, as the status of individuals is, even for humans, hard to recognize. The ICRC explains, ‘AWS would have to be able to distinguish not only between combatants and civilians, but also between active combatants and those hors de combat, and between civilians taking a direct part in hostilities and armed civilians such as law enforcement personnel, who remain protected against direct attack’.77 Conduct-based targeting might be a possibility. Computers can be

programmed to only fire when they are fired upon, or when a soldier of the adversary party points his gun at a fellow-soldier. Moreover, AWS might be able to recognize, after being programmed, ambulances and medical personnel, because they bear the protective emblem78

or because of their non-aggressive conduct. With conduct-based targeting, AWS would be able to distinguish between a person merely carrying a rifle, which might be legal in a country and meant for self-defense, and persons who actually point their rifle at a soldier. However, conduct-based targeting would mean that AWS must be capable of recognizing behaviors that

73 Nieuwsuur (n 40).

74 Schmitt, Widmar, ‘The Law of Targeting’ (n 63) 135.

75 Sreeja Sulochana, Hari B Hablani, Hemendra Arya, ‘Precision Targeting in Guided Munition Using Infrared Sensor and Millimeter Wave Radar’ (2016) 10 Journal of Applied Remote Sensing, 1.

76 Bernhard L. Brown, ‘The Proportionality Principle in the Humanitarian Law of Warfare: Recent Efforts at Codification’ (1976) 10 Cornell International Law Journal, 140.

77 ICRC, ‘Fully Autonomous Weapon Systems’ (2013) Presentation by Kathleen Lawand.

<https://www.icrc.org/eng/resources/documents/statement/2013/09-03-autonomous-weapons.htm> accessed 7 October 2018.


carry hostile or surrender messages, and of recognizing the incapability to fight when persons are hors de combat.79
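
The conduct-based rules just described could, in principle, be written down as explicit conditions. The sketch below does so with hypothetical observation fields; it also makes visible where the difficulty identified above lies, namely that surrender, protected status and hors de combat must veto any engagement and must therefore be recognized reliably.

from dataclasses import dataclass

@dataclass
class Observation:
    fired_at_us: bool          # person is firing on friendly forces
    weapon_aimed_at_us: bool   # person points a weapon at a soldier
    protective_emblem: bool    # e.g. bears the red cross emblem
    surrendering: bool         # behavior expressing intent to surrender
    hors_de_combat: bool       # wounded or incapacitated, unable to fight

def conduct_based_engage(obs: Observation) -> bool:
    # Protected status, surrender and incapacity veto any engagement,
    # regardless of prior conduct; recognizing these behaviors reliably
    # is the open problem the text identifies.
    if obs.protective_emblem or obs.surrendering or obs.hors_de_combat:
        return False
    # Merely carrying a rifle is not hostile conduct; firing at or
    # aiming at friendly forces is.
    return obs.fired_at_us or obs.weapon_aimed_at_us

The rule itself is trivial; filling the Observation fields correctly from sensor data is where the compliance question is actually decided.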

Furthermore, in this regard, spoofing should be taken into consideration. Spoofing is a method of bypassing some aspect of a system by delivering deceptive messages.80 In the context of the

use of AWS, this could mean that the adversary party paints the red cross emblem on a military bus, in order to trick the AWS into recognizing the bus as ‘not a military objective’. As soon as soldiers carrying weapons jump out of the bus, the AWS might not be capable of correctly categorizing the incoming data, as the situation contradicts its orders.

Arguments in favor of AWS include the characteristic that AWS are – unlike human soldiers – not driven by self-preservation and do not feel any emotions, which can consequently lead to better precision and more respect for the principle of distinction. As quoted by Gordon Johnson, who leads robotic efforts at the Joint Forces Command at the Pentagon: ‘They don’t get hungry. They’re not afraid. They don’t forget their orders. They don’t care if the guy next to them has just been shot.’81 Human soldiers can make mistakes once they are afraid: they may enter a

dark house and fire when they hear something suspicious, which can turn out to be a civilian trying to get out of the house.

The use of AWS does not breach the principle of distinction by definition, as conduct-based targeting would be a possibility. Yet, mistakes may derive from unreliable intelligence.82 The

indiscriminate use of a discriminate weapon is unlawful.83 Michael Schmitt explains in his

article on drones: ‘it is the unreasonable reliance on suspect intelligence, not the platform used to exploit it, which renders the attack unlawful’.84 AWS have to be legally reviewed by a

military, taking into account the sensors of the system, the guidance technology on board, its ability to direct attacks solely at military objectives, and the reliability of the AI used in the weapon system, before it is used in an attack.85 Although the progress of AI in recognizing certain objects and persons is impressive, it seems that AWS – for now – will not be capable of

79 Daniele Amoroso, Guglielmo Tamburrini, ‘The Ethical Case Against Autonomy in Weapons Systems’ (2018) Global Jurist, 6.

80 Hargrave’s Communication Dictionary (2001).

81 Tim Weiner, ‘New Model Army Soldier Rolls Closer to Battle’ New York Times (16 February 2005) <https://www.nytimes.com/2005/02/16/technology/new-model-army-soldier-rolls-closer-to-battle.html> accessed 10 October 2018.

82 Fleck (n 48) 182.

83 Michael N. Schmitt, ‘Drone Attacks under the Jus Ad Bellum and Jus In Bello: Clearing the ‘Fog of Law’ in Michael N. Schmitt, Louise Arimatsu, Tim McCormack (eds) Yearbook of International Humanitarian Law (T.M.C. Asser Press 2011), 8-9.

84 Ibid, 9.


recognizing persons who are taking a direct part in hostilities, are specifically protected, or are hors de combat, at a sufficiently accurate level to comply with the principle of distinction.86

3.2 The Principle of Proportionality

The rule of distinction entails a requirement to conduct a proportionality assessment, if civilians are likely to be affected by an attack. The principle of proportionality is codified in article 51(5)(b) of API. The rule prohibits ‘an attack which may be expected to cause

incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated’.87 According to customary IHL, the term ‘military advantage’ refers to the benefits of a military nature that result from an attack; they relate to the attack as a whole and not only to isolated or specific parts of the attack.88 The word ‘direct’ implies a

connection between the action and the fulfillment of the military purpose.89 Compliance with

the rule of proportionality is assessed based upon the information reasonably available to the attacker at the time the attack was planned. However, those with the ability to control the engagement have a continuing duty to comply with the rule.90 Moreover, attackers must

consider the full range of targets that, if attacked, would yield the same or similar military advantage. When options are available, the attacker has to select the objective that ‘may be expected to cause the least danger to civilian lives and to civilian objects’.91

Concerns about the capability of AWS to implement the principle of proportionality arise because of the complex set of variables involved in the assessment. The proportionality assessment has qualitative and subjective elements, is conducted on a case-by-case basis, and depends on the specific context.92 When planning an attack, the question of whether the collateral damage is excessive in relation to the military advantage is partly subjective. Jeroen van den Boogaard explains in his article on proportionality and AWS that, ‘on the one hand, it leaves the commander with quite some room to maneuver. On the other hand, the term remains vague in the sense that it offers little guidance on the exact point

86 Noel E. Sharkey, ‘The Evitability of Autonomous Robot Warfare’ (2012) 94 International Review of the Red Cross, 789.

87 Article 51(5)(b) API, Article 57 (2)(a)(ii) API.

88 Customary International Humanitarian Law Study, rule 14 < https://ihl-databases.icrc.org/customary-ihl/eng/docs/v2_cou_de_rule14_sectionb> Accessed on 10 September

89 Françoise Hampson, ‘Proportionality and Necessity in the Gulf Conflict’ (1992) Proceedings of the Annual Meeting American Society of International Law, 47.

90 Article 57(2)(b) API, see also Schmitt, Widmar, ‘The Law of Targeting’ (n 63) 140-141.
91 Article 57(3) API, see also Schmitt, Widmar, ‘The Law of Targeting’ (n 63) 139.


where a planned attack is no longer proportionate.’93 For example, when planning to attack a single bridge in a defined area with the purpose of hindering adversary military supplies, the proportionality of the attack depends on the following aspects. Firstly, the military advantage: it is necessary to check whether there are other routes or bridges available to which the adversary can easily switch, reducing the military advantage of destroying that one bridge. Secondly, the anticipated military advantage has to be compared to the expected collateral damage to civilians and civilian objects.94 If the attack is planned for eight o’clock in the morning, when many civilians are expected to be on the bridge on their way to work, it will be disproportionate to bomb the bridge. During the night, when no civilians are around, the attack could be proportionate. However, if destroying the bridge equals destroying vital civilian infrastructure, this should also be considered.95 Yet, in that case the attack would only be disproportionate if the bridge is indispensable to the survival of the civilian population, assuming the attack offers a definite military advantage.96 Noel Sharkey, computer scientist and chair of the International

Committee for Robot Arms Control, describes the subjective element as the ‘hard

proportionality problem’ for AWS. He argues that ‘the decision about what is proportional to direct military advantage is a human qualitative and subjective decision.’ He claims that it is essential that such decisions are made by responsible human commanders, who can weigh the options based on experience and situational awareness.97 Moreover, the International Criminal

Tribunal for the former Yugoslavia (ICTY) explained that ‘in determining whether an attack is proportionate, it is necessary to examine whether a reasonably well-informed person in the circumstances of the actual perpetrator, making reasonable use of the information available to him or her, could have expected excessive civilian casualties to result from the attack.’98 It

seems that the ICTY also recognizes the necessity of human judgment for the proportionality assessment.

Considering the current progress of AI, it might be possible in the near future that AWS would be able to conduct a collateral damage estimation in some situations. The principle of proportionality requires consideration of a commander’s assessment of the expected collateral damage of an attack. The test is based on the expected collateral damage a ‘reasonable

commander’ would have assessed at the time of attack – and not the damage that actually

93 Van den Boogaard (n 10) 20.
94 Ibid, 19.

95 Ibid, 21.

96 Article 52(2) API, Article 54(2) API.
97 Sharkey (n 86) 789-790.


occurred as a result of the attack.99 Collateral damage estimation methodologies are used by

some militaries. They rely on testing and data, analysis of past practice and lessons learned through battle damage assessment.100 This seems applicable to the capabilities of AI in

weapon systems. Consequently, it is foreseeable that a weapon system would also be able to identify moving tanks and categorize them as military objects and lawful targets. However, actually striking a balance between military targets, civilian objects and civilians, and deciding whether an attack is proportionate, seems too ambitious for AI. Considering the subjective element of the proportionality assessment, namely striking the balance between the military advantage and the expected civilian casualties, which has to be done on a case-by-case basis, it seems that the proportionality assessment remains to a large extent a human activity.
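
In outline, the mechanizable part of such a methodology could look like the sketch below, which estimates casualties from tested weapon-effects data and local population density. All radii and density figures are invented for illustration. Note where the sketch stops: it produces an estimate, not the judgment whether that estimate is excessive in relation to the anticipated military advantage.

import math

# Hypothetical weapon-effects radii (meters) of the kind derived from
# testing and battle damage assessment; figures invented for illustration.
EFFECT_RADIUS_M = {"small_bomb": 35.0, "large_bomb": 120.0}

def estimated_civilian_casualties(weapon: str, civilian_density_per_km2: float) -> float:
    # Affected area (km^2) times the local civilian density.
    r_km = EFFECT_RADIUS_M[weapon] / 1000.0
    return math.pi * r_km ** 2 * civilian_density_per_km2

# The estimate can be computed; whether it is 'excessive in relation to
# the concrete and direct military advantage anticipated' cannot.
print(round(estimated_civilian_casualties("large_bomb", 5000.0), 1))  # ~226.2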

3.3 The Principle of Precautions in Attack

The principle of precautions in attack is laid down in article 57(1) of API: ‘in the conduct of military operations, constant care shall be taken to spare the civilian population, civilians and civilian objects’.101 The principle requires that: (1) those who plan or decide

upon attack shall do everything feasible to verify that the objectives to be attacked are neither civilians nor civilian objects and are not subject to special protection but are military

objectives,102 (2) all feasible precautions must be taken to avoid, and in any event to minimize,

incidental loss of civilian life, injury to civilians and damage to civilian objects,103 (3) an

attack should be canceled or suspended if it becomes clear that the target is not a military objective and attacking the target in the circumstances ruling at the time would be disproportionate,104 (4) effective warning shall be given of attacks which may affect the

civilian population, unless circumstances do not permit,105 and (5) when a choice is possible

between several military objectives for obtaining a similar military advantage, the objective selected shall be that the attack on which may be expected to cause the least danger to civilian lives and to civilian objects.106

99 ICRC, ‘Proportionality and Precautions in Attack: The Reverberating Effects of Using Explosive Weapons in Populated Areas’ (2017) International Review of the Red Cross, 121.

100 Ibid, 122.

101 Article 57(1) API.
102 Article 57(2)(a)(i) API.

103 Article 57(2)(a)(ii) API, see also Customary International Humanitarian Law Study, rule 15 <https://ihl-databases.icrc.org/customary-ihl/eng/docs/v1_rul_rule15> accessed 12 September 2018.

104 Article 57(2)(b) API, see also Davidson (n 11) 7.
105 Article 57(2)(c) API.


The first requirement, on the verification of the objectives to be attacked, should be considered during the whole military operation, as constant care should be taken to spare the civilian population, civilians and civilian objects. This means that, for example, if a soldier who is ordered to attack a building in which enemy forces have barricaded themselves becomes aware that civilians are unexpectedly present, the soldier must reassess the proportionality of the attack. This could mean either changing the means or methods of attack to avoid, or in any event minimize, collateral damage, or canceling or postponing the attack.107 For AWS this means that the weapon system should be able to recognize that the characteristics of the initially selected target have changed, and subsequently conduct a proportionality assessment to decide whether it is still a lawful military target. As concluded in the previous paragraph, the

subjective element of the proportionality assessment requires a human in control to ensure compliance with the principle.

The second obligation, to take all ‘feasible’ precautions, has been interpreted by many states as being limited to those precautions which are practicable or practically possible, taking into account all circumstances ruling at the time, including humanitarian and military

considerations.108 These include the obligation for a party to the conflict to do everything

feasible to verify that targets are military objectives,109 and taking precautions in the choice of

means and methods of warfare.110 For example, a commander wants to bomb a building in a

populated area, in which an insurgent leader is hiding. He must consider different means and methods, trying to reduce civilian casualties to a minimum. However, Michael Schmitt and Eric Widmar explain in their article on the law of targeting: ‘an attacker does not have to use a less powerful bomb against an insurgent leader in a building in order to avoid civilian casualties if doing so would significantly lower the likelihood of success (assuming all other requirements are met).’111 The military advantage does not have to be sacrificed, but the

attacker must consider all the reasonable options. Thus, the concept of ‘feasibility’ requires a subjective evaluation of the humanitarian and military considerations to decide which precautions are feasible.

Moreover, when taking feasible precautions in the choice of means and methods of warfare during an attack, if there are other assets reasonably available to verify or attack a

107 Schmitt, Widmar, ‘The Law of Targeting’ (n 63) 140-141.

108 Customary International Humanitarian Law Study, rule 15 < https://ihl-databases.icrc.org/customary-ihl/eng/docs/v1_rul_rule15 accessed 8 October 2018.

109 Customary International Humanitarian Law Study, rule 16 < https://ihl-databases.icrc.org/customary-ihl/eng/docs/v1_rul_rule16> accessed 8 October 2018.

110 Customary International Humanitarian Law Study, rule 17 < https://ihl-databases.icrc.org/customary-ihl/eng/docs/v1_rul_rule17> accessed 8 October 2018.


target, these must be resorted to if doing so would improve the verification or the attack of the target.112 It will be the commander who decides whether the circumstances of an attack require the use of an AWS. When, for example, in guerilla warfare the distinction between military targets and civilians is blurred, circumstances are changing quickly, the insurgent leader is moving, and it is plausible that many civilians are present in the designated area, it might not be lawful to launch an AWS. Consequently, another weapon system has to be used.

The third requirement entails that an attack should be canceled or suspended if it becomes clear that the target is not a military objective and attacking the target in the circumstances ruling at the time would be disproportionate. Important for this requirement is the constant monitoring of all incoming data where the AWS would be operating. Considering the progress of AI, AWS would be able to process large amounts of data.113 However, it is questionable whether AWS will be capable of making the decision to abort the mission or to change the course of action, when changing circumstances so require, by processing and categorizing the incoming data.

Regarding the fourth requirement, to issue an effective advance warning, it is accepted that the need for surprise in certain attacks is a circumstance which may preclude the issuance of warnings.114 Considering the assumed precision of AWS and their ability to be directed at lawful military targets and not civilians, an advance warning might not even be necessary.

The fifth requirement is to choose to attack the military objective which is expected to cause the least danger to civilian lives and to civilian objects. It might be possible that AWS could predict to a certain extent the effects of an attack on a military object, and thereby foresee whether civilian objects would be damaged. However, as concluded in the paragraph on the principle of distinction, AWS are probably not capable of recognizing all protected persons and civilians in all circumstances. Consequently, it is necessary that a human makes sure that the military objective chosen is the one expected to cause the least danger to civilian lives.

In conclusion, the principle of precautions in attack, and especially the concept of ‘feasibility’, requires contextual evaluations, similar to the proportionality assessment, which are unlikely to be adequately performed by a fully autonomous weapon system.115 However, the commander who decides to launch

112 Schmitt, ‘Drone Attacks under the Jus Ad Bellum and Jus In Bello’ (n 83) 12.

113 Boothby (n 57) 286.
114 Boothby (n 57) 12.


an AWS will take precautions in attack himself. Thus, the principle requires human intervention to make sure the civilian population and civilian objects are protected.

This chapter explained the rules on the conduct of hostilities and the main concerns of applying these rules to AWS. The lawfulness of AWS under IHL differs depending on the actual facts and the concrete employment in the field.116 The next chapter discusses the

concept of meaningful human control when using AWS while applying the law of targeting.


4. Possible Uses of Autonomous Weapon Systems

The previous chapter showed that AWS are not necessarily inherently indiscriminate and their use is not necessarily incompatible with the proportionality principle, as long as the proportionality assessment is conducted by a human when there is a risk that civilians are affected by the attack. The principle of precautions in attack also requires human intervention when using AWS, to ensure the protection of the civilian population. The necessity of meaningful human control is posed by many critics as a main argument against AWS. Peter Asaro, who researches the legal and ethical dimensions of military robotics, argues that an implicit requirement for human judgement can be found in IHL, as it was designed to govern the conduct of humans.117 Linked to the discussion on meaningful human control is the distinction between human-in-the-loop and human-on-the-loop. Human-in-the-loop describes the capability of a machine to take some action, but then stop and wait for a human to take a positive action before it continues.118 Human-on-the-loop means that a human has supervisory control and only intervenes when he/she wants to stop a machine's operation. The phrase human-out-of-the-loop is sometimes used to describe AWS, because it would mean that the machine will take action and the human cannot intervene.119
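To make the three control modes concrete, the following is a minimal sketch, not drawn from any fielded system or from the sources cited here; the mode names and the operator_approves/operator_vetoes callables are hypothetical stand-ins for whatever interface a real system would give its human operator.

```python
from enum import Enum, auto

class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = auto()      # machine must wait for positive human action
    HUMAN_ON_THE_LOOP = auto()      # machine proceeds unless a supervisor vetoes
    HUMAN_OUT_OF_THE_LOOP = auto()  # machine acts; no human intervention possible

def may_engage(mode, operator_approves, operator_vetoes):
    """Decide whether an engagement may proceed under a given control mode.

    operator_approves and operator_vetoes are callables returning True or
    False; they model the human decision point in each mode."""
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        return operator_approves()       # positive human action required first
    if mode is ControlMode.HUMAN_ON_THE_LOOP:
        return not operator_vetoes()     # proceeds by default; a veto stops it
    return True                          # out of the loop: no human gate at all

# Example: an on-the-loop system proceeds because the supervisor does not veto.
print(may_engage(ControlMode.HUMAN_ON_THE_LOOP,
                 operator_approves=lambda: False,
                 operator_vetoes=lambda: False))  # True
```

The sketch makes visible why the out-of-the-loop branch is the contested one: it is the only path on which no human decision is consulted before the engagement proceeds.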

Arguments supporting meaningful human control often neglect the targeting cycle which is conducted before an attack (explained below), and the potentially useful capabilities of AWS during an armed conflict. Pablo Kalmanovitz explains in his article on the lawfulness of AWS that the decision to deploy AWS is made depending on the situation: 'Sensors that are perfectly capable of detecting certain weapons could be presented as a reasonable basis for distinction under some circumstances, but not in others. It seems unlikely that in counter-insurgency scenarios algorithms could perform autonomously and in keeping with the ICRC criteria of 'direct participation of hostilities''.120 This chapter analyzes the possible uses of AWS while complying with the IHL principles. Firstly, the concept of meaningful human control is discussed. Secondly, the targeting cycle is analyzed in order to explain the possible uses of AWS and to illustrate situations in which AWS can be used.

117 Peter Asaro, 'On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making' (2012) International Review of the Red Cross 94, 700.

118 ICRC, 'AWS: Implications of Increasing Autonomy in the Critical Functions of Weapons' (n 20) 53.

119 Ibid.

120 Pablo Kalmanovitz, 'Judgment, Liability and the Risks of Riskless Warfare' in Nehal Bhuta, Susanne Beck, Robin Geiß, Hin-Yan Liu, Claus Kreß (eds), Autonomous Weapon Systems: Law, Ethics, Policy (Cambridge University Press 2016), 153.


4.1 Meaningful Human Control

‘Meaningful human control over individual attacks’ is a phrase posed by several critics and organizations to express the core element that is challenged by the movement towards AWS: the autonomy of weapons in their critical functions.121 For human control to be meaningful, the organization Article 36 considers predictable technology and the application of human judgment in the use of force to be key elements.122 Paul Scharre, director of the 20YY Future of Warfare Initiative, narrowed the areas through which meaningful human control is enacted down to two components:

1. The ability of the human operator to accurately predict the behavior of the AWS in the environment in which it is being used.

2. The ability of the human operator to undertake corrective action when the AWS fails to behave in accordance with the human operator’s intentions.123

These two aspects will be briefly discussed. It is established that a fully autonomous system is never entirely human-free: an operator would have to program it to function pursuant to specified parameters, and a commander decides to employ it during an attack.124

4.1.1 Predictability of Autonomous Weapon Systems

Predictability applied to AWS consists of knowledge of how the weapon system will function in any given circumstances of use, and of the effects that will result.125 Sufficient testing is necessary in order to predict the functioning of AWS. According to Kalmanovitz, the testing should provide the basis for defining a probability distribution over the set of outcomes in a deployment scenario: ‘for each possible outcome, commanders should be able to estimate the expected impact on civilians.’126 To accurately predict the course of action of weapon systems, knowledge of the technical performance characteristics is required. These include speed, range and endurance.127

121 ICRC, ‘AWS: Implications of Increasing Autonomy in the Critical Functions of Weapons’ (n 20) 46; see also Article 36, ‘Discussion Paper: Autonomous Weapon Systems: Evaluating the Capacity for ‘Meaningful Human Control’’ (2017) Weapon Review Processes, 1.

122 ICRC, ‘AWS: Implications of Increasing Autonomy in the Critical Functions of Weapons’ (n 20) 46; see also Article 36, ‘Evaluating the Capacity for ‘Meaningful Human Control’’ (n 121) 4.

123 Paul Scharre, ‘Autonomous Weapons and Operational Risk’ (2016) Center for a New American Security, 8.

124 Schmitt and Thurnher, ‘Out of the Loop’ (n 44) 235.

125 ICRC, ‘AWS: Implications of Increasing Autonomy in the Critical Functions of Weapons’ (n 20) 9.

126 Kalmanovitz (n 120) 156.

127 F. S. Timson, ‘Measurement of Technical Performance in Weapon System Development Programs: A Subjective Probability Approach’ (The RAND Corporation 1968) Advanced Research Projects Agency, 12.


For a commander to be confident that he has taken all reasonable steps to avoid creating excessive risks to civilians, he would have to trust that sufficient tests had been done on the AWS.128
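Kalmanovitz’s point can be restated as a simple expected-value calculation over test-derived outcomes. The sketch below is only an illustration of that arithmetic; the outcome labels, probabilities and impact figures are invented, and a real estimate would rest on the weapon-specific test data he describes.

```python
# Hypothetical test-derived distribution over deployment outcomes:
# each outcome maps to (probability, estimated civilian impact).
outcomes = {
    "target hit, no civilians present": (0.80, 0),
    "target hit, civilians nearby":     (0.15, 3),
    "misidentification":                (0.05, 10),
}

# Sanity check: the probabilities must form a full distribution.
assert abs(sum(p for p, _ in outcomes.values()) - 1.0) < 1e-9

# Expected civilian impact: the probability-weighted sum of impacts.
expected_impact = sum(p * impact for p, impact in outcomes.values())
print(f"Expected civilian impact: {expected_impact:.2f}")  # 0.95
```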

AWS are autonomous in their critical functions and are therefore non-deterministic to various degrees.129 Yet, assuming that the operational parameters would be accurately pre-programmed and the geographic area and time would be limited, the effects of AWS could be predictable to a certain extent. However, this would mainly be the case in situations where the principles of distinction, proportionality and precautions in attack are not challenged or are straightforward. For example, an autonomous loitering munition could be programmed to loiter above a defined part of a battlefield for two hours and to search for and attack enemy tanks. It may not be predictable exactly at what time and at which precise location the attack will take place. Nevertheless, considering the strict parameters, the commander can reasonably assume that an attack would be lawful. Furthermore, the use of AWS would possibly be lawful in an air exclusion zone, assuming that the area is clearly identified at the international level and the airspace is strictly forbidden to all air traffic, civilian or military.130
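The loitering-munition example turns on strictly bounded parameters. As a rough sketch, and assuming purely hypothetical names (OperationalEnvelope, within_envelope) rather than any real system’s interface, such constraints might be encoded as a pre-programmed envelope that every candidate engagement must satisfy.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OperationalEnvelope:
    """Pre-programmed limits within which the system may engage at all."""
    area: tuple          # (lat_min, lat_max, lon_min, lon_max)
    start_time: float    # mission clock, seconds
    end_time: float
    permitted_target_class: str  # e.g. "enemy_tank"

def within_envelope(env, lat, lon, t, target_class):
    """A candidate engagement is allowed only inside all parameter bounds."""
    lat_min, lat_max, lon_min, lon_max = env.area
    return (lat_min <= lat <= lat_max
            and lon_min <= lon <= lon_max
            and env.start_time <= t <= env.end_time
            and target_class == env.permitted_target_class)

# Example: a two-hour loiter over a demarcated battlefield sector.
env = OperationalEnvelope(area=(51.00, 51.05, 4.30, 4.40),
                          start_time=0.0, end_time=7200.0,
                          permitted_target_class="enemy_tank")
print(within_envelope(env, 51.02, 4.35, 3600.0, "enemy_tank"))  # True
print(within_envelope(env, 51.02, 4.35, 9000.0, "enemy_tank"))  # False: outside time window
```

The design point is that the envelope makes the system’s behavior predictable in the aggregate even though the exact time and place of an attack within it are not known in advance.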

Gérard de Boisboissel, professor at the Saint-Cyr Military Academy Research Centre in France, suggests that an option for future air combat could be the combination of an unmanned aerial vehicle designator, used as an AWS, which can indicate where the target lies, with an unmanned combat aerial vehicle, also used as an AWS, which can launch short- or long-range missiles and has the capacity to recognize targets.131 According to him, the assumed flexibility and adaptability of AWS and the combination of one or more target designators with one or more missile launchers can cover a large air exclusion zone.132

In contrast, the principles would be challenged when AWS are used in battlefields where civilians could be present. Boisboissel gives an example in his article on possible uses of AWS.133 He illustrates a situation where a friendly unit installs a campsite in a hostile environment away from urban zones, and AWS would be used to secure the bivouacs.134 For instance, the AWS could be similar to the Super aEgis I and II, with sensors that select human targets based on conduct-based targeting.

128 Kalmanovitz (n 120) 155-156.

129 Ibid, 156.

130 Gérard de Boisboissel, ‘Uses of Lethal Autonomous Weapon Systems’ (2015) International Conference on Military Technologies, 4.

131 Ibid.

132 Ibid.

133 Ibid, 5.

134 Ibid.


According to Boisboissel, AWS would secure a bivouac with more efficient sensors than human ones, and would be highly responsive in case of an attack.135 However, problems will arise when a person carrying a weapon enters the site with no hostile intentions, but feels threatened by the AWS and fires at it. The use of AWS is too much of a risk in this case. It seems that limiting the operational parameters, like the geographic area, is not always sufficient if there is a risk that civilians or other protected persons will be present in the specified area. This would mean that the predictability of AWS is most accurate when they are used for an attack where the principle of distinction is not challenged at all. For example, the demilitarized zone between North Korea and South Korea is guarded by the SGR-A1 Sentry Guard Robot. The area where the weapon system is used is demarcated with an integrated series of obstacles including minefields and barbed wire, is not accessible to non-combatants and is prohibited to enter.136 Even though the system cannot make a distinction between civilians and combatants, this geographic limitation significantly minimizes the risk of civilian casualties.137

The complexity of AWS is posed as a critical factor for predictability: complexity could make it more difficult for the operator to predict the course of action of the system.138 However, using a complex weapon will lead to stricter restrictions on, for instance, the size of the area of operation or the type of targets it may attack.139

During armed conflict, it is not possible for a commander to always be 100% certain about the predictability and the effects of an attack. Because AWS have some level of autonomy in selecting and attacking a target, a question which should be considered is whether it will be sufficient if the outcome of using AWS matches the performance of humans acting without such technology.140 Christof Heyns, former UN Special Rapporteur on extrajudicial, summary or arbitrary executions, argues that it seems clear that we generally hold technology to which we entrust human life to a higher standard, and that ‘a mistake involving machine autonomy is more likely to cause public outrage than pure human error’.141 This view seems to support the need for meaningful human control over the use of AWS during attack.

135 De Boisboissel (n 130) 5.

136 Global Security, ‘Samsung Techwin SGR-A1 Sentry Guard Robot’ <https://www.globalsecurity.org/military/world/rok/sgr-a1.htm> accessed 30 November 2018.

137 Merel Ekelhof, ‘Autonome Wapens, een verkenning van het concept Meaningful Human Control’ [‘Autonomous Weapons: An Exploration of the Concept of Meaningful Human Control’] (2015) Militaire Spectator 5, 244.

138 Scharre (n 123) 11.

139 Mark Roorda, ‘NATO’s Targeting Process: Ensuring Human Control Over (and Lawful Use of) Autonomous Weapons’ in A. Williams, P. Scharre (eds), NATO Headquarters Supreme Allied Commander Transformation Publication on Autonomous Systems, 166.

140 Christof Heyns, ‘Human Rights and the Use of Autonomous Weapon Systems (AWS) During Domestic Law Enforcement’ (2016) Human Rights Quarterly, 376-377.


4.1.2 Possibility for Corrective Action

The possibility for corrective action requires that AWS are constructed in such a way that it is possible for a human to change the course of action or to abort the operation. This means that a human-in-the-loop or a human-on-the-loop is necessary to ensure the possibility of corrective action. However, as mentioned in the paragraph on AI, this might not be an option for all AWS. Payne thinks AI systems will allow autonomous decision-making by networked computer systems and foresees that technological change will force humans out of the loop.142 This should be considered before using AWS for an attack.

The two aspects posed as necessary for meaningful human control are covered by the IHL principles. A commander considers the predictability of a weapon system in the proportionality assessment, as he needs to strike a balance between the loss of civilians and civilian objects and the military advantage. He therefore needs to know, to a certain extent, what the effects and results of using a weapon system will be. Furthermore, taking precautions in attack often entails the possibility for corrective action when this is necessary. In the next paragraph, the targeting cycle is discussed, and it is explained how the two aspects of meaningful human control and the IHL principles are included in the targeting cycle.

4.2 The Targeting Cycle

Whether a weapon system will operate within the constraints of IHL will depend on the technical performance of the weapon system, especially its predictability. Furthermore, it will depend on operational parameters. These include: the task the weapon system is assigned, the type of target the weapon system may attack, the type of force and munition it employs, the environment in which the weapon system is to operate, and the timeframe of its operation.143 These aspects, the technical performance and the operational parameters, are both included in the targeting process. Targeting is the process of selecting and prioritizing targets and matching the appropriate response to them, considering operational requirements and capabilities.144

The process contains the following six steps in the methodology of the North Atlantic Treaty Organization (NATO); a schematic sketch of how these steps could constrain weapon selection follows the list:

1. Formulation of the commander’s intent, objectives and guidance;
2. Target development, nomination and prioritization;
3. Capabilities analysis;
4. Commander’s decision and force assignment;
5. Mission planning and force execution;
6. Combat assessment.
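Step 3, capabilities analysis, is where the argument of this chapter lands in practice: the weapon chosen must fit both the target and the required level of human control. The following toy filter is not drawn from any NATO document; the types, names and the single eligibility rule are invented to illustrate how predictability and control mode could constrain weapon selection when civilians are at risk.

```python
from dataclasses import dataclass

@dataclass
class WeaponOption:
    name: str
    control_mode: str   # "in-the-loop", "on-the-loop" or "out-of-the-loop"
    predictable: bool   # judgment based on testing and technical performance

def capabilities_analysis(options, civilians_at_risk):
    """Return the first eligible weapon, applying a crude version of the
    thesis's conclusion: unpredictable systems are never eligible, and if
    civilians may be affected a human must be in or on the loop."""
    for weapon in options:
        if not weapon.predictable:
            continue
        if civilians_at_risk and weapon.control_mode == "out-of-the-loop":
            continue
        return weapon
    return None  # no eligible option: the attack must be re-planned

options = [
    WeaponOption("fully autonomous loitering munition", "out-of-the-loop", True),
    WeaponOption("supervised AWS", "on-the-loop", True),
]
print(capabilities_analysis(options, civilians_at_risk=True).name)   # supervised AWS
print(capabilities_analysis(options, civilians_at_risk=False).name)  # fully autonomous loitering munition
```

The sketch drastically simplifies the legal assessment, but it shows how the targeting cycle can front-load human judgment: the commander’s choices at steps 1-4 determine whether any autonomy at all is delegated at step 5.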

142 Nieuwsuur (n 40).

143 Davidson (n 11) 13.

144 Philippe R. Pratzner, ‘The Current Targeting Process’, in Frans P. B. Osinga, Michael N. Schmitt, Paul A.L. Ducheine (eds), Targeting: the Challenges of Modern Warfare (T.M.C. Asser Press 2016), 79.
