
Drones, Killer Robots, and Just War Theory:

Placing drones within a Just War Theory framework

2019

Radboud University Nijmegen

Master's thesis – Sem van Maanen, s4469364

Thesis supervisor: dr. R.B.J. Tinnevelt.


Abstract

Drones are a relatively new development that fundamentally alters the nature of warfare. As the first non-human 'combatants', they require us to ask how they fit within a Just War Theory framework. This thesis sets out the core issues raised by both remotely-piloted drones and lethal autonomous weapons, under both Ius ad bellum and Ius in bello. We find that in practice drones, despite their potential to better adhere to the principles of Ius in bello, create complex situations in which different ethical principles have to be weighed against each other.


Index

1. Introduction
1.1 Drones and the Army
1.2 On Terminology
2. Just War Theory and Moral Challenges
2.1 A Brief Introduction to Just War Theory
2.2 The Challenges and How We Intend to Face Them
2.3 Lethal Autonomous Weapons
3. Conceiving of Drones Within Just War Theory
3.1 Targeted Killings
3.2 Reasonable Chance of Success
3.3 The Argument of Autonomy
3.4 Reciprocity and Radical Asymmetry
3.5 The Obligation to Deploy Drones
3.6 The Threshold of War
3.7 Emotional Detachment
4. Conceiving of Lethal Autonomous Weapons
4.1 Obedience and Rule Adherence
4.2 Responsibility
4.3 Emotional Detachment
5. Conclusion
Reflection
Concluding Remarks
References


1. Introduction

War is an evolving discipline. More so than many other facets of government, it is dependent on technological developments. In the last decade or so, drone warfare has become a staple of modern warfare. It is a relatively new phenomenon. Typically, 'drone' refers to any unmanned, remotely-piloted aircraft. More recently the word also conjures the image of recreational or delivery drones, designed and produced for civilian use. Yet it is in the military that drones have played their largest and most influential role, and it is in this role that they raise ethical questions.

These weapons can strike anywhere without exposing a human pilot to the risks of the battlefield. Nor does a drone need sleep: it can stay in the air for up to 14 hours at a time (U.S. Air Force, 2006). The US military in particular has taken a liking to these unmanned aircraft and has been at the forefront of drone development. The most famous of these are the Predator and the Reaper, whose names alone are a good indicator of their projected role in the US armed forces. The MQ-1 Predator was initially conceived for reconnaissance, yet later in development it settled into a role more befitting of its name when it was equipped with AGM-114 Hellfire missiles. The MQ-1 was succeeded by the MQ-9 Reaper, capable of carrying 15 times more payload than its predecessor. All this represents a clear move away from the roles unmanned aerial vehicles were originally developed for: intelligence gathering, surveillance, and reconnaissance. Instead, drones have moved towards further weaponization and support roles in the military (ibid).

1.1 Drones and the Army

Increasingly, drones have become the primary weapon of the US arsenal, and other militaries are consolidating their drone projects as well. In 2014 only five countries had developed armed drones: the United States, the United Kingdom, Israel, China, and Iran (Zenko & Kreps, 2014). As of 2019 that number has risen dramatically to 28. This shows the direction in which military technology is moving globally.

Drones have not just become more ubiquitous; they have also become significantly more valuable in conflicts. In fact, in the U.S. Air Force, drones are now used for air strikes more often than piloted aircraft. Under the Obama administration the number of drone strikes rose sharply to 563, compared to only 57 under his predecessor (Bureau of Investigative Journalism, 2017). As of 2019, more than three times as many strikes are carried out each year. In Afghanistan, for instance, drones have carried out a reported 1,968 strikes, during which an estimated 713 to 1,010 people were killed (Bureau of Investigative Journalism, 2019). By contrast, the American air force carried out only 738 Close Air Support sorties in which weapons were fired. In other words, more than 70% of all strikes carried out in Afghanistan in 2018 were carried out by drones. In Pakistan the USA has carried out strikes exclusively with drones (although at a much lower frequency than in Afghanistan), and the same goes for Yemen. The vast majority of American weapons fired from the sky are fired by drones.
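To make the arithmetic behind the 70% figure explicit, a quick back-of-the-envelope check (a minimal sketch in Python, using only the strike counts cited above):

```python
# Afghanistan figures cited above (Bureau of Investigative Journalism, 2019)
drone_strikes = 1968        # strikes carried out by drones
manned_cas_sorties = 738    # manned Close Air Support sorties with weapons fired

drone_share = drone_strikes / (drone_strikes + manned_cas_sorties)
print(f"{drone_share:.1%}")  # -> 72.7%, i.e. more than 70% of all strikes
```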

Despite only recently entering the spotlight, unmanned aerial vehicles are older than one may think. In the early 1930s the British navy experimented with radio-controlled pilotless aircraft. In fact, 'drones' are sometimes held to owe their name to one of these early experimental aircraft, which was named the 'Queen Bee' ('drone' also being a term used to refer to male bees). Nevertheless, it is only recently that we have seen the widespread usage of drones in a military context. Over the last few decades there has been a significant increase in the usage of drone technology by armed forces. Notably, the US armed forces have extensively deployed unmanned vehicles, not just for reconnaissance but for ground support as well. Increasingly, drones are integrated within the military sphere.

Although drones represent a significant technological innovation, there are parallels to be drawn with older military roles. It is easy to compare drone operators with, for instance, officers operating long-range artillery. After all, both artillerymen and drone operators make use of a long-range weapon while remaining relatively safe from the frontlines themselves. As Williams (2015) notes, there is a pronounced spatial dimension to autonomous weapon systems. Commonly the operator of a drone finds himself tucked away safely in a military base while his drone is several miles away, high up in the sky. This spatial dimension is perhaps what most defines the uniqueness of drones: waging war from afar. The comparison with artillerymen may be fitting, but it is unsatisfying to dismiss the implications of drone warfare this way.

Rather, we must accept that there are moral implications that come with this new field. For instance, a drone operator can hardly claim to act out of self-defence when he himself is not at threat of suffering physical harm, because unlike a soldier on the ground, a drone operator may be miles away from the frontline. Of course, this also applies to the aforementioned artillery example, but more than ever drones have the potential to act not just as a weapon but as a 'proxy' for their operator; a tool with its own eyes and ears. A drone allows force to be exerted over much longer distances and with much greater precision than before. In addition, an artillery operator could still be said to be present on the battlefield, whereas these days a drone operator could wage war comfortably from a desk in the Pentagon in Washington, D.C., or even as a 'stay-at-home operator'. This technology is a new development, and as such raises the question of how we should account for it theoretically.

In addition to remotely-piloted drones, new developments which present even greater moral challenges are already on the horizon: ominously called 'Killer Robots', these are autonomous drones that can fire their weapons independently of an operator, according to their programming and the directives given to them. In such a case there is no operator to whom responsibility for the drone's actions can be ascribed, nor can the spatial dimension be said to be the main defining feature anymore. What we are then dealing with is an artificial 'combatant' built for war. 'Killer Robots', sometimes referred to by the less normatively charged term lethal autonomous weapon, may not yet be a part of our arsenals, but it is not an unlikely assumption that they will be in the near future. This is a unique and unprecedented situation in which terms such as 'combatant' no longer apply exclusively to human beings.

This is where the theoretical relevance lies. I will be evaluating drones from the perspective of Just War Theory. The challenge here lies in applying a theory that assumes human combatants to a situation in which machines have adopted an equally important role. I will attempt to create a comprehensive overview of the questions pertaining to drones that we might ask ourselves from the perspective of Just War Theory. Up front I must confess that it is not my intent to provide comprehensive and exhaustive answers to all these questions. Many facets of Just War Theory, when applied to drones, create complex situations which cannot easily be settled. Therefore, my intent is to lay the groundwork for further reflection on the subject and to provide suggestions on how these questions might be approached.

The broader relevance of this topic to society at large should also be made clear. If governments are to decide what weapons to deploy in war, this can only be done responsibly if those weapons are properly understood. Understanding these weapons is just as much a moral and philosophical matter as it is a technical one. By situating drones within the framework of Just War Theory, some contribution is made towards a more comprehensive understanding of the implications of these kinds of weapons.

Our first chapter will go into more detail about the theoretical challenges. This entails an introduction to the core premises of Just War Theory, after which we will elaborate on where exactly the theoretical challenges arise. The second chapter will be dedicated to those problems and questions applicable to drones in general, both remotely-piloted and autonomous. Then, in the third chapter, we focus on autonomous drones specifically and the unique questions they raise. Finally, a conclusion will be presented.

1.2 On Terminology

Before we can further discuss the topic, we have to define our terms.

Historically, 'drone' has referred to any unmanned aerial vehicle and can be used interchangeably with 'UAV', or Unmanned Aerial Vehicle. For purposes of consistency, this paper will exclusively use the term 'drone', except when quoting or citing other authors, whose terminology will be displayed unabridged.

Secondly, drones can be subdivided into two categories: 1) those under the remote control of a human operator, and 2) those acting autonomously. The former, remotely-controlled drones, are sometimes referred to as 'RPA', standing for remotely piloted aircraft; as such, this is also the term we will be using. For the second category of drones, sometimes referred to as 'Killer Robots', the acronym 'LAW', or lethal autonomous weapon, will be used. Alternatively, they may be referred to as autonomous drones. Depending on the context, both varieties may simply be called 'drones' for the purpose of brevity, when this does not detract from clarity.


2. Just War Theory and Moral Challenges

Aware of the complications and moral challenges of lethal autonomous weapons, the EU parliament has already called for a preventive global ban on 'Killer Robots' (Reuters, 2018). It is a fitting example of the gut reaction many of us may feel when confronted by a machine that takes lives: one of wariness. Many cases can be made both against and in favour of the usage of drones, and many have done so in the past. However, up until this point much of the discussion on drones has been limited to their use; commonly cited criticisms include the high number of civilian casualties in drone strikes. Comparatively little attention has been given to how drones fit within the larger theoretical framework of Just War Theory. Rather, the focus seems to be on the role of drones in so-called 'targeted killings' (Freiberger, 2013; Van der Linden, 2015; Walzer, 2016). While this is a highly relevant topic, the implications are far less grand. After all, the dubious ethical nature of assassination is not something unique to drones. The more fundamental question, and in my opinion also the more interesting one, is how drones present unique challenges to Just War Theory. Both remotely-controlled drones and autonomous drones differ from human combatants significantly, in a way that calls us to reflect on their relation to Just War Theory. Technologically, both varieties of drone are part of the same evolutionary process: the increased integration of machines in our armed forces and the increasing distance between humans and the battlefield. From a theoretical perspective too, we see an evolutionary process. The drone has gone beyond being a mere 'intermediary' of force that acts as a proxy for a soldier somewhere else. Instead, drones are on the verge of being 'emancipated' and becoming combatants in their own right. We are witnessing the dawn of drone warfare, not just as a side-note of conventional modern warfare but potentially as a separate discipline, with its own rules, conventions, and conceptions of combatant status.

The idea of a 'Just War' is an old one that goes back centuries. Philosophers and thinkers generally agree that war is such a destructive and reprehensible thing that its existence requires moral justification. Yet at the same time they also agree (absolute pacifists notwithstanding) that it can in fact be justified in some cases. This is Ius ad bellum: the principles that decide when starting a war is just. Just as before starting a war, what happens within war is also governed by rules in the Just War tradition. This is called Ius in bello: how wars are to be fought.

However, these principles were conceived when wars were still primarily fought by humans. Historically, humans have held a monopoly on acts of war. At most a soldier might have ridden a horse into battle, but it was always a human who was expected to pull the trigger. Truthfully, even two decades ago some of the developments we are now witnessing could not have been predicted, except by the extraordinarily prescient and a few over-imaginative science-fiction writers. Military technology has advanced to the point where humans are no longer the only soldiers carrying weapons in the field. Mobile and remote-controlled weapon platforms such as the MQ-9 Reaper now perform the majority of air strikes in the military theatres in which the US is engaged (Council on Foreign Relations, 2013).

In these cases a human operator still pulls the proverbial trigger, but the point is approaching where drones may be put into service which will decide for themselves whether to open fire or not. Within Just War Theory, Michael Walzer enjoys a preeminent position, similar to how John Rawls is often the central point of reference in discussions relating to justice. Of course, when Michael Walzer wrote Just and Unjust Wars in 1977, he could hardly have imagined that a new type of combatant would emerge. A fundamental transformation has occurred: whereas once wars were fought by men with weapons, wars are now increasingly being fought by the weapons themselves. With the likelihood of such 'Killer Robots' being included in military arsenals in the near future, the question of their moral implications becomes ever more pressing.

2.1 A Brief Introduction to Just War Theory

Before further inroads can be made on the subject of drones and how they relate to Just War Theory, it is sensible to briefly recap some of the theory's core principles. Just War Theory is a wide school of thought that deals with the fundamental question of when wars are just.

Just War Theory is composed of three core aspects: Ius ad bellum, Ius in bello, and Ius post bellum. Ius ad bellum refers to those principles that determine whether it is justifiable to go to war in the first place (Walzer, 1977). Secondly, Ius in bello refers to the principles governing conduct within war, such as who is a legitimate target and who is not (ibid). Finally, Ius post bellum is a relatively recent addition that refers to how matters should be settled after the war is over.

For our discussion of drones we will limit ourselves to the principles of Ius in bello and Ius ad bellum. Ius in bello, because drones are first and foremost weapons which participate in war, and as such are subject to Ius in bello constraints. Secondly, drone technology does not exist in a vacuum and has very real implications for Ius ad bellum; Sauer and Schörnig (2012) have noted this, observing how the physical distance drones create may reduce the threshold of going to war, challenging principles of Ius ad bellum. Ius post bellum is omitted, as drones are weapons first, with little role left for them in peacetime. This is not to say that drones might not be employed efficiently for surveillance or policing, or that Amazon's delivery drones have the potential to revolutionise post-war reconstruction, but we will emphasise the military variety of drones first and foremost.

In Just and Unjust Wars Walzer establishes one of the core principles of Ius in bello: the moral equality between soldiers. This moral equality means that, though one soldier may fight for a just cause and another for an unjust one, no distinction should be made between them. Both are soldiers who, presumably, had no part in deciding whether or not their country went to war. As such, both have the same rights and duties to respectively enjoy and obey during war. This means soldiers on both sides are allowed to use lethal force and to kill in war. We do not condemn an individual German soldier's killing of a British soldier and excuse the reverse. Rather, as moral equals, this 'right' applies to both of them. When a soldier shoots another soldier, we do not consider this murder, nor would we if the other soldier shot first. Neither man is a criminal. Both are liable to kill and be killed. As Walzer notes: "by and large we don't blame a soldier, even a general, who fights for his own government"; rather, the guilt falls upon their leaders (1977, p. 39).

According to Walzer, we must go forth on the assumption that both participants believe themselves to be in the right (p. 128). Not only that, but in the course of war a soldier will almost inevitably find himself faced with an instance where he must take an enemy’s life to preserve his own.

A second important principle of Just War Theory is non-combatant immunity (Walzer, 1977, pp. 138-159). This principle stipulates that civilians are never legitimate targets. Some nuances exist that reconcile this principle with military reality. For one, Walzer argues that a military factory is a legitimate target, even if it is manned by civilians. What permits this is that these factory workers have become "partially assimilated" to the soldier class (p. 146). In addition, the principle of double effect permits acts of war which foreseeably endanger the lives of civilians. For Walzer, the criteria for this are that the act itself is an otherwise legitimate act of war and that its direct effect is morally acceptable. In addition, the intention of the soldier must be 'good', in that he only aims at the acceptable effect. Finally, this acceptable or good effect must balance out against the bad; it must thus be proportional (ibid).

Enemark (2013, pp. 41-52) mentions three core principles which in essence summarise the Ius in bello aspect of Just War Theory: necessity, discrimination, and proportionality. Necessity means that actions undertaken in war must benefit the larger objective (for otherwise it would be mindless slaughter). Discrimination refers back to the principle of non-combatant immunity, which may not be violated. Finally, proportionality means that the harm done by a military act may not outweigh the good gained from it.

Then there remains Ius ad bellum. Just as with the former, the latter too can be summarised by some of its core positions. Brian Orend (2000) notes the following elements which compose Ius ad bellum: just cause, right intention, last resort, probability of success, and proportionality. Some of these are less relevant than others for our case. Regardless, I will attempt to sketch a complete yet succinct overview of the field.

Just cause entails that a just war can only be fought in pursuit of a just objective. This need hardly be a controversial statement. For Walzer, however, what constitutes a 'just cause' is decidedly limited. As Orend observes, the main 'just cause' is a war to resist aggression (2000, p. 526), and in some cases humanitarian intervention. Additionally, in limited cases one may intervene to prevent the most egregious violations of human rights or to assist in a people's struggle for secession (Walzer, 1977, pp. 86-108). Just cause is closely related to the idea of right intention. Even if a legitimate casus belli is present, a war can hardly be just if it is fought with selfish motivations. An example raised by Orend is the Gulf War (p. 532). Assuming we are willing to accept that defending Kuwait against Iraqi aggression was a just cause, the motivations of the US for doing so remain unaddressed. The US might, for instance, decide to go to war not out of genuine concern for Kuwait's sovereignty, but rather in order to secure its own oil supply. Regardless of whether this is true or not, it illustrates the necessity of right intention.

Last resort entails that war can only be an option when all other reasonable means have been exhausted. More relevant to our discussion (we will see later why) is the probability of success. A war can only be a just war if it has a reasonable chance of succeeding. If it were a doomed endeavour beforehand, the war, despite its noble intentions, would be nothing but senseless slaughter from which nothing good can come. Therefore, when beginning a war, one must have a reasonable chance of reaching the just objective one seeks to achieve. Finally, there is the requirement of proportionality, which is comparable to the Ius in bello principle of proportionality. In Ius ad bellum, proportionality holds that a state considering a just war must weigh the expected universal benefits of doing so against the expected universal costs (Orend, 2000, p. 536).

This recap should provide at least a basic understanding of Ius in bello and Ius ad bellum in traditional Just War Theory, though there are many nuances to these points which will come up as they are further discussed in the context of drones.

2.2 The Challenges and How We Intend to Face Them

The principles of Just War Theory are universal principles that apply to any war and any combatant, at least in theory. Yet we find that Just War Theory understandably emphasises human combatants. Drones, on the other hand, are a new development that Just War Theory did not account for. That is not an indictment of the theory, but rather a recognition of how present-day realities require us to revisit some of Just War Theory's principles. Simply put, they will have to be 'translated' in a way that allows us to apply them to drones as well. Why is this relevant? What is it that makes drones so different as to dedicate a whole discussion to them? It is because drones, as well as lethal autonomous weapons, differ significantly from human combatants in various ways.

While a rifleman is marching towards the frontlines, at risk of death, the drone flies overhead with its operator comfortably away from the battlefield. Drone technology adds an element of spatial distance, allowing operators and commanders to fight on the battlefield without physically being present. In the case of lethal autonomous weapons, the distinction is even more pronounced. While we might still regard a remotely-piloted drone as a weapon, the autonomous drone might be called a non-human combatant. It is capable of acting of its own accord (in compliance with its programming and its orders), without being controlled by a human.

Both remotely-piloted drones and lethal autonomous weapons generate questions which might be raised within the context of Just War Theory. Perhaps it can be said that if remotely-piloted drones represent one step towards the mechanisation of war, then the lethal autonomous weapon represents a giant leap: a more radical continuation of the same principle. First, we will emphasise those themes applicable to both types of drones. Afterwards, we will focus exclusively on those themes unique to lethal autonomous weapons. Though at various points we will discuss minor arguments relevant to our case, our discussion will centre on several themes.

Firstly, we will briefly reflect upon the topic of targeted killings carried out by drones, as much discussion seems centred on this (Council on Foreign Relations, 2013; Center for Civilians in Conflict, 2012; Himes, 2016; Walzer, 2016). Of more importance are the points raised by Dewyn (2019), who discusses how drone technology might influence the reasonable chance of success criterion of Ius ad bellum. The 'birds-eye view' of drones, coupled with their cutting-edge technology, might allow for greater precision, which in turn leads to a greater chance of success. We must ask if this is the case.

A second point, raised by Enemark (2013) and Williams (2015), is reciprocity. One of the key advantages drones possess is how they allow for battlefield participation without exposure to physical harm. In the case of the remotely-piloted drone, they allow a human operator to project force over great distances and thus isolate the operator from any risks. In the case of the lethal autonomous weapon, no risk of harm exists either: though the autonomous drone may be destroyed, it cannot experience harm as a human can. Reciprocity holds that in war, both sides must be liable to be killed or harmed. Drones negate this principle, and we must ask whether this fundamentally changes the morality of war, as Enemark and Williams argue.

Afterwards, we relate this to the argument of Strawser (2010), who argues for precisely this reason that it is more ethical to deploy drones: they isolate our soldiers from physical risks that technology has rendered unnecessary. In fact, he goes so far as to say that we have a moral obligation to deploy drones when possible, as a way to preserve the lives of soldiers.

The penultimate question raised is how drone technology might in fact endanger Ius ad bellum principles. Perhaps the way in which drones allow us to wage war with relative ease and safety might make it more tempting to cross the threshold of war.

As a final point, we will discuss what effects drone technology might have on the emotional engagement of drone operators. Does the spatial distance created by remotely-piloted drones translate into emotional distance? If so, is this ethically questionable; does it turn drone operators cold-blooded? Alternatively, is emotional distance perhaps a desirable trait that allows operators to act in a more impartial way? We will discuss both the normative and the descriptive component of this question. Note that here we will focus exclusively on remotely-piloted drones; lethal autonomous weapons will be revisited later, for they form a separate case.

With these questions addressed, we will possess a comprehensive overview of drones, and can thus move forward to discuss the questions lethal autonomous weapons raise.

2.3 Lethal Autonomous Weapons

In addition to the remotely-piloted drone with its human operator, there remains the equally, if not more, contentious case of the lethal autonomous weapon: the drone without an operator. There are many ways we can approach these autonomous drones, and many questions to be raised.

They are fundamentally distinct from human-controlled drones due to their exclusively mechanical nature. The former acts as a 'proxy' of sorts, acting on behalf of a human operator with a human mind. But the lethal autonomous weapon has no such human overlord. While it still takes orders and instructions from superiors, it does so in a way not unlike a human soldier: by interpreting an order and pursuing it to the best of its ability in whatever way it deems most efficient or expedient. We cannot, however, regard it as a combatant as we would a human. Though we might be tempted to call it a non-human combatant, it might best be seen as a tool of war, albeit one of a highly sophisticated nature. It is not a combatant in the sense that it does not have the rights we would associate with a combatant: a lethal autonomous weapon cannot surrender, nor is it entitled to humane treatment. Yet it is capable of acting autonomously, and for this reason should still be considered distinct from other tools of war.

There are surely reasons why we may be pessimistic about lethal autonomous weapons, if for nothing else then at least because this is entirely new ground yet to be tread upon. As such, the field comes with a certain unpredictability. That is perhaps the paradoxical nature of the lethal autonomous weapon: its mechanical mind and the absolute logical thinking of computers ought to make it predictable to a fault, yet with new technologies always come new uncertainties, and so it is with lethal autonomous weapons. Yet despite whatever concerns we may legitimately have, equally legitimate causes for optimism are also present.

One notable advantageous trait they possess lies in the fact that, as computers, they can be programmed with unbreakable moral guidelines. This presents an opportunity to aspire to a more 'ethical' form of war. Perhaps war can never be truly ethical, but surely a war in which all the principles of Ius in bello are respected is more ethical than a war in which this is not the case? The large-scale introduction of lethal autonomous weapons might then very well be a desirable development for this reason. A drone does not panic under fire, nor is it otherwise influenced by the stress of the moment. It presents us with the possibility of a more perfect adherence to Ius in bello and the military rules of engagement.

Yet it would be intellectually dishonest not to remain sceptical and reflect upon the downsides of this. Notably, a drone ultimately lacks that elusive and treasured 'common sense'. To a machine, concepts such as 'proportionality' can only be understood in terms of numbers. For a drone to assess whether projected collateral damage would be offset by military necessity, it can do little but crunch said numbers. It would make these utilitarian calculations without any understanding of the value of human life. A (human) commander, at least in theory, can be expected to make calculations that take into account, and understand, the necessity of protecting non-combatants. A machine, however, even if it is programmed to do so, does not truly understand the importance of this. Vague terms such as 'proportionality' and what is 'reasonable' are already difficult for humans to deal with, and would be impossible to understand for an entity that acts purely on pre-programmed directives. We will discuss how to go about this, and what options lie before us.
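To make concrete what such numerical 'crunching' might look like, consider a minimal sketch of a purely utilitarian proportionality check (in Python; the function, threshold, and estimates are entirely hypothetical illustrations, not a description of any real targeting system):

```python
def proportionality_check(military_value: float,
                          expected_civilian_harm: float,
                          threshold: float = 1.0) -> bool:
    """Naive utilitarian test: approve an action only if estimated military
    value outweighs expected civilian harm by some fixed ratio. Both inputs
    are bare numbers; nothing in the calculation 'understands' what a
    civilian life is."""
    if expected_civilian_harm == 0:
        return True  # no projected collateral damage
    return (military_value / expected_civilian_harm) > threshold

# Hypothetical usage: the machine 'weighs' proportionality as a ratio.
print(proportionality_check(military_value=8.0, expected_civilian_harm=3.0))  # True
```

Where those input estimates would come from, and whether the value of a human life can be captured by a ratio at all, is precisely the problem identified above.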

An even more pressing matter is that of responsibility. Even if we expect that lethal autonomous weapons will have a much better track record at respecting Ius in bello, it seems delusional to imagine that the chance of a mistake is non-existent. In those cases, we will need someone to hold responsible for the mistake. Yet how to do this with an autonomous, if still ultimately unthinking, machine? As Sparrow (2007) notes, it is practically impossible to determine who ought to bear this responsibility when dealing with autonomous robots. If we speak of a just war, we must speak of justice in war, and to do so requires us to be able to attribute blame for unjust acts. In the case of the remotely-piloted drone this seems straightforward, but the autonomy of the lethal autonomous drone complicates the matter entirely. We will discuss the alternatives that have been raised, how to approach the matter of responsibility, and whether or not this means we should abstain from deploying lethal autonomous weapons.

Finally, we will revisit the case of emotional detachment as it pertains specifically to lethal autonomous weapons. Whereas for drone operators we might ask how their emotional distance is affected, in the instance of lethal autonomous weapons no such emotions exist to begin with. Do we regard this as a problem, and how does this relate to previously raised points? Together these questions will help us establish a clear window through which to view lethal autonomous weapons and to determine which problems can be easily solved and which are more persistent. In doing so, we will have created a more comprehensive overview of autonomous drones within a Just War Theory framework. Paired with the broader discussion of drones in general, this means the main questions surrounding drones in a Just War Theory context will have been addressed.


3. Conceiving of Drones Within Just War Theory

The work of the drone operator is a curious one. The distance from which they operate sets them apart from their peers, to the point where said colleagues hardly consider them part of the same military apparatus. In fact, veteran groups have opposed efforts to institute certain medals for drone operators, believing it to be unfair to those who actually served on the frontlines (Reuters, 2016). Clearly, drone operators find themselves in a world separate from their colleagues. Though an anecdotal case of little relevance to our wider discussion of Just War Theory, it stands as an indication of some of the sentiments towards drone operators: that they lack the military virtue associated with 'regular' soldiers.

In the previous chapter we gave a cursory glimpse of the issues that indeed set drones, drone operators, and lethal autonomous weapons apart from their peers. More than just differences, these also raise distinct opportunities and concerns, with implications for Just War Theory. We will now go into those issues in more detail.

3.1 Targeted Killings

Let us first begin by addressing what might very well be regarded as the proverbial elephant in the room: targeted killings. It is indeed true that a vast share of drone deployments takes place in the context of targeted killings (Council on Foreign Relations, 2013). The role of drones in these killings seems to be a recurring theme in the literature combining drones and Just War Theory (Center for Civilians in Conflict, 2012; Himes, 2016). Michael Walzer (2016) also writes on the questionable ethics of these killings.

Walzer observes how the rules of war have been relaxed in the context of targeted killings. Drones allow targeted killings to be carried out much more easily than other weapons would, due to the lack of danger to their operators. Yet Walzer notes that targeted killings are not necessarily a violation of Ius in bello. He makes the comparison with a military sniper assassinating an enemy commander (2016, p. 13), and argues that this is not a violation of Ius in bello. He does add that only military leaders, and thus not political leaders, are legitimate targets. Additionally, targeting has to be "undertaken with great care" (ibid, p. 14).

However, it is highly questionable whether targeting is indeed done with great care. Take for instance the problem of so-called 'signature strikes': forms of targeted killing where the target is eliminated without knowing the target's precise identity. Rather, they are eliminated because the individual matches the general profile of an insurgent (Center for Civilians in Conflict, 2012, p. 8). This carries a tremendous risk of killing non-combatants, not just in the form of collateral damage, but because it is not confirmed whether the target itself is a combatant. This is in itself problematic, and undermines the Ius in bello principle of discrimination.

While targeted killings present an interesting case worth delving into, I do not intend to say much more about it. A discussion on drones and Just War Theory would be remiss in omitting the case of targeted killings entirely, hence its inclusion here, but as an issue it stands separate from the problems I seek to address. For one, targeted killings are not a phenomenon unique to drones; as Walzer's example of the sniper illustrates, they can be carried out in a myriad of ways. The reverse is also true: drones are not a phenomenon unique to targeted killings. The equipment of drones allows them to be deployed in other capacities. Therefore, I will emphasise those issues particular and unique to drones.


3.2 Reasonable Chance of Success

As we remarked in the first chapter, the 'reasonable chance of success' is a vital element of Ius ad bellum, as Orend (2000) and Dewyn (2019) note. Even in pursuit of a cause that is just, there needs to be a reasonable chance of succeeding. After all, if a war is fought with no chance of achieving the just cause for which it was started, then such a war would be nothing but senseless, destructive bloodshed.

Michaël Dewyn (ibid) asks whether using drone weaponry indeed increases the chances of success in a war, specifically in counter-insurgency. The question is applicable to a broader scope of scenarios, however, not just counter-insurgency. Conventional wars, too, can be revolutionised by the changes drone technology introduces.

From a Ius ad bellum perspective we can argue that if a technology increases the 'reasonable chance of success', then that technology assists in meeting Ius ad bellum requirements. It can therefore be regarded as a positive development that hypothetically enables us to fight better for a just cause. Of course, it merely enables us to fight better regardless of whether the cause is just; other Ius ad bellum principles are arguably more important in this regard. At the same time, the obvious fact should be stated that this principle of 'reasonable chance of success' does not justify weapons that expedite victory at such a cost to human dignity that we deem them to violate Ius in bello.

Over the course of military history many weapons have been developed which provide one side a substantial advantage over the other. But what is so revolutionary about drones is how they allow one side to fight a war while not experiencing risk, as Enemark (2013) and Williams (2015) have observed. Though this Ius ad bellum principle of reasonable chance of success is worth discussing in its own right, it is particularly interesting because it provides us with a point of departure for further discussions on drones. It allows us to ask why and whether drones indeed make war more expedient, and what the implications for Just War Theory are.

According to Dewyn, drones seem to offer serious advantages at first glance. As he notes, drones not only reduce the exposure to physical danger for their own side, but their use might also reduce the number of non-combatants mistakenly targeted (2019, pp. 118-119). This would reduce casualties for both sides. He remains critical, however, citing collateral damage caused by drones which undermines this argument somewhat.

Brunstetter and Braun (2011) share this scepticism. They speak of the 'drone myth' (p. 339): the belief that technologically advanced drones increase the probability of success while decreasing both the risk to our soldiers and collateral damage (p. 346). (Remotely piloted) drones remain fallible, bound to the limitations of their human operators. Dewyn also recounts how in some cases the ubiquitous deployment of drones (in a reconnaissance and intelligence-gathering capacity as well) can lead to an overload of information (2019, p. 121). Under the pressure of so many accounts and reports, drone operators will not be able to find the information relevant to their decision whether to open fire. In addition, the operator is distanced far from the battlefield and may lack situational awareness (Brunstetter and Braun, 2011, p. 347). Theoretically, lethal autonomous weapons might be better capable of dealing with such information and avoiding this mistake, although Dewyn and Brunstetter and Braun do not mention this. Dewyn also elaborates how drones have introduced a distance between US forces and the local population (2019, p. 123). Whereas soldiers on the ground are capable of communicating the purpose of their presence to the local population, drones are not. As a result, locals are sceptical of the US' motives and might even regard collateral damage as deliberate, notes Dewyn. Ultimately, this creates support for the enemy and undermines military objectives.

The points Dewyn raises are particular to counter-insurgency operations, but nevertheless provide food for thought. There are many reasons to doubt whether drones indeed make success more likely, given the complicating factors they introduce. At the very least, we might hold that the way they are used right now, within the context of counter-insurgency, is at best questionable.

3.3 The Argument of Autonomy

Williams (2015) elaborates upon what he deems the principle of autonomy. Note that 'autonomy' in this context has a different meaning than 'autonomy' as in lethal autonomous weapon. With this, Williams refers to the 'autonomy' not of the drone or its operator, but of the target of said drone. He defines this autonomy along the lines of individual rights, holding that a drone's target, even if it is a legitimate target, retains several fundamental rights (p. 103). According to Williams, drones pose a threat to said rights and 'autonomy'. In conventional military situations a combatant is free to withdraw from battle or surrender at any time. Yet as Williams notes, you cannot surrender to a Reaper drone (ibid). For Williams, then, the target's autonomy is "fundamentally compromised" through drones.

Of course, drones are not the only military weapon which denies the target a chance to surrender. To return once again to the comparison with artillerymen, it is equally impossible to surrender to an artillery barrage. The same goes for long-range missiles or any airplane. Williams recognises and concedes this point. What distinguishes these technologies from drones, however, is that drones claim a kind of discriminatory precision these other weapons do not. This can be traced back to the 'drone myth' raised by Brunstetter and Braun (2011). Williams calls this discriminatory precision the "intimacy of drones". The target is "…targeted as an autonomous individual— a specific person—yet is denied the last resort of individual autonomy in warfare: the chance to surrender… Such distance makes warfare seem too clinical or cold-hearted." (2015, p. 103). The capacity of drones to gather information and observe their target is fundamental to this intimacy, so he argues.

I find this a peculiar argument, however. Drones certainly possess a kind of 'intimacy' compared to other weapons, yet it seems odd that this would make them more immoral. In normal situations we would applaud a more precise weapon capable of information gathering. The irony is that if we follow Williams' argument, we would be acting more justly by using less precise weaponry acting on less information. It seems rather counter-intuitive that killing someone based on limited information is more moral than killing someone based on extensive information. Similarly, within the justice system we would object to condemning a man to life in prison on the basis of little to no evidence, even if he were guilty. Especially from a rights-based approach we should encourage the gathering of information for precisely this reason. Arguably, this 'intimacy' should make drones more moral, even if we must remain sceptical as to whether drones are indeed more accurate. His position thus cannot be argued consistently.

3.4 Reciprocity and Radical Asymmetry

One fact we will have to concede is that drones are indeed a unique development. What makes them so special is that drones, contrary to human soldiers, are completely immunized from any and all harm that might befall them. As Himes (2016), Enemark (2014), and Walzer (2016) have observed, the element of risk is something we fundamentally associate with soldiering, an element that is absent in the case of both remotely-piloted drones and lethal autonomous weapons. In the former case, the drone acts as a proxy that allows its operator to work from a safe distance. In the latter case of the lethal autonomous weapon, it is equally questionable whether we can say that there are risks involved. Though they would be more exposed to physical danger than drone operators (perhaps even more exposed than the infantry, as commanders will be more willing to take tactical risks with a machine than with human soldiers), they are at the same time incapable of experiencing harm as a human would. Whatever damage is done to the lethal autonomous weapon should be regarded not as physical but as material damage: its loss is measured not in the loss of a life, but in the loss of material.

This introduces us to the principle that Williams calls reciprocity: the notion that both sides in a fight are vulnerable to physical harm. According to him, drone technology reworks this principle so that the relationship between the drone operator (as well as the lethal autonomous weapon) and the target becomes exclusively one-directional (2015, p. 94). Enemark adds to this by arguing that drones alter the relation between the two sides even more profoundly, in what he calls radical asymmetry: a "form of violence so fundamentally different in nature that it does not count as war." (2013, p. 60). According to this position, the mutual experiencing of physical risk is inherent to any violent contest. Drone technology effectively immunises one side from harm, which makes it difficult to regard the conflict as a war at all. Even if a war need not be a fair fight, it still needs to be considered a fight. Enemark calls upon Clausewitz: if war is indeed the continuation of politics by other means, as the famous aphorism goes, then what is happening here is, according to him, not war but "directional politically motivated violence."

Enemark in particular attaches grand consequences to this. For one, he argues that radical asymmetry may undermine Ius ad bellum as well. In a non-radically asymmetric situation, even if one side is significantly more capable of waging war, the 'underdog' is still theoretically capable of inflicting harm on the opponent. When radical asymmetry is introduced this changes, with the weaker side becoming completely incapable of harming the other. The principle of Ius ad bellum which demands a reasonable chance of success before a just war can commence becomes unattainable for the technologically inferior side in a conflict. Yet individuals and states have a right under Just War Theory to exercise self-defence, and to deny them this is immoral (Enemark, 2013, p. 60). The problem, then, is that the introduction of radical asymmetry leads to a 'might makes right' doctrine, where resistance against a vastly superior foe can no longer be justified. On this basis, Enemark concludes that drones endanger this core principle of Just War Theory.

I must disagree with his conclusion, however. For one, the claim that radical asymmetry makes all forms of resistance impossible is not true. While admittedly the life of a drone operator is secured through drone technology, the drone itself may still be destroyed. The ability to eliminate a threat (i.e., the drone) remains, and a retreat or withdrawal may still be compelled by force. Resistance remains viable, even if it is made significantly more difficult. In this sense, drone technology is no more unethical than the various other military technologies which make it more difficult for an underdog to win.

Having digressed on Enemark's argument of radical asymmetry, there is still further reflection to be done on the topic of reciprocity. Although I disagree with Enemark's ultimate conclusion, the broader point made by him and Williams (2015) remains. The mutual assumption of risk is inherent in combatant status in Just War Theory. According to Walzer, a soldier is distinguished from a civilian through the enterprise of his class: "He has been made a dangerous man" (1977, p. 145). Being a participant in war means becoming liable to be killed (Walzer, 1977). This is far from a comprehensive overview of what it means to be a combatant, but it gives us a working approach to the issue of reciprocity.

The issue here, in layman's terms, seems to be that drones allow one side to assume the 'privileges' of combatant status (i.e., being allowed to kill the opponent) without having to assume the 'burdens' (i.e., being liable to be killed). While this seems problematic at first glance, a counter-argument can be made that this in fact allows drones to limit unnecessary combat deaths, and as such is to be regarded as more ethical.

3.5 The Obligation to Deploy Drones

For instance, Strawser (2010) argues that we have not only a right but an active duty to employ drones. His argument is founded on what he calls the principle of unnecessary risk (p. 344). It is a relatively straightforward principle which holds that actor X, when giving an order to actor Y, should, all other things being equal, opt for the means that exposes Y to the least amount of risk. If the same result could be achieved by exposing Y to less risk, then that risk is unnecessary and therefore unethical. In a military context, the principle of unnecessary risk translates into a commander's responsibility for the wellbeing of the soldiers under his or her command.

Strawser raises the example of an explosive ordnance disposal unit, colloquially called a 'bomb squad'. Commonly, robots are used to disarm bombs in order to keep the technicians at a safe distance should anything go wrong. We regard this as a responsible approach, and an example of the principle of unnecessary risk. Should the commander of the bomb squad elect not to use such robots, he would be endangering the lives of his own men needlessly, and thus violate the principle of unnecessary risk.
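Strawser's principle lends itself to a compact, decision-rule formalisation. The following is a minimal sketch (in Python; the option names and risk figures are hypothetical illustrations of the bomb-squad example, not anything drawn from Strawser's text):

```python
from dataclasses import dataclass

@dataclass
class Means:
    name: str
    achieves_objective: bool  # would this means achieve the same result?
    risk_to_agent: float      # expected risk exposure for actor Y

def least_risk_means(options: list[Means]) -> Means:
    """Among means that achieve the same result, all else being equal,
    choose the one exposing the agent to the least risk; any riskier
    choice imposes 'unnecessary risk' and is therefore unethical."""
    viable = [m for m in options if m.achieves_objective]
    return min(viable, key=lambda m: m.risk_to_agent)

# Hypothetical bomb-squad illustration:
options = [Means("manual disposal", True, 0.30),
           Means("disposal robot", True, 0.01)]
print(least_risk_means(options).name)  # -> disposal robot
```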

It is not difficult to see how this principle relates to drones. Drones not only limit soldiers' exposure to harm, but effectively negate it. While Enemark (2013) and Williams (2015) consider this problematic, Strawser regards it as an opportunity to make wars less deadly. In light of his principle of unnecessary risk it is a sympathetic argument: to expose our soldiers to risks that technology has rendered unnecessary seems neglectful of their wellbeing.

Strawser's principle of unnecessary risk offers something of a counterpoint to the reciprocity-based position of Enemark (2013) and Williams (2015). The strictest application of reciprocity would entail that we are obliged to refrain from completely isolating our soldiers from danger. If one can wage war justly only when one is exposed to the war itself, then this means putting soldiers in positions of danger even when technology has made this unnecessary. This would demand of us that we expose soldiers to unnecessary risks, violating Strawser's principle. Regardless of whether one agrees with Strawser's formulation, intuitively it does seem immoral or unreasonable to expose soldiers to harm when it is not strictly speaking necessary. In fact, it may instead strike one as a somewhat callous disregard for soldiers' lives in the name of 'fairness'. But how then to reconcile these two perspectives?

My proposal would be a continuation of my argument against Enemark's conception of radical asymmetry. Though drone operators and lethal autonomous weapons are safe from harm, they can nevertheless be 'taken out'. A lethal autonomous weapon can still be destroyed, thus ending its participation in a conflict. Drone operators, too, can be removed from the fight by destroying their drone. This all comes down to an alternative and more inclusive conception of reciprocity. Rather than defining reciprocity by the mutual experiencing of harm, it should be construed as the mutual experiencing of risk. Risk here should be interpreted rather broadly, as a threat to one's participation in war rather than a threat to one's physical safety. This way, the principle also applies to lethal autonomous weapons. Admittedly, this is a rather broad definition of reciprocity.

Furthermore, one limitation I will have to acknowledge is that this applies less to drone operators than to lethal autonomous weapons. While a lethal autonomous weapon may be destroyed permanently, the same cannot be said for a drone operator: even if their drone is destroyed, they may return later with a replacement. Short of destroying every drone in the military's arsenal, a drone operator cannot be taken out permanently. This makes it an imperfect solution, but nevertheless a workable one. The alternative would be to reject outright either the principle of reciprocity or that of unnecessary risk. While I am certain that a case might be made for both, it would require a much more substantive and focused effort that veers significantly away from the core themes of our discourse on drones. Such efforts might ultimately be worthwhile, however.

3.6 The Threshold of War

The aforementioned isolation from physical harm, apart from its implications for theory, also has serious real-world consequences. Sauer and Schörnig (2012) observe how the likelihood of fewer casualties may make governments more inclined to go to war. On utilitarian grounds, political decision-makers fear losses because of their negative consequences for public support for the war. Similarly, there is a normative argument to be made on the grounds that each life has value and that we should therefore seek to minimise casualties (2012, p. 368). They note how democracies in particular are risk-averse compared to other forms of government. Drone weaponry also tends to be cheaper than other weapons, appealing to the democratic interest in limiting military spending during peacetime (ibid, p. 370).

A threat to Ius ad bellum then emerges. Walzer notes how drone technology "invites us to imagine a war in which there won't be any casualties on our side" (2016, p. 15). Strawser makes the same observation (2010, p. 358). The prospect of a war without casualties may make governments much more eager to begin a war than they otherwise would be. While drones might allow us to better meet the 'reasonable chance of success' criterion of Just War Theory, they might also lower the threshold of going to war by drastically reducing its costs. The problem is that the drastically lowered risk of any casualties at all makes war-by-drone tempting.

Are we to regard this as inherently troubling? On the one hand we might argue that our eagerness to begin a just war is of little relevance to whether the war is indeed just. If the cause is just, our intentions are right, and all other requirements of Ius ad bellum have been satisfied, then what is the problem if we resort to force more confidently? Indeed, if the only thing preventing a nation from fighting a just war against an unjust enemy is the likelihood of significant casualties, then perhaps drones are a good thing. They mitigate one of the factors which makes war such a horrid affair: the great loss of life, at least on one side. In this way, they make war easier.

Yet we must remain sceptical, for this is overly optimistic conjecture. For one, it seems misguided to believe that war can ever become a perfectly-controlled, clinical affair, even with the use of drones. Civilians will still be displaced, regular life disrupted by fighting, et cetera. Drones merely limit how a military's own soldiers are exposed to danger; civilian and enemy lives will remain at risk. While drone technology might also indirectly benefit civilians through greater accuracy (assuming this is indeed the case), their lives will still be significantly affected. Even more so, the way drones seemingly limit exposure to harm might instil militaries with a false sense of confidence in their ability to control the battlefield. This relates back to the concept of the 'drone myth' put forward by Brunstetter and Braun (2011).

Additionally, it seems naively optimistic to assume that countries will not be tempted by drone technology in some way. That drone technology makes war a more appealing affair does not automatically mean that countries will disregard other Ius ad bellum principles. Yet it is not far-fetched to assume they will interpret these principles more liberally. Braun and Brunstetter observe a similar trend: the Ius ad bellum principle that war should only be used as a last resort is routinely undermined because the targeted killing of (alleged) terrorists becomes the default tactic. Drones’ capacity to act on just cause may lead to a propensity to do the opposite (2011, p. 346). This calls for great self-restraint on the part of governments.

3.7 Emotional Detachment

Various authors have noted how the spatial distance between drone operators and the battlefield may influence their decision-making. Just as they are spatially distanced, they might become emotionally distanced from the consequences of their actions. Strawser notes the inherent risk of drone operators coming to regard war as a video game (2010, p. 352). Alston notes the same: “…because operators are based thousands of miles away from the battlefield, and undertake operations entirely through computer screens and remote audio-feed, there is a risk of developing a “Playstation” mentality to killing.” (2010, p. 25). Of course, we want our drone operators to be professional and not make needlessly emotional decisions. Yet at the same time, we must indeed be wary of the callousness Strawser and Alston warn of.

Dewyn too notes this inherent risk. He observes, however, that the operator’s safe distance from the battlefield in no way means that they do not feel engaged with their colleagues on the ground. He argues that drone operators might even overcompensate due to the safety they enjoy (2019, p. 120). The lack of context might lead them to act disproportionately and in a way that carries a greater risk of collateral damage. Substantiating this, Enemark cites anecdotes of drone operators experiencing strong feelings of anger when observing friendly forces under fire (2013, p. 86). He concludes that “caring too little might sometimes be less of a problem than caring too much” (ibid). Dewyn ultimately concludes, however, that further empirical research will have to determine to what degree this is indeed the case.

The point stands, however. Idealistically, we might be tempted to believe that spatial distance translates into emotional distance. As such, we may believe that drone operators will be in a better position to make tactical judgements and better observe the principles of Ius in bello. It seems a logical assumption; after all, an infantryman firing a rifle at the enemy does so under immense stress and pressure, with imperfect information due to the chaotic surrounding situation. If in such a situation an infantryman accidentally shoots a civilian after mistaking him or her for a combatant in the chaos of the firefight, we might not ascribe full blame to the soldier, however regrettable the outcome. Yet with drone operators we are inclined to believe that their bird’s-eye view allows them to make more impartial decisions. Not only do they not suffer the threat of immediate physical harm that may cloud their judgement, but typically they also have the means and opportunity for intelligence gathering and reconnaissance. We would therefore be inclined to hold them to a higher standard.


Yet as Dewyn (2019) and Enemark (2013) have shown, this claim is questionable at best. Perhaps drone operators ideally ought to be emotionally detached from the conflict (although detachment should not in any way be construed as cold-bloodedness). Such detachment should allow for stricter adherence to protocol and Ius in bello. Alston also notes the importance of ensuring that drone operators who have never been subjected to battle still respect human rights and the various safeguards that prevent needless loss of life (2010, p. 25). Enemark notes as well how the idea of an impassioned soldier may appeal to heroic notions of soldiery, but fury and zeal are not desirable traits in someone who has to make life-or-death decisions at a moment’s notice (2013). Especially with the kind of armaments drones are typically equipped with, great emphasis ought to be placed on precision so as to avoid civilian casualties. For this, a mechanistic mindset is preferable. This leaves open the question of how best to achieve this, and what arrangements are most conducive to a mechanistic approach that simultaneously respects human rights. That is an empirical question that I am not in a position to answer.


4. Conceiving of Lethal Autonomous Weapons

In contrast to its remotely-piloted counterpart, the lethal autonomous weapon is unique in that there is no human calling the shots. While the challenges raised in the previous chapter apply to both remotely-piloted and lethal autonomous drones, this chapter will discuss in more detail those problems exclusive to lethal autonomous weapons. It leads us to discuss three points in particular. First, we will reflect on the matter of obedience. Lethal autonomous weapons differ from human operators in that they have no free will of their own; they are unable to disobey a given order, provided it is lawful. This creates opportunities for stricter adherence to Ius in bello, but requires us to reflect on how this would take shape in the ‘mind’ of a machine.

Secondly, there is the matter of responsibility. As Sparrow (2007) has noted, this is particularly problematic in the case of lethal autonomous weapons. There is no human to ascribe blame to in the case of such machines, yet ascribing responsibility to the lethal autonomous weapon itself is equally difficult. How should we deal with the issue of responsibility in the case of autonomous drones? Finally, we will continue the discussion on emotional detachment and its implications. Whereas operators controlling remotely-piloted drones might experience emotions differently due to the spatial dimension, lethal autonomous weapons are incapable of experiencing emotion at all. What does this mean?

4.1 Obedience and Rule Adherence

Perhaps one of the strongest arguments in favour of the use of lethal autonomous weapons is precisely how, as machines, their rigid adherence to their programming might make for a more ethical form of war. Compared to a lethal autonomous weapon, a human soldier is a wildcard. He has a will of his own, might panic in battle, or have limited knowledge of what he is supposed to do. All this can lead to undesirable behaviour and, at worst, blatant violations of the rules of war. A lethal autonomous weapon is without these flaws. Enemark (2013, p. 97) affirms this: “From an ethical perspective, an argument in favour of autonomy might be that, given the poor record of human adherence to just war principles, an armed drone could be programmed to do a better job.” A lethal autonomous weapon would not be able to knowingly violate the rules of war or disobey an order (provided said order is lawful).

It deserves mentioning that some have argued we should desire some modicum of disobedience in our soldiers. In particular, Wolfendale (2019) proposes that we should demand of our soldiers that they refuse participation in a war if they deem it to violate Ius ad bellum, framing this as a positive obligation on their part. It is difficult to imagine lethal autonomous weapons being capable of this: such a judgement is simply too complex for a machine. In addition, a machine would be unable to gather the data required to make such a judgement call. It therefore seems that lethal autonomous weapons are limited to observing Ius in bello.

Arkin notes how the military operates under so-called ‘Standing Rules of Engagement’ and ‘Rules of Engagement’ (2007, p. 33). The former refers to rules that are invariably applicable to the conduct of deployed soldiers, such as when opening fire is generally allowed. The latter, non-standing Rules of Engagement are supplemental to the standing ones: they are tailored to a particular mission and often take into account local factors such as culture. These Standing and non-standing Rules of Engagement are relevant because they could provide the basis for the limits of what lethal autonomous weapons are and are not allowed to do. These rules provide a more tangible codification of Ius in bello principles (even if the two are not the same). They should, insofar as applicable, be included as part of the programming of a lethal autonomous weapon.
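To make this concrete, the minimal sketch below shows one way such rules might be encoded as machine-checkable constraints. It is an illustration only: the rule names, fields, and thresholds are invented for the example and do not correspond to any actual military ruleset or drone architecture.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Target:
    # Hypothetical facts about a candidate engagement, for illustration.
    is_combatant: bool
    positively_identified: bool
    expected_civilian_casualties: int

@dataclass
class RuleOfEngagement:
    # A rule is a named predicate: given a candidate target, it either
    # permits or forbids the strike.
    name: str
    permits: Callable[[Target], bool]

# Standing rules apply invariably, regardless of the mission.
STANDING_ROE: List[RuleOfEngagement] = [
    RuleOfEngagement("distinction", lambda t: t.is_combatant),
    RuleOfEngagement("positive_identification", lambda t: t.positively_identified),
]

def may_engage(target: Target, mission_roe: List[RuleOfEngagement]) -> bool:
    """Engagement is permitted only if every standing and mission-specific
    rule permits it; a single veto forbids the strike."""
    return all(rule.permits(target) for rule in STANDING_ROE + mission_roe)

# Non-standing rules supplement the standing ones for a particular
# mission, e.g. a stricter collateral-damage ceiling.
mission_roe = [
    RuleOfEngagement("zero_collateral", lambda t: t.expected_civilian_casualties == 0),
]

print(may_engage(Target(True, True, 0), mission_roe))  # True
print(may_engage(Target(True, True, 2), mission_roe))  # False: mission rule vetoes
```

The design choice worth noting is the veto structure: mission-specific rules can only tighten the standing set, never relax it, mirroring how non-standing Rules of Engagement supplement rather than replace the standing ones.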

Theoretically this ought to result in perfect adherence to the Standing Rules of Engagement, in addition to whatever supplementary Rules of Engagement or side-constraints are temporarily or permanently added. Of course, two caveats apply here as well. Firstly, the Standing Rules of Engagement are taken here as an example of codified principles of wartime conduct as applicable to soldiers. They form a good point of reference for Ius in bello as viewed by the military. Yet it must be noted that this may not always comply with Ius in bello as conceived by Just War Theory. We might, for instance, find that the military’s standards (in this case, the American military’s) are woefully lacking and fail to live up to the standards of Just War Theory. If that is indeed the case, we would have to propose augmenting the Standing Rules of Engagement so as to make them more reflective of the principles of Ius in bello as conceived by Just War Theory.

The second caveat is that a machine lacks what we would regard as ‘common sense’, and a lethal autonomous weapon would follow the letter of the law rather than its spirit. This could be problematic, but it is a restriction imposed by technology. It is partially because of this that we will have to elaborate upon the question of who bears responsibility later on. It is not unimaginable, after all, that violations of Ius in bello by a lethal autonomous weapon could arise from a too-rigid interpretation of Ius in bello, leading to unforeseen and undesirable consequences.

This second caveat also provides a segue to the next problem with perfect rule adherence: the issue of ‘common sense’. Whereas a drone operator can be expected to exercise common sense in making such judgements, a machine cannot. ‘Common sense’ is surprisingly difficult to define (in addition to being surprisingly uncommon), let alone to quantify in a way that translates into the binary logic a drone could execute.

This becomes challenging in practical situations, where there will always be a clash between moral directives and the necessities of the military operation. Indeed, many discussions on Ius in bello would be permanently settled were it not for the fact that military objectives often come at a moral cost. Although historically there is no shortage of atrocities, the average soldier is not some sociopathic thug revelling in carnage. It is not difficult to imagine how, more often than not, Ius in bello is violated in the name of military necessity. Walzer himself notes this as well (1977, pp. 144-152). The notion of military necessity (the point at which military objectives start to outweigh principles such as the protection of non-combatant lives) might be one of the more difficult notions to implement in a machine.

How, then, do we resolve this issue? There are two conceivable approaches. The first is perfect adherence to the rules, regardless of consequences. The second approach would be more utilitarian: the drone is allowed to weigh the pros and cons of a particular decision in order to come to a conclusion. The sketch below makes the contrast concrete.
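As a minimal sketch (assuming invented inputs and a made-up weighting parameter, not any real targeting doctrine), the two approaches might be caricatured as follows:

```python
def deontological_decision(strike_value: float, expected_civilian_harm: float) -> bool:
    # Approach one: rigid rule adherence. Any expected civilian harm
    # forbids the strike, no matter the military value at stake.
    return expected_civilian_harm == 0.0

def utilitarian_decision(strike_value: float, expected_civilian_harm: float,
                         harm_weight: float = 10.0) -> bool:
    # Approach two: the machine weighs pros and cons. The strike is
    # permitted when its expected military value outweighs the (heavily
    # weighted) expected harm. 'harm_weight' is an invented parameter;
    # choosing its value is precisely the unresolved moral question.
    return strike_value > harm_weight * expected_civilian_harm
```

Even this toy version exposes the problems discussed below: the first function invites abuse by an enemy who ensures that expected civilian harm is never zero, while the second buries the entire moral controversy inside a single numerical weight.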

Yet both approaches fail to satisfy definitively, for a variety of reasons. Rigid rule adherence may initially seem the most defensible position; in an ideal world we would expect the same of our soldiers, after all. The argument we are making against this position is not that militaries would be at a strategic and tactical disadvantage if drones adhered perfectly to Ius in bello (although this is certainly an argument that military strategists might make). The position that “it’s alright to break the rules if your enemy does it first” is not compatible with Just War Theory. Rather, we must question unconditional rule adherence on the ground that it can quickly be abused by combatants with no intent of honouring those very same rules. An example would be how unconditional adherence to non-combatant immunity could be exploited by an adversary who deliberately shelters among civilians, knowing the drone will never open fire.
