
Robots and international law : A study into the usefulness and relevance of International Legal Personality for the study of technological developments and international law


Academic year: 2021



Robots and international law

A study into the usefulness and relevance of

International Legal Personality for the study of

technological developments and international law


Charlotte Renckens 10260676

International and European Law: Public International Law

Supervisor: prof. dr. J. E. Nijman Wordcount: 11082


Abstract

Robots are increasingly capable of fully autonomous action and decision-making. Society appears ready to discuss the position of robots in our world, and international law should not lag behind in this discussion. International law proceeds from the concept of International Legal Personality (ILP), and ILP therefore provides the theoretical background against which fully autonomous robots are studied. This study attempts to answer two questions. First, in what ways can ILP theory inform us about the position of robots in international law? Second, in what ways do robots illustrate the usefulness and relevance of the concept of ILP for the study of technological developments and international law? ILP is grounded in a number of human capacities, most importantly autonomy, intelligence, rationality, choice, and intentionality. Fully autonomous robots already match these capacities. From the standpoint of ILP theory, therefore, it is not at all unlikely that robots might be perceived as international legal persons in the near future. However, it is argued that ILP is not capable of adequately addressing the questions that technological developments evoke. Three issues beyond the human capacities of ILP arise when one is confronted with the technological reality of fully autonomous robots: emotions, human-to-human interaction, and accountability. It seems that ILP has lost its relevance and usefulness with the rise of new technological developments. Whether it could regain this usefulness and relevance depends on a reconceptualization of ILP. It is recommended to perceive ILP as a mode of identity and to draw on Actor-Network Theory for a more coherent view of the position of non-human entities in international law.


Table of Contents

1. Introduction
2. Traditional and non-traditional ILP theories
   2.1 The significance of International Legal Personality
   2.2 Traditional ILP theory
   2.3 Non-traditional ILP theory
   2.4 The attribution of human capacities to legal persons
   2.5 Robots and legal personality
3. Fully autonomous robots explained
   3.1 Definition
   3.2 The significance of autonomy
   3.3 Technological background
   3.4 Robots and the human capacities of ILP
4. Case study: fully autonomous weapons
   4.1 The significance of fully autonomous weapons
   4.2 The capabilities of fully autonomous weapons
   4.3 The desirability of the employment of fully autonomous weapons
   4.4 Questions beyond ILP
5. ILP and the position of robots in international law
   5.1 Boundaries between humans and robots
   5.2 ILP theory applied to robots
   5.3 Beyond the human capacities of ILP
6. Conclusion and recommendations
   6.1 Conclusion
   6.2 Recommendations


1. Introduction

Robots are increasingly capable of autonomous action, whether in health care, in search and rescue operations, or in the theater of war. Artificial Intelligence (AI) gives rise to completely novel questions regarding the blurring lines between human and robot actions and between human and robot intentions. In a world where robots are becoming increasingly advanced, it is time to question the manner in which international law perceives them. Are they mere objects, are they perhaps evolving into subjects of international law, or are they something else? The philosophical question of the position of robots in our world and their relationship to humans is somewhat of a hot topic; it has recently received a platform in popular culture through TV series such as Real Humans and Westworld. Both series portray a certain discomfort with the role of highly intelligent robots as objects – used as servants, sex slaves or cannon fodder. Many humans in the series are empathetic toward the robots in their lives and feel uncomfortable treating them as de facto slaves, a discomfort exacerbated by the fact that some of the robots themselves claim to be something – or someone – beyond mere robots. Popular culture attests that society is ready to discuss the difficult issue of the position of robots in our world, and international law should not lag behind in this discussion.

International law proceeds from the concept of International Legal Personality (ILP); there is a generally accepted idea that ‘for the purpose of both the protection and the accountability of entities within the international legal system, these should have the enhanced status of legal ‘subject’ rather than ‘object’’1. Moreover, it can be argued

that the significance of legal personality lies in the fact that it distinguishes ‘those social actors belonging to the international legal system from those being excluded from it’2. The traditional ILP doctrine holds that ‘the international legal order

recognizes a limited number of entities — primarily States — as the bearers of rights and obligations’3. However, there has been a realization that existing theories might

1 Nijman, J. E. (2010). Non-state actors and the international rule of law: revisiting the 'realist theory' of international legal personality. In M. Noortmann & C. Ryngaert (Eds.), Non-state actor dynamics in international law: from law-taker to law-makers (pp. 91-214) (Non-state actors in international law, politics and governance series). Farnham: Ashgate, 4.

2 Portmann, R. (2010). Legal personality in international law (Vol. 70). Cambridge: Cambridge University Press, 19.

3 Bianchi, A. (2011). The fight for inclusion: non-state actors and international law. Oxford: Oxford University Press, 40.


fall short in their analysis of non-state actors. This is felt not least through the ‘Not-a-Cat syndrome’4, which describes the fact that non-state actors are referred to by a term

that is not their own but is rather in relation to the state; more than anything else, they are not states5. Three innovative non-traditional ILP theories, which leave room for

unconventional actors, are discussed in the current study. These range from an endogenous perspective to an actor conception to a focus on the fulfilment of obligations. In addition, fiction theory and realist theory are discussed, and it is argued that regardless of which ILP theory one adheres to, certain human capacities are attributed to legal persons – whether because it is believed that these legal persons intrinsically possess those capacities or because the law imputes these capacities to them by granting them legal personality. The current study explains the importance of these human capacities to ILP theory, and how they relate to the position of fully autonomous robots (hereafter simply ‘robots’) in international law.

The current debate on robots and international law focuses almost exclusively on fully autonomous weapons and the unique challenges they pose in the theater of war. There are broadly two opposing views on how international law should approach fully autonomous weapons. The first camp consists of those who believe that it is likely (and arguably preferable) that these weapons will acquire more autonomy in the not too distant future. International law should, therefore, not necessarily aim to prevent this development but rather consider how the law of armed conflict applies to robots and how robots can be programmed to abide by this legal regime6. The other,

arguably larger, camp is made up of those who believe that fully autonomous weapons should be seen as inherently unlawful as they pose a unique threat to civilians in the theater of war. Therefore, international law should aim at ensuring that ‘meaningful human control’ is always part of the process leading up to the robots’ actions7. In the current study, fully autonomous weapons form a case study for

4 Alston, P. (2005), The “Not-a-Cat” Syndrome: Can the International Human Rights Regime Accommodate Non-State Actors? In P. Alston (ed), Non-State Actors and Human Rights. Oxford: Oxford University Press.

5 Bianchi, A. (note 3), 39.

6 See for example Schmitt, M. N., & Thurnher, J. S. (2012). Out of the loop: autonomous weapon systems and the law of armed conflict. Harv. Nat'l Sec. J., 4, 231; Arkin, R. C., Ulam, P., & Wagner, A. R. (2012). Moral decision making in autonomous systems: enforcement, moral emotions, dignity, trust, and deception. Proceedings of the IEEE, 100(3), 571-589.

7 See for example Sharkey, N. E. (2012). The evitability of autonomous robot warfare. International Review of the Red Cross, 94(886), 787-799; Human Rights Watch (2012). Losing Humanity – The


examining the capabilities of fully autonomous weapons, the questions that arise in the debate about the desirability of their employment, and what these questions reveal about the relevance and usefulness of ILP for the study of technological developments and international law.

The current study looks at two separate but interrelated questions. The first question is as follows: in what ways can ILP theory inform us about the position of robots in international law? The philosophical issue ultimately underpinning this question is the subject–object divide within international law8. However, law operates to a certain

extent in reaction to social reality and thus strives to answer social questions. The most pressing social question underlying the position of robots in international law seems to be the distribution of responsibility: after all, autonomy has a counterpart in international law. One is free to make choices, but these choices carry consequences and are regulated by the international legal system.

The second question of the current study reverses the dynamic of the first and asks: in what ways do robots illustrate the usefulness and relevance of the concept of ILP for the study of technological developments and international law? As stated at the very beginning of this introduction, the emergence of AI has led to truly novel social questions. This means not only that applying existing theoretical frameworks to AI can provide interesting perspectives on robots but also that studying robots might give refreshing insights into existing theoretical frameworks such as ILP. The current study examines both dynamics using traditional and non-traditional ILP theories.

2. Traditional and non-traditional ILP theories

2.1 The significance of International Legal Personality

Before embarking on an overview of ILP theory, it is paramount to explain the significance of the concept: why does international law proceed from ILP? In international law, legal personality ‘necessitates the consideration of the interrelationship between rights and duties afforded under the international system and capacity to enforce claims’9. However, international law ‘cannot just think in

Case Against Killer Robots; Heyns, C. (2013). Report of the Special Rapporteur on Extrajudicial, Summary, and Arbitrary Execution, United Nations Human Rights Council, 23rd Session, April 9, 7.

8 Nijman, J. E. (2010) (note 1), 5.


terms of rights and duties’10, it also needs to consider who or what possesses them:

‘[t]here must exist something that ‘has’ the duty or right’11. This has led to some

scholars perceiving ILP as a threshold: it is thought to be ‘a conditio sine qua non for the possibility of acting within a given legal situation’12. However, this analysis is

regarded as extremely general and lacking in descriptive value13. Rather than being a

threshold for legal action on the international plane, one can argue that the significance of legal personality lies in the fact that it distinguishes ‘those social actors belonging to the international legal system from those being excluded from it’14. This does not necessarily imply that non-subjects do not have any leeway

regarding international legal action but rather that ‘for the purpose of both the protection and the accountability of entities within the international legal system, these should have the enhanced status of legal ‘subject’ rather than ‘object’’15.

Although the merit of the subject–object dichotomy is debatable16, it is evident that in

the international legal order subjects and non-subjects are distinctly different. A number of theories that deal with the question of who or what can be a legal person are discussed in the next section.

2.2 Traditional ILP theory

Traditional ILP theory holds that ‘the international legal order recognizes a limited number of entities — primarily States — as the bearers of rights and obligations’17.

Traditionally, states are seen as the sole subjects of international law, and the international legal system as an expression of the ‘common will’ of states18. This

‘states-only conception’19 was inspired by (amongst many other developments) the

1648 Peace of Westphalia20 and the work of the legal philosopher Emer de Vattel. He

wrote an influential treatise in 1758 in which he defined international law as ‘la

10 Klabbers, J. (2005). The Concept of Legal Personality. Ius Gentium, 11, 39.

11 Kelsen, H. (1945). General Theory of Law and State (Wedberg trans.). New York: Russell & Russell, 93.

12 Klabbers, J. (note 10), 37.

13 Ibid. 37.

14 Portmann, R. (note 2), 19.

15 Nijman, J. E. (2010) (note 1), 4.

16 Ibid. 5.

17 Bianchi, A. (note 3), 40.

18 Shaw, M. N. (note 9), 21.

19 Cf. The classification of Roland Portmann in Portmann, R. (note 2).

20 Nijman, J. E. (2004). The concept of international legal personality: an inquiry into the history and theory of international law. The Hague: T.M.C. Asser Press.


science du droit qui a lieu entre les Nations, ou États, et des obligations qui répondent à ce droit’21. Hersch Lauterpacht further specified this notion as follows:

According to what may be described as the traditional view in the matter, States only and exclusively are the subjects of international law. In particular, on that view, individuals are not subjects of international law; they are its objects in the sense that by customary and conventional law States may be bound to observe certain rules of conduct in relation to

individuals22

In the states-only conception, international legal persons thus emerge once states consent to being bound by international law23. More recently, the states-only

conception has been complemented and modified by the ‘recognition conception’. Although this conception ‘still stipulates the primacy of the state in international law, it accepts that states can recognize other entities as international persons’24. Its earliest

manifestation in case law was in the Reparation for Injuries Advisory Opinion, in which the International Court of Justice (ICJ) recognized the international legal personality of the United Nations (UN)25.

2.3 Non-traditional ILP theory

The effects of equating international law with inter-state law have been felt to this day: ‘its effects linger, in some ways, in existing international law’26. This is felt not

least through the ‘Not-a-Cat syndrome’27, which describes how non-state actors are

referred to by a term that is not their own but is rather in relation to the state: more

21 ‘The science of law that takes place between nations or between states, and the obligations that correspond to that law’. Vattel, E. de. (1916) [1758] Le Droit des Gens, ou Principes de la Loi Naturelle, appliqués à la Conduite aux Affaires des Nations et des Souverains. The Classics of International Law (James Brown Scott ed.). Washington: the Carnegie Institution of Washington. Préliminaires §3, 1.

22 Lauterpacht, H. (1970). The Subjects of International Law. In E. Lauterpacht (ed.), International Law. Being the Collected Papers of Hersch Lauterpacht, Volume I: The General Works. Cambridge: Cambridge University Press, 136.

23 Nijman, J. E. (2004) (note 20), 10.

24 Portmann, R. (note 2), 80.

25 International Court of Justice (1949). Reparation for Injuries Suffered in the Service of the United Nations (Advisory Opinion). ICJ Reports, 174.

26 Lauterpacht, H. (1970) (note 22), 136.


than anything else, they are not states28. Next, three innovative non-traditional ILP

theories, which leave room for unconventional actors, are discussed.

The first and oldest non-traditional ILP theory discussed here analyzes ILP from an endogenous perspective. According to Otto von Gierke, ‘factual existence’ or ‘real existence’29 is a manner in which an international legal person can come into

being30. This ‘factual existence’ is established when ‘the legal person that emerges …

has its own capacity to bear rights and obligations, to act and decide freely’31. Gierke

belongs to the realist strand of ILP theory, which holds that an international legal person possesses ‘a real existence, including its own will, distinct from that of other members’32. It is not a fictitious person, but ‘a living organism and a real person …

itself can will, itself can act’33. This endogenous perspective on ILP is not common in

international law, in which legal personality is usually examined from an exogenous perspective: ‘the international legal system determines which entity has ILP’34.

Another non-traditional ILP theory is the actor conception. This conception avoids the notion of personality and rather speaks of participants or actors and ‘considers all entities exercising ‘effective power’ in the international ‘decision-making process’ international persons’35. The subject–object dichotomy is discarded as ‘not

particularly helpful’, both operationally and intellectually. Instead, this school of thought observes that in international law, ‘there are a variety of participants, making claims across state lines, with the object of maximizing various values’36. The

emergence of an international legal person is not the necessary consequence of applying a definition but rather ‘links up with the actor’s participatory role in the global decision-making processes’37. The focus has thus shifted from only states and

28 Bianchi, A. (note 3), 39.

29 Ibid. 33.

30 The other ways being explicit attribution and implicit attribution; Nijman, J. E. (2010) (note 1), 33.

31 Ibid. 33.

32 Klabbers, J. (note 10), 44.

33 Maitland, F. W. (1951) [1900]. Translator’s introduction. In: Gierke, O. Political Theories of the Middle Age. Cambridge: Cambridge University Press, xxvi.

34 Nijman, J. E. (2010) (note 1), 37.

35 Portmann, R. (note 2), 208.

36 Higgins R. (1994) Problems and Process: International Law and How We Use It. Oxford: Oxford University Press, 50.


international organizations to also include individuals.

Andrew Clapham has developed another non-traditional ILP theory, which concentrates on the ‘scope of application of international norms’38. In other words:

who has ‘the capacity to fulfill obligations’39? Clapham, therefore, makes a move

from personality to capacity40, and he is able to do so because he has disconnected

ILP from ‘the misleading concept of ‘subjects’ of international law and the attendant question of attribution of statehood under international law’41. Although Clapham’s

theory does apply an exogenous perspective, it is non-traditional in the sense that it implies a dramatic reversal of the traditional perspective.

Rather than identifying beforehand which entities may hold rights and obligations under international law, this approach presupposes identifying who are the addressees of international rules, without drawing any particular

conclusions on the addressees’ status under international law. International practice and policy consideration would thus be more useful in spotting the addressees of international obligations than any preordained theory about the subjects

of the law42

Clapham’s theory is reminiscent of Hans Kelsen’s formal conception of ILP theory, which holds that ‘any entity on which the international legal system confers rights, duties or capacities is an international person’43. There is thus no a priori

identification of international legal persons, and ‘there are no limits as to which entities can be international persons’44. Instead, ‘whenever the interpretation of an

international norm leads to it addressing the conduct of a particular entity, this entity

38 Bianchi, A. (note 3), 43.

39 Nijman, J. E. (2010) (note 1), 27.

40 Ibid. 27.

41 Clapham, A. (2006) Human Rights Obligations of Non-State Actors. Oxford: Oxford University Press, 59.

42 Bianchi, A. (note 3), 43.

43 Portmann, R. (note 2), 174.


is an international person’45. Legal personality is then merely a descriptive tool.

2.4 The attribution of human capacities to legal persons

Several theories that deal with who or what can be a legal person have been discussed, but there are also contending theories on how legal personality is constituted, and ‘two contending theories of personality appear most prevalent’46: fiction theory and realist

theory. In fiction theory, ‘the legal person has no will, no mind, and no ability to act, except to the extent that the law imputes such will and ability to the legal person in question’47. Autonomy thus flows from the attribution of ILP by international law.

The other theory focuses on the ‘real existence’48 of international legal persons. Real

existence means that a legal person is ‘a living organism and a real person … itself can will, itself can act’49, and it thus reverses the cause–effect relation of fiction

theory: legal personality flows from an entity’s real existence.

Notions such as autonomy, a will, ability to act, and a free choice are ultimately human notions. They are capacities that ‘we associate first with human beings’50.

Regardless of which ILP theory one adheres to, these human capacities are attributed to legal persons, whether because it is believed that these legal persons intrinsically possess those capacities or because the law imputes these capacities to them by granting them legal personality. Three conceptions of legal personality can be discerned. The first conception is devoid of any moral dimension and merely focuses on formal capacity: the legal person ‘exists only as an abstract capacity to function in law, a capacity which is endowed by law because it is convenient for law to have such a creation’51. Indeed, Clapham’s theory, which refers to ILP as a mere descriptive tool,

would fall within this framework. The second conception of legal personality – which is the most accepted legal notion according to Naffine – is grounded in the human being, which is ‘the natural basis of personality’52. The reasoning is as follows: ‘rights

45 Ibid. 174.

46 Klabbers, J. (note 10), 42.

47 Ibid. 42.

48 Nijman, J. E. (2010) (note 1), 33.

49 Maitland, F. W. (note 33), xxvi.

50 Wendt, A. (2004). The state as person in international theory. Review of International Studies, 30(2), 289.

51 Naffine, N. (2003). Who are law's persons? From Cheshire cats to responsible subjects. The Modern Law Review, 66(3), 351.


and duties involve choice, therefore, they will naturally under any system of law be held to inhere primarily in those beings which enjoy the ability to choose, viz, human beings’53. It is possible to regard non-human actors as ‘subjects of international law

only because we have personified them’54. The third conception focuses on the

rationality of legal persons: ‘persons include only the rational and so the legally competent’55. The legal person is ‘an intelligent agent and a moral agent in the sense

that he is accountable for his actions’56. The second and third conceptions – albeit

different in their use or non-use of ‘extra-legal biological or moral considerations’57 –

are similar in the sense that they vest in legal persons the intrinsic ability to choose. Therefore, whether one chooses to adhere to realist or to fiction theory, human capacities are attributed to legal persons that are not natural persons. This means that the second and third conceptions of legal personality hold strong descriptive value. John Dewey noted that although legal personality might be defined simply as a descriptive tool – as the formal conception puts it – this is in fact not how it was used. Instead, it was considered necessary to first define who or what can be a legal person, as ‘a precondition for having right-duties’58. The attribution of human capacities to

corporations has been studied extensively. It is assumed that there is ‘some nature or essence which belongs both to men in the singular and to corporate bodies’59. The

state as a legal person that possesses human capacities has also been studied extensively. The first theory that pictured the state as a ‘group personality’ was set forth by Samuel Pufendorf60, who described the state as having intelligence and will

and as being a moral person61. In conclusion,

there seems to be a genus of which State and Corporation are species. They seem to be permanently organized groups of men;

53 Fitzgerald P.J. (1966) Salmond on Jurisprudence (12th ed). London: Sweet and Maxwell, 29.

54 Cf. Brierly in Nijman, J. E. (2004) (note 20), 145.

55 Naffine, N. (note 51), 362.

56 Ibid. 362.

57 Ibid. 357.

58 Dewey, J. (1926). The historic background of corporate legal personality. The Yale Law Journal, 35(6), 659.

59 Ibid. 659.

60 Aufricht, H. (1943). Personality in International Law. American Political Science Review, 37(2), 218.


they seem to be group-units; we seem to attribute acts and intents, rights and wrongs to these groups, to these units62

There are thus certain human capacities that are attributed to the legal person. These capacities are, most importantly, autonomy, intelligence, rationality, choice, and intentionality.

2.5 Robots and legal personality

Theory on international legal personality for robots is largely uncharted territory. Several authors, however, have written about legal personality for robots in domestic law. Although not directly concerned with ILP, these contributions are informative as to the relevance of ILP and indicate questions that emerge when one considers the legal position of robots. In 1992, Lawrence Solum wrote an influential essay on the legal personality of robots. He focused on the legal and moral personality of robots as guaranteed by the Bill of Rights and the Civil War Amendments to the US Constitution. Solum envisaged a scenario in which a robot claims to be a legal person and demands certain rights, and he raised three objections to this claim. The second objection to recognizing constitutional rights for robots is especially insightful. The author contended that robots ‘lack some critical component of personhood, for example, souls, consciousness, intentionality, or feelings’63. In this sense, Solum also references human capacities as the ‘building

blocks’ of legal personality. Nevertheless, he went beyond the human capacities defined in the current study and added the requirement that a legal person have a soul64.

In addition, there are a number of authors who have explored the possibility of legal personality for a contracting agent, which ‘buys and sells goods; … processes applications for visas and credit cards; collects, acquires, and processes financial information; trades on stock markets; and so on’65. Some authors argue in favor of

62 Maitland, F. W. (note 33), ix.

63 Solum evidently adheres to the realist theory on ILP: there ought to be something innately real to international legal persons. Solum, L. B. (1991). Legal personhood for artificial intelligences. NCL Rev., 70, 1258.

64 This is reminiscent of the natural law tradition, in which ‘having a soul was indeed a requirement for falling under God’s law, the law of nature’. Nijman, J. E. (2004) (note 20), 56, note 122.

65 Chopra, S., & White, L. F. (2011). A legal theory for autonomous artificial agents. University of Michigan Press, 1.


assigning legal personality to these specific robots for liability reasons66: ‘it would

‘‘reassure the owners-users of agents’’, because, by considering the eventual ‘‘agents’’ liability, it could at least limit their own (human) responsibility for the ‘‘agents’’ behaviour’67. These contributions are insightful as they substantiate the

proposition advanced in the introduction: the most pressing social question underlying the position of robots in international law is the distribution of responsibility.

Peter M. Asaro wrote a paper that also emphasizes the distribution of responsibility. He contended that it is not likely that robots will be considered legal persons anytime soon. However, he argued that ‘the legal systems in pluralistic societies have found ways to deal practically with several border-line cases of personhood’68. In line

with this, Asaro explored the possibility of treating robots as ‘quasi-persons’ that can bear some rights and obligations but not all that are borne by full legal persons69. A

scenario is thus envisaged in which robots occupy a middle ground within the subject–object dichotomy.

3. Fully autonomous robots explained

3.1 Definition

The notion of autonomy can carry multiple connotations70, and it is crucial to strictly

define what is considered full autonomy in the current study. Within the field of Human-Robot Interaction (HRI), which is concerned with ‘understanding, designing, and evaluating robotic systems for use by or with humans’71, the most

commonly used conception of robot autonomy is a scale designed by Thomas Sheridan and William Verplank in 197872. Even though the scale is several decades old, it is

66 See for example Chopra, S., & White, L. F. (note 65); Andrade, F., Novais, P., Machado, J., & Neves, J. (2007). Contracting agents: legal personality and representation. Artificial Intelligence and Law, 15(4), 357-373.

67 Andrade, F., Novais, P., Machado, J., & Neves, J. (note 66), 366.

68 Asaro, P. M. (2007). Robots and responsibility from a legal perspective. Proceedings of the IEEE, 22.

69 Ibid. 22.

70 For an illustration of the usage of autonomy in AI technology, please refer to Smithers, T. (1997). Autonomy in robots and other agents. Brain and Cognition, 34(1), 88-106.

71 Goodrich, M. A., & Schultz, A. C. (2007). Human-robot interaction: a survey. Foundations and Trends in Human-Computer Interaction, 1(3), 240.

72 Sheridan, T. B., & Verplank, W. L. (1978). Human and computer control of undersea teleoperators. Cambridge, MA: Massachusetts Institute of Technology, Man-Machine Systems Laboratory.


still ‘the most widely cited description’ of robots’ level of autonomy73. On Sheridan’s

scale, ‘there is a continuum from the entity being completely controlled by a human … through the entity being completely autonomous and not requiring input or approval of its actions from a human before taking actions’74. The scale has ten

degrees that range from ‘computer offers no assistance; human does it all’ to ‘computer decides everything and acts autonomously, ignoring the human’75. Implicit

in this continuum are two elements: control over decision-making and control over action. If a robot takes decisions without human intervention, and humans do not control robot action, then that robot is fully autonomous76.

3.2 The significance of autonomy

As discussed earlier, autonomy is one of those human capacities that are attributed to a legal person. It is a characteristic that is either assumed to be intrinsically present in a legal person or imputed to that legal person when granted international legal personality. This autonomy – or sovereignty, when used in the sense of self-determination – has a counterpart according to Gottfried Wilhelm Leibniz. In fact,

ILP was used … to give sovereignty a counterpart in responsibility. Sovereignty was thus not only relative in the sense that it applied to those with a sufficient degree of force and related competences, but also in the sense that sovereign powers were externally restricted by the law of nations. By having ILP, rulers not only had powers or rights, but also the legal responsibility to use their authority in accordance with the law of nations as springing from justice77

Part of the significance of autonomy thus lies in the fact that it is a counterpart to responsibility. Self-determination, therefore, comes at a price in international law: one is free to make choices, but these choices bear consequences and are regulated by the international legal system. Translated to the level of ILP, this means that ‘the only direct consequence of possessing personality in international law is the capacity to invoke responsibility and to be held responsible for internationally wrongful acts’78.

73 Goodrich, M. A., & Schultz, A. C. (note 70), 217.

74 Ibid. 217-219.

75 Ibid. 218.

76 A complementary definition of full autonomy focuses on situations in which humans and robots are involved in peer-to-peer collaboration ‘in which each agent (human and [robot]) contributes what it is best suited at the most appropriate time’. The robot possesses ‘dynamic autonomy’, and does interact with humans even though it is not strictly necessary. However, for the sake of clarity the definition by Sheridan and Verplank will be used in this thesis. Hearst, M. A., Allen, J., Guinn, C., & Horvitz, E. (1999). Mixed-initiative interaction: Trends and controversies. IEEE Intelligent Systems, 14(5), 14-23.

Autonomy is an important factor not only for the distribution of responsibility but also for accountability and liability questions. Accountability carries a more political and moral connotation than responsibility: it can encompass the legal notion of responsibility, but it can also signify accountability beyond that79. Liability is a legal term, but unlike responsibility, it exists in international law independently of whether a breach of international law has taken place: it is the consequence of an act or an omission (‘whether that result is the occurrence of a ‘risk’ or even simply of ‘damage’ itself’80) that determines whether a subject of international law can be held liable. Both accountability and liability are also grounded in free choice and self-determination: it is the choice to carry out a certain conduct that ultimately leads to responsibility, accountability, and liability. In a discussion on ILP and the position of fully autonomous robots in international law, it is, therefore, important to remember that autonomy and responsibility as well as accountability and liability questions are closely tied together.

3.3 Technological background

To understand what the full autonomy of robots entails, a brief explanation of the technology underpinning their autonomy is imperative. There are two common autonomy approaches, which are now often combined to create ‘hybrid architectures’81. The first approach is the ‘sense-plan-act model of decision-making’82 – the model’s name refers to the steps that lead up to the action of the fully autonomous robot. The model ‘is typified by artificial intelligence techniques, such as logics and planning algorithms [and] can also incorporate control theoretic concepts’83. In the mid-1980s a new robotics paradigm emerged, which is ‘behavior-based robotics’84:

In this paradigm, behavior is generated from a set of carefully designed autonomy modules that are then integrated to create an emergent system … These modules generate reactive behaviors that map sensors directly to actions, sometimes with no intervening internal representations85

The aforementioned two models are often combined in one hybrid architecture by building ‘sense-think-act models on top of a behavior-based substrate’86. Researchers have developed systems in which ‘the low-level reactivity is separated from higher level reasoning about plans and goals’87. In other words, these systems have ‘the ability to switch between deliberative and more reactive modes of reasoning via a learning mechanism that caches deliberative results’88. This means that robots are capable of multifaceted high-level reasoning. In addition to robotic control algorithms, ‘sensors, sensor-processing, and reasoning algorithms’ have been improved upon, which is ‘best represented by the success of the field of probabilistic robotics, typified by probabilistic algorithms for localization and mapping’89.

78 Portmann, R. (note 2), 277.

79 Bovens, M. (1998). The Quest for Responsibility: Accountability and Citizenship in Complex Organisations. Cambridge: Cambridge University Press, 25.

80 Pellet, A. (2010). The definition of responsibility in international law. In: Crawford, J., Pellet, A., Olleson, S. & Parlett, K. (Eds.) The Law of International Responsibility. Oxford: Oxford University Press, 10.

81 Goodrich, M. A., & Schultz, A. C. (note 70), 220.
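The hybrid architecture described above can be caricatured in a few lines of code. The module names, sensor keys, and planning rule below are invented for illustration and are not drawn from any of the cited systems; the sketch only shows the layering idea, with a reactive substrate mapping sensors directly to actions and a deliberative layer that reasons about goals and caches its results:

```python
# Toy sketch (assumed, simplified) of a hybrid deliberative/reactive
# architecture: reactive behaviors map sensors directly to actions and take
# precedence; a deliberative layer plans toward goals and caches results.

def reactive_layer(sensors):
    """Behavior-based substrate: sensors map directly to actions,
    with no intervening internal representation."""
    if sensors.get("obstacle_ahead"):
        return "turn_left"  # reflex-like behavior
    return None             # no reactive behavior triggered

class DeliberativeLayer:
    """Higher-level reasoning about plans and goals, with a cache of
    earlier deliberative results (the 'learning mechanism' above)."""
    def __init__(self):
        self.cache = {}

    def plan(self, goal):
        if goal not in self.cache:                    # deliberate only once
            self.cache[goal] = "move_toward_" + goal  # stand-in for planning
        return self.cache[goal]

def hybrid_step(sensors, goal, deliberator):
    # Reactive behaviors preempt deliberation; otherwise follow the plan.
    return reactive_layer(sensors) or deliberator.plan(goal)

d = DeliberativeLayer()
print(hybrid_step({"obstacle_ahead": True}, "waypoint", d))   # turn_left
print(hybrid_step({"obstacle_ahead": False}, "waypoint", d))  # move_toward_waypoint
```

The design point is the precedence rule: low-level reactivity runs underneath and can override the slower, goal-directed reasoning layered on top of it.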

3.4 Robots and the human capacities of ILP

Robots possess certain capacities that can be said to resemble, mirror, and/or match the human capacities on which ILP is grounded. They are clearly intelligent and rational, as they make choices based on complex reasoning capabilities: they come to fully autonomous decisions by switching between a reactive mode of reasoning (by which they ‘will move the state of the world in a direction that should cause the desired events’90) and a more deliberative mode of reasoning ‘about plans and goals’91. Intentionality implies having ‘the capacity to purposive action’92, of which robots are also capable. This does not mean that robots are ‘intentional in a philosophically rigorous way … [or] that the actions are derived from a will that is free on all levels of abstraction’93 but rather that their deliberative and reactive modes of reasoning culminate in a predetermined goal. The following case study delves deeper into the robots’ capacities.

83 Goodrich, M. A., & Schultz, A. C. (note 70), 220; Control theory ‘deals with the behavior of dynamic systems’: Simrock, S. (2008). Control theory, 73.

84 See for example Arkin, R. C. (1998). Behavior-Based Robotics. Cambridge, MA, USA: The MIT Press; Brooks, R. (1986). A robust layered control system for a mobile robot. IEEE Journal on Robotics and Automation, 2(1), 14-23.

85 Goodrich, M. A., & Schultz, A. C. (note 70), 220.

86 Ibid. 220.

87 Ibid. 220.

88 Bonasso, R. P., Firby, R. J., Gat, E., Kortenkamp, D., Miller, D. P., & Slack, M. G. (1997). Experiences with an architecture for intelligent, reactive agents. Journal of Experimental & Theoretical Artificial Intelligence, 9(2-3), 253.

4. Case study: fully autonomous weapons

4.1 The significance of fully autonomous weapons

Fully autonomous weapons are currently the most hotly debated issue concerning the intersection of robots and international law. Their employment speaks to the imagination and is not only the subject of a raging legal debate but also highly visible in popular culture. What is discussed here are the capabilities of fully autonomous weapons, the questions that arise in the debate about the desirability of their employment, and what these questions reveal about the relevance and usefulness of ILP for the study of technological developments and international law.

4.2 The capabilities of fully autonomous weapons

A widely used definition of a fully autonomous weapon – endorsed by parties such as Human Rights Watch, the US Department of Defense and the UN Special Rapporteur – is a robot that

once activated, can select and engage targets without further intervention by a human operator. The important element is that the robot has an autonomous “choice” regarding selection of a target and the use of lethal force94

90 Bonasso, R. P., Firby, R. J., Gat, E., Kortenkamp, D., Miller, D. P., & Slack, M. G. (note 87), 239.

91 Goodrich, M. A., & Schultz, A. C. (note 70), 220.

92 Cf. Von Gierke, Nijman, J. E. (2010) (note 1), 39.

93 Sullins, J. P. (2006). When is a robot a moral agent? International Review of Information Ethics, 6(12), 23-30.

94 Heyns, C. (2013). Report of the Special Rapporteur on Extrajudicial, Summary, and Arbitrary Execution, United Nations Human Rights Council. 23rd Session, April 9, 7.

What applies to robots in general therefore also applies to fully autonomous weapons: control over decision-making and control over action are the constituting elements of full autonomy. Human Rights Watch further distinguishes three categories of weapons: ‘Human-in-the-Loop Weapons’ (human command), ‘Human-on-the-Loop Weapons’ (human operator capable of overriding the robot) and ‘Human-out-of-the-Loop Weapons’ (no human interaction or input whatsoever)95. Fully autonomous weapons only comprise the latter two categories: ‘both out-of-the-loop weapons and those that allow a human on the loop, but that are effectively out-of-the-loop weapons because the supervision is so limited’96. At the time of writing its report (in 2012), Human Rights Watch noted that fully autonomous weapons did not yet exist97. It is indeed true that these weapons have not been employed to this day, but they do exist. South Korean engineers have, for example, created the Super aEgis II. This robot is capable of identifying targets and initially possessed an auto-firing system, which was removed after customers demanded a human safeguard98. The robot is, therefore, ‘theoretically without the need for human mediation’99. In addition, Arkin et al. indicated that these weapons are capable of deceiving humans and hiding their true intentions. In the example provided, a robot is engaged in military activities on the ground and attempts to hide from an enemy by selecting one of three different corridors. It is able to trick the (human) enemy into believing it has entered a different corridor instead of the one it has actually entered100. Finally, it has been suggested that soon, there might be not only hyper-intelligent robots that are capable of making autonomous choices of who they should kill but also robots that can be developed as moral agents101. For example, John P. Sullins identified three simple criteria that would make a robot a moral agent: ‘if it is (1) in a position of responsibility relative to some other moral agent, (2) has a significant degree of autonomy, and (3) can exhibit some loose sort of intentional behavior’102. Others argue that developing robots into moral agents is an ‘awe-inspiring challenge’, yet lay down practical steps that would indeed grant them moral agency103.

95 Human Rights Watch (note 7), 2.

96 Ibid. 2.

97 Ibid. 3.

98 BBC (16 July 2015). Killer robots: the soldiers that never sleep. http://www.bbc.com/future/story/20150715-killer-robots-the-soldiers-that-never-sleep (last visited 26/05/2017).

99 Ibid.

100 Arkin, R. C., Ulam, P., & Wagner, A. R. (note 6), 583.

101 For an overview, see Verbeek, P.-P. (2006). Materializing morality: Design ethics and technological mediation. Science, Technology & Human Values, 31, 361-380; Malle, B. F., & Scheutz, M. (2014). Moral competence in social robots. In Ethics in Science, Technology and Engineering, 2014 IEEE International Symposium, 1-6; Coeckelbergh, M. (2010). Moral appearances: emotions, robots, and human morality. Ethics and Information Technology, 12(3), 235-241.

4.3 The desirability of the employment of fully autonomous weapons

There are broadly two opposing views on how international law should approach fully autonomous weapons. There are those authors who find the existence of fully autonomous weapons unequivocally undesirable, and there are those who acknowledge the risks associated with the weapons but also recognize the advantages they might have. Let us start with the position that for fully autonomous weapons a ‘preemptive prohibition on their development and use is needed’104. Human Rights Watch is in the vanguard of this position. It argues that ‘robots with complete autonomy would be incapable of meeting international humanitarian law standards’ and thus would not be able to protect civilian lives105. First, robots would not be able to detect human intentions and thus cannot make proper distinctions106. Second, robots would not be capable of properly executing proportionality tests, as ‘a robot could not be programmed to duplicate the psychological processes in human judgment that are necessary to assess proportionality’107. This same argument applies to military necessity tests108. In addition, the lack of human emotion would make robots especially dangerous to civilians, as emotions such as empathy and pity form an important safeguard against killing civilians109. Therefore, Human Rights Watch argues for imposing ‘meaningful human control’ on robots. This human control would effectively take away the robots’ full autonomy. Another argument against the use of fully autonomous weapons is the responsibility gap that would arise if the robots were to commit unlawful acts: ‘there is no fair and effective way to assign legal responsibility’110. This resonates with Hersch Lauterpacht’s claim that ‘there is cogency in the view that unless responsibility is imputed to persons of flesh and blood, it rests with no one’111.

102 Arkin, R. C. (2008). Governing lethal behavior: embedding ethics in a hybrid deliberative/reactive robot architecture. In Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction, ACM, 125; Sullins, J. P. (note 92), 28.

103 Malle, B. F., & Scheutz, M. (note 100), 6.

104 Human Rights Watch (note 7), 1.

105 Ibid. 3.

106 Ibid. 31; Sharkey, N. E. (2012). Killing made easy: From joysticks to politics. Robot Ethics: The Ethical and Social Implications of Robotics, 118; Guarini, M., & Bello, P. (2012). Robotic warfare: some challenges in moving from noncivilian to civilian theaters. Robot Ethics: The Ethical and Social Implications of Robotics, 129, 138.

107 Human Rights Watch (note 7), 3; Sharkey, N. (2007). Automated killers and the computing profession. Computer, 40(11), 122.

108 Human Rights Watch (note 7), 34-35.

109 Ibid. 37-38; Glover, J. (2000). Humanity: A Moral History of the Twentieth Century. New Haven, CT: Yale University Press, 48.

The other position acknowledges the risk associated with fully autonomous weapons, yet it also calls for considering the advantages these weapons might have. First, responding to the argument that robots would not be able to adequately protect human lives, Arkin et al. explained that it is possible to build an ethical constraint governor. This would restrain ‘the actions of a lethal autonomous system so as to abide within the internationally agreed upon Laws of War (LOW)’112. Some authors even contended that robots might be capable of performing ‘better than humans do on the battlefield, particularly with respect to reducing unlawful behavior or war crimes’113. There are a number of reasons for this: robots do not have to be cautious, their behavior cannot be clouded by emotions such as anger and despair, and ‘they can integrate more information from more sources much faster’114. In fact, some even go so far as arguing that if robots were indeed better at sparing civilian lives, states might actually be under an obligation to employ fully autonomous weapons115. Finally, it is argued that if an ‘existing set of ethical policies (e.g., LOW and ROE) is replicated through a robot’s behavior, it enforces a particular morality in the robot itself’116. A robot is expected to act ethically if it possesses an ethical system.

4.4 Questions beyond ILP

As mentioned earlier, the full autonomy of fully autonomous weapons entails the combination of autonomous decision-making and action. The robots are able to make free choices and they are intentional in the sense that their deliberative and reactive modes of reasoning culminate in a predetermined goal. When they are developed as moral agents, they are even capable of reasoning toward an intention that is morally right. Robots are intelligent and rational; according to some, they might even outperform humans in this sense. However, the debate about the desirability of the employment of the weapons seems to center on other factors. Both positions (in favor of and against the employment of the weapons) seem to take the fact that robots are autonomous in action and choice as well as rational, intelligent, and intentional – the human capacities on which ILP is grounded – as a given. These are not the capacities under scrutiny; rather, the debate concentrates on human emotions, their significance, and the extent to which robots might be able to replicate them. In addition, the debate focuses on interaction with humans: would robots be able to read human intentions and be able to make subjective judgments based on interaction with humans?

110 Human Rights Watch (note 7), 42.

111 Lauterpacht, H. (1948). The subjects of the law of nations. Law Quarterly Review, 64(253), 107.

112 Arkin, R. C., Ulam, P., & Wagner, A. R. (note 6), 572.

113 Lin, P., Bekey, G., & Abney, K. (2008). Autonomous Military Robotics: Risk, Ethics, and Design. California Polytechnic State University, San Luis Obispo, 2; Arkin, R. C. (2008) (note 101).

114 Arkin, R. C. (2008) (note 101); Schmitt, M. N., & Thurnher, J. S. (note 6), 239.

115 Herbach, J. D. (2012). Into the Caves of Steel: Precaution, Cognition and Robotic Weapon Systems Under the International Law of Armed Conflict. Amsterdam Law Forum, 4, 12-14.

116 Arkin, R. C. (2008) (note 101), 125; Asaro, P. M. (2006). What should we want from a robot ethic. International Review of Information Ethics, 6(12), 9-16.

Robots already resemble, mirror, and/or match the human capacities on which ILP is grounded. In that light, one can argue that it is not at all unlikely that robots might develop into internationally legal persons at some point. However, the debate on the fully autonomous weapons suggests that there are other dimensions to be considered with regard to robots’ position in international law. Human emotions and how humans normally interact with each other come up when the position of robots in international law is discussed. In addition, the supposed ‘responsibility gap’ appears in the debate. As discussed earlier, responsibility is a counterpart to autonomy and self-determination. Robots are becoming increasingly autonomous and can equal or outperform humans at some levels. Therefore, logically, robots would also receive a level of responsibility. Nevertheless, Lauterpacht’s claim that ‘there is cogency in the view that unless responsibility is imputed to persons of flesh and blood, it rests with no one’117 resonates. Responsibility is of relevance here not only because it is

philosophically related to autonomy and self-determination, but also because it is a great operational tool for visualizing an abstract debate. If a robot breaches international law, how should the law proceed? Should the robot be put in prison, or should it be fined? These options seem preposterous: a robot would not care if it were put in prison or fined. Should international law then regulate the programming of robots to prevent any transgressions, as the proponents of the employment of fully autonomous weapons seem to suggest? Perhaps, but exactly how free and autonomous would robots be then? Here, we arrive at the fringes of the concept of ILP: it seems that ILP cannot adequately answer all questions that robots evoke with regard to their position in international law. Some of the questions that arise go beyond the human capacities on which ILP is grounded. This claim is elaborated further in the next chapter.

117 Lauterpacht, H. (1948) (note 110), 107.

5. ILP and the position of robots in international law

5.1 Boundaries between humans and robots

Before answering the current study’s two research questions, one crucial observation has to be made: the study has focused on robots as distinct entities – as distinct from humans. Conceptually, this makes sense: in our day and age, robots are not humans, and vice versa. Nevertheless, we might be heading toward a future in which a discussion about robots is not only about ‘the other’. Elon Musk, the CEO of Tesla and SpaceX, started a venture called Neuralink, which will attempt to merge the human brain with AI. Although only at its earliest stages of development, this venture has the eventual purpose of ‘helping human beings merge with software and keep pace with advancements in artificial intelligence’118. This would have far-reaching implications for dissolving the boundary between humans and robots. Although the conceptual separation of humans and robots is currently descriptively accurate and allows for a close analysis of legal questions surrounding robots, it is important to consider that discussions about international law and robots in the future might be not only about ‘them’ but also about ‘us’.

The claim has been made that the concept of ILP might not be able to adequately answer questions that technological developments evoke. However, let us start at the beginning and go back to the first research question. In what ways can ILP theory inform us about the position of robots in international law? The three non-traditional ILP theories have been introduced precisely because they are highly innovative in their accommodation of unconventional actors. What kinds of answers emerge when ILP theory is used as a yardstick for the position of robots in international law?

118 The Verge (27 March 2017) Elon Musk launches Neuralink, a venture to merge the human brain with AI. https://www.theverge.com/2017/3/27/15077864/elon-musk-neuralink-brain-computer-interface-ai-cyborgs (last visited 30/05/2017).


5.2 ILP theory applied to robots

As discussed above, the traditional ILP theory holds that ‘the international legal order recognizes a limited number of entities — primarily States — as the bearers of rights and obligations’119. This theory’s underlying notions are the state-only conception and the recognition conception. Its outlook on the position of robots on the international legal plane is easy to deduce: they are not internationally legal persons unless the international legal order – as the expression of the common will of states – recognizes them to be such. Only then will they have rights and obligations and the capacity to bring claims. Since individuals are not even recognized as internationally legal persons in the traditional perspective120, it is hardly imaginable that robots will gain this status in the near future.

However, non-traditional ILP theories might be able to shed more light on the position of robots in international law. The first is the endogenous perspective, and the ‘factual existence’ of Gierke121. Factual existence entails that ‘the legal person is a conscious and willful metaphysical person, a super-individual being with a life, mind, and will of its own’122. It has been established that robots already resemble, mirror, and/or match the human capacities on which ILP is grounded. It is not at all a stretch to then conclude that robots might be perceived as having ‘factual existence’. The second non-traditional ILP theory on which the current study focuses is the actor conception, in which all entities that exercise ‘effective power’ in the international ‘decision-making process’ are considered internationally legal persons123. Does this also foreshadow a potential recognition of robots as participants? Today, robots do not yet seem to hold any effective power in the global arena. Nevertheless, the actor conception is insightful because it focuses less on what an entity is than on what it does and what role it has to play. This opens up the possibility of more advanced robots being perceived as participants one day. Finally, the third non-traditional ILP theory discussed is the theory by Clapham that focuses on the scope of obligations124. This theory is related to the formal conception of ILP as formulated by Kelsen, who perceives legal personality as a mere descriptive tool125. As mentioned earlier, fully autonomous weapons can be programmed to have an ethical governor and thus can be programmed to adhere to the law of armed conflict126. If robots were also to become the addressees of international norms, this would automatically render them internationally legal persons. Since this theory is merely descriptive, it is logically not sound to predict anything on its basis.

119 Bianchi, A. (note 3), 40.

120 ‘Individuals are not subjects of international law; they are its objects’. Lauterpacht, H. (1970) (note 22), 136.

121 As one of the three ways in which a legal person can come into being: Nijman, J. E. (2010) (note 1), 33.

122 Ibid. 34.

5.3 Beyond the human capacities of ILP

In addition to the traditional and non-traditional theories, the human capacities on which ILP is grounded have also been discussed. Certain human capacities are attributed to legal persons. When one confronts robots with those capacities, it is difficult to see in what sense they actually differ. They are autonomous, intelligent, rational, and intentional (they might even be moral in their intentionality), and they can make autonomous choices. From the standpoint of ILP theory, it is, therefore, not at all unlikely that robots might be perceived as internationally legal persons in the near future. They might even be capable of making legal claims – like the scenario in Lawrence Solum’s essay127 or like one of the protagonists in the TV series Humans128. Indeed, it is difficult to see why robots would not soon become internationally legal persons. This would significantly change their position in international law – from objects to subjects of international law. Yet, I make the claim that this prospect is not as likely as ILP theory makes it seem. There are factors – and this is illustrated by the case study of the fully autonomous weapons – beyond the human capacities on which ILP is grounded that are relevant with regard to the position of robots in international law.

According to the ICJ, in its famous Reparation for Injuries Advisory Opinion, the nature of the subjects of international law ‘depends upon the need of the community’129. If so, is ILP equipped to deal with technological developments? My claim is that ILP is not capable of adequately addressing the questions that these developments evoke. There are three issues that arise when one is confronted with the technological reality of robots that are capable of doing what humans can do and yet are distinctly not human. These are emotions, human-to-human interaction, and accountability (as an overarching term for responsibility, accountability and liability questions). Emotions are not usually perceived as an essential element in the constitution of legal personality. Nevertheless, in the debate about fully autonomous weapons, it seems that for both positions the absence of human emotions in robots is one of the critical factors for giving them a certain position in international law. Their lack of emotions is highlighted to plead in favor of either prohibiting them as indiscriminate weapons or giving them a position in which they are employed but in which their programming is regulated by international law. Human-to-human interaction is also not usually seen as important in determining an entity’s personality. However, it needs to be considered that the way robots interact with humans is fundamentally different from human-to-human interaction, in the sense that they lack familiarity and a certain mutual understanding. It is precisely this lack of understanding that is cited to argue that robots should not be able to be fully autonomous, because their inability to read human intentions and to make subjective judgments would render them dangerous for humans.

124 Bianchi, A. (note 3), 43.

125 Portmann, R. (note 2), 174.

126 Arkin, R. C., Ulam, P., & Wagner, A. R. (note 6), 572.

127 Solum, L. B. (note 63).

128 Niska, a robot designed to be a sex slave, kills one of her customers in a brothel and subsequently requests to stand trial as a human (Humans, season 2, episode 1).

In addition, accountability and the perceived ‘responsibility gap’ are also critical factors in the determination of which position fully autonomous weapons should have in international law. Accountability is connected to questions of emotions and interaction. The most satisfying form of accountability is when another human of flesh and blood can be held accountable. Throwing robots in prison or giving them a fine does not seem satisfactory, as they would not 1) care (lack of emotions) and 2) give any relatable reaction. What is interesting for the accountability question is the suggestion by Peter Asaro of treating robots as ‘quasi-persons’ (comparable to children and the mentally disabled) having some rights and obligations but not all that are borne by full legal persons130. This is a suggested solution for the responsibility gap that could arise, but it also implicitly addresses the issue that even though robots might meet the human capacities on which ILP is grounded, they are still different. Nevertheless, I contend that this is not a satisfactory way of dealing with the problem: it does not challenge personality theory but rather tries to press robots into a mold that does not currently fit them. It is a solution that is capable of working around the issue at large but not one that is sustainable for a future in which robots and AI will continue to advance.

129 International Court of Justice (1949) (note 25), 178.

One could counter my claim (about the significance of the robots’ lack of emotions and differing mode of interaction) by contending that the peculiarities of robots apply to all legal persons that are not biological persons. After all, do states and corporations have emotions, can they match human-to-human interaction, and is their accountability as satisfying as the accountability of a natural person? Yet, I would argue that robots are fundamentally different in this respect. For example, the state as an institution does not have emotions in itself. Nevertheless, it is made up of people who do. I would not go so far as arguing that states are solely made up of humans or are exclusively shaped by humans and the human experience. Weapons, protocol, ideology, etc. as such are other forces that continually shape the state. Nevertheless, it cannot be denied that a state has human components and that it sometimes carries a human face. States, corporations, and other legal persons have a human element; the same cannot be said of robots. Robots are not humans, and importantly, even though they are not humans, they do meet the human capacities on which ILP is grounded. This is where the true novelty of fully autonomous robots comes in: the fact that something so distinctly non-human is capable of things that are considered human capacities. The modern international legal system, unlike, for example, the ancient Greek system, does not have any experience with treating entirely non-human entities as subjects of the law131. Imagining robots as internationally legal persons seems ill-suited, hence the allusion to exclusively human domains such as emotions and human-to-human interaction.

6. Conclusion and recommendations

6.1 Conclusion

A change is visible in popular culture: we are slowly but surely familiarizing ourselves with a future in which we are no longer the most intelligent entities on Earth. Moreover, a future is imagined in which it is increasingly difficult to discern what exactly sets us apart as a species when robots are becoming more human – and humans perhaps more ‘artificial’132. These are difficult and sometimes even painful topics: do we want to be surpassed, and do we want to lose exclusive ‘access’ to those characteristics that we deem unequivocally human? In the current study, I argue that it is time for international law to start anticipating questions surrounding the position of robots. For the sake of the legal analysis – and because it is currently the most accurate description – robots are treated here as separate entities that are distinct from humans.

131 In ancient Greek law and medieval times, trees and animals were occasionally tried: Klabbers, J. (note 10), 44, note 11.

The current study has asked two questions. The first is as follows: in what ways can ILP theory inform us about the position of robots in international law? Several ILP theories are introduced (one traditional and three non-traditional), and the human capacities on which ILP is grounded are examined. Viewed strictly from the standpoint of ILP theory, it actually seems quite likely that robots will one day become internationally legal persons. They are autonomous in action and choice as well as rational, intelligent, and intentional. In addition, the non-traditional theories that are most capable of incorporating unconventional actors into their frameworks also indicate that robots could move from being objects to being subjects of international law. However, I claim that when robots are confronted with the human capacities of ILP, we arrive at the fringes of ILP: some of the questions that arise go beyond the human capacities on which it is grounded. This is where the second research question comes in, which asks: in what ways do robots illustrate the usefulness and relevance of the concept of ILP for the study of technological developments and international law? This issue is emphasized by the case study, which analyzed fully autonomous weapons and the legal debate that rages around the desirability of their employment on the battlefield. The debate broadly features two positions: one for and one against the employment of fully autonomous weapons. Both sides seem to take for granted that robots are autonomous in action and choice as well as rational, intelligent, and intentional; this, therefore, is not what the debate centers on. Rather, it concentrates on three issues: emotions, human-to-human interaction, and accountability (as an overarching term for responsibility, accountability, and liability).

132 Taking note of the Neuralink venture, which will attempt to merge the human brain with AI.
