
Master thesis

LLM in Public International Law

Academic year 2018-2019

TOWARDS THE RECOGNITION OF AUTONOMOUS ROBOTS AS

SUBJECTS OF INTERNATIONAL LAW

Pauline Lavarenne

Student ID: 12169757
pauline.lavarenne@u-psud.fr

Supervised by Prof. Yvonne Donders


ABSTRACT

With the converging advances in biotech and infotech, the 21st century is likely to witness one of the most challenging periods for humanity. The concerns surrounding the technological revolution are mostly driven by issues arising from the development of artificial intelligence. As robots vested with such autonomy tend to be considered both as objects and, to some extent, as human-like, they completely disrupt our established legal models. The possibility of envisaging them as legal subjects rather than objects has been sparsely discussed and has remained focused on domestic approaches, even though, in such a globalized world, determining the appropriate status for incoming intelligent machines requires international cooperation. Whether artificially intelligent robots should be recognized as subjects of international law might become a serious question in the years to come, one that this paper attempts to clarify. By first exploring the concept of international personhood and the reasons underlying the recognition of certain entities as legal persons, this paper then dissociates the paramount elements of international legal personality – rights and duties – in order to determine their applicability to Autonomous robots. As the issue of obligations is closely linked to responsibility, a study of the civil and criminal regimes of responsibility is undertaken to assess whether it would be sufficient to regard artificially intelligent robots as objects or whether they should be subjected to another regime, accordingly requiring the introduction of Autonomous robots' duties. This paper finally examines the controversial eventuality of granting basic rights to independently intelligent machines, rights which have so far been denied to other forms of living beings.


TABLE OF CONTENTS

ABSTRACT ...1

TABLE OF CONTENTS...2

INTRODUCTION ...3

1. The concept of international legal personality and the emergence of tailor-made personhoods ...7

1.1. The attributes of international legal personality: rights, duties and legal capacity ...7

1.2. States and international organisations as traditional subjects of international law ...8

1.3. Individuals as controversial subjects of international law ... 10

1.4. The progressive motion towards the adoption of alternative international and domestic legal personalities ... 11

2. The complex issue of AI agents’ responsibility, between objects and ‘autonomous’ entities ... 14

2.1. Protecting human beings from AI agents through the introduction of international duties ... 14

2.2. Autonomous robots within regimes of civil responsibility ... 17

2.3. Autonomous robots within regimes of criminal responsibility ... 19

3. The controversial issue of Autonomous robots’ protection and the granting of related ‘human’ rights ... 23

3.1. AI agents and norms of protection: bearers of rights or beneficiaries of others’ duties? ... 24

3.2. AI agents as beneficiaries of ‘human’ rights? ... 28

CONCLUDING REMARKS ... 31


Introduction

Humanity has witnessed more technological innovations in the last fifty years than it had ever seen before over such a short period of time. The biotech and infotech revolution is far from over, and it forecasts drastic changes that might disrupt the established social, political and economic structures, especially with the improvement and increasing use of artificial intelligence. Certainly, humankind has already overcome significant shifts deriving from technological developments. When the Industrial Revolution started, triggered by a broad movement towards automation in industrial production, the outcomes were unpredictable. It was one of the key elements that finally led to the adoption of completely new structures, since the ones in place at that time could no longer cope with the social, political and economic changes that occurred1. But this time, with the development of artificial intelligence, setting up new socio-political structures will not necessarily be sufficient to cope with the massive upcoming societal changes, for in the 21st century automation will grant machines not only a physical advantage, as was the case during the Industrial Revolution, but also a cognitive one. At the dawn of the 22nd century, robots are likely to outperform humans in both physical and cognitive abilities, which might lead to radical transformations in the job market, leisure activities and social connections.

One of the major concerns arising from artificial intelligence development lies in the fear of creating uncontrollable self-learning machines which would end the human era. The hypothetical point in time at which an artificial intelligence could self-improve in a runaway reaction cycle, resulting in an intelligence explosion2 far surpassing human intelligence, is called the technological singularity3. The scientific community appears to be divided regarding the possibility of attaining such technological growth. While some remarkable scientists, such as Stephen Hawking and Elon Musk4, have issued warnings concerning the motion towards such an evolution, others, like Luc Julia5, strongly affirm that the genesis of super-developed artificial intelligence is far from happening. However, long before such technological developments could even be scientifically reckoned as technically achievable, science-fiction

1 Harari, 21 Lessons for the 21st Century, 2018, 33

2 NASA, Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace, 1993/CP-10129, 11–22
3 Shanahan, The Technological Singularity, MIT Press 2015, 233; Ulam, Tribute to John von Neumann, Bulletin of the American Mathematical Society 1958/5, 64

4 Open letter, Research Priorities for Robust and Beneficial Artificial Intelligence, https://futureoflife.org; Cellan-Jones, Hawking warns artificial intelligence could end mankind, BBC 2-12-2014, https://www.bbc.com; Sparkes, Top scientists call for caution over artificial intelligence, The Telegraph 13-01-2015, https://www.telegraph.co.uk


authors did not miss the opportunity and took it upon themselves to feed public fears and curiosity. Literature and cinema have depicted worlds where robots develop emotions and hatred towards their creators, inciting them to rebel against humans, sometimes ending in the domination of the whole Homo sapiens species. From Frankenstein6 to the Matrix trilogy7, humans have learned to fear machines endowed with artificial intelligence.

Nonetheless, science-fiction must not be confused with reality. In order not to be misled regarding the challenges raised by artificial intelligence, it is first necessary to understand what artificial intelligence is and how it works. Briefly, artificial intelligence is a programming technique, at the intersection of biotech and infotech, that tries to mimic how the human brain works. Accordingly, the development of artificial intelligence widely depends on bioresearch. As the mysteries of the human brain are unlikely to be deciphered for at least a few more decades, creating an artificial intelligence as complex and complete as the Homo sapiens brain will be possible only in a distant future.

At present, a robot provided with artificial intelligence, commonly called an ‘agent’ in the literature8, might be able to analyse different inputs – often data regarding the robot’s environment – and respond accordingly by performing a certain task, which constitutes the program’s output. Hitherto, the most effective forms of artificial intelligence use deep-learning techniques, based on the latest machine-learning methods, which still require training with humans. The astonishing thing about deep-learning techniques lies in the fact that the agent learns by itself from the inputs it obtains and collects. A famous example is the AlphaZero computer program, which in 2017 defeated the world computer chess champion, Stockfish, after learning to play chess in only four hours by playing against itself9. The problem is that, even though engineers understand how deep learning works, the outcome of such processes escapes their control, as an artificial intelligence’s inner workings are almost impossible to track, explain and evaluate10. Consequently, it is hard to predict how deep-learning AI agents11 might evolve if they are given certain tasks.

6 Shelley, Frankenstein or the Modern Prometheus, 1818
7 The Wachowskis, The Matrix, 1999-2003

8 Russell, Norvig, Artificial Intelligence: A Modern Approach, 2010

9 Silver, et al., Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm, December 2017, 4

10 EGE, Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems, 9 March 2018

11 As for this paper, we will consider what will be termed ‘AI agents’ and ‘Autonomous Robots’, defined by their ability to “make decisions about what behaviors to execute based on perceptions and internal states, rather than following a pre-determined action sequence based on pre-programmed commands”. Scheutz, Crowell, The Burden of Embodied Autonomy: Some Reflections on the Social and Ethical Implications of Autonomous Robots, 2007, in Proceedings of Workshop on Roboethics at the International Conference on Robotics and Automation 2007, Rome, Italy


However, even if the available knowledge in biotech and infotech does not currently enable scientists and engineers to create AI agents that can surpass our cognitive abilities in all fields, it should be noted that Autonomous robots already outperform humans when it comes to specialized tasks. The occurrence of a scenario in which machines gain cognitive superiority over Homo sapiens in the foreseeable future should not be set aside. How the world will be shaped in the artificial intelligence era largely depends on technological developments and political decisions. To date, no concrete governmental action has been taken, which leaves the decisional power in the hands of scientists, engineers, and especially the Big Four12. Having other goals and priorities than governments, the Web Giants are barely aware of, or concerned by, the political, economic and social implications of their decisions, and are even less vested with a mandate to represent anyone or take policy decisions on their behalf.

As the impact of the upcoming technological revolution is as unpredictable as it is immeasurable, the political sphere should take action as soon as possible. The mere introduction of national regulatory measures would not be sufficient, since the social and economic consequences will not stop at national borders. Besides, handling the issue at a domestic level would result in a regulatory patchwork, whereas introducing an internationally unified legal framework would prevent the risk of ‘ethics shopping’. In 2018, the European Group on Ethics13 called for the launch of a process towards an “internationally recognised ethical and legal framework for the design, production, use and governance of artificial intelligence, robots and ‘autonomous’ systems”14. As the European Group on Ethics underlined, what is required to address the artificial intelligence issue is ‘wide-ranging’ and ‘inclusive’ international cooperation. Otherwise, the few countries with appropriate resources and knowledge in biotech and infotech would decide for the rest of the world, which seems improper since it would exclude different perspectives and societal interests from the process, while the aftermath would deeply affect the global market, ethical questions and, above all, humankind as a whole.

The easy option could be to legally block any related infotech and biotech developments but, considering the potential immensity of artificial intelligence’s benefits for human society,

12 Google, Apple, Facebook and Amazon

13 The European Group on Ethics in Science and New Technologies (EGE) is an independent body of the President of the European Commission, which advises on all aspects of Commission policies where ethical, societal and fundamental rights issues intersect with the development of science and new technologies. The EGE is composed of 15 members appointed by the President of the European Commission and the Commissioner for Research, Science and Innovation, chosen for their high level of expertise in areas of interest (medicine, health, philosophy, ethics, law).


such an approach would constitute a huge loss for mankind. Moreover, since the code is already open source, trying to prevent further coding improvements appears to be a merely naïve solution, one that engineers could all the more easily circumvent. Accordingly, awareness of the necessity to regulate these matters has emerged on the international plane with, for example, the introduction of discussions concerning principles for artificial intelligence under the auspices of UNESCO15.

One aspect of the debate lies in the delicate issue of defining the status of machines vested with artificial intelligence and, therefore, the eventuality of granting legal personality to such Autonomous robots. In a controversial resolution, the European Parliament, which had already decided to explore the matter, asked the European Commission to examine, analyse and consider all possible solutions regarding artificial intelligence, such as ‘creating a specific legal status for robots in the long run’, and even mentioned an ‘electronic personality’16. While this provision of the resolution paves the way for the recognition of a legal personality for Autonomous robots, or at least for the acknowledgment and exploration of this possibility by governments and legislators, it has been heavily criticized. Artificial intelligence and robotics experts, industry leaders, and law, medical and ethics experts even drew up an open letter to express and explain their concerns regarding this provision17. Nonetheless, the possibility of recognising Autonomous robots’ legal personality should not be disregarded, as some actors already appear to apprehend their legal status in this way. Indeed, in 2017, the Saudi authorities accorded Saudi nationality to a humanoid robot endowed with artificial intelligence, called Sophia18. This symbolic event was considerably mediatized because, beyond the fact that a ‘female’ robot had more rights than female human beings subjected to Islamic laws in Saudi Arabia, the concept of nationality of an entity is closely linked to its capacity to be recognized as a legal person19.

Hence, in this paper, in view of the issues arising from advancements in biotech and infotech that trigger the need for prompt international cooperation, we will consider the challenges and opportunities raised by the potential recognition of Autonomous robots as subjects of international law, notably with regard to issues of AI agents’ rights and liabilities.

15 Amelan, Participants at Global UNESCO Conference on Artificial Intelligence urge rights-based governance of AI, UNESCO NEWS 6-03-2019, https://en.unesco.org/news

16 European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103 (INL)) § 59 f

17 Robotics Open Letter, Open Letter to the European Commission Artificial Intelligence and Robotics, http://www.robotics-openletter.eu

18 Walsh, Saudi Arabia grants citizenship to robot Sophia, DW 28-10-2017, https://www.dw.com/en
19 Shaw, International Law, 2018, Cambridge University Press, 8th ed., 204-205


1. The concept of international legal personality and the emergence of tailor-made personhoods

Before exploring the desirability or relevance of recognizing AI agents as subjects of international law, it is first necessary to define what international legal personality covers. The use of the term ‘personality’ might be misleading, as it can carry plural meanings, notably philosophical and legal. Under the philosophical definition, personality is the combination of all of a person’s character traits, moulding one’s identity and giving rise to a sense of uniqueness proper to each human being. As for the legal meaning, it might be interesting to note that the word itself derives from the Latin persona, meaning ‘mask’, implying that personality could be something distinct from the person herself, whereas intrinsic particularities characterise the philosophical personality. Such an interpretation resonates with the legal approach to the term, according to which each legal person holds an attribute separate from its identity: the sum of its rights and duties enforceable at law – its legal personality20.

1.1. The attributes of international legal personality: rights, duties and legal capacity

In 1949, in the famous Reparations for Injuries case21, the International Court of Justice issued a basic definition of international legal personality, according to which an entity has international legal personality when it is “capable of possessing international rights and duties and [has] the capacity to maintain its rights by bringing international claims”22. Thus, international legal personality results from the gathering of three ingredients: international rights, international obligations and legal capacity on the international plane, allowing the entity in question to secure its rights or be held accountable for the violation of its obligations.

There exist contradictory views amongst international law specialists regarding whether or not legal capacity constitutes a requisite element for recognizing an entity as a subject of international law. While some writers suggest that an entity should have legal personality if

20 Shaw, supra note 19, 155

21 ICJ, Advisory Opinion on the Reparation for injuries suffered in the service of the United Nations, 11 April 1949, ICJ Reports 1949 (‘ICJ, Reparations for Injuries’)


it has the ability either to enforce a claim or to hold rights and duties23, other authors maintain that the legal capacity to enforce rights within the international system remains a necessary component due to its crucial role24. However, given these ongoing academic debates over the necessity of legal capacity for the recognition of international legal personality, we will leave out the issue of enforcement for the purposes of this paper. Indeed, we will focus on international rights and duties – the fundamental elements in the recognition of one’s international legal personality – since legal capacity and enforcement first and foremost require their existence to come into play. The possibility to maintain rights and undertake duties thereafter remains a technical issue, whose methods of implementation will depend on the States’ concerns.

1.2. States and international organisations as traditional subjects of international law

For the purpose of understanding how international personhood could come into play for AI agents, an overview of the major entities holding international legal personality appears adequate. First and foremost, since the mid-17th century and the signing of the Westphalian peace treaties25, States have been the original and primary subjects of international law, being the only entities holding full and objective international legal personality. This means that, while possessing the “totality of international rights and duties recognized by international law”26, States have legal personhood with regard to all other actors on the international plane, entailing that the latter can hold States responsible for their internationally wrongful acts27. Their international legal personality is conditioned by the criteria for statehood mentioned in article 1 of the Montevideo Convention28: the possession by the entity of a permanent population, a defined territory, a government and the capacity to enter into relations with other States29. Hence, once an entity satisfies the Montevideo criteria, it is recognized as a State, holding all international rights and bearing all international duties, for whose breaches it can be held responsible by any other subject of international law.

23 Sørensen, Principes de Droit International Public, 1960, 5, 127; Mosler, The International Society as a Legal Community, 1980, 32

24 Verzijl, International Law in Historical Perspective, 1973, 3
25 Treaty of Munster and Treaty of Osnabruck, 1648

26 ICJ, Reparations for Injuries, 180

27 ILC, Draft Articles on Responsibility of States for Internationally Wrongful Acts (‘ARSIWA’), 2001
28 Montevideo Convention on the Rights and Duties of States, 1933


No entities other than States enjoy such a broad legal status on the international plane. Yet not possessing a set of rights and duties as wide as that of States does not preclude the existence of alternative international legal personalities. Indeed, in the Reparation for Injuries case, the International Court of Justice recognized that there might be a multiplicity of international personhood models, since “subjects of law in any legal system are not necessarily identical in their nature or in the extent of their rights”30. Other subjects of international law have been acknowledged over the years, even if, contrary to States, they have partial and relative international legal personality, meaning that their personhood only entails the rights and duties accorded by States, in relation to them.

For instance, in the aforementioned Reparations for Injuries case, the International Court of Justice expounded that an international organisation can have international personhood31. Whether an international organisation has international legal personality depends on its constitutional status, determined in its constituent treaty, which reflects the will of the States parties. If the issue is not explicitly settled in the establishing instrument, international legal personality can be inferred from indicia of personality such as the powers, the purpose and the practice of the organisation. An international organisation can even have objective international legal personality if this is necessary for the performance of its functions, as was explicitly recognized in the Reparation for Injuries case32.

Beyond recognizing that multiple personhood models exist, the International Court of Justice justified the attribution of international legal personality to the United Nations by relying widely on its indispensable character, after stating that the nature and scope of an entity’s legal status “depends upon the needs of the community”33. It appears that the effectiveness of the whole international system and the needs of the international community were put to the fore in determining the status of an entity as a subject of international law. As the technological revolution might completely disrupt the system in place, it is possible that, in order to contain forthcoming changes that might destabilise the international community, an international legal personality could be recognized for Autonomous robots, even if they would be, as subjects of international law, absolutely distinct from international organisations and States in their nature and in the extent of their rights.

30 ICJ, Reparations for Injuries, 178
31 ICJ, Reparations for Injuries, 179
32 ICJ, Reparations for Injuries, 185
33 ICJ, Reparations for Injuries, 178


1.3. Individuals as controversial subjects of international law

Individuals were traditionally not recognised as having international personhood, even if some rules of international law directly aimed at their protection, through humanitarian law for instance. For a long time, individuals were only considered the subject-matter of certain of these international rules34, especially since individuals’ claims on the international plane could only be brought by a willing State of nationality. Nowadays, as a result of post-war initiatives, it is widely recognized that individuals are subjects of international law or, at least, participants in international law. Indeed, in the aftermath of the Second World War, the international community needed to respond to the atrocities committed and undertook to hold individuals accountable for their acts as well as to grant them rights, paired with direct access to international courts and tribunals.

Investment treaties and human rights treaties allowed individuals to become the direct holders of enforceable rights at the international level35, which probably encouraged the International Court of Justice to recognize implicit individual rights in other treaties36. As for individuals’ duties, they revolve around crimes against international law, the most serious being the crime of genocide, crimes against humanity, war crimes and the crime of aggression37. After the Second World War, the Nuremberg tribunal, designed to try and punish the perpetrators of Nazi atrocities38, clearly established that individuals do have international duties, transcending their national obligations, for whose breaches they must be punished so as to ensure the enforcement of international law39. Over the following years, the Tokyo War Crimes Tribunal40, the International Criminal Tribunal for the former Yugoslavia41 and the International Criminal Tribunal for Rwanda42 maintained this reasoning, which has been

34 O’Connell, International Law, 1966, 106-107

35 For example: Council of Europe, Convention for the Protection of Human Rights and Fundamental Freedoms (‘ECHR’), 1950; ICSID, Convention on the Settlement of Investment Disputes Between States and Nationals of Other States, 1965; UN General Assembly, Optional Protocol to the International Covenant on Civil and Political Rights, 1966

36 ICJ, LaGrand (Germany v. United States of America), 27 June 2001, ICJ Report 2001, 494, 514

37 UN General Assembly, Rome Statute of the International Criminal Court (‘Rome Statute’), 1998, arts. 5-8
38 UN, Charter of the International Military Tribunal, Annex to the Agreement for the prosecution and punishment of the major war criminals of the European Axis ("London Agreement"), 1945

39 IMT, Trial of German Major War Criminals, Proceedings of the International Military Tribunal sitting at Nuremberg, Germany, 1946, 55-56

40 UN, International Military Tribunal for the Far East Charter ("Tokyo Charter"), 1946

41 UN Security Council, Security Council Resolution 827 (1993) [International Criminal Tribunal for the former Yugoslavia (ICTY)], 25 May 1993

42 UN Security Council, Security Council Resolution 955 (1994) [Establishment of the International Criminal Tribunal for Rwanda], 8 November 1994


definitively confirmed by the creation in 1998 of the International Criminal Court, whose Statute explicitly recognises individual criminal responsibility43.

States acted upon the necessity to prevent crimes of genocide and crimes against humanity, such as those committed during the Second World War, from ever occurring again. It was under this necessity that they granted rights and duties to individuals, which enabled the latter to be recognized as subjects of international law. The process of recognizing international legal personality is thus completely different from that in the Reparation for Injuries case, in which international organisations were vested with international legal personality because the international legal system required them to acquire enforceable rights.

At this point, it is important to note that all human beings are subjects of international law, holding international rights and duties, even if their capacity to maintain their rights or to be held responsible on the international plane might vary. For instance, persons suffering from mental illness and minor children need to proceed via a representative to enforce their rights in courts, and either have their criminal responsibility excluded or fall outside penal courts’ jurisdiction44. Despite their altered or restricted mental capacities, disabled persons and infants do have international legal personality, even though their will, conscience and autonomy – considered by the philosophical milieu as proper to human beings – can be dubious, while AI agents that could demonstrate more of such abilities do not have legal personality.

1.4. The progressive motion towards the adoption of alternative international and domestic legal personalities

Hitherto, we have observed that the principal subjects of international law have been recognized as such for different motives. First, States constituted the principal actors in the international sphere; then, individuals’ legal personality was recognized by States, which gave them all the components necessary for their recognition as subjects of international law, whereas international organizations required international personhood to ensure the effectiveness of their own functioning. Regarding the latter rationale, with the increasing role of international corporations in issues related to violations of human rights, and the humanitarian threat stemming from armed groups such as ISIS, there are ongoing debates with

43 Rome Statute, art. 25


regards to whether or not they should become subjects of international law45. Besides an evident interest in granting them rights and standing, the main purpose of granting them legal personality would be to hold them accountable for breaches of certain international duties. All these more or less controversial categories of subjects of international law are composed of, or controlled by, human beings. Beyond the state-centric character of international law, it appears that the system in place remains focused on human beings, from which derives an anthropocentric approach to international legal personality46, with which AI agents do not align.

Across the world, many regulations oriented towards the recognition of alternative legal personalities can be found at the domestic level, breaking away from the traditional anthropocentric approach adopted for international legal personality. For instance, two years ago, the New Zealand legislature recognised the Whanganui River as a legal person47, while Argentine courts granted specific rights to an orangutan named Sandra, accordingly considered a ‘non-human person’, through a writ of amparo in 201548, and, at the beginning of the nineties, India’s highest court held that Hindu temples and idols were “juristic entities […] with the power of suing and being sued”49. With a view to protecting them, certain entities can thus obtain domestic legal personality, which is a first step towards international legal personality, even if they are devoid of any human presence or cognitive abilities. Nevertheless, a major difference between these entities and AI agents is that Autonomous robots are human creations that ought to be human property under liberalist conceptions50. Animals, rivers, idols and temples – even if the tangible expressions of the latter two are the fruit of human work – still originate from nature or some divine entity, both evading human control. Knowing that robots were created to serve humans rather than to enjoy existence independently from our control, there is reluctance amongst experts and industry leaders concerning the possibility of granting an autonomous legal status to these non-organic AI agents51.

45 For international corporations: St Paul, Restatement of the Law Third: The Foreign Relations Law of the United States, 1987, 126

46 An understanding of the concept of international legal personality that does not allow for the existence of subjects of international law which are not composed of or controlled by human beings. Such an approach hence reflects anthropocentric views, which consider human beings as the most significant entity of the universe and interpret the world in terms of human values and experiences

47 Hutchison, The Whanganui River as a Legal Person, 2014; Kieran, The Legal Personality of Rivers, EMA Human Rights, 16-01-2019, http://www.emahumanrights.org

48 ARG, Asociación de Funcionarios y Abogados por los Derechos de Los Animales y Otros Contra GCBA Sobre Amparo, EXPTE A2174-2015/0; Lawrence, Brazier, Legally Human? ‘Novel Beings’ and English Law, 2018
49 IND, Pramatha Nath Mullick v Pradyumnakumar Mullick, 917 F. 2d 278 (7th Cir. 1990); UK, Bumper Development Corporation Ltd. v Commissioner of Police of the Metropolis [1991] 1 WLR 1362, [1991] 4 All ER 638 (CA)

50 Locke, Two Treatises of Government, 1689, Peter Laslett ed. (1988), 285-302
51 Robotics Open Letter, supra note 17


Regarding the reasons and processes which lead to the recognition of certain entities’ personhood, granting legal personality to Autonomous robots appears controversial. This theory is defensible, provided that undeniable proof is substantiated that AI agents are, or should be, bearers of rights and duties enforceable at law. Anyhow, beyond the reasons why, the question remains how, technically, AI robots could be recognised as subjects of law. At the national level, each legal system determines which entities hold legal personhood and the extent of their legal personality within the domestic system, by means of legislation or through courts’ decisions, whereas, due to its special characteristics, the international order lacks a legislative branch or a treaty dealing with the issue of international legal personality.

The evolution of international law usually relies on general principles and customary law, for which widespread acceptance of AI agents as legal persons could be of significant relevance, but this would probably take considerable time. As research in artificial intelligence advances at a tremendous speed, there is probably not much time before issues deriving from AI agents’ unsettled legal status start arising. Alternatively, a somewhat faster process could be for the international community to cooperate and compromise on which legal status should be given to AI agents. This would open the door to a harmonized international legal personality for Autonomous robots, leaving States the opportunity to regulate residual matters of minor importance domestically, especially since international personhood “does not even imply that all […] rights and duties must be upon the international plane”52.

Malcolm Shaw underlined that “[p]ersonality is a relative phenomenon varying with the circumstances”53, and, in view of these considerations, international legal personality stands as a malleable concept depending on the needs of the international system rather than as a strict institution. With regard to the circumstances surrounding the biotech and infotech revolution, recognizing AI agents as subjects of international law in order to adopt a unified status remains a plausible, perhaps necessary, option. It must be proven, however, that according the paramount attributes of legal personality – rights and duties – to Autonomous robots would constitute a genuine benefit for humanity, and not merely a symbolic advance justifying a semblance of legal personhood for AI agents.

52 ICJ, Reparations for injuries, 179
53 Shaw, supra note 19, 156


2. The complex issue of AI agents’ responsibility, between objects and ‘autonomous’ entities

Responsibility is a central element of international personhood, deriving from the violation of an obligation arising under international law. Being closely linked to obligations and duties, which are constituent elements of international personhood according to the Reparation for Injuries definition54, responsibility is, according to Pellet, “at one and the same time an indicator and the consequence of international legal personality”55. Indeed, as seen previously, in the process of identifying individuals as international persons, their capacity to be held responsible before international tribunals has been an indicator, whereas responsibility as a consequence of international legal personality is invoked to justify the recognition of corporations’ and armed groups’ international personhood. However, those responsibilities are based on different liability models, being respectively criminal, humanitarian and civil. Even if the traditional approach to international responsibility had ‘civil’ or ‘private law’ undertones, we are now moving towards a conception that is neither civil nor criminal56, taking a different shape depending on the obligation at hand. Indeed, under the International Law Commission’s approach, responsibility arises from internationally wrongful acts, requiring both the attribution of the wrongful act to the entity and a breach of its international obligations, which is why the nature of the duties themselves influences the responsibility model57.

2.1. Protecting human beings from AI agents through the introduction of international duties

First and foremost, an entity needs to hold international duties before its international responsibility can even be considered. With regard to AI agents, no such obligations currently exist under international law. However, some authors, from the scientific and science-fiction communities, have attempted to prescribe rules and laws to regulate robots’ activities in an ethical way. The most famous attempt is to be found in I, Robot, written by

54 ICJ, Reparation for injuries, 179

55 Crawford, Pellet, Olleson, Parlett, The Law of International Responsibility, Oxford University Press, 2010, 6
56 Supra note 55, 8-9, 12-13; Kelsen, Théorie du droit international public, 1953, 84

57 ARSIWA, arts. 1-2; ILC, Draft Articles on the Responsibility of International Organizations (‘DARIO’), 2011, arts. 3-4


biochemist Asimov, who formulated the Three Laws of Robotics, which stand as a reference model around the world. The First Law, which provides that a robot may not harm a human being or, through inaction, allow a human being to come to harm, overrides the two other Laws. Indeed, according to the Second Law, the machine must obey humans’ orders, except where these orders clash with the First Law. Finally, the Third Law, applicable as long as its effects do not conflict with the two previous Laws, proclaims that the robot must protect its own existence. Asimov later added a fourth law, known as the Zeroth Law, to precede the others, according to which “A robot may not harm humanity, or, by inaction, allow humanity to come to harm”58,

thus targeting humanity as a whole and not merely ‘a human’, contrary to the First Law. Even if these so-called ‘Laws’ are legally non-binding and contain ambiguities and loopholes, they have been followed as guidelines by the scientific and engineering communities and have significantly influenced the design and creation of Autonomous robots.

Although no comparable binding laws exist, the First, Second and Zeroth Laws transcribe principles of human superiority and dignity, echoing human rights values. Human rights are considered the basic rights and freedoms inherent to every human being, such as the right to life59, the prohibition of torture60, the prohibition of discrimination61, the right to respect for private and family life62 and the freedom of thought, conscience and religion63. Due to the fundamental importance of these rights within human systems, it is necessary that AI agents respect human rights, especially civil and political rights, since Autonomous robots might have a considerable impact on their exercise. As for social, economic and cultural rights, even if AI agents could have an impact on the health and education sectors, the exercise of these rights mostly depends on States’ policy actions, over which a machine could gain influence only if it possessed part of the political decisional power.

However, under international human rights law, only States bear obligations arising from human rights treaties, while legal persons within those States are merely indirectly bound by these treaties, as their provisions lack direct horizontal effect64.

Indeed, besides a negative obligation of non-interference with the exercise of the rights

58 Asimov, Runaround, I, Robot, 1950, New York City: Doubleday, The Isaac Asimov Collection ed., 183-216
59 UN General Assembly, Universal Declaration of Human Rights (‘UDHR’), 1948, art. 3; UN General Assembly, International Covenant on Civil and Political Rights (‘ICCPR’), 1966, art. 6; ECHR, art. 2

60 See UDHR, art. 5; ICCPR, art. 7; ECHR, art. 3

61 See UDHR, art.7; ICCPR, art. 26; ECHR, art. 14; Council of Europe, Protocol No. 12 to the Convention for the Protection of Human Rights and Fundamental Freedoms, 2000, art. 1

62 See UDHR, art. 12; ICCPR, art. 17; ECHR, art. 8
63 See UDHR, art. 18; ICCPR, art. 18; ECHR, art. 9

64 Lane, The horizontal effect of international human rights law in practice, 2018, European Journal of Comparative Law and Governance, Vol. 5, No. 1, 5-88


provided for in the treaties, these texts impose positive obligations on States to ‘ensure’ or ‘secure’ to everyone within their jurisdiction the rights and freedoms recognized in the treaty65. These positive obligations are primarily fulfilled through the establishment of a legal framework providing effective remedies and reparation.

With regard to the artificial intelligence issue, States could be required to establish such a regulatory framework to uphold their international human rights obligations. However, the current international human rights law mechanisms might not be adequate. Under this system, each State establishes its own human rights norms, which may vary in content or implementation from one country to another, even if these norms ought to be similar and tend towards the respect of the same values. Indeed, beyond States’ distinct domestic legal systems and historical heritage, these variations may derive from a dissensus within the international community regarding what human rights really encompass. Such differing understandings exist not only between diametrically opposed countries, but also within regional organisations whose contracting parties share strong historical and cultural backgrounds, such as those of the Council of Europe66. If the current system were applied to AI agents, human rights, transcribed dissimilarly in each domestic system, would weigh on engineers, whose code would reflect the human rights approach of their own State. As intelligent machines are likely to spread worldwide, regardless of the human rights ethics of the countries of origin and arrival, using robots whose embedded system clashes with the cultural background of the place of use could lead to significant difficulties.

Since international human rights law as a basis for AI agents’ regulation could only result in an undesirable regulatory patchwork, other alternatives should be considered more seriously. The law of responsibility applicable to classical subjects of international law does not necessarily have to be “transposed wholesale and unmodified” to Autonomous robots, for which we could introduce “special rules required by the[ir] specific nature”67. In order to ensure the protection of human rights while increasing the harmonization of AI agents’ regulation, unified international duties for Autonomous robots could be introduced and enforced, either through an international body or through States. Even if Autonomous robots’ duties emanate from international norms, this does not preclude States from implementing these international duties at the domestic level. Besides, for the purpose of international personhood, it

65 ICCPR, art. 2.1; ECHR, art. 1

66 For example (marriage for same-sex couples): ECHR, art. 8; ECtHR, Vallianatos and others v. Greece, nos. 29381/09 and 32684/09, 13 November 2013


is not required that all duties be enforced at the international level68. To address the aforementioned difficulties of the international human rights law system, these norms could be more precise and include further guidelines as regards their implementation and enforcement.

Conceiving international duties specifically tailored for Autonomous robots constitutes an option, but the question remains whether this alternative would be useful. Indeed, unifying the rules applying to developers and users on issues of responsibility and AI agents would align more conveniently with existing domestic legal models than introducing Autonomous robots’ duties. If that proved insufficient to handle responsibility issues, another aspect of the problem would be how an Autonomous robots’ responsibility model could fit within domestic legal systems. In order to accurately grasp the fundamental deficiencies of current responsibility regimes and the issues arising from the introduction of AI agents’ international duties at the domestic level, this study will explore the current models of civil responsibility and criminal liability that are likely to come into play following a violation of human rights by AI agents. Taking the example of a self-driving car, such a vehicle could cause an accident resulting in significant damage to one’s property, incurring civil responsibility, and in which a human being loses their life, possibly constituting an involuntary homicide triggering criminal liability.

2.2. Autonomous robots within regimes of civil responsibility

The current civil regime of responsibility already offers some solutions regarding civil liability and AI agents. Generally, civil responsibility requires three elements: a breach of an existing duty, a damage or injury incurred, and a causal link between the two. In the case of damage caused by an object, the responsibility thereof lies with the producer or the user, noting that the latter’s responsibility will be set aside because cases of non-reasonable use of the machine are not the subject of this study. Since artificial intelligence programs essentially consist of software, AI agents are usually considered as products69, with an implied warranty whose breach could entail the developers’ responsibility70. Since the law is ill-defined in this area, some authors rather view artificial intelligence as a service71. Anyhow,

68 Shaw, supra note 19, 156

69 US, Ransome v. Wisconsin Elec. Power Co., 275 N.W.2d 641, 647-48 (Wis. 1979); Kingston, Artificial Intelligence and Legal Liability, 2018

70 For example: UK, Sale of Goods Act 1979


if the AI agent performs an act resulting in damage or injury, it certainly fails to meet the product’s or service’s requirements and would engage the developers’ responsibility, as it reveals, respectively, a manufacturing defect or an improper service.

At first sight, holding engineers accountable under this civil regime seems perfectly relevant, especially knowing that most of these deficiencies can be tackled throughout the coding process. Over the last years, at the national as well as the international level, there have been numerous incentives regarding the importance for the scientific community to develop and establish ethical codes and processes, hand in hand with public authorities and civil society72. Indeed, some Autonomous robots’ functions are directly linked to endless philosophical debates, such as the Trolley problem73, extensively analysed by Thomson, which is currently being put in a different perspective in relation to self-driving cars. Even if there is no definitive answer to this ethical problem, it will probably become necessary to determine during the coding process whom to save and whom to kill in extreme cases, or to let the algorithm choose at random. Taking such ethical notions into consideration throughout the programming of AI machines is of paramount importance for developers, as is paying particular attention to avoid transmitting their unconscious biases into the code. Holding developers responsible under the civil regime of responsibility is a way to ensure and reinforce their vigilance on bias and ethics issues.

Nevertheless, the current regime of civil responsibility fails to address the specific characteristics of strong artificial intelligence, achievable through machine-learning and deep-learning techniques. With the latter in particular, AI agents’ actions might go beyond developers’ control and anticipation, especially when the program learns from its own environment. Such an event has already occurred when a chatbot created by Microsoft, named Tay, started generating inappropriate tweets, using antisemitic, racist and sexist language, after less than 24 hours online interacting with the public74. Besides constituting a criminal offence, delivering racist and antisemitic statements could engage one’s civil responsibility for moral damages.

72 Villani, ‘For a Meaningful Artificial Intelligence; Towards a French and European Strategy’ Report, 29 March 2018; Šopova, Audrey Azoulay: Making the most of artificial intelligence, UNESCO Courier, 03-2018, https://en.unesco.org/courier/2018-3

73 The Trolley problem is a thought experiment in ethics which consists in evaluating whether it is more ethical to abstain, letting numerous persons be killed, or to act, killing just one person. This experiment is often related to the assessment of the value of a person’s life (or persons’ lives) in comparison to someone else’s, and sometimes takes into account the age, state of health, etc. of the individuals. Thomson, The Trolley Problem, 1985, Yale Law Journal, Vol. 94, No. 6, 1395-1415

74 Wolf, Miller, Grodzinsky, Why We Should Have Seen That Coming: Comments on Microsoft’s Tay ‘Experiment,’ and Wider Implications, 2017, ACM SIGCAS Computers and Society, Vol. 47, Issue 3, 54–64


Because of the unpredictability of an AI agent’s interactions with its environment and the ensuing outcomes, it seems unfair to establish engineers as the sole bearers of civil responsibility in comparable cases. Even with insurance coverage and a low probability of civil damages occurring, maintaining this responsibility scheme could constitute an obstacle to technological advances. Engineers could refrain from developing artificial intelligence in order to avoid engaging their responsibility, funds and own moral conscience, especially in cases involving considerable physical and moral damages. Moreover, where multiple artificial intelligence systems coexist within one AI agent, determining which developers to hold responsible would be extremely complicated, since tracking the processes of each and every one of those artificial intelligences is nearly impossible. Rather, considering the AI agent as one whole entity for responsibility purposes would make it possible to avoid such complications. Introducing the responsibility of Autonomous robots for breach of duties under the regime of civil responsibility could constitute a fair and satisfactory alternative, from the developers’ as well as the victims’ point of view, and would therefore require attributing duties to AI agents. However, since developers should not evade all ethical responsibilities, it could be worthwhile to conceive international and domestic legal regimes under which engineers’ civil responsibility and Autonomous robots’ civil liability could coexist.

2.3. Autonomous robots within regimes of criminal responsibility

Turning to issues of criminal liability, it is important to note that the regime of criminal responsibility normally requires an actus reus and a mens rea. With regard to AI agents, the actus reus element poses little problem, since it simply consists of an action or an omission, attributable to the machine without much difficulty. On the other hand, the eventuality of attributing a mens rea – the mental intent – to AI agents gives rise to far more complex issues. Beyond necessitating the fulfilment of cumbersome technical requirements – proof of either knowledge and information of the criminal wrong committed, or negligence by comparison to reasonable expectations –, the ascription of a mental intent to Autonomous robots challenges our moral conceptions. In the doctrine, a major argument for denying legal personality to AI agents is that they lack certain elements necessary to establish mental intent – souls, consciousness, feelings, free will and autonomy – which are considered attributes par excellence of human beings75.

75 EGE, supra note 10, 9


Lacking mens rea, the criminal responsibility of AI agents could be completely excluded, as for insane persons or minors, whose responsibility is precluded under the infancy and insanity defences due to their limited reasoning and judgmental abilities76. However, this would generally eliminate other agents’ responsibility, such as that of users and developers, which would be utterly undesirable, especially from the perspective of the victim, whose needs for pecuniary satisfaction and for someone to be held guilty would go unfulfilled.

Still, if an Autonomous robot ‘commits’ a crime and cannot be held liable, other traditional legal models of indirect responsibility could come into play, such as the perpetrator-via-another model and the natural-or-probable-consequence model. The first of these two models allows a person who instructs a perpetrator lacking the mental capacity to form a mens rea to commit a criminal offence to be held responsible. The perpetrator-via-another model could apply to an AI agent, considered then as the innocent agent, while the developer or the user could be held liable for the criminal offence. The second model stems from a particular approach to accomplices’ liability, whereby a person will be held criminally responsible if the acts of the perpetrator were a natural or probable consequence of the encouragement or assistance of the accomplice, conscious of the underlying criminal scheme. Users and developers could therefore incur criminal liability if they were aware that the commission of a criminal offence was a natural or probable consequence of the machine’s use or program design77. However, these models present the same upsides and downsides as the current regime of civil responsibility previously reviewed. Even if it is important that users and developers remain bound by their ethical responsibilities, maintaining a regime under which all responsibilities weigh on them might have a serious disincentive effect on the use and development of artificial intelligence. Indeed, criminal liability entails penalties, which may amount not only to fines but also to prison sentences, and engages harsher issues of moral conscience, especially in the case of an offence resulting in the physical injury or death of a human being.

Since the aforementioned models are not entirely satisfactory, it would be desirable to address the issue of mens rea and AI agents under the direct liability model. There is a thriving debate amongst the scientific community regarding the possibility of attributing mental intent to Autonomous robots, which has been alternately set aside without further consideration78,

76 Elliott, Criminal Responsibility and Children: A New Defence Required to Acknowledge the Absence of Capacity and Choice, 2011, The Journal of Criminal Law, vol. 75, No. 4, 289–308

77 Kingston, supra note 69, 3-4


scientifically disapproved79 or vividly supported80. The origin of this controversy is largely grounded in individualistic and anthropocentric understandings of what mens rea represents. Some cultures and religions, such as Shintoism and Buddhism, adopt a different approach and consider that all things, including machines, are inhabited by spirits and, a fortiori, have souls81. Technically speaking, we have to bear in mind that biotech scientists still know little about the functioning of consciousness, free will and autonomy, which have been elevated to philosophical concepts by humans for centuries. For all we know, all these ‘human’ attributes are the result of biochemical processes that can be hacked. Once researchers decipher the biochemical mechanisms behind the human brain, it is likely that these could be reproduced in artificial intelligence systems82.

At this point, it could be argued that AI agents merely simulate mental intent, and that, even if their behaviour indicates the presence of such a quality, it would not amount to consciousness, autonomy and free will83. Unfortunately, we actually lack the knowledge to ascertain, beyond behavioural evidence, whether consciousness is actual or simulated. We could envisage designing and using an experiment, comparable to the Turing test84, to evaluate the human-like mental abilities of an AI agent. Passing such a test would be neither accurate nor sufficient to confirm that a machine has enough mental abilities to hold a mens rea, especially since researchers struggle to grasp the contours of human will, autonomy and consciousness85. It could nonetheless be useful to measure the machine’s capability of making autonomous choices – without continuous human input or control – so as to determine whether the AI agent should be subjected to criminal responsibility as an autonomous entity or as an innocent agent or product.

With regard to the issue of criminal liability, objective evidence of signs of AI agents’ mental intent might suffice. Indeed, from a purely legal point of view, penal juries and judges largely base their verdicts on the evidence gathered, through clues and witnesses’ testimonies transcribing a certain mental intent, but cannot ascertain which were the perpetrator’s intentions

79 For example: Flanagan, The Science of the Mind, 1991

80 For example: Solum, Legal Personhood for Artificial Intelligences, 1992, North Carolina Law Review, Vol. 70, No. 4

81 Ito, Why Westerners Fear Robots and the Japanese Do Not, Wired, 30-7-18, https://www.wired.com
82 Harari, supra note 1, 47-48, 69-71

83 Weiss, On the Impossibility of Artificial Intelligence, 1990, The Review of Metaphysics, Vol. 44, No. 2, 335, 340; Solum, supra note 80

84 The Turing test aims at evaluating a machine’s faculty to demonstrate intelligent behaviour equivalent or indistinguishable from that of a human based on a natural-language conversation between a human and the machine. Turing, Computing Machinery and Intelligence, 1950, Mind, LIX, Issue 236, 433-460

85 Ray, Cerveau: à la recherche des réseaux de la conscience, Futura Santé, 13-02-2019,


and thoughts prior to and during the offence. Moreover, considering that drunk persons can be held accountable for criminal offences while they certainly do not demonstrate much mental capacity or behavioural consciousness86, it is hard to argue that Autonomous robots, displaying a wider range of signs of intelligence, should not be criminally responsible solely on the basis that the existence of their mental intent cannot be ascertained.

Since direct liability for AI agents touches upon controversial debates, other options can be envisaged. Strict liability could constitute a serious alternative, as it would evade all considerations of whether machines can possess mens rea, since the only requirement under this model is evidence of an actus reus. Numerous legal systems around the world have adopted strict liability with regard to corporations’ criminal liability87, which could be transposed to Autonomous robots. The sole drawback of this type of criminal responsibility is that, as it would be easier to establish the machine’s liability, victims and public authorities would likely refrain from or cease seeking users’ and developers’ responsibility, which might alleviate their ethical duties in the long run. Thus, as with the system of civil responsibility, a criminal responsibility regime allowing for the coexistence of users’, developers’ and AI agents’ responsibilities appears to be the preferable option.

Since the sole responsibility of users and developers is not desirable, Autonomous robots’ liability under criminal and civil law is likely to become necessary in the coming decades. Accordingly, law-makers and the international community should start rethinking our current responsibility models, into which AI agents do not readily fit. More than introducing internationally standardized duties corresponding to civil breaches and criminal offences, it would be of great interest to introduce “special rules required by the specific nature”88 of AI agents, and to prescribe a specific regime for the responsibility of Autonomous robots at the international level, to be consistently implemented at the domestic level. However, such an initiative would probably face strong backlash from States, as it would not only impinge on their sovereignty but also significantly disrupt established domestic regimes of responsibility. With regard to the regime of criminal responsibility in particular, whether a regime of direct or of strict liability is adopted, law-makers will probably have to reconsider

86 Bergman, Responsibility for Crime and Injury When Drunk, 1997, Addiction (Abingdon, England), Vol. 92, No. 9, 1183–88

87 Wells, Corporations and Criminal Responsibility, 2001, Oxford University Press, 64-83
88 Zacklin, supra note 67, 91


our conception of mens rea, and either introduce a brand-new approach to the concept suited to AI agents, or risk threatening the foundations of our criminal liability models.

However, the respect of duties, whose main purpose is to organize society and social interactions, is normally ensured through the deterrent effect of laws, via the payment of damages and penal sanctions. With regard to criminal liability in particular, holding AI agents responsible might not produce the intended deterrent effect, since Autonomous robots would not have the capacity to understand the concept of punishment89, as they supposedly lack consciousness and feelings, which brings us back to the mens rea issue and the necessity to distinguish between different types of AI. Concerning the obligation of reparation – which was for a long time assimilated to, and seen as limiting, international responsibility90 – it is hard to conceive how AI agents could pay damages, or criminal fines, as they do not yet hold any property rights. Even if it would be possible to create funds or insurance schemes for the pecuniary reparation of robots’ damages and crimes, we could envisage granting rights to Autonomous robots, such as the right to property, obviously linked to the concept of patrimony, itself closely related to legal personhood. Finally, beyond reparation matters, the issue of Autonomous robots’ responsibility is intrinsically linked to that of their rights. Holding them responsible would protect their existence, rather than allowing them simply to be disposed of upon the occurrence of a criminal offence or civil wrong.

3. The controversial issue of Autonomous robots’ protection and the granting of related ‘human’ rights

As previously observed, in the Reparations for Injuries case, the United Nations was recognised as a subject of international law, with a view to endowing the international organisation with the capacity to assert its right to bring a claim on the international plane against a third State91. Other entities, such as individuals, rivers or idols, were granted legal personality, either domestic or international, in order to acquire rights ensuring their own protection. In this process of weighing the pros and cons of a potential recognition of AI agents’ international

89 Simmler, Markwalder, Guilty Robots? – Rethinking the Nature of Culpability and Legal Personhood in an Age of Artificial Intelligence, 2019, Criminal Law Forum, Vol. 30, No. 1, Springer Netherlands, 1–31

90 Jouannet, Emer de Vattel et l’émergence du droit international classique, 1998, Paris, Pedone, 407; PCIJ, Factory at Chorzów, Jurisdiction, 1927, Series A, No. 9, 4, 21; PCIJ, Factory at Chorzów, Merits, 1928, Series A, No. 17, 4, 29

(25)

24

legal personality, we must consider, beyond their ability to be held responsible under civil or criminal law, whether or not Autonomous robots require to be conferred rights.

Traditionally, due to the fear of robot supremacy depicted in science-fiction novels, authors have tended to orientate their works towards the issue of robots' obligations. Asimov was no exception to the rule when he elaborated his Laws of Robotics, which are mainly robots' duties towards human beings and humanity as a whole92. Nevertheless, the Third Law, providing that a machine must protect its own existence, might constitute the beginning of an inclination towards robots' protection, even though this Law is preceded by the First, Second and Zeroth Laws. Although no concrete initiative in that direction has been undertaken so far, governments and academics have issued manifold calls in favour of robots' rights, some looking further ahead than others93. For instance, in 2006, an avant-garde report commissioned by the government of the United Kingdom suggested that Autonomous robots could not only be given rights in the years to come, but also become able to petition for their own rights94. However, before exploring the issue of AI agents' legal capacity to hold claims, it is first necessary to examine whether rights for AI agents are meaningful and desirable, and, if so, whether international rights and the corresponding legal personality could prospectively be granted to Autonomous robots.

1. AI agents and norms of protection: bearers of rights or beneficiaries of others' duties?

First and foremost, this section will examine which rights could or should be attributed to AI agents in order to ensure their protection. As not all rules have the same value, we will restrict this research to the study of widespread international and national norms relating to the protection of their subjects that could be transposed to Autonomous robots, namely jus cogens, animal welfare and human rights norms.

Under international law, no derogation from peremptory norms is permitted95. Treaties as well as unilateral declarations conflicting with jus cogens norms are deemed to be null and

92 Asimov, supra note 48, 183-216

93 For example: Torrance, Ethics and consciousness in artificial agents, 2008, AI & Society, No. 22, 495–521; Levy, The ethical treatment of artificially conscious robots, 2009, International Journal of Social Robotics, No. 1, Issue 3, 209–216

94 UK Government, Robot-rights: Utopian dream or rise of the machines?, 2006, Report, Office of Science and Innovation's Horizon Scanning Centre
