Author: Seán Pender

Supervisor: Dr. Jan Oster

Second Reader: Dr. Maxine David

Student Number: 2361094

Student Email: s.h.pender@umail.leidenuniv.nl

Word count: 14,851

Leiden University Master's Thesis 2018-2019

The creation of an ethical Artificial Intelligence [AI] policy? An exploration into the early days of the European Union’s ethical rhetoric in the field of AI.

Abstract:

This thesis explores the European Union's rhetoric in the field of ethical AI by aiming to pin its discourse to mainstream normative ethical theories. The literature review supplies a deep insight into the discussions currently ongoing in AI and its sub-fields. This is followed by a critical analysis of ethics and AI and of why ethical considerations are necessary when using AI. A synopsis of the EU's current position is explored prior to delving into the methodological decisions made in the framework of this thesis. A document analysis of qualitative secondary data permits an examination of two key documents: the AI High-Level Expert Group's ethics guidelines and an official European Commission communiqué on said guidelines. Finally, the normative theories introduced in the literature review act as a framework for analysing and evaluating the EU's early AI rhetoric, in order to arrive at my conclusions in section 6.0. In doing so, this thesis aims to comprehend the EU's strategy for ethical AI.

Acknowledgements:

I would like to thank firstly my supervisor Dr. Jan Oster for his continued support and motivation to undertake an ambitious project on ethical AI. His kind words and suggestions helped me greatly.

I would also like to thank everyone I met during my time in the Netherlands, for keeping me curious and always eager to learn more.

Table of contents:

Abstract
Acknowledgements
Table of contents
Abbreviations
1.0 Introduction
2.0 Literature Review
  2.1 Introduction
  2.2 Artificial Intelligence - Definitions
  2.3 Why the sudden surge of AI into mainstream discourse?
  2.4 Ethics and AI Technology – Why the need?
  2.5 Human Bias in Machines
  2.6 Machine and Computer Ethics
  2.7 Normative Ethics – Two schools of thought
    2.7.1 Deontological Ethics
    2.7.2 Kantianism
    2.7.3 Absolutism
  2.8 Consequentialist Ethics
    2.8.1 Utilitarianism
    2.8.2 Altruism
  2.9 The EU and AI – A Synopsis
  2.10 Conclusion
3.0 Methodology
  3.1 Introduction
  3.2 The Epistemological and Ontological Questions
  3.3 Qualitative Data
  3.4 Secondary Research – Document Analysis Approach
  3.5 The AI HLEG
4.0 Investigation
  4.1 Introduction
  4.2 Human-centric AI
  4.3 The AI HLEG Guidelines
    4.3.1 Fundamental Rights as Moral and Legal Entitlements
    4.3.2 From Fundamental Rights to Ethical Principles
  4.4 Ethical Principles in AI Systems
    4.4.1 Human Autonomy
    4.4.2 Prevention of Harm
    4.4.3 Fairness
    4.4.4 Explicability
  4.5 Requirements for Trustworthy AI
    4.5.1 Human Agency and Oversight
    4.5.2 Technical Robustness and Safety
    4.5.3 Privacy and Data Governance
    4.5.4 Transparency
    4.5.5 Diversity, Non-discrimination and Fairness
    4.5.6 Societal and Environmental Well-being
    4.5.7 Accountability
  4.6 Technical Methods
    4.6.1 Architectures for Trustworthy AI
    4.6.2 Ethics and Rule of Law by Design
    4.6.3 Explanation Methods
    4.6.4 Testing and Validating
  4.7 Conclusion
5.0 Analysis and Discussion of Results
  5.1 Introduction
  5.2 Evidence of Deontological Ethics
    5.2.1 Evidence of Kantianism
    5.2.2 Evidence of Moral Absolutism
  5.3 Evidence of Consequentialism
    5.3.1 Evidence of Utilitarianism
    5.3.2 Evidence of Altruism
  5.4 Conclusion
6.0 Conclusion
Bibliography

Abbreviations:

AGI – Artificial General Intelligence
AI – Artificial Intelligence
AI HLEG – Artificial Intelligence High-Level Expert Group
DL – Deep Learning
DSM – Digital Single Market
EPSC – European Political Strategy Centre
EU – European Union
GAFAM – Google, Amazon, Facebook, Apple and Microsoft
LAWS – Lethal Autonomous Weapon Systems
ML – Machine Learning

1.0 Introduction

Artificial Intelligence (AI) is a phenomenon that has existed for a rather long period of time; in fact, its roots can be traced back to WWII. One could say the same about the European Union (EU). Ethics, however, predates them both. This thesis aims to link all three together. It first and foremost offers a critical literature review of ground-breaking research in the field of AI. AI is one of the most contested sectors of research, and consensus is seldom found when it comes to definitions and opinions. In the first section of my literature review, I delve into wider debates on the purpose, application and imputed consequences of AI for society, sharing insights and perspectives from a plethora of spheres, both academic and otherwise. This critical review includes an exploration of AI's sub-categories, such as Machine Learning (ML) and Deep Learning (DL), as well as lifting the veil on human-centric AI. Following this, the notion of computer and machine ethics is introduced, coupled with an explanation as to why an ethical dimension is essential in technologies such as AI. This includes real-life examples of where AI has made questionable and life-altering decisions, predominantly through the phenomenon of human bias in machines. To conclude the literature review, six normative ethical theories are explored. These theories provide a framework with which to critique and analyse the EU's rhetoric on ethical AI.

Section three provides a thorough explanation of the difficult methodological choices made in the framework of this thesis, few more difficult than the choice of epistemological lens. Three main schools of technological epistemology are explored before the adoption of technological mediation theory, coupled with an interpretivist ontological lens. The choice to research qualitative secondary data through documentary analysis is also justified. Finally, the identification of a pivotally important trans-national actor in the emergence of an ethical code on all things AI within the EU is contextualised and justified: the EU's AI High-Level Expert Group (AI HLEG). This expert group is composed of 52 people and was hand-picked by the European Commission to write ethical guidelines for AI in the EU. Section four critically analyses these Guidelines, as well as the official European Commission communiqué on the Guidelines.

Finally, section five critiques and discusses the results found in section four and the decisions made by the AI HLEG and the European Commission. The normative ethical theories discussed in detail in the literature review provide a cogent and understandable way to comprehend the early rhetoric of the EU's strategy on the emergence of ethical AI. Section six concludes this thesis by offering my own opinions on the EU's rhetoric, as well as suggestions for future research on the topic. This thesis aims to shed light on the EU as a player in ethical AI and on the range of ethical theories the EU may be drawing upon in its attempt to significantly influence Member States and wider global discourses and policy formulation on AI and ethics.

2.0 Literature Review

2.1 Introduction

This section delivers a critical review of cutting-edge literature written in the field of AI and ethics. It first tackles the often-challenging task of defining AI and its sub-categories in section 2.2, including definitions of Narrow AI and Artificial General Intelligence (AGI). This is followed by an exploration of AI that probes the technology's imputed threats and opportunities and defines key concepts such as Deep Learning (DL) and Machine Learning (ML), in section 2.3. The focus then shifts towards ethics and technology, as section 2.4 seeks to identify examples of ethical questions involved in the creation of AI-programmed machines. Section 2.5 delves into the phenomenon behind many of these ethically questionable decisions taken by AI algorithms: human bias. Section 2.6 then explores machine and computer ethics as sub-fields of ethical research. Section 2.7 introduces the normative ethical theories that are explored in this thesis. The two main schools of ethical thought discussed are deontological ethics (section 2.7.1) and consequentialism (section 2.8). Sub-theories that fall under the deontological school, such as Kantianism (section 2.7.2) and absolutism (section 2.7.3), are explored, as are consequentialist sub-theories such as utilitarianism (section 2.8.1) and altruism (section 2.8.2). These ethical frameworks are necessary to better understand the EU's AI rhetoric, investigated at length in section 4.0. Lastly, section 2.9 offers a brief overview of AI from an EU perspective. Section 2.2 now moves to unpack AI itself.

2.2 Artificial Intelligence - Definitions

Defining AI has generated a large debate; it is a heavily contested and complex area. Whether in academia or in the public or private spheres, there is a lack of agreement when it comes to a concrete definition. This mainly stems from the fact that intelligence itself can be difficult to define (Carrico, 2018). Ironically, Tegmark (2018, p.49), one of the biggest names in AI, believes that there is no consensus on what intelligence means, even amongst "intelligent intelligence researchers!" Some believe that this lack of consensus "prevents more productive conversations about…such technologies" (Sloane, 2018, p.1). Nevertheless, throughout this thesis I adopt the official definition of the EU. The European Commission (2018a) states that:

"Artificial Intelligence (AI) refers to systems that show intelligent behaviour: by analysing their environment they can perform various tasks with some degree of autonomy to achieve specific goals"

I consider this definition fundamental, due to the scope of the research; it is therefore the definition adopted in this thesis. Notwithstanding, many would argue that this definition omits certain aspects of AI. Andersen (2002, as cited in Carrico, 2018) defines AI as any machine that can solve problems normally resolved by humans with natural intelligence. Russell and Norvig (1995) take AI to be systems that think and act either like humans or rationally. In the Commission's definition, there is, somewhat surprisingly, no mention of humans. This becomes more startling when one explores the EU's human-centric approach to AI systems, further explored in section 4.2. Furthermore, the Commission links intelligence to autonomy. Gunderson & Gunderson (2004) argue that although the two are similar, intelligence does not equate to autonomy. The Commission also fails to distinguish between Narrow AI and Artificial General Intelligence, the two main categories of AI.

A sub-section of AI is Artificial General Intelligence (AGI). According to Dr. Ben Goertzel, AGI can be defined as "the ability to achieve complex goals in complex environments using limited computational resources" (Goertzel, 2009). This includes an aspect of autonomy on the part of machines, a practical understanding of self and others, and finally the capacity to understand "what the problem is" as opposed to merely solving problems posed explicitly by programmers; i.e., machines thinking and acting of their own accord.

Narrow AI, although still a form of AI, differs from AGI. Commonly known as data-driven AI, Narrow AI can be described as "the ability to carry out certain complex goals in certain environments" (Goertzel, 2009). For example, when an IBM programme named Deep Blue beat the world's best chess player, Garry Kasparov, in 1997, many believed this to be an example of Narrow AI (Ford, 2015; Korf, 1997; Goertzel, 2009). Deep Blue was based on an algorithm analysing millions of possibilities per second and selecting the most promising move, a "brute force" technique supported by extensive computing power (European Parliament, 2018). Chess is a game with rigidly defined rules, played in a certain environment. However, this classification becomes complicated: IBM themselves claimed in 1997 that Deep Blue was not an AI (IBM, 1997, as cited in Korf, 1997), though even this is contested in the computer science field. This comes back to the challenge of definitions in the sector.

Notwithstanding, this is the working definition that will be used in the framework of this thesis, due to it being the official definition of the Commission.
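To make the "brute force" idea behind systems like Deep Blue concrete, the sketch below performs exhaustive game-tree search on a toy take-away game rather than chess. It is a minimal illustration only: the game, function names and scoring are my own assumptions, not IBM's actual system.

```python
# A minimal sketch of the "brute force" search principle behind programmes
# like Deep Blue, shown on a toy take-away game. All names and scoring
# rules here are illustrative assumptions, not IBM's code.

def legal_moves(counters):
    # In this toy game a player may remove 1, 2 or 3 counters.
    return [n for n in (1, 2, 3) if n <= counters]

def minimax(counters, maximising):
    """Exhaustively search every line of play and score the terminal states."""
    if counters == 0:
        # The previous player took the last counter and won, so a terminal
        # state is bad for whichever player is now to move.
        return -1 if maximising else 1
    scores = [minimax(counters - m, not maximising) for m in legal_moves(counters)]
    return max(scores) if maximising else min(scores)

def best_move(counters):
    """Pick the move whose subtree promises the best outcome."""
    return max(legal_moves(counters), key=lambda m: minimax(counters - m, False))

print(best_move(10))  # -> 2: leaves 8 counters, a losing position for the opponent
```

Nothing in this programme resembles general intelligence: it simply enumerates every line of play within its horizon and picks the best-scoring move, which is precisely why systems of this kind are usually classed as Narrow AI.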

Many experts claim we are still many years away from real AGI1 (Ford, 2019, cited in The Verge, 2018). Others believe it will never be achieved. For these reasons, this thesis focuses on Narrow AI.

1 Martin Ford interviewed 23 AI experts and the average guess was that AGI has a 50% chance of existing by

Human-centric AI in this thesis refers to AI developed in a manner that is aligned with the values and ethical principles of the society or community it affects (IBM, 2018). MIT go a step further by stating that human-centred AI is defined by two goals: (1) the AI system must continually improve by learning from humans, while (2) creating an effective and fulfilling human-robot interaction experience (MIT, 2018). Carrico (2018) argues that the EU should become a leader in AI development, but more specifically in human-centred development. He believes that the EU must develop "mission-based innovations" which concentrate on using AI leadership in order to find solutions to "the most pressing societal problems of our time", all whilst "avoiding potential dangers and risks" (Carrico, 2018, p.1). Section 2.3 aims to explain why AI has indeed become such an important technology.

2.3 Why the sudden surge of AI into mainstream discourse?

The term Artificial Intelligence was first coined by John McCarthy in 1955. Yet, as of late, AI is mentioned more and more by policymakers and private businesses. We, as humans, use sensory inputs to obtain information and understand our environment, which seemingly corresponds to the aforementioned definition of the Commission (2018a). At a very basic level, once we see a green man appear on a traffic light, we know that it is time to cross the road. Traditionally, computers do not learn as such. Conventionally, humans programme computers with specific instructions (i.e. an algorithm) and the computer follows them. With advancements in a component of AI called Machine Learning (ML), this process has started to change drastically, leading AI into the limelight.

ML gives machines the ability to automatically learn and improve from experience without being explicitly programmed to do so (Expert System, 2018). This could be considered rather human-like. With ML, "man is meeting, for the first time, a serious competitor" (Alexandre, 2018, p.30). According to the current CEO of Google, Sundar Pichai, ML is "a core, transformative way by which we're re-thinking how we're doing everything," and ML is used throughout almost every aspect of Google, whether it be YouTube or their search engine (Sundar Pichai as cited in Business Insider, 2015, p.1). A sub-section of ML is Deep Learning (DL). This is a process where AI-programmed computers learn through pattern and association. For instance, by showing an AI millions of images of road signs, the computer, once trained, will be able to identify a road sign without ever having been given explicit rules for doing so. The relationship between the three fields can be pictured as concentric circles: DL sits within ML, which in turn sits within AI.
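To illustrate "learning from experience without being explicitly programmed", the sketch below trains a from-scratch perceptron on a toy labelled dataset. Everything here (the data, the features, the learning rate) is an invented illustration of the ML principle, not any production system.

```python
# A minimal sketch of the ML idea "learning from examples rather than
# explicit rules": a from-scratch perceptron separating two toy classes.
# The data and all names are illustrative assumptions.

# Toy training set: (feature vector, label). Think of the features as crude
# image statistics and the label as "road sign" (1) vs "not a sign" (0).
examples = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]

weights, bias, lr = [0.0, 0.0], 0.0, 0.1

def predict(x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

# No rule for "sign-ness" is ever written down: the weights are adjusted
# from labelled examples until the predictions match the labels.
for _ in range(20):
    for x, label in examples:
        error = label - predict(x)
        weights = [w + lr * error * xi for w, xi in zip(weights, x)]
        bias += lr * error

print(predict([0.85, 0.75]))  # -> 1: classified as a road sign
```

The classifier is never handed a rule for recognising a sign; it converges purely from the labelled examples, which is the core shift ML introduces over conventional programming.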

Even with these AI-powered computers gaining intelligence, it remains clear that the role of their programmers is still fundamental. AI-programmed machines are ultimately taught to think by their programmers through algorithms. Advancements in the fields of ML and DL have propelled the discussion of ethics to the forefront of the debate. There is a growing fear that AI machines will think, and indeed act, like their programmers. This phenomenon is referred to as human bias, further explored in section 2.5. It is one of the main fears that accompanies AI.

Fears of a society fully intertwined with AI range from Armageddon to societal and political disruptions (Cave & Dihal, 2019). Even the European Parliament (2017) references Frankenstein and Golem in an official communiqué on AI, seemingly feeding into the doomsday narrative often attached to AI. Makridakis (2017) created four categories of AI thinkers: the optimists, the pessimists, the pragmatists and the doubters. These range from those who think AI is a threat to humanity (the pessimists) to those who believe in a technological utopia (the optimists). Fears such as technological singularity, i.e. the point at which technology overtakes humans as the master race (Kurzweil, 2005), often appear when researching AI. With advancements in Lethal Autonomous Weapon Systems (LAWS) and the deployment of Citizen Score applications in China (Wired, 2019), these fears are not without foundation. However, such world-ending scenarios sit at the extreme end of the risk spectrum. Risks commonly associated with AI in the literature are: human biases in algorithms (Bryson, 2017), public trust in AI (Eurobarometer, 2017) and, lastly, inequality, often stemming from a lack of diversity (Eubanks, 2018). Nonetheless, AI does not merely spell negative consequences for society. Advantages commonly associated with AI include more jobs created (Deloitte, 2017) and more leisure time as robots carry out mundane tasks (Gates, 2018, as cited in CNBC, 2018), not to mention the economic value AI could potentially have for companies and governments alike. PwC (2017) believe AI will boost global GDP by $15.7 trillion by 2030. Like all new technologies, there are concerns but, importantly, opportunities too. AI is no different in this regard. French theorist Paul Virilio (1999, p.89) sums this up nicely:

“When you invent the ship, you also invent the shipwreck; when you invent the plane you also invent the plane crash…every technology carries its own negativity, which is invented at the same time as technological progress.”

However, unlike the creators of planes or ships, previous inventors did not have to consider the ethical impact of a machine making decisions that could change a person's life. It is fundamental to get the balance right with AI, so that the fears remain just that: fears. Some believe the main way to achieve this is through ethical AI.

2.4 Ethics and AI Technology – Why the need?

Due to the aforementioned challenges that arise through AI, such as human bias, a huge debate has commenced on ethics and technologies such as AI. One element of the debate revolves around who should participate in, contribute to and determine the ethical guidelines for AI. Microsoft CEO Satya Nadella has suggested that tech companies, such as the one he heads, should lead the way on the issue (Nadella, 2017, as cited in AFR, 2017). Companies such as GAFAM are amongst the most advanced in using AI machines, yet should this mean that they get to write the ethical rules? There is a growing fear that tech giants are taking a "relax, leave it to us" attitude to writing these digital guidelines of the future, while dismissing them when convenient (The Verge, 2019a). In fact, even within these tech giants, ethical problems arise. This was epitomised when Google dissolved its AI ethics board merely weeks after its foundation (The Verge, 2019b). This contentious group attracted controversy from its launch, primarily due to the appointment of Republican and openly homophobic member Kay Coles James. Furthermore, Google gave zero insight into how it chose its board members, a highly questionable decision critiqued by many (The Verge, 2019b). GAFAM and other tech companies have also been accused of "ethics washing." Wagner (2019) calls this "an attempt to pretend like you're doing ethical things and using ethics as a tool to reach an end, like avoiding regulation…it's a new form of self-regulation without calling it that by name" (Wagner, 2019, as cited in KQED, 2019, p.1).

Whilst experts cannot agree on who should regulate AI, there is no consensus on what should be regulated either. Baron (2019) believes that AI cannot be limited solely by geographic boundaries, emphasising that global cooperation is fundamental. This point of view is shared by Wagner (2019), who states that AI regulation cannot be U.S.-centric, calling the writing of ethical guidelines for AI "a global challenge." Two leaders in promoting ethically aligned AI are the Institute of Electrical and Electronics Engineers (IEEE) and AI Now. Both organisations have written numerous reports on what aspects of AI should be regulated, by whom, and how. Nevertheless, both organisations are also American, located less than an hour's drive from each other in New Jersey and New York respectively, feeding into the previously alluded to U.S.-centric nature of ethics in AI. Stakeholders from an array of backgrounds have attempted to write the AI ethical rules of the future.3 The list ranges from governments to individuals, all with their own ideas on how AI should be regulated in broader society (Winfield, 2019). Even the G20 (2019) has entered the debate by writing its own AI ethical principles. The number of actors sheds light on the lack of consensus over these issues. The question now is which ethical narrative will prevail amongst these competing stakeholders, all battling to author the AI rules of the future.

3 Professor Alan Winfield (2019) has created a website that tracks relevant stakeholders and their efforts to write the ethical AI rules of the future: http://alanwinfield.blogspot.com/2019/04/an-updated-round-up-of-ethical.html

IBM (2017) argue that AI has the capability to be less biased than humans when programmed correctly. The problem is that there is no agreement on what the "correct" way is. Tegmark (2018) argues that, for AI to be successful and ethical and not cause substantial problems for humanity, we need AI that can learn our goals, adopt our goals and retain our goals. The ethics of AI should lie in choosing these goals. Tegmark (2018) argues that the goals should be the promotion of survival and flourishing. However, this is simply one opinion of a person cut from the same cloth as the white, male and well-educated people who predominantly programme AI (AI Now, 2017). Agreeing on ethics in AI has proved difficult, yet then again so has defining AI itself. Section 2.5 aims to provide examples of what happens when ethics are not taken into consideration by programmers using AI machines.

2.5 Human Bias in Machines

Human bias in machines and AI is undoubtedly one of the greatest challenges for tech companies and policymakers alike to overcome. Ultimately, AI is just software. It is therefore susceptible to the same issues that other programmes meet, such as bias and hacking. It is possible for ML systems to reinforce systemic bias and discrimination whilst simultaneously preventing the assurance of dignity (World Economic Forum, 2018). Bryson (2017) attributes bias in machines to three main sources: poor training data, intentional prejudice and, lastly, the fact that even the most complete datasets are riddled with human prejudice. The tech world has a heavily documented problem with diversity (see: Financial Times, 2018a; Wired, 2018). Most AI engineers are well educated, normally male and almost always white (AI Now, 2017). Seeing as it will be predominantly these white males who programme AI machines, at least for the foreseeable future, there is a justified concern that these machines will act, and indeed think, in the same manner as their programmers, thus perpetuating biases.
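A toy sketch can make Bryson's first source, poor training data, tangible: below, a crude screening "model" trained on historically skewed hiring decisions reproduces exactly the prejudice contained in its data. The dataset, groups and threshold are all invented for illustration.

```python
# A toy sketch of bias entering through training data: a screening model
# trained on historically skewed hiring decisions reproduces them.
# The data, groups and threshold below are invented for illustration only.

# Historical decisions: (years_experience, group, hired). Group "B"
# candidates were hired less often at the same experience level - a human bias.
history = [(5, "A", 1), (4, "A", 1), (2, "A", 0),
           (5, "B", 0), (4, "B", 0), (2, "B", 0)]

# "Training" here is just estimating a per-group hire rate, the crudest
# possible model - yet it is enough to encode the prejudice.
def hire_rate(group):
    outcomes = [hired for _, g, hired in history if g == group]
    return sum(outcomes) / len(outcomes)

def screen(candidate_group):
    # The model waves through whichever group was favoured historically.
    return hire_rate(candidate_group) > 0.5

print(screen("A"), screen("B"))  # -> True False: identical CVs, different outcomes
```

Nothing malicious is written anywhere in this code; the discrimination enters entirely through the historical labels, echoing the argument that such biases occur through negligence rather than malice (Bowles, 2018).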

Eubanks (2018), in her book Automating Inequality, delves into the susceptibility of people to algorithms and the unequal nature of their impact. Her studies in the U.S., using three different case studies in Indiana, Los Angeles and Pennsylvania, find that inequality is both systematic and systemic, amplified by the fact that cutting-edge technologies are tested on poor people. In what she coins the "Digital Poorhouse," she believes technologies such as AI are intensifying cultural and political narratives that frame the poor as lazy and deserving of their financial and social circumstances. There is a nervousness that this rationale will be transferred, be it implicitly or explicitly, to new technologies such as AI. When this happens, it scales very rapidly in the technology and becomes a feedback loop (ibid). Further research in this field has been carried out by Safiya Umoja Noble. Her book Algorithms of Oppression (2018) explains her research into search engines, such as the aforementioned Google, and their impact on furthering society's preconceived biases and inequalities. Her research is primarily based on algorithmic discrimination of a sexist and racist nature, including a study into how search engine algorithms amplify some voices whilst simultaneously silencing others. She finds that search engine algorithms further marginalise the already marginalised in society, particularly black women.

Beer (2017, p.2) believes that when one considers the capabilities of algorithms in AI machines, one must consider the "consequences of code." Moreover, Beer urges us to take into account "…the powerful ways that notions and ideas about algorithms circulate in the social world", as algorithms are capable of creating, maintaining or cementing "norms of abnormality" (Beer, 2017, p.2). Essentially, algorithms are determining what matters and what does not matter in society; they choose what we read and see online (Jolly, 2014). This line of thinking is reiterated by Guttman (2018, p.1), who emphasises the "substantial ownership of control" that AI has over humans, making key decisions that can potentially "alter the course of a person's life dramatically". This becomes problematic when AI-programmed machines take unjust actions. This is an area explored by Wachter et al. (2017), who find that systems can and do make both unfair and discriminatory decisions. In making these decisions, AI machines replicate, or indeed build upon, human biases, behaving in inscrutable and unexpected ways in highly sensitive settings. Wachter et al. (2017) believe that this puts both human interests and human safety "at risk." Pasquale (2018) goes a step further, maintaining that software may undertake a constructive or performative role in ordering the world on behalf of humans. There have already been a number of famous cases regarding bias in algorithms; the main culprits have been large multinational corporations such as GAFAM.

One famous example of human bias in AI involved an algorithm discriminating against black persons when predicting future criminality (ProPublica, 2016a). Black defendants in the U.S. were 77 per cent more likely to be pegged at higher risk of committing a future violent crime and 45 per cent more likely to be predicted to commit a future crime of any kind (ibid). This is contested, as humans asked to complete a similar task produced more or less equal scores (see: Dinnerstein, 2018; Northpointe, 2016), yet ProPublica (2016b) strongly defend their claim that algorithms are reinforcing societal biases. Furthermore, Amazon suffered a similar fate when deploying an algorithm that discriminated against female candidates during the recruitment process for new posts in the company (Reuters, 2018). This is a classic example of inequality of opportunity, amplified by AI. Amazon is one of the biggest and most successful companies in the world, yet even with all of their money and resources, they cannot escape this problem. Discriminatory outcomes not only violate human rights, they also undermine public trust in ML, as well as furthering inequality. Cases such as these have spearheaded the argument for more diversity in AI, as well as furthering the ethical debate.

2.6 Machine and Computer Ethics

Computer ethics, the predecessor of machine ethics, was first proposed by James H. Moor (1985). He argued for the "special status" of computer ethics as a separate field of research (Moor, 1985, p.267), claiming that the ethics of computers makes it imperative to consider our values in a world intertwined with computers. His paper acted as the root of machine ethics, paving the way for a new field of study. He suggests three main reasons why machine ethics should be fundamental:

1) Ethics is important. We want machines that treat us well.

2) Machines are becoming more sophisticated and make our lives more enjoyable; future machines will probably have increased control and autonomy to do so. More powerful machines need more powerful machine ethics.

3) Programming or teaching a machine to act ethically will help us better understand ethics.

Ethical questions are arguably much more important today than they have been in the past. The people who programme AI-powered self-driving cars are ultimately facing a "real-life" trolley problem. Should the car swerve and kill the grandmother, or run over a young child? Remarkably, most programmers are unaware of the importance of their actions. Cooper (2018) believes that these programmers are making choices and are in a position where their actions could have impacts comparable to those of the inventors of poison gases or even atomic bombs. It is argued that biases such as the examples in section 2.5 occur due to negligence rather than malice (Bowles, 2018). However, does that make them ethical? Can ignorance be bliss in a situation like this? There is still no black and white when it comes to ethical regulations or rules for AI-powered machines.

Noteworthy challenges for practical, ethical and legal AI are said to be not yet fully appreciated or understood in the tech sector (Bowles, 2018). Ethics is often linked with legality. However, this too becomes a slippery slope rather quickly. Laws can be morally wrong, as can ethics, especially when looked upon with the benefit of hindsight. The society we inhabit today, although commonly rooted in Ancient Greece, is evidently more advanced in technology and science, but the ethics of today are also very different. In Ancient Greece, and indeed in centuries closer to the current day, slavery was acceptable. Women and black people have, in some countries, only recently been awarded the right to vote. It is very probable that the ethics of today's society will be questioned by the society of tomorrow, just as we question the ethical values of yesterday. The next section of this thesis aims to introduce normative ethical theories that could potentially be deployed in machine ethics.

2.7 Normative Ethics – Two schools of thought

This section aims to unravel the normative ethical theories deployed in this thesis. The two main schools of modern ethics explored in this section are deontological ethics and consequentialism. These are also the theories used to try and better understand the EU's rhetoric on ethical AI in section 5.0. Normative ethics does not try to explain what is right and what is wrong per se; it is focused more on how one ought to react to situations. This section does not aim to re-write modern ethical values; it does, however, offer theories that can be used to understand why and how one makes ethical decisions. These theories are crucial to this thesis, providing the thematic framework with which to evaluate and compare the EU's rhetoric on ethical AI. The schools of thought are central to my evaluation.

2.7.1 Deontological Ethics

Deontological ethics is one of the main schools of modern ethical philosophy. Deontology states that individuals have a strong obligation to respect certain rules (Panza and Potthast, 2010). Human rights are often closely tied to the theory of deontological ethics, due to its rule-based decision-making (ibid). Deontologists believe that ethics is regulated and overseen by rules and principles and that we have a moral obligation to follow said rules and principles (Bowles, 2018). Deontologists complete actions out of a sense of duty to do so. In deontology, unlike consequentialism, an action cannot be judged as right or wrong based on its intended result, but only on the inner motives of the person making the decision. The two main lines of thinking within deontological ethics are argued to be nonmaleficence and beneficence (Ross, 1930). Critics of deontology believe it to be too rigid, not allowing flexibility. For instance, a deontologist may have a strong sense of duty to obey principles, yet this may be to the detriment of increasing the good of society. Being bound to tell the truth, for instance, may upset others in parallel. With regards to technology, a deontologist believes that one has a moral duty to intervene to end the suffering of users. For instance, if a user gets addicted to a service, we have a moral duty to act. This could vary from something as small as sending the user a notification suggesting they take a break, to going as far as banning said user's account (Bowles, 2018). One of the founders of deontology is the 18th-century German philosopher Immanuel Kant. His school of thinking is referred to as both monistic deontology and Kantianism interchangeably. Section 2.7.2 sets out to explore his way of thinking.

2.7.2 Kantianism

Kant was an 18th-century philosopher, yet his ethical lens is still relevant today. Kant emphasises the use of principles in ethics, as opposed to rules alone. His line of thinking stems from one critical and foundational principle, the categorical imperative, which led to the rest of his ethically important principles (Panza and Potthast, 2010). This underlying principle was grounded in practical reason, or rationality, which Kant believed is what separates people from animals. Yet, as alluded to in section 2.2 of this literature review, many define AI as machines that can think rationally or human-like. Kantianism fits into deontology as it states that people should act from duty rather than from mere inclination. Kant posed questions such as "what if everyone did what I am about to do?" as well as "am I treating people as an end or a means?" From a technological viewpoint, programmers using Kantian ethics should ask themselves whether they are creating a product or service that benefits the end user. If the product is not for the benefit of others, even if profitable for the company, the product or service ought not to be released (Bowles, 2018). Furthermore, an AI-powered self-driving car with the choice of killing fifty cows or slightly injuring one human would kill the cows, as every human life has absolute value, whereas cows carry no moral worth for Kantian thinkers (Matthias, 2017).

2.7.3 Absolutism

Another renowned section of deontology is moral absolutism. This is the ethical belief that all actions can be judged against a certain set of absolute and concrete standards. It is the idea that there is one right answer, irrespective of context or viewpoint (Weist, 2016). It is the least flexible school of deontology, and the direct opposite of moral relativism. Moral absolutism can be described as the view that there is only one correct way of representing the world (Schick & Vaughn, 2004). For instance, take the example of Asimov's three laws for robots, written in 1950: a robot may not injure a human, a robot must obey human commands and a robot must protect its own existence (Asimov, 1950). In this example there would be no room for flexibility; these would be the rules for a programmed robot. Here, an AI-powered robot could not kill or injure another human even to save millions of other humans. This theory also receives much criticism, as there is no universal set of moral principles that all citizens follow. There is no set of real-life Ten Commandments for humans, or for AI. In section 4.0, I explore what the EU believes these rules might approximate to.
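In code, moral absolutism reduces to checking an action against a fixed rule set, with no weighing of outcomes. The sketch below encodes Asimov's three laws in this spirit; the action representation and all names are illustrative assumptions, not a real robotics interface.

```python
# A toy sketch of moral absolutism: an action is checked against a fixed,
# ordered rule set (here, Asimov's three laws) and rejected outright if any
# rule fails - no weighing of outcomes. All names are illustrative.

RULES = [
    ("may not injure a human",        lambda a: not a.get("injures_human", False)),
    ("must obey human commands",      lambda a: a.get("obeys_command", True)),
    ("must protect its own existence", lambda a: not a.get("self_destructive", False)),
]

def permitted(action):
    # Absolutist logic: a single violated rule forbids the action;
    # no consequence, however good, can override it.
    return all(check(action) for _, check in RULES)

# Even an action that would save millions is forbidden if it injures one human:
print(permitted({"injures_human": True, "lives_saved": 1_000_000}))  # -> False
```

Note that the million lives saved never enter the evaluation at all: that is exactly the inflexibility for which critics fault the theory.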

2.8 Consequentialist Ethics

This is the second main school of modern ethics. In a sentence, consequentialists believe that the moral quality of an action is entirely determined by its consequences (Sinnott-Armstrong, 2003). Put simply, if the expected outcome of an action is good, then it is ethically right to carry out this action; if the expected outcome is bad, it is ethically wrong to do so (Sinnott-Armstrong, 2003). The notion of good varies greatly. For Bentham (1789), it meant the greatest amount of pleasure for the greatest number. For Mill (1861), good meant the greatest amount of happiness. Happiness is usually considered the greatest good for consequentialists (Haines, 2006). Consequentialism is usually placed in dichotomy with deontology. One of the main critiques of this ethical theory is that it is difficult to predict the results of an action. In the examples explored previously in this literature review, one can see that biases in machines can be both explicit and implicit; many programmers are unaware of the consequences of their actions. One of the best-known consequentialists is the eighteenth-century British philosopher Jeremy Bentham, the father of utilitarianism, the main school of consequentialism.

2.8.1 Utilitarianism

Utilitarianism is the ethical school associated with achieving the greatest good for the greatest number. Utilitarian thinkers believe the purpose of ethics to be the improvement and betterment of all participants in society (Singer, 1981; Greene, 2013), placing the locus of right and wrong solely on the consequences of choosing one policy or action over another (Panza and Potthast, 2010). Utilitarians ask themselves: "Am I maximising happiness for the maximum number of participants?" whilst simultaneously minimising pain for the rest of society. Sidgwick (1874) believes that the point of view of the universe must be taken into account. Furthermore, utilitarianism considers not only the result an action will have on humans, but on anything that can feel pain and happiness (Moral Robots, 2017). Bentham (1789), in his avant-garde essay An Introduction to the Principles of Morals and Legislation, first explored the principle of utility in ethics. Bentham believed that actions are morally permitted solely if they produce at least as much happiness as any alternative action. The deontological critique of this theory is that it pays little attention to rights or laws; utilitarians are only interested in results (Bowles, 2018). This becomes complicated, as it can lead to a "tyranny of the majority" situation when it comes to pain and happiness. If an AI machine makes 99% of people happy but racially abuses the remaining 1%, this is sound utilitarian ethics.
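The 99%/1% critique can be made concrete with a toy utility calculation. In the sketch below, a purely utilitarian chooser picks whichever option maximises total welfare; the options and scores are invented for illustration.

```python
# A toy sketch of act utilitarianism: pick whichever option maximises total
# welfare, summed over everyone affected. The options and scores are invented.

def utilitarian_choice(options):
    # options: {name: list of per-person welfare scores}
    return max(options, key=lambda name: sum(options[name]))

# 99 people gain slightly, 1 person is seriously harmed - the aggregate
# still wins, illustrating the "tyranny of the majority" critique.
options = {
    "deploy":   [1] * 99 + [-50],   # total utility: 49
    "withhold": [0] * 100,          # total utility: 0
}
print(utilitarian_choice(options))  # -> "deploy"
```

The seriously harmed individual is simply outvoted by the aggregate, which is precisely the objection deontologists raise.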

2.8.2 Altruism

Moral altruism is fundamentally acting in the best interests of others, even when it is detrimental to one's own interests (Kraut, 2016). This is the opposite of selfishness or egoism. Altruism means setting aside all the emotional ties and biases that we would usually use to benefit ourselves or our immediate community, such as our own population, friends or family (ibid). One of the most cited real-life examples of altruistic behaviour is donating blood. This benefits the donor in no way, and may sometimes even cause pain, yet the same blood could help save another person's life. This school of consequentialism has been heavily critiqued due to the difficulty of maintaining it over a prolonged period. It is very difficult, some would argue impossible, to continuously self-sacrifice for the betterment of others. Gloor (2016) suggests that altruists should prioritise AI. An altruistic AI policy in this case could be one that sees an AI machine sacrificing itself for the betterment of humans. It could also be a policy that is ethically sound, even if economically damaging.

2.9 The EU and AI – A Synopsis

Amiot (2016) argues that AI, and the automation that accompanies it, may challenge the "fair" distribution of wealth, making state intervention more necessary. A multi-stakeholder effort in which governments play a leading role has been suggested as the best approach towards AI development (Cath et al., 2017). Across the globe, government offices are testing applications of AI (Mehr, 2017). Thompson & Bremmer (2018) believe that a country that both tactically and intelligently deploys AI technologies throughout its workforce will probably develop at a faster rate, its cities will function more competently, its biggest enterprises will best understand consumer behaviour and, lastly, its people will live longer.

Although the EU cannot fully be described as a government as such, the Commission have experimented with using AI as part of the Horizon 2020 programme, implementing AI-based lie detector tests at borders in order to aid immigration processes (European Commission, 2018b). This EU-funded project, called iBorderCtrl, is developing a way to speed up traffic at the EU's external borders and improve security by using an "automated border-control system" that tests travellers using "lie-detecting avatars" (ibid). There is also an EU initiative named AI4EU which aims to:

1) Mobilise the entire European AI community to make AI promises real for European Society and Economy

2) Create a leading collaborative AI European platform to nurture economic growth (AI4EU, 2019).

AI in the EU falls under the bracket of the Digital Single Market (DSM). Under the Juncker Commission, between 2014 and 2019, the DSM was prioritised. As part of the DSM plan, the EU estimates that by 2025 the economic impact of the automation of knowledge work, robots and autonomous vehicles will reach between €6.5 and €12 trillion annually (European Commission, 2018a). As part of the DSM, the EU's AI strategy is built on three pillars: boosting Europe's scientific base, technological know-how and industrial capacity; preparing for the socio-economic changes brought about by AI; and ensuring an appropriate ethical and legal framework for the implementation of AI on the continent. Both the second and third pillars are relevant to this thesis. The EU's AI strategy has been branded "human-centric".

2.10 Conclusion

This section delivered a critical review of the literature on AI, ethics in technology and leading ethical schools of thought, as well as an insight into AI from an EU perspective. This included defining AI and its sub-categories, Narrow AI and AGI. Furthermore, Deep Learning and Machine Learning were explored, in order to fully understand how human bias in machines comes about. This led into the debate on machine and computer ethics, sub-fields of moral philosophy relevant to the field of AI. Six prominent ethical theories were then discussed under a technological umbrella. Finally, a brief synopsis of the EU's current AI position was offered in order to better understand its ethical AI stance, explored in section 4.0. First, section 3.0 supplies insights into the methodology used in the framework of this thesis in order to answer my research question: The creation of an ethical Artificial Intelligence policy? An exploration into the early days of the European Union's ethical rhetoric in the field of AI.

3.0 Methodology

3.1 Introduction

This section aims to explain my methodological choices in the framework of this thesis. It is centred on how I identified, analysed and evaluated the data for my investigation. The section is organised as follows: section 3.2 provides a philosophical explanation of the methodological underpinnings of my approach to the research question, i.e. the epistemological and ontological perspectives. Section 3.3 explains and justifies why a qualitative approach was preferred. Section 3.4 describes why research of secondary data was preferred to that of primary data. Section 3.5 explores and explains the main source of all my data: the Artificial Intelligence High-Level Expert Group (AI HLEG).

To begin, section 3.2 explores the epistemological and ontological lenses used in this thesis.

3.2 The Epistemological and Ontological Questions

“An awareness of the philosophical assumptions around undertaking research improves the quality and purpose of the actual research undertaken” (Patel, 2015, p.1).

In the field of the philosophy of technology, there are three mainstream epistemological schools: (technological) instrumentalism, determinism and mediation. Instrumentalism claims that technology is merely a tool, one that can be used for good or evil. This idea shifts all of the ethical responsibility to the user of the technological product or service, with the classic example being "guns don't kill people, people kill people" (Feenberg, 2003). Determinism is the opposite of this, as determinists believe that technology is so powerful that it moulds society and culture (Bowles, 2018). This shifts the ethical responsibility to the technology, not the human programmer (ibid). Winner (1977) split the theory into two claims: a society's technologies are essential influencers of that society, and changes and advancements in technologies are the greatest source of societal change.

Mediation theory, first introduced by Verbeek (2011), offers a third epistemological viewpoint, and the one that I use in the framework of this thesis. Instrumentalism denies agency to technologies such as AI; determinism denies agency to humans. Mediation is a theory that meets these two schools in the middle. Verbeek (2011) believes that technologies such as AI act as a medium that allows us to observe and manipulate the world; both humans and technology co-create and co-shape the world. The theory, used to understand technologies and society, focuses on three main aspects of the relations between human beings and reality: the dimensions of knowledge, ethics and metaphysics (Verbeek, 2011). Verbeek believes that these loosely correspond to the aforementioned Immanuel Kant's three main philosophical questions: (1) what can I know?; (2) what ought I to do?; and (3) what may I hope for? (Verbeek, 2011). Mediation theory builds on Don Ihde's (1990) analysis of human-technology relations. Whilst both technology and humans shape the world, in mediation theory the onus is still on users, designers and policy-makers to shape the impact that technologies such as AI have on our society (Verbeek, 2011). This means that organisations such as the EU have a large influence when determining the ethical values that they believe ought to be aligned with AI machines.

From an ontological perspective, I have adopted an interpretivist approach. Interpretivism, in research terms, seeks to open up deep or thick insights into actor perspectives and preferences and how these are shaped and socially constructed (Thompson, 2015). Interpretivism facilitates the application of social action theories so that the complex processes of placing ethics into AI can be theorised and better understood. Interpretivist research is best undertaken by applying qualitative methods (Grix, 2002).

3.3 Qualitative Data

Qualitative methodologies are best deployed when aiming to "understand people's beliefs, experiences, attitudes, behaviour and interactions" (Pathak, Jena and Kalra, 2013, p.1). In order to attempt to answer the research question posed in the literature review, and recognising the emergent and exponentially expansive nature of the information and research being generated on AI, I decided that the best methodological approach was to adopt a qualitative data collection and analysis framework. Seeing as ethical AI is a relatively new phenomenon on the EU's agenda, I research its rhetoric on the topic. Rhetoric in this sense is best understood in its Aristotelian meaning as the art of persuasion, encompassing ethos (ethics), logos (logic) and pathos (emotion) in argumentation (Leith, 2009). Accordingly, I decided to analyse and interpret qualitative secondary research sources and not to follow a quantitative approach. Furthermore, in the reports I examine, there is no quantitative information available.

3.4 Secondary Research – Document Analysis Approach

Secondary analysis of data can be defined as "any further analysis of an existing data set which presents interpretations, conclusions or knowledge additional to, or different from, those presented in the first" instance (Hakim, 1982, p.1, as cited in Robson, 1995, p.282). Robson (1995) goes on to suggest that secondary analysis allows researchers to capitalise on the efforts of others in collecting data, and to focus on analysis and interpretation rather than being preoccupied with the identification and management of primary research sources.

A document analysis can be defined as a methodical process that evaluates and reviews documents in order to find meaning, gain greater understanding and develop empirical knowledge (Corbin & Strauss, 2008). Documents can take numerous forms, spanning from advertisements to surveys (Bowen, 2009). Fereday & Muir-Cochrane (2006) argue that this method is used to identify overarching themes in documents. Because the domain of AI is avant-garde and ground-breaking, there is a multiplicity of entities contributing knowledge on AI. The field is still evolving, one could almost say daily; it is one of the most attractive fields in the world of science and technology, evolving at a ferocious speed, and is consequently among the most documented of fields. In order to understand the EU's discourse on ethical AI, I had to consult a myriad of documents and reports. In this thesis, I analyse two documents in detail: the Ethics Guidelines for Trustworthy AI (the Guidelines), written by the AI HLEG (2019), and a European Commission (2019) communiqué on the Guidelines. I also briefly discuss a document from the European Political Strategy Centre (EPSC). I analyse these documents in order to find evidence of any underlying ethical theories present in them. These theories are the six examined in the literature review.
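As a hedged illustration of what thematic coding can look like in its crudest mechanised form, the sketch below counts indicator terms for each ethical school in a passage of text. The keyword lists and sample sentence are my own invented assumptions, not a validated coding scheme; the document analysis in this thesis is interpretive close reading rather than automated counting.

```python
# A minimal sketch of keyword-based thematic coding of a document.
# The indicator terms are illustrative assumptions, not a validated scheme.
import re

THEMES = {
    "deontological":    ["duty", "obligation", "principle", "rights", "dignity"],
    "consequentialist": ["outcome", "consequence", "well-being", "welfare", "benefit"],
}

def code_document(text):
    # Tokenise crudely, stripping punctuation but keeping hyphenated terms.
    words = re.findall(r"[a-z\-]+", text.lower())
    return {theme: sum(words.count(k) for k in keywords)
            for theme, keywords in THEMES.items()}

sample = ("AI systems must respect human dignity and fundamental rights, "
          "with the ultimate aim of increasing human welfare and well-being.")
print(code_document(sample))
# -> {'deontological': 2, 'consequentialist': 2}
```

Counting of this kind can only flag candidate passages; deciding whether a term such as "dignity" is being used deontologically still requires interpretive reading, which is why the interpretivist lens of section 3.2 matters.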

The main source of data researched in this thesis is a report published by the EU's AI High-Level Expert Group (AI HLEG). Section 3.5 aims to fully explain this group and its contribution to the EU's ethical AI policy.

3.5 The AI HLEG

The AI HLEG is the publisher of the report that this thesis predominantly aims to research. The European AI Alliance is a multi-stakeholder forum in which one can participate in broad and open discussions on a myriad of aspects of AI development, as well as its economic and societal impact (European Commission, 2018a). The AI HLEG oversees the steering of the European AI Alliance (European Commission, 2018d). The AI HLEG consists of 52 experts who were selected by the Commission for this task in June 20184. The group consists of a wide range of experts across many different fields and sectors, and is the EU's first and most advanced attempt to examine the ethical make-up of AI. The AI HLEG's focus is concentrated on two main projects:

1) "…prepare draft AI ethics guidelines, which will offer guidance on how to implement ethical principles when developing and deploying AI, building on the work of the European Group on Ethics in Science and New Technologies and the European Union Agency for Fundamental Rights"; and

2) "…make mid- and long-term policy recommendations on AI-related challenges and opportunities, which will feed into the policy development process, the legislative evaluation process and the development of a next-generation digital strategy" (European Commission, 2018d).

4 A full list of the 52 stakeholders can be found here (28 males, 24 females):

On the 8th of April 2019, the AI HLEG released their official Ethics Guidelines for Trustworthy AI (the Guidelines hereinafter). A first draft of the document, launched on the 19th of December 2018, underwent an open consultation with over 5,000 contributors. The document released in April 2019 is the main area of focus of this thesis, as it best represents current EU thinking on ethics and AI. In writing these guidelines, the AI HLEG aim to "build and maintain an ethical culture and mind-set through public debate, education and practical learning" (AI HLEG, 2019, p.9). The European Commission has welcomed the Guidelines in an official communication, another document studied in section 4.0 (European Commission, 2019c). The Commission believes that the report is "valuable" for its policy-making and supports the key findings of the document (European Commission, 2019c, p.5).

Notwithstanding, the AI HLEG is not without critique. All members of the group are European, almost exclusively white and there are more male than female members. Furthermore, amongst the fifty-two, only four members are experts in ethics. However, unlike the aforementioned Google, the members were chosen after an open selection process.

Nonetheless, the Guidelines released in April 2019 act as a very convenient and reliable source of data. The fact that the experts were hand-picked by the European Commission grants great agency to the AI HLEG as an organisation. The Commission is also now launching a targeted pilot phase of the Guidelines, ensuring they can be tried, tested and implemented in practice. The publication is the EU's first and most advanced attempt to consider the ethical implications of AI in Europe. This is the reason that I decided to use the document as the source of data for my investigation. The forty-page document, assembled after a large civil-society consultation, explores ethical AI in depth. Prior to this, as explained in the literature review, little to no headway had been made on the issue of ethical AI, although EU bodies considered it important. This report offers the best explanation of the EU's current views on ethics in AI.

Following an exploration into the methodological approach deployed in this thesis, section 4.0 now examines the EU’s rhetoric on ethical AI.

4.0 Investigation

4.1 Introduction

In this section of my thesis, I critically explore, via the interrogation of a benchmark publication, the early official EU policy on AI, specifically its ethical dimension. This begins with an exploration of the EU's human-centric AI (section 4.2). This is followed by a critical examination of the AI HLEG Guidelines in section 4.3, under the headings of Fundamental Rights as Moral and Legal Entitlements (section 4.3.1) and From Fundamental Rights to Ethical Principles (section 4.3.2). Following this, I analyse the group's perspective on the necessary ethical principles for AI (section 4.4). Lastly, the seven requirements identified by the AI HLEG for trustworthy AI are analysed in section 4.5. To commence, section 4.2 explores human-centric AI.

4.2 Human-centric AI

An expression frequently encountered when researching the EU, AI and ethics is "human-centric". This term can be found across a myriad of EU documents, including the AI HLEG Guidelines. This expression, and especially what it implies, although not yet in place, is at the heart of the EU's AI intent. The EU believe that deploying a human-centric approach is the first step in ensuring an ethical AI policy (EPSC, 2018). The Commission's in-house think tank, the EPSC (2018), illustrates this in a dedicated figure.

[Figure: EPSC (2018) illustration of the EU's human-centric approach to AI]

The Commission (2019c, p.9), in their communiqué, would appear to take the ethical dimension of AI very seriously, claiming that ethics in AI is not a "luxury feature or an add-on." The AI HLEG reinforce the human-centric strategy, believing that "AI systems need to be human-centric, resting on a commitment to their use in the service of humanity and the common good, with the goal of improving human welfare and freedom" (AI HLEG, 2019, p.4). The Commission believe that "trust is a prerequisite to ensure a human-centric approach", further stating that "AI is not an end in itself, but a tool that has to serve people with the ultimate aim of increasing human well-being" (European Commission, 2019c, p.2). The Commission furthermore argue that the best manner to reach this desired trustworthiness is to ensure that "the values on which our societies are based" are fully integrated when developing AI (European Commission, 2019c, p.3). The values the Commission (2019c, p.3) are referring to here are the ones upon which the EU is founded, including "respect for human dignity, freedom, democracy, equality, the rule of law and respect for human rights." Moreover, the Commission link these with the EU Charter of Fundamental Rights, a text that they say links together the "personal, civic, political, economic and social rights" that EU citizens enjoy. The Commission (2019c, p.3) are persuaded that AI "should be developed in a way that puts people at its centre and is thus worthy of public trust."

The Commission place an emphasis on the development of a human-centric AI throughout their rhetoric. One of the main reasons for this continued underlying strategy is human bias in machines. The Commission comprehend that "decisions taken by algorithms could result from data that is incomplete and therefore not reliable, they [data] may be tampered with by cyber-attackers, or they may be biased or simply mistaken" (ibid, p.3). In order to achieve the desired strategy, the Commission believe that "[d]iversity in terms of gender, racial or ethnic origin, religion or belief, disability and age should be ensured at every stage of AI development" (ibid, p.3). The hope is that this would "enhance" the ability of people, not "replace" them.
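
The Commission's concern about incomplete or skewed data can be made concrete with a small sketch. The following Python fragment is purely illustrative and all data, names and figures in it are invented: a naive "model" that simply reproduces historical decision rates will faithfully learn the human bias already embedded in its training data, rather than any genuine qualification signal.

# Purely illustrative sketch: a hypothetical hiring "model" trained on
# historically skewed decisions. All data and names are invented.

past_applicants = [
    {"group": "A", "score": 7, "hired": True},
    {"group": "A", "score": 5, "hired": True},
    {"group": "B", "score": 8, "hired": False},  # qualified, yet rejected
    {"group": "B", "score": 6, "hired": False},
]

def hire_rate(group):
    """Reproduce the historical hire rate for a given group."""
    rows = [a for a in past_applicants if a["group"] == group]
    return sum(a["hired"] for a in rows) / len(rows)

# The model learns the historical bias, not the qualification signal:
print(hire_rate("A"))  # 1.0 -- group A was always hired
print(hire_rate("B"))  # 0.0 -- group B never hired, despite higher scores

However trivial, the sketch shows why the Commission treat the reliability of underlying data as an ethical, and not merely technical, question.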

Therefore, the Commission consider that ethical guidelines for AI are fundamental to achieving the desired objective of human-centric AI: a framework that "should” be applied to "developers, suppliers and users of AI" in the common market, establishing an "ethical level playing field" across all EU Member States. Section 4.3 aims to critically investigate the Guidelines written by the AI HLEG at the European Commission's request.

4.3 The AI HLEG Guidelines

In order for trustworthy AI to emerge and be developed, the AI HLEG identified three main components that are necessary: it should comply with the law, aligning with all applicable laws and regulations; it should fulfil ethical principles and values; and it should be robust, as even AI with good intentions can cause unintended harm (AI HLEG, 2019, p.5). Whilst the AI HLEG believe that in an ideal world these three components would work harmoniously together, they accept that this may be challenging, as “at times the scope and content of existing law might be out of step with ethical norms” (ibid). The AI HLEG believe that with these three components, ethics will be “a core pillar for developing a unique approach to AI” (ibid, p.5). Furthermore, the group turn to AI ethics in order to deal with issues around the “development, deployment and use of AI” (ibid, p.9). The expert group trust that ethics will firstly “stimulate reflection” on the necessity to “protect individuals and groups” even at the most basic level. Secondly, in using ethically aligned AI, the group aim to “stimulate new kinds of innovations that seek to foster ethical values” (ibid, p.9). The AI HLEG cite the UN Sustainable Development Goals, part of the EU’s 2030 agenda, as an example. The ethical approach put forward by the AI HLEG is split into four principles.

4.3.1 Fundamental Rights as Moral and Legal Entitlements:

The AI HLEG believe in an ethical approach based on EU fundamental rights5 that are “enshrined” in the EU treaties. These rights are included in Articles 2 and 3 of the Treaty on European Union and in the EU Charter of Fundamental Rights; they encompass respect for the rule of law, the fostering of democratic freedom and the promotion of the common good. The group of experts claim that these rights provide “the most promising foundations for identifying abstract ethical principles and values” that can then be “operationalised in the context of AI” (ibid, p.9). Moreover, the AI HLEG argue that “[t]he common foundation that unites these rights can be understood as rooted in respect for human dignity – thereby what [they] describe as a human-centric approach” (ibid, p.10).

5 The AI HLEG define Fundamental Rights as the following: “Fundamental rights lie at the foundation of both international and EU human rights law and underpin the legally enforceable rights guaranteed by the EU Treaties and the EU Charter. Being legally binding, compliance with fundamental rights hence falls under trustworthy AI's first component (lawful AI). Fundamental rights can however also be understood as reflecting special moral entitlements of all individuals arising by virtue of their humanity, regardless of their legally binding status. In that sense, they hence also form part of the second component of trustworthy AI (ethical AI)” (AI HLEG, 2019, p.7).

4.3.2 From Fundamental Rights to Ethical Principles

The AI HLEG state that the following rights are of the utmost importance when taking ethical principles into consideration: respect for human dignity; freedom of the individual; respect for democracy, justice and the rule of law; equality, non-discrimination and solidarity; and citizens’ rights.

Beginning with human dignity, the expert group consider it to mean that “all people are treated with respect” as people are “moral subjects, rather than merely as objects to be sifted, sorted, scored, herded, conditioned or manipulated” (AI HLEG, 2019, p.10). This is followed by the group’s belief that “AI systems should be developed in a manner that respects, serves and protects humans’ physical and mental integrity, personal and cultural sense of identity, and satisfaction of their essential needs” (ibid, p.10).

In regard to freedom of the individual, the AI HLEG emphasise the importance of individuals remaining free to make decisions in life for themselves. The group define this freedom as “a commitment to enabling individuals to wield even higher control over their lives, including (among other rights) protection of the freedom to conduct a business, the freedom of the arts and science, freedom of expression, the right to private life and privacy and freedom of assembly and association” (ibid, p.10-11). This would mean that no AI could unjustifiably observe individuals or unfairly manipulate them.

With the respect for democracy, justice and the rule of law, the group wish for an AI that “should serve to maintain and foster democratic processes and respect the plurality of values and life choices of individuals” (ibid, p.11). This implies an AI that protects democratic processes, including voting systems in democratic countries, as well as human deliberation. Where equality, non-discrimination and solidarity are concerned, “equal respect for the moral worth and dignity of all human beings must be ensured” (ibid, p.11). Furthermore, the AI HLEG wish for this to encompass the rights of persons at risk of exclusion. This section also tackles the phenomenon of human bias in algorithms. The group state that “[i]n an AI context, equality entails that the system’s operations cannot generate unfairly biased outputs” (ibid, p.11).

Lastly, regarding citizens’ rights, the group claim “AI systems offer substantial potential to improve the scale and efficiency of government in the provision of public goods and services to society” (ibid, p.11). The group wish for AI to safeguard the rights that citizens already have in the EU, as well as under international law.

4.4 Ethical Principles in AI Systems

The AI HLEG in the Guidelines seek to tackle the ethical principles that they wish to be present in all AI systems. The group believe that said principles can “inspire new and specific regulatory instruments, can help interpreting fundamental rights as our socio-technical environment evolves over time, and can guide the rationale for AI systems’ development, deployment and use – adapting dynamically as society itself evolves” (ibid, p.11). The four principles have been founded upon fundamental rights, based in the EU Charter. These principles are those that the group believe all AI developers and users should adhere to; they go beyond “formal compliance with existing laws” (ibid, p.12). Said principles are: respect for human autonomy, prevention of harm, fairness and explicability. These are in essence the AI HLEG’s attempt at writing their own ethical robot laws, much as Asimov did in 1950. To begin, section 4.4.1 explores the complex issue of human autonomy.

4.4.1 Human Autonomy

The principle of respect for human autonomy in the context of ethical AI is the first of the four principles suggested by the AI HLEG. The group state that “fundamental rights upon which the EU is founded are directed towards ensuring respect for the freedom and autonomy of human beings” (ibid, p.12). The expert group recommend that AI machines should be designed to “augment, complement and empower human cognitive, social and cultural skills” (ibid, p.12). This is the very essence of a human-centric approach, which would translate into the opportunity for human choice. The main element of this principle is human oversight throughout the process of developing and deploying AI.

4.4.2 Prevention of Harm

The AI HLEG demand that AI systems do not harm or in any manner adversely affect humans. This includes harm through algorithms, as harm also encompasses how certain social groups live, as well as cultural and political environments. Harm is not just physical; it also includes the protection of human dignity and the mental integrity of humans. The AI HLEG recommend that special care be paid to society’s most vulnerable, meaning that they are included in the development of AI. As well as human beings, the group suggest this includes a “consideration of the natural environment” (ibid, p.12). This would mean AI systems that are not open to malicious use or manipulation, acting both safely and securely.

4.4.3 Fairness

Whilst the AI HLEG acknowledge the plethora of definitions that the notion of fairness can fall under, they state that fairness has both a “substantive and a procedural dimension” (ibid, p.12). This principle deals with human bias in AI machines. Beginning with the substantive aspect, this would imply the assurance of “equal and just distribution of both benefits and costs” as well as ensuring freedom from “unfair bias, discrimination and stigmatisation” (ibid, p.12). Whilst evidently recognising human bias in AI, the group also express their hopes for “equal opportunity in terms of education, goods, services and technology” (ibid, p.12) to improve fairness in machines. The expert group also allude to the necessity of deploying the principle of proportionality between means and ends: programmers should only use the necessary data, and the end should give preference to the measure that is “least adverse to fundamental rights and ethical norms” (ibid, p.13).
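
One way to render the substantive dimension of fairness operational is a statistical parity check. The following Python sketch is hypothetical; the data, the tolerance threshold and the function name are all invented for illustration. It measures whether two groups receive positive outcomes at comparable rates, which is one common (though contested) reading of what freedom from “unfair bias” could mean in a deployed system.

def demographic_parity_gap(outcomes):
    """outcomes: list of (group, decision) pairs, with decision in {0, 1}."""
    groups = {}
    for group, decision in outcomes:
        groups.setdefault(group, []).append(decision)
    rates = {g: sum(d) / len(d) for g, d in groups.items()}
    return max(rates.values()) - min(rates.values()), rates

# Invented example data: group A is favoured twice as often as group B.
decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
gap, rates = demographic_parity_gap(decisions)
print(rates)   # {'A': 0.67, 'B': 0.33} (approximately)
if gap > 0.2:  # hypothetical tolerance
    print("Potential unfair bias detected: gap =", round(gap, 2))

A check of this kind captures only one narrow notion of fairness; the AI HLEG’s substantive and procedural dimensions would require far broader organisational measures.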

4.4.4 Explicability

The final ethical principle in AI recommended by the experts is the principle of explicability. The group believe this principle to be the key to building and maintaining the trust of users in AI systems. This would mean AI systems whose decisions are transparent and can be explained to people impacted both directly and indirectly by those decisions. When “black box” algorithmic decisions are made (i.e. decisions made by algorithms with no explanation offered), other measures such as “traceability, auditability and transparent communication” should be used, as long as the system as a whole respects “fundamental rights” (ibid, p.13). Explicability is also context-dependent: an algorithm suggesting that a user buy a grocery item should be dealt with differently from one pegging an individual as a future criminal, due to the graver ethical nature of the latter example (ibid).
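
To illustrate how the measures of “traceability” and “auditability” might look in practice, the following Python sketch is offered; it is hypothetical, and the field names and file path are invented. The idea is simply that every automated decision is recorded with enough context that it can later be explained to, or contested by, the people it affects.

import json
from datetime import datetime, timezone

def log_decision(model_version, inputs, output, rationale=None):
    """Append an auditable record for one automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,        # what the system saw
        "output": output,        # what it decided
        "rationale": rationale,  # an explanation, where the model offers one
    }
    with open("decision_audit.log", "a") as f:
        f.write(json.dumps(record) + "\n")

# Invented example: a credit decision logged with its stated rationale.
log_decision("credit-model-v2", {"income": 30000, "age": 41},
             "rejected", rationale="income below learned threshold")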

4.5 Requirements for Trustworthy AI

The centrepiece of the Guidelines is the set of requirements for trustworthy AI. The AI HLEG identify seven key requirements that all AI applications should respect in order for an AI to be considered trustworthy for society. The seven requirements are as follows: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. The AI HLEG (2019) present these requirements graphically in the Guidelines.

This section aims to explore these requirements, beginning with human agency and oversight in section 4.5.1.

4.5.1 Human Agency and Oversight

The AI HLEG wish for AI systems that act as “enablers to a flourishing and equitable society by supporting human agency and fundamental rights” (European Commission, 2019c, p.5). The experts suggest fundamental rights impact tests before an AI system is deployed; the idea of these tests is to ensure risks can be reduced, or justified as necessary (AI HLEG, 2019, p.15). Furthermore, the group express interest in AI machines that have the “overall wellbeing of the user” at the core of their functionality (ibid). The AI HLEG consider crucial an approach that includes human intervention at every stage of decision-making, as well as human oversight of the entire process. Finally, a human ought to be in place to make the final decision on releasing the AI to society and on how it should be deployed. The group feel that the less oversight a human can have over a process, the more regulation and governance an AI system must have.
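
A minimal sketch of what such oversight could look like in code follows; it assumes a hypothetical confidence threshold and review queue, and every name in it is invented. Low-confidence automated decisions are escalated to a person, who makes the final call before the decision takes effect.

def decide(case_id, model_confidence, model_decision):
    """Apply the automated decision only when confidence is high enough."""
    if model_confidence < 0.9:             # hypothetical escalation threshold
        return escalate_to_human(case_id)  # a human makes the final call
    return model_decision

def escalate_to_human(case_id):
    # Placeholder: in practice this would route to a human review interface.
    print(f"Case {case_id} queued for human review")
    return "pending_human_review"

print(decide("application-17", 0.62, "approve"))  # -> pending_human_review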

4.5.2 Technical Robustness and Safety

This requirement is closely linked to the ethical principle of the prevention of harm. The AI HLEG aim for trustworthy algorithms that are “secure, reliable and robust enough” to deal with any
