
Algorithmic Decision-Making and Democracy:

The Implications for Democratic Accountability

MSc thesis written by Ate Pelsma
UvA ID: 11861800
Academic year: 2019-2020
Submission date: June 2020
Supervisor: Dr. Afsoun Afsahi
Second reader: Dr. Gordon Arlen
Word count: 12432
Master Thesis Political Science (Political Theory)
Graduate School of Social Sciences


Table of Contents

1. Introduction
2. Democratic Accountability
2.1 The Elements of Democratic Accountability
3. Algorithmic Decision-Making
4. The Implications for Democratic Accountability
4.1 Hidden Decision-Making
4.2 Opaque Decision-Making
4.3 Automated Decision-Making
5. Conclusion


Algorithmic Decision-Making and Democracy:

The Implications for Democratic Accountability

What happens to democracy when algorithms increasingly complement or substitute human decision-making in public institutions? In this paper, I focus on the implications of algorithmic decision-making for democratic accountability. I develop an expanded conception of democratic accountability that captures how relations of accountability are of central importance to any notion of democracy. As such, democratic accountability entails that those affected by algorithmic decision-making ought to be able to demand justifications and potentially impose sanctions. I argue that algorithmic decision-making significantly impairs democratic accountability by focusing on three concerns that capture the current functioning of algorithmic decision-making: hiddenness, opacity and automation. For each of these concerns, I draw out the implications for democratic accountability. This analysis shows that algorithmic decision-making presents serious obstacles for democratic accountability that are potentially detrimental to the functioning of the democratic system. Democratic accountability needs to be restored before public institutions start to implement algorithms in decision-making processes.

Keywords: algorithms; democracy; accountability; decision-making; legitimacy; machine learning


1. Introduction

The digital turn, powered by technological innovations, has expanded the realm of possibilities in terms of organizing society. The growing abundance of ‘Big Data’ and increasingly sophisticated algorithms have come together in ways that influence and affect people’s lives. In today’s world, choices and decisions are increasingly delegated to algorithms, which are advising and sometimes wholly deciding how to interpret data and what actions to take (Mittelstadt et al 2016). Algorithms are widely used to determine who gets hired or promoted (O’Neil 2016). The stock market is now run by algorithms which automatically buy or sell stocks (Pasquale 2015). One venture capital fund has even appointed an algorithm to its Board of Directors, granting it the right to vote on investment decisions (Business Insider 2014). Financial institutions implement algorithms that generate credit scores, automatically granting or denying loans (Zarsky 2016). Insurance companies use algorithms to identify fraudulent behavior and determine premiums (Eubanks 2018). Algorithms also affect the information we perceive by automatically sorting, filtering and sifting through data (Bucher 2018, p. 49). Moreover, algorithms affect what cannot be perceived, e.g. websites that violate intellectual property rights can automatically be removed from search results (Campbell-Verduyn et al 2017, p. 219).

In sum, these examples reflect that computer algorithms are being implemented in a wide range of social contexts and have thus started to increasingly affect people’s lives by structuring “the social in particular ways” (Katzenbach and Ulbricht 2019, p. 2). It is now widely recognized that we live in a digital society in which digital systems increasingly govern our lives either directly or indirectly (Susskind 2018, p. 2). Algorithms have become increasingly ingrained in these digital systems. The concept of algorithmic governance is used to describe systems in which operations, decisions and choices previously made solely by humans now significantly rely on computer algorithms, which advise, and sometimes autonomously decide, about the interpretation of data and the actions to be taken (Danaher et al 2017; Mittelstadt et al 2016). The ability to nudge, guide, control, manipulate, constrain or predict human behavior through these digital systems outlines how algorithms have become part of a new mode of governing (Danaher et al 2017).

While much has been written about the emergence of algorithmic governance, it has remained largely under the radar in the field of political theory. This is remarkable, since public institutions and governmental agencies have also started to increasingly rely on computer algorithms in public decision-making processes. This increasing use of algorithms to complement or sometimes even substitute human decision-making is labeled algorithmic decision-making and can be considered a sub-category of the literature around algorithmic governance. Examples of algorithmic decision-making in the public sphere include identifying tax fraud, money laundering, and flagging potential terrorists or drug traffickers (De Laat 2017, p. 526). In the U.S., an algorithm called COMPAS has even been implemented in the judicial system to assign scores to defendants, indicating the likelihood of committing crimes (Corbett-Davies et al 2017), thereby influencing the judge’s decision and thus indirectly affecting people’s freedom. What are the implications for democratic governance when governmental agencies start to increasingly complement or even substitute human decision-making with algorithmic decision-making?

The focus of this paper is on the implications of this algorithmic decision-making for democratic accountability. I argue that as algorithmic decision-making increasingly complements or even substitutes human decision-making, democratic accountability is significantly impaired. By drawing on literature from democratic theory, I argue that relations of accountability are essential to the overall legitimacy of the democratic system. By drawing on the all affected principle, democratic accountability is conceived through the elements of answerability and empowerment, entailing that those affected by decisions should be able to demand justifications and potentially sanction those who make these decisions. As decision-makers start to be increasingly influenced or even substituted by computer algorithms, I argue that the elements of answerability and empowerment are significantly weakened. I focus on three key properties that capture the current functioning of algorithmic decision-making: hiddenness, opacity and automation. For each of these, I draw out the implications for the elements of answerability and empowerment and show how the current functioning of algorithmic decision-making forms serious obstacles for democratic accountability.

The structure of my argument is as follows. In the first section, I develop my conception of democratic accountability. This outlines what democratic accountability means, what it requires and why we ought to care about it in the democratic setting. In the second section, I briefly conceptualize algorithmic decision-making and outline how it currently functions. This brings out the need for democratic accountability. In the third section, I explore the three key properties of algorithmic decision-making in terms of their impact on the elements of democratic accountability. I conclude that the rise of algorithmic decision-making is problematic for democratic accountability. Considering the importance of democratic accountability to any notion of democracy, I argue that the desirability of algorithmic decision-making in the public domain is questionable due to its current functioning.

2. Democratic Accountability

In this section I provide a brief conceptualization of democratic accountability and describe it as a mechanism of central importance for the legitimacy of a democratic system. Highlighting this essential function of democratic accountability within any notion of democracy is important, because algorithmic decision-making has the potential to seriously undermine democratic accountability in ways that can be detrimental to the functioning of the overall system.

Relations of accountability are all around us. Imagine showing up late for class because you were stuck in traffic. You nervously knock on the door, put on your most guilty face, and enter class. When you step inside, the teacher looks at you fiercely and demands that you justify why you have disturbed the lecture, to which you reply that traffic was extremely busy this morning. You sincerely apologize and stress that it will not happen again. If the teacher is in a good mood, you might be allowed to take a seat and follow the lecture. If the teacher is not satisfied with your explanation, you are sent out to repent for your sins.

It is this mundane example that outlines the nature of an accountability relationship. The student has an obligation to account for the action of showing up late and potentially faces consequences if the justification for that action is unsatisfactory to the teacher. In a general sense, accountability can be defined as follows: party X is accountable to party Y regarding conduct Z, if X has an obligation to provide Y with a justification for Z, and X may face consequences if Y finds X’s justification unsatisfactory (Bovens 2007, p. 450).

Accountability is thus understood as a relational concept, asymmetrical in nature, “organized around the relationship between an accountability holder and an accountability holdee” (Borowiak 2011, p. 6). As such, accountability is defined independently of the democratic context.

To fit accountability within the confines of my argument, I am interested in a specific form of accountability, i.e. democratic accountability. For an accountability relationship to be democratic, the right actors need to be involved. Democratic accountability is generally understood as a relationship between those making collective decisions and those affected by or subjected to those decisions (Warren 2014; Goodin 2003; Borowiak 2011). Therefore, in the democratic context, those who make decisions are accountable to those affected by those decisions. In the literature, democratic accountability traditionally signifies the idea that those who rule or govern ought to be accountable to the citizenry (Goodhart 2011, p. 46).

However, identifying these relations of accountability is dependent upon the model of democracy that is assumed: the way in which democratic accountability is conceptualized is contingent upon one’s conception of democracy. Most of the literature on democratic accountability implicitly assumes the model of representative democracy. Within a representative system, those who make decisions are the elected representatives, who are authorized by the constituency to do so (Pitkin 1967). These representatives are then considered to be accountable to the constituency for the decisions that they make (e.g. Urbinati and Warren 2008, p. 396). Answers in the literature to the question “who should be accountable to whom, for what, and how?” (Borowiak 2011, p. 9) are mostly built on the assumption of such a representative system.

This almost exclusive focus on accountability’s role in representative democracies might impair the theoretical and empirical integrality of the concept (Trechsel 2010, p. 1053). I therefore aim to broaden the concept of democratic accountability to include not only representative conceptions of democracy, but other notions, such as deliberative systems of democracy as well. Expanding the concept of democratic accountability beyond its use in representative democracies is important because it outlines accountability’s crucial role in any notion of democracy, not just in representative models. Moreover, it broadens the perspective by going beyond traditional relations of democratic accountability between representatives and citizens. This is essential in order to assess the implications of emerging systems of algorithmic decision-making, which do not always fit traditional notions of government or other formal institutions.


I expand democratic accountability to include other conceptions of democracy by making use of the all affected principle. This principle broadly entails that all individuals affected by collective decisions should be able to influence the outcome (Warren 2014; Goodin 2007; Young 2000; Dahl 1989). While the implicit normative assumptions of this principle can vary, it generally reflects broader democratic values of political equality, personal autonomy and self-determination (Urbinati and Warren 2008, p. 395). I am aware that this principle faces several critiques in terms of its practical use to constitute the demos. The principle faces the issue of infinite regress, i.e. who gets to decide who is affected by a decision? (Goodhart 2011, p. 50). Moreover, the over-inclusive tendency of the principle, through which virtually anyone ends up being affected, is considered problematic (Goodin 2007; Sangiovanni 2019). I want to make clear that it is not within the scope of my analysis to argue that the all affected principle provides a coherent, satisfactory answer per se. It merely provides a consistent answer to the question “who should be accountable to whom, for what, and how?” that can be operationalized for the remainder of my argument.

By linking the general definition of accountability to the normative conception of democracy based on the all affected principle, I formulate my expanded understanding of democratic accountability. Democratic accountability entails that those affected by collective decisions should be able to demand justifications from and potentially sanction those who govern on their behalf (Warren 2014; Bovens 2007). In other words, I understand democratic accountability as a democratic norm which reflects the idea that the “sources of governance should be answerable to and punishable by those who are governed” (Borowiak 2011, p. 9). This description brings forth the two defining elements of relations of democratic accountability, i.e. answerability and empowerment (Warren 2014, p. 40-41). I briefly explore these two elements to elucidate what my conception of democratic accountability looks like and what it requires.


2.1 The Elements of Democratic Accountability

The first element, answerability, points to the discursive process involved in all accountability relations. When actors make certain decisions, they need to be able to provide those affected with a justification when asked to do so, implying that, in the democratic context, the higher authority resides not with the decision-maker, but with those affected by that decision (Mulgan 2000, p. 555). A justification means providing the reasons for one’s actions while simultaneously linking these reasons to “the normative grounds whereby they may be ‘justified’” (Giddens 1984, p. 30). After the agent attempts to justify his or her actions or decisions, the agent can be further interrogated and questioned about the adequacy of the justification (Bovens 2007, p. 451).

Those holding the agent accountable either accept or reject that attempt based on a set of standards. This implies that being held accountable entails that the agent has certain obligations to act in accordance with ‘accepted standards of behavior’ (Grant and Keohane 2005, p. 30). Other authors have recognized that these standards to which those who govern are held accountable are not always pre-determined but often part of the discursive process (Borowiak 2011, p. 8). It is in the element of answerability that some authors recognize democratic accountability as a virtue (Warren 2014, p. 41). As such, democratic accountability becomes central to legitimate political order, i.e. “one that could be justified to all those living under its laws” (Chambers 2003, p. 308). Giving press conferences is an example of the element of answerability. In a press conference, those who wish to implement new or adapted policies attempt to justify the changes and can concurrently be interrogated or challenged when the reasons given are deemed unsatisfactory. The element of answerability therefore touches strongly upon one of Dahl’s criteria for the democratic process, i.e. enlightened understanding: the process in which those who make decisions attempt to justify their actions allows those affected to learn about the decisions made, relevant alternatives and possible consequences (Dahl 1998, p. 37).

The second element of democratic accountability is empowerment, meaning that those holding power-wielders accountable must be able, in the first place, to demand an account, but also to impose sanctions when the agent fails to justify his or her conduct (Bovens 2007; Warren 2014). Control is exercised by granting those who are affected the right to demand an account and to impose sanctions if they determine that this account is unsatisfactory (Grant and Keohane 2005, p. 29). While some authors have questioned whether the possibility for sanctions is inherent to accountability (Philp 2009; Mulgan 2000), I maintain that it is a constitutive element of democratic accountability. Without possible sanctions, processes of accountability become empty (Borowiak 2011, p. 7). Moreover, it is what distinguishes accountability from ‘non-committal provision of information’ (Bovens 2007, p. 451). It is through the anticipation of possible sanctions that accountability relations can deter agents from abusing their power, thus not only operating ex post by scrutinizing past behavior, but also ex ante (Moncrieffe 2001, p. 26).

Within the context of democratic accountability, sanctions can come in a myriad of ways. In the representative system, sanctions usually revolve around elections and removal from office due to a lack of votes. Yet sanctions can also involve fines, social pressures, calls for resignation, negative publicity, or other consequences (Bovens 2007; Borowiak 2011). Sanctions can therefore be either formal, e.g. penal sanctions, or informal, e.g. publicly admitting misconduct. What is important in relation to sanctions is that they are enforceable, i.e. that they actually empower those who are affected by decisions.

By empowering those who are affected to hold those who govern accountable, vulnerability to delegated power is limited (Warren 2014, p. 40). This is important since the ability of individuals to influence collective decisions comes to depend on the agents that govern on their behalf (Mulgan 2003, p. 29). When individuals delegate their political power, they lose their ability to directly influence decisions and no longer directly engage in governing (Dahl 1989, p. 79). This can potentially limit their capacity for ‘effective participation’, since those in power have greater opportunities for expressing their views (Dahl 1998, p. 39). In this scenario, accountability becomes key to ensure that those affected maintain indirect political influence over those who have direct influence. Potential interest divergence of power-wielding agents can increase the risk that the interests of those affected are neglected or harmed, limiting their capacity for self-determination (Warren 2014, p. 40). It is then the problem of vulnerability to delegated power, present in any notion of democracy (Dahl 1999, p. 21), that democratic accountability aims to solve.

Both constitutive elements of democratic accountability require transparency. This implies that accountability relies on ‘epistemic visibility’ of an agent’s actions, i.e. it requires availability of transparent information and understanding (Borowiak 2011, p. 7).

Understanding what actions were taken necessarily precedes the ability to hold people accountable for those actions, i.e. to demand a justification and to potentially sanction misconduct. Without transparency, democratic accountability cannot properly take form.

Thus, democratic accountability requires answerability and empowerment, which in turn both rely on transparency. When these requirements are fulfilled, democratic accountability can help to maintain and improve the overall legitimacy of the democratic system by providing those affected by decisions with the necessary justifications while simultaneously limiting their vulnerability to delegated power. However, in practice these requirements face many difficulties. For example, in democratic systems of today, voting is one of the primary forms of democratic accountability, yet it can be questioned whether voting is an effective and reliable mechanism to hold those who govern accountable (see e.g. Przeworski, Stokes and Manin 1999).


When democratic accountability is operationalized as a democratic norm, it becomes apparent that the ‘democratic accountability regimes’ currently in place might not suffice to solve present and future challenges for democracy (Warren 2014, p. 48). Take for example the global issue of climate change. In this case, the effects are visible, but the sources of these effects are “diffuse, systemic, or otherwise unidentifiable” (Borowiak 2011, p. 7). Those affected by the consequences of climate change are unable to identify the agents that can be held accountable. It shows that while the all affected principle provides a normatively preferred guideline that can help to identify those entitled to demand accountability, it is significantly more challenging to identify the agents who can bear these demands correspondingly. The lack of perceptible agents that can bear the demands for accountability is one of the limiting factors for democratic accountability to take shape.

As a normative principle, democratic accountability faces many other limitations and challenges in the real world. Considering that we live in an unequal world, ‘weak’ accountability holders are in many cases unable to sanction powerful actors for failing to fulfill obligations (Rubenstein 2007), e.g. poor workers in developing countries are often unable to sanction governments or large companies for violating labor laws. Similarly, on the level of world politics, there is a lack of effective accountability mechanisms that can limit abuses of power (Grant and Keohane 2005; Dahl 1999). Establishing democratic accountability relationships is also significantly more difficult when we consider the problem of ‘many hands’, which, akin to the issue of climate change, outlines that there are many different people who contribute in different ways to policies and decisions (Thompson 1980). Who is to be held accountable for the consequences of certain decisions when these decisions have been made by ‘many hands’ and not a single entity?

These examples are not meant to provide a coherent overview of all the limiting factors to democratic accountability but merely serve to show that, in practice, there are already many challenges to establishing effective relationships of democratic accountability. In the next section, I present an emerging challenge for democratic accountability and thus democracy itself, i.e. algorithmic decision-making.

3. Algorithmic Decision-Making

According to my conception of democratic accountability as a normative principle outlined above, those affected by decisions should be able to demand justifications from and impose sanctions on the decision-maker. Yet what happens to democratic accountability when computer algorithms are increasingly implemented in decision-making processes across a wide range of contexts? To answer this question, it is necessary to first clarify what is meant by algorithmic decision-making and why it is important that these systems can be held accountable. I briefly outline the most important recent developments to capture in a general sense how processes of algorithmic decision-making can be understood and how these currently function. Since this functioning can sometimes be problematic, it is important that these systems can be held accountable.

Many decisions previously made solely by humans now significantly rely on algorithms, which advise, and in some cases autonomously decide, about the interpretation of data and the actions to be taken (Danaher et al 2017; Mittelstadt et al 2016). It is this phenomenon that is called algorithmic decision-making. By influencing the outcome of certain decisions, algorithms can bestow benefits or impose harms on those affected (Binns 2017, p. 543). As mentioned in the introduction, algorithmic decision-making is being applied in an increasingly wide range of contexts including health, finance, law and public administration. The contexts in which these decisions are made can be mundane, like an algorithm recommending a new book. Yet algorithms also increasingly inform or make decisions in high-stake contexts, for example whether people get loans and welfare (Eubanks 2018) or who to target during political campaigns (Howard, Woolley and Calo 2018).

To avoid any ambiguity around the concept of algorithmic decision-making, I think it is useful to make some clarifications about the meaning of the word algorithm. I want to make clear that I intentionally refrain from specifying the various technical categories and mathematical constructs behind these algorithms. For the sake of my argument, detailed analysis of the specific techniques and mathematical formulas behind the algorithms is not necessarily useful as the effects of implementation in decision-making processes are similar for different techniques. “Algorithms must be implemented and executed to take action and have effects” (Mittelstadt et al 2016, p. 2). Therefore, I focus on the effects of implementation and use of algorithms in decision-making processes without detailed analysis of the mathematical constructs behind these algorithms.

Understood broadly, an algorithm is no more than a “finite set of rules which gives a sequence of operations for solving a specific type of problem” (Knuth 1968, p. 27), thereby transforming input to output (Alpaydin 2016, p. 16). In this light, detailed instructions that specify how to get from your house to the nearest supermarket can be called an algorithm. This broad definition is problematic because it includes virtually any decision-making process, inhibiting the analytical applicability of the concept. I therefore specify that my analysis revolves around computer algorithms, specifically data-mining algorithms aimed at automatically identifying patterns in large datasets (‘Big Data’) based on which decisions are made (Zarsky 2011, p. 291). Such patterns can be used both descriptively and predictively. Descriptively, these algorithms focus on understanding or explaining past behavior, e.g. identifying past instances of fraud. Predictively, these algorithms identify patterns which are then applied to create models of the future and anticipate behavior (Danaher 2016, p. 247; Zuboff 2015), e.g. identifying potential terrorists or scoring individuals based on their likelihood of falling back into criminal behavior.
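To make the descriptive/predictive distinction concrete, the sketch below is purely illustrative and not part of the thesis: the data, feature names and use of the scikit-learn library are my own assumptions. It fits one simple pattern-finding model on invented past fraud cases and then applies the very same learned pattern to a new, unseen case.

```python
# Illustrative sketch only: one data-mining model, used descriptively
# (characterising past cases) and predictively (scoring a new case).
from sklearn.linear_model import LogisticRegression

# Invented past cases: [claim_amount_in_thousands, n_prior_claims]; 1 = fraud
X_past = [[0.1, 0], [9.5, 4], [0.3, 1], [8.7, 5], [0.25, 0], [9.9, 6]]
y_past = [0, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(X_past, y_past)

# Descriptive use: which features were associated with past fraud?
print("learned feature weights:", model.coef_)

# Predictive use: the same learned pattern is projected onto a new case
new_case = [[9.1, 3]]
print("predicted fraud risk:", model.predict_proba(new_case)[0][1])
```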

As the word itself already implies, data-mining algorithms heavily rely on and are enabled by the existence of large datasets, which are increasingly produced in the context of digitalization. Digitalization can be understood as the expansion of systems in which people, machines and data become linked in complex webs of connections (Susskind 2018, p. 29). The defining feature of the increasingly digital world is that an ever-increasing amount of data is created, collected, analyzed and used (Kalpokas 2019, p. 11). Technologies of surveillance can capture facets of human behavior that were unattainable before and these technologies have become increasingly integrated into people’s everyday lives (Danaher et al 2017, p. 2). As these technologies create larger and more expansive datasets by capturing more aspects of human behavior, data-mining algorithms are increasingly implemented and relied upon to identify patterns, provide recommendations and make decisions about these datasets. It is in this context that algorithmic decision-making has become increasingly prevalent with the capability to endlessly gather, organize and analyze vast quantities of data (Yeung 2017).

The key stages of this decision-making process are data collection, processing, utilization and feedback/learning (Citron and Pasquale 2014; Danaher et al 2017; Zuboff 2015). Increasingly, the design of algorithmic decision-making processes relies on techniques from machine learning[1], a sub-field of Artificial Intelligence research. Traditional, knowledge-based systems function based on explicit rules, precisely defined by a programmer down to the smallest detail (Danaher et al 2017; Binns 2017). Machine learning algorithms function in quite a different manner than knowledge-based systems. Through machine learning, algorithms can now discover patterns, make models, and perform tasks on their own by adapting and learning from extremely large datasets (Susskind 2018, p. 34). Compared to knowledge-based systems, which are programmed to solve a specific problem, machine learning algorithms are much more versatile because they are programmed to learn to solve problems (Tutt 2017, p. 85). As the digital world grows and more data is produced, knowledge-based systems increasingly struggle to cope with all that data and accomplish complex tasks (Matthias 2004, p. 183). Conversely, ‘learning’ algorithms thrive as more data becomes available, allowing them to automatically develop new knowledge and capabilities from that data (Domingos 2015; Susskind 2018). Through this symbiotic relationship with data, machine learning has become more promising and prevalent than knowledge-based systems. Today, machine learning algorithms are the driving force behind many applications, services and other uses, including search engines, recommendation systems, self-driving cars, medical diagnosis and dating apps (see e.g. Siegel 2013; Domingos 2015; Tutt 2017).

While machine learning shows promising capabilities and a wide applicability in diverse contexts, these learning algorithms are by their very nature unpredictable and hard to explain. Machine learning algorithms possess abilities to adapt and learn from extremely large datasets to predict or classify new data on their own, reducing the need for human involvement yet significantly increasing the complexity and uncertainty of these semi-autonomous systems (Binns 2017; Mittelstadt et al 2016; Coglianese and Lehr 2017). As they are no longer programmed down to the smallest detail, but instead learn from data autonomously, it is hard to predict beforehand how exactly a new input will be categorized (Mittelstadt et al 2016). Moreover, once the algorithm works, the programmer does not necessarily understand the rationale behind the decision-making rules that the algorithm follows, making it difficult to explain exactly what happened after a decision has been made (Matthias 2004). The shift from knowledge-based systems to machine learning algorithms is important as it creates novel problems in terms of the predictability and interpretability of the system (Danaher et al 2017, p. 3). I briefly emphasize this shift because it creates serious obstacles for democratic accountability, which I expand upon further below.

[1] How does machine learning work? Typically, machine learning involves two components, a ‘learner’ and a ‘classifier’ (Burrell 2016). The classifier takes the input and produces an output. For example, a spam filter looks at incoming e-mails and produces an output (spam or non-spam). It classifies incoming e-mails based on a decision-making model that the ‘learner’ component created. The learner is trained on existing data, for example a database full of e-mails that the programmer labeled spam or non-spam. Based on that training data, it develops a model by looking at certain features of e-mails (title, word count, specific words, etc.), identifying patterns and giving different weights to different features that help determine whether an e-mail is spam or not. The model is continuously adapted until it can predict the training data successfully. This model is then applied by the classifier in order to decide how to categorize new e-mails (see e.g. Domingos 2015 for a more detailed explanation). Key here is that once the algorithm manages to accurately categorize e-mails, the programmer no longer exactly knows how the algorithm achieves this, as the learner has autonomously adapted and defined decision-making rules. This ability to autonomously modify or define decision-making rules is the distinguishing factor that sets machine learning algorithms apart from other algorithms (Mittelstadt et al 2017, p. 3). The programmer only defines the learning rule and feeds large datasets into the system, based on which the algorithm develops its own rules. Ultimately, the goal is to create a model from the training data that can then be applied to handle and accurately predict new inputs (Tutt 2017).
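As a concrete illustration of the learner/classifier picture sketched in the footnote, consider the following minimal spam-filter sketch. It is not taken from any of the cited works; it assumes the scikit-learn library and uses a handful of invented e-mails, but it shows how the decision rules end up in learned parameters rather than in hand-written instructions.

```python
# Minimal sketch of the 'learner'/'classifier' distinction; data is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = ["win money now", "meeting at noon", "cheap pills win", "lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = non-spam, as labelled by the programmer

# 'Learner': derives its own word weights from the labelled examples
vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(emails)
learner = MultinomialNB().fit(X_train, labels)

# 'Classifier': the learned model is applied to a new, unseen e-mail
new_email = vectorizer.transform(["win cheap money"])
print(learner.predict(new_email))  # e.g. [1], i.e. classified as spam

# The decision rule now lives in learned parameters (per-word weights),
# not in hand-written if/else statements the programmer specified.
print(learner.feature_log_prob_)
```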

Existing literature around algorithmic decision-making has already shown how systems of algorithmic decision-making always embody certain values or assumptions and are therefore far from objective (Binns 2017). Knowledge-based systems are entirely dependent on programmers that decide how to collect data and design the system, and this can reflect biases of the designers (Domingos 2015; Friedman & Nissenbaum 1996). In the case of machine learning, the algorithm is trained on data based on which it determines its own model. If this data contains biases, the algorithm reflects the bias by incorporating it into its own decision-making rules. In this way, algorithmic decision-making can discriminate based on certain attributes. For example, the COMPAS algorithm used to score defendants awaiting trial has been shown to systematically assign higher risk scores to African-American defendants than to white defendants (Corbett-Davies et al 2017). While race was not used as an explicit input, the algorithm nevertheless identified an implicit pattern between racial categories and crime risk in the training data and included this pattern in its decision-making model.
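The mechanism through which a model reproduces bias without ever seeing the sensitive attribute can be illustrated with a small, entirely hypothetical sketch (invented data, not the COMPAS system, and assuming scikit-learn): a feature that merely correlates with the sensitive attribute is enough for the learner to absorb the skew present in the historical labels.

```python
# Illustrative sketch only: a correlated proxy feature (a fictional
# 'neighbourhood' code) lets the model reproduce bias in historical labels,
# even though the sensitive attribute itself is never an input.
from sklearn.linear_model import LogisticRegression

# Training data: [neighbourhood_code, n_prior_arrests]; the historical
# 'high risk' labels are skewed against neighbourhood 1.
X = [[0, 0], [0, 1], [0, 2], [1, 0], [1, 1], [1, 2]]
y = [0, 0, 1, 1, 1, 1]

model = LogisticRegression().fit(X, y)

# Two otherwise identical individuals, differing only in the proxy feature:
print(model.predict_proba([[0, 1]])[0][1])  # lower predicted risk
print(model.predict_proba([[1, 1]])[0][1])  # higher predicted risk
```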

Taking into account that algorithms can embody assumptions, values or bias shows that there are definite shortcomings and potential problems in the current functioning of algorithmic decision-making. These algorithms can produce or reinforce “social, ethical, political and legal problems” (Danaher et al 2017, p. 3). I briefly touch upon these issues not to argue that algorithmic decision-making is a bad thing per se. Just like human decision-makers can produce questionable outcomes, so can algorithms. When machine learning reproduces bias or enhances discrimination by learning from historic data, it is reflecting the biases, prejudices and inequalities that human beings have produced and maintained over time. Therefore, it cannot be concluded that the outcomes produced by algorithmic decision-making are necessarily better or worse than the outcomes produced by human decision-making. Both human and algorithmic decision-making rely on epistemic and normative assumptions that can in turn produce inefficient or unfair outcomes (Zarsky 2016). What is crucial then is that these decision-makers can provide justifications, be questioned or challenged, and potentially face consequences for the decisions that they make. In short, they ought to be accountable for the decisions they make to those who are affected. Especially now that algorithms are implemented in public systems, it is critical that those who make decisions, whether people, algorithms, or a combination of the two, can be held accountable.

While recent literature increasingly calls for ‘algorithmic accountability’ (see e.g. Diakopoulos 2015; Pasquale 2015; O’Neil 2016), it often remains unclear what this refers to and how it can be achieved. Accountability is often conflated with transparency, auditing or responsibility (Binns 2017). In the first section, I have articulated my conception of democratic accountability that can be conceived through the elements of answerability and empowerment. These two elements are mostly overlooked when it comes to discussing accountability in relation to the functioning of algorithmic decision-making. The paragraphs above briefly outlined how this current functioning can potentially produce inefficient or unfair outcomes, stressing the need for accountability. In the next section, I assess whether these demands for accountability can be met. I argue that the implications of algorithmic decision-making for democratic accountability are inherently problematic.


4. The Implications for Democratic Accountability

In the following paragraphs, I argue that algorithmic decision-making has the potential to undermine democratic accountability by seriously limiting and problematizing the possibilities for answerability and empowerment. I focus on three key properties of algorithmic decision-making that have caused concern in the literature, which are the properties that capture its nature as a ‘black box’ (Mittelstadt et al 2016; Danaher et al 2017; Pasquale 2015; Zarsky 2016).

First, processes of algorithmic decision-making are often hidden, meaning that the way in which these systems collect and use data to make decisions is often obscured from view or simply inaccessible to those who are affected. Second, algorithmic decision-making is increasingly opaque, especially due to advances in machine learning. Opacity reflects the concern that the decisions produced by these complex systems are unpredictable and incomprehensible to human reasoning. Finally, algorithmic decision-making is increasingly automated, meaning that there can be little to no human involvement in the decision-making processes.

I explore these three properties in terms of the implications for democratic accountability and thereby outline how algorithmic decision-making can undermine the ability of those affected to demand answers and potentially sanction those who make decisions.

4.1 Hidden Decision-Making

The key stages in the algorithmic decision-making process, i.e. collecting, processing, utilization and feedback/learning (Citron and Pasquale 2014; Danaher et al 2017), are often hidden from view and inaccessible to the public. Debates around the hidden nature of algorithmic decision-making have already received considerable attention in the literature (De Laat 2018). I therefore only briefly outline the implications for democratic accountability. The lack of visibility in algorithmic decision-making can arise in two distinct ways. The first is centered around the concern that algorithmic decision-making collects and processes data in a covert manner, which creates difficulties for the epistemic visibility of these systems (e.g. Pasquale 2015). The second relates to the observation that in many cases, systems of algorithmic decision-making are hidden due to secrecy laws, meaning that public scrutiny of algorithms is often prevented by the judicial system when algorithms are deemed trade secrets or labeled as classified by the state (Zarsky 2013).

Algorithms are often embedded in the architecture of the platforms that use them and thereby continuously shape the structure of the information and choice environment, i.e. these algorithms determine what we perceive and therefore also what we do not perceive (Bucher 2018; Kalpokas 2019). Yet as these algorithms are seamlessly ingrained in the digital realm, they are hidden from view by design, silently operating in the background of the platforms that we use (Domingos 2015). This means that uninformed users of such systems are often not aware that their online behavior is recorded, collected and processed as data, based on which decisions are made. This threatens democratic accountability’s requirement of epistemic visibility. As an example, think of a company that uses algorithmic decision-making to identify candidates for an available position by sending out job ads to relevant people. As this algorithm decides who to send the ad to, it is simultaneously excluding others. Yet as those who are excluded never see the ad appear in the first place, they cannot know that a certain decision has been made that affects them. Such examples can easily be expanded to be more relevant in the democratic context, e.g. algorithms deciding who to show ads about upcoming elections or possibilities of voting by mail. It is in this way that the hidden nature of algorithmic decision-making potentially obstructs the epistemic visibility that is required for democratic accountability, as those affected by a decision are hardly aware that a decision has been made in the first place.

Besides the problems that arise from the fact that algorithms are embedded in the digital architecture and therefore hidden from view, algorithmic decision-making is often inaccessible to the public. Many companies that implement algorithmic decision-making do not disclose information about the functioning of those systems to “preserve valuable intellectual property” (Pasquale 2015, p. 10). In terms of the implications for democratic accountability, this means that even if those affected are aware of decisions, the ability to demand justifications is limited. These demands are refuted by hiding behind secrecy laws, enabling those who make decisions to escape from the obligation to provide an explanation for the decisions that the algorithm makes (O’Neil 2016). For example, the algorithm behind Google’s search engine is labeled as valuable intellectual property and therefore cannot be scrutinized, as the judicial system recognizes the algorithm as a trade secret, vital to Google’s success. That algorithmic decision-making is inaccessible becomes especially problematic in the democratic context as public institutions also keep their algorithms hidden, for example in policing, tax or security departments (De Laat 2018, p. 526). In the public domain, governments can classify information about the algorithmic decision-making system, preventing the public from scrutinizing these algorithms. Moreover, public agencies often implement algorithms that were created by private companies (Carlson 2017).

That the hidden nature of algorithmic decision-making, both by design and by law, is problematic, has by now been widely recognized. Regulators, politicians, academics and civil society increasingly demand that those who implement algorithmic decision-making systems reveal information about their functioning (Binns 2017, p. 547). This is reflected in the emergence of recent data protection and right to information laws. For example, the EU’s General Data Protection Regulation aims to enable citizens to assess and possibly contest the decisions made by automated systems (Goodman and Flaxman 2016). It is indeed important for democratic accountability that decision-subjects are aware of the decisions that affect them and ought to be able to demand an explanation about how these decisions are made. As transparency is a necessary condition for democratic accountability, illuminating the hiddenness of algorithmic decision-making in order to reveal its innards is seen as a ‘critical right’ in this day and age (Hildebrandt 2011).

Yet while most of the literature agrees that transparency is indeed important, it remains somewhat contested. For example, there are some who are critical of the ‘unrealistically high’ standards of transparency that are being demanded from algorithmic decision-making (Zerili et al 2018). Indeed, human decision-making can often be equally hidden and thus lacks transparency as well. Information asymmetries have long been shown to hinder democratic accountability. As citizens cannot always observe or monitor the actions of politicians, holding them accountable through voting is difficult (Przeworski, Stokes and Manin 1999). Therefore, the difficulties for democratic accountability that arise from often hidden decisions pertain to both algorithmic and human decision-making.

Besides the critique that the lack of transparency in algorithmic decision-making is nothing new, it is not clear whether more visibility is always desirable. Revealing information about the workings of an algorithm can lead to perverse effects (Zarsky 2013; De Laat 2018). When it is publicly known how a system operates, it becomes possible to ‘game the system’. For example, when the proxies involved in an algorithm to identify tax evaders are known, potential tax evaders are aware of these proxies and can subsequently find ways around them. In this context, transparency can enable tax evasion, making the algorithmic decision-making system ineffective or more costly (Zarsky 2013, p. 1554). In some cases, transparency might not be the preferred option due to the negative consequences.


In addition to the perverse effects that transparency can generate, it is not clear that transparency alone is enough for those affected to hold algorithmic decision-makers accountable. I argue that while transparency is a necessary condition for the democratic accountability of algorithmic decision-making, it is not sufficient. As algorithmic decision-making is increasingly opaque and autonomous, just making it more visible is not enough.

4.2 Opaque Decision-Making

The property of opacity relates to concerns that the functioning of algorithmic decision-making is increasingly complex and therefore difficult to understand, sometimes even incomprehensible (Danaher 2016). A decision-making process can be considered opaque when it is not possible to know or comprehend all elements of the process that are epistemically relevant to the outcome (Humphreys 2009, p. 618). The complexity and unpredictability of algorithmic decision-making have surged due to the current scale of analysis and the increasing reliance on semi-autonomous machine learning algorithms (Mittelstadt et al 2016, p. 3). In the following paragraphs, I elucidate the opacity concern and outline the implications for democratic accountability.

To differentiate the opacity concern from the issues around hiddenness, consider the case of a bank that implements algorithms to automatically calculate credit scores and decide whether people should be granted or denied a loan. With machine learning, the algorithm is trained on a large dataset that includes information about people who have requested loans from the bank in the past and whether these loans have been repaid. As described earlier, such algorithms autonomously find patterns in this data between certain features, for example a positive correlation between a person’s age and the likelihood that loans will be repaid. The algorithm might find thousands of these patterns between different datapoints, which are then included in a model that can decide whether new requests should be granted or denied. When an incoming request is denied by the algorithm, the person who requested the loan might contact the bank in order to understand why the request was denied. Now imagine that the bank is completely open about the system of algorithmic decision-making that it has implemented, explaining that the request has been denied due to a low credit score that is generated by an algorithm. Moreover, the bank is willing to make the algorithm publicly available for anyone to scrutinize. In that case, the concerns around hiddenness do not apply, as decision-subjects are aware that an algorithm has made a decision that affects them and are even able to scrutinize the process of decision-making. However, even if the decisions that arise from algorithmic decision-making are no longer hidden, either by design or by law, the opacity concern remains.

Whereas hiddenness relates to the concern that algorithmic decision-making is often made inaccessible, opacity relates to the fact that the logic behind algorithmic decision-making is inherently difficult to understand and sometimes incomprehensible. The bank might be able to explain that the request was denied because the credit score was low, yet it cannot explain how exactly the credit score has been calculated. Data-mining algorithms can identify relationships and patterns in vast and dispersed datasets (Floridi 2012) based on which models are created that consider correlations between thousands of different datapoints when making decisions (Hildebrandt 2011). The bank’s algorithm might take into consideration features like age, geographic location or debt history, but also seemingly random features like the speed of scrolling through websites (Binns 2017) and the correct use of capital letters when filling in forms (Koren 2015). As the machine learning algorithm modifies and creates these decision-making rules autonomously based on available data, the bank can no longer precisely determine which factors have led to a low credit score. It is not clear how the input is transformed into output. The algorithm has autonomously identified patterns and assigned weights to specific factors, making it inherently difficult to understand what the most relevant elements of the decision-making process are. Even the expert is unable to explain how different inputs contribute to the conclusion that the algorithm reaches (Matthias 2004).
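The credit-scoring example can be made tangible with a small sketch (hypothetical data and feature names, assuming scikit-learn; no claim is made about how any real bank's system works): once a model with many learned interactions is trained, there is no single hand-written rule that states why one particular applicant was refused.

```python
# Illustrative sketch only: the training data is random and invented; the
# point is structural, namely that no human-readable rule explains one case.
from random import random, randint, seed
from sklearn.ensemble import RandomForestClassifier

seed(0)
# Past applicants: [age, income, debt, scrolling_speed, typing_errors]
X_past = [[randint(18, 70), random(), random(), random(), random()] for _ in range(200)]
y_past = [randint(0, 1) for _ in range(200)]   # 1 = loan repaid

model = RandomForestClassifier(n_estimators=100).fit(X_past, y_past)

applicant = [[34, 0.4, 0.7, 0.2, 0.9]]
print("credit decision:", model.predict(applicant))

# Aggregate importances exist, but they rank features over the whole dataset;
# they do not say why *this* applicant was refused, nor how the hundreds of
# learned trees interact for this particular input.
print("feature importances:", model.feature_importances_)
```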

In sum, both the scale of analysis and the reliance on machine learning make algorithmic decision-making less interpretable, i.e. explanations in understandable human language become difficult (Zarsky 2011; Danaher 2016). Traditional, knowledge-based systems operate under precisely defined rules. While these might already be difficult to understand for the average person, at least the programmer who created the algorithm is able to explain exactly how the system works. Machine learning makes this much more difficult, since these algorithms can modify and define rules autonomously. In machine learning, there appears to be a tradeoff between the accuracy of the model and its explanatory power (De Laat 2018, p. 539). When the algorithm is successfully trained and has a working model, it is unclear what the rationale behind decision-making is. Does the credit scoring algorithm assign more weight to age, location, credit history or another one of the thousands of features included in the model? The complex and autonomous functioning of these algorithms makes it virtually impossible to comprehend all the elements that are relevant to the decision that has been made. The problem of opacity thus revolves around the concern that algorithmic decision-making can be so complex that the logic behind these systems is not easily understood and, in some cases, goes beyond human comprehension.

As public institutions start to gain interest in the possibilities of using data-mining and machine learning in decision-making (Koene et al 2019), the implications for democratic accountability are serious. The lack of interpretability severely undermines the ability of those affected to demand an explanation, since the logic behind algorithmic decision-making can be incomprehensible. As the rationale behind the decision cannot be understood, those affected are also severely impaired in their ability to challenge those decisions. Even if an algorithm can show how a certain outcome has been produced, this consists of an endless list of correlations between thousands of datapoints. This might technically be an explanation of the outcome, yet it does not make sense to human reasoning. In the case of machine learning, not even the creator of the algorithm always comprehends the rationale behind the rules (Matthias 2004, p. 179). Less interpretability might mean more accurate models, but as the previous analysis has shown, machine learning does not necessarily produce fair or efficient outcomes. Opacity prevents scrutiny, which presents a problem for democratic accountability. How are the demands of those affected to be reconciled with the outcomes of uninterpretable algorithms? As the decisions are incomprehensible to human understanding, it is hard to say whether those affected are treated unfairly or not. Even if the algorithm produced the most accurate or ethical outcomes possible (Binns 2017), the lack of interpretability prevents those affected from recognizing this, since decisions cannot be explained. How are decision-subjects supposed to hold these uninterpretable systems accountable if the logic behind the decisions that affect them is incomprehensible? Using such a system then becomes a matter of trust: either you accept the validity of the outcomes or you do not, but there is no real way to verify whether the outcomes are produced in an agreeable manner. This is exactly what democratic accountability means to avoid, granting those affected the means to judge for themselves whether decision-makers acted in satisfactory ways.

That the opacity of algorithmic decision-making is problematic is starting to be acknowledged, and researchers in the field of machine learning now increasingly aim to create more interpretable or ‘explainable’ algorithms (Mittelstadt et al 2019). This is achieved by creating a second model that replicates the first model but is easier to interpret (Rudin 2019). While this is a step in the right direction, it does not readily solve the problem of opacity. Ultimately, the second model that is created only approximates the first and is therefore necessarily inaccurate and unreliable (Selbst and Barocas 2018). Had it been possible to accurately mimic the first model with a more interpretable second model, then the problem of interpretability would no longer exist. However, the second model is an imperfect approximation, making it difficult to assess when it can be trusted or not. Calling the approximation of a model an ‘explanation’ can be misleading when presented to a lay user, since it falsely suggests that the approximation fully captures the model it is trying to explain (Rudin 2019). Explainable algorithms therefore do not resolve the issue of opacity.
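A minimal sketch of this ‘second model’ strategy (post-hoc surrogate modelling, with invented data and assuming scikit-learn; the cited authors describe the idea, not this code) shows why the explanation is only ever an approximation: the readable model is fitted to the opaque model's predictions, and its fidelity to those predictions is typically imperfect.

```python
# Illustrative sketch: a shallow, readable tree mimics an opaque model.
from random import random, seed
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

seed(1)
X = [[random() for _ in range(6)] for _ in range(300)]
y = [int(sum(row) + random() > 3.0) for row in X]

opaque = RandomForestClassifier(n_estimators=200).fit(X, y)
opaque_preds = opaque.predict(X)

# The interpretable 'explanation' is fitted to the opaque model's own
# predictions -- it explains the mimic, not the world.
surrogate = DecisionTreeClassifier(max_depth=2).fit(X, opaque_preds)

print(export_text(surrogate))                      # human-readable rules
fidelity = accuracy_score(opaque_preds, surrogate.predict(X))
print("fidelity to the opaque model:", fidelity)   # usually below 1.0
```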

Even if the ‘explanations’ produced by the second model are accurate, it is unclear how that would resolve the issues for democratic accountability. Current work on making algorithmic decision-making more interpretable is mainly focused on making the inside visible, i.e. how does the model work? Explanations of this kind do not necessarily provide a justification, since the assumptions behind the model are disregarded. I have outlined how my conception of democratic accountability relies on justifications, which, crudely put, revolve around linking reasons to the normative grounds in order to justify one’s decision or action. It seems important then that algorithmic decision-making includes not only an explanation of the model in use, but also an account of the normative and epistemological assumptions behind that model. Making the assumptions behind the system visible is crucial as it creates the possibility to contest these implicit values on both epistemic and normative grounds (Binns 2017).

For example, predictive data-mining algorithms rely on inductive logic and correlations yet are incapable of establishing causal links (Mckinlay 2017). Despite this limitation, correlations serve as evidence to direct action as long as the volume of data is large enough (Mittelstadt et al 2016, p. 5). These assumptions may rightfully be challenged. Correlations produce probable but uncertain knowledge and, as evidence, correlations remain inconclusive if causality is not established (Hildebrandt 2011). Moreover, predictions based on correlations use evidence that is produced at the level of populations, yet decisions aim at individuals (Ananny 2016). Algorithmic decision-making thus treats individuals based on the analysis of aggregated behaviors of others. Whether this is acceptable or not is beyond the scope of my analysis, yet it shows that if algorithmic decision-making is to become more accountable, explanations need to include more than just a look into the model that turns inputs into outputs. With the credit scoring algorithm, an explainable algorithm could provide an approximation of the model that outlines how scores are significantly lower or higher depending on specific features, e.g. age, location or credit history. However, this does not necessarily explain why these features are deemed more important than others, what the error margins are or whether the used training data can be considered generalizable and without biases (Mittelstadt et al 2016, p. 5). The epistemic and normative assumptions need to be included if explainable algorithms are to provide a satisfactory justification to those affected by decisions.

As there might be reasonable disagreements about these ethical and epistemic standards behind algorithmic decision-making (Binns 2017), it is essential that those affected can demand justifications that include such assumptions. As of now, the opacity problem seriously limits the accountability of algorithmic decision-making. As the decisions that arise from it are increasingly incomprehensible, it is not clear how those affected can demand a justification. Moreover, current work to create more explainable algorithms might not suffice to resolve this issue.

4.3 Automated Decision-Making

The final concern that has been raised in the literature around algorithmic decision-making is that these systems are becoming increasingly automated. Since systems can function automatically to varying degrees, the level of human involvement provides a useful frame to clarify the present-day functioning of algorithms in systems of decision-making. I base this distinction on the work of Citron and Pasquale (2014, p. 6-7), who classify robotic weapon systems based on human involvement. Human-in-the-loop indicates a system in which algorithms assist by performing certain tasks when commanded to do so. Human-on-the-loop indicates a system in which algorithms already perform most tasks autonomously, yet the ultimate decision depends on a human actor. Human-out-of-the-loop indicates a system in which everything is automatically done by algorithms. As an example, take an insurance company that is focused on identifying fraudulent claims. In the first system, when a human employee looks at a claim, an algorithm might be commanded to analyze the data at hand, after which the employee decides how to act. In the second system, the algorithm autonomously analyzes all the claims, identifies potential fraud and issues recommendations, based on which the employee decides whether to implement the advice. In the final system, the algorithm is the only actor in the process of identifying and acting on fraudulent claims, making the process completely automated.
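The three configurations can be rendered schematically as follows. The scoring function, the threshold and the claim data in this sketch are hypothetical; what matters is only where the human actor sits in the decision chain under each configuration.

```python
# Illustrative sketch only: the scoring function, threshold and claim data are hypothetical.
# It shows where the human sits in each of the three configurations described above.
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    amount: float
    prior_claims: int

def fraud_score(claim: Claim) -> float:
    """Stand-in for whatever model the insurer actually uses."""
    return min(1.0, 0.1 * claim.prior_claims + claim.amount / 50_000)

def human_in_the_loop(claim: Claim, employee_decides_to_investigate: bool) -> str:
    # The algorithm analyzes only when commanded; the employee takes the decision.
    print(f"Analysis for {claim.claim_id}: fraud score {fraud_score(claim):.2f}")
    return "investigate" if employee_decides_to_investigate else "pay out"

def human_on_the_loop(claim: Claim, employee_accepts_advice: bool) -> str:
    # The algorithm screens every claim and recommends; the employee signs off.
    recommendation = "investigate" if fraud_score(claim) > 0.7 else "pay out"
    return recommendation if employee_accepts_advice else "pay out"

def human_out_of_the_loop(claim: Claim) -> str:
    # No human involvement: the algorithm's output simply is the decision.
    return "investigate" if fraud_score(claim) > 0.7 else "pay out"

claim = Claim(claim_id="C-001", amount=42_000, prior_claims=4)
print(human_in_the_loop(claim, employee_decides_to_investigate=True))
print(human_on_the_loop(claim, employee_accepts_advice=True))
print(human_out_of_the_loop(claim))
```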

In the first two systems, the decision or action taken is ultimately still made by people and therefore the algorithm is not directly determining the outcome. In the first system, the recommendations or analysis produced by the algorithm are just one factor under consideration. Since the human being remains in control of the decision, accountability can be demanded from that actor. Yet even in the systems in which algorithms do not directly determine the outcome, we need to consider that there are implications for accountability. The element of answerability, in particular, might be impaired as decisions become increasingly automated. Where in the first system the output of the algorithm is one factor of consideration, in the second system the algorithm already does most of the data interpretation and recommends what decisions or actions should be taken. While a human being is ultimately still in control, and can therefore be held accountable, the agent's ability to justify the decision made can come to depend on the algorithm. Take the example of a company in which an algorithm automatically analyzes and scores the performance of employees and presents recommendations about who to promote to a manager who still ultimately decides. Employee A gets promoted, but employee B does not. When employee B asks the manager to justify why the promotion was granted to A, the manager might provide B with many reasons, one of which is B's lower score from the promotion algorithm. In most cases, it is reasonable to assume that the manager is not a computer programmer and therefore unable to explain exactly why B's score was lower than A's. This is not to say that the score is inaccurate. The problem is that the manager's ability to properly justify his decision about promoting A instead of B has come to partially rely on the functioning of the algorithm.

In this way, it might be the case that the element of answerability is impaired even in the systems where humans take the ultimate decision or action. They can still be held accountable, i.e. those affected can demand a justification and potentially impose sanctions, yet the substantive content of the decision-maker's account decreases as their decisions come to rely on the output of an algorithm that might be poorly understood. As algorithmic decision-making starts to be implemented in public institutions, this is problematic for democratic accountability, especially in combination with the opacity problem. For example, governmental agencies now use algorithms to determine welfare programs (Brown et al 2019). A human decision-maker decides whether to grant or deny welfare benefits, yet this decision is heavily influenced by algorithmic recommendations. If someone is denied a welfare claim, the decision-maker ought to be accountable for that decision. However, answerability is limited as the justification comes to revolve around the recommendation made by the algorithm. Even if the decision-maker can explain how the algorithm works overall, opacity limits the ability to explain to those affected why the particular claim has been denied, since it is unclear how exactly inputs are transformed into outputs.


Nevertheless, if those affected by such decisions find the justification unsatisfactory, consequences of some kind might follow. The element of empowerment is still in place, so if the decision-maker fails to justify past actions or decisions, sanctions might follow, e.g. public pressure and a call for resignation. In response, the public agent that is being held accountable might indeed resign, yet it is unclear whether empowerment is effective this way. If the human decision-maker is sanctioned, yet the algorithm remains in place, then sanctions do not entirely resolve the issue, as the system that influenced the agent's decisions has not changed.

This is complicated even further in the case of fully automatic systems. Actions taken or decisions made no longer require any human involvement, but the outcomes still affect people. Who is to be accountable for the decisions of these fully automatic systems? A knowledge-based system can be entirely automated, yet as a human programmer has explicitly defined all decision-making rules, the programmer is in full control of the actions that the algorithm takes (Danaher et al 2017). It is still possible to hold that person accountable for the decisions that the system makes once it is operationalized. Identifying the agents that can bear the demands for accountability becomes more problematic with machine learning algorithms, which behave autonomously and therefore unpredictably (Mittelstadt et al 2016). The decision-making rules are no longer static and pre-defined but can be adapted by the algorithm itself, even during operation (Susskind 2018, p. 36). These developments create a ‘responsibility gap’, meaning that it is unclear who is to be responsible for the actions of "machines over which they could not have sufficient control" (Matthias 2004, p. 183).
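The contrast can also be rendered schematically. In the following sketch the rules, features and data are hypothetical; the point is merely that the knowledge-based rule stays exactly as the programmer wrote it, while the learning system's effective decision rule shifts with every batch of cases it processes in operation.

```python
# Illustrative sketch only: the rules, features and data below are hypothetical.
# It contrasts a knowledge-based system, whose rules a programmer fixed in advance,
# with a learning system whose effective decision rule is revised during operation.
import numpy as np
from sklearn.linear_model import SGDClassifier

def knowledge_based_decision(claim_amount: float, prior_claims: int) -> str:
    """Every decision-making rule is explicit and remains exactly as written."""
    if claim_amount > 10_000 and prior_claims > 3:
        return "flag for review"
    return "approve"

# A learning system: the decision rule is re-estimated on each new batch of cases,
# so the rule in force today need not be the rule in force tomorrow.
rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)

for day in range(1, 6):
    X_day = rng.normal(size=(100, 2))                    # hypothetical case features
    y_day = (X_day[:, 0] + X_day[:, 1] > 0).astype(int)  # stand-in outcome labels
    model.partial_fit(X_day, y_day, classes=[0, 1])      # rule updated while in use
    # The coefficients (i.e. the effective decision rule) shift after every update,
    # even though no programmer intervened.
    print(f"day {day}: decision rule weights = {model.coef_.round(2)}")
```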

To illustrate the complexities in terms of identifying the agents who can bear the demands for accountability, consider the example of self-driving cars. While not entirely operational yet, these autonomous vehicles (AVs) are already being tested and implemented throughout the world (Boudette 2019). Traditionally, once you buy a car from a manufacturer, you obtain the specifications for the operation of the vehicle and thereby become responsible for the use and care of that car. Of course, if the car does not work according to the specifications, e.g. a faulty design of the braking system, responsibility is usually ascribed to the manufacturer. In the case of AVs, machine learning algorithms are operating the car, which complicates the question of assigning responsibility and distributing liability (Stilgoe 2018). The person making use of the vehicle is no longer in control, so the locus of responsibility is to be found somewhere else. If the user of the car cannot be held responsible, can we hold the manufacturer of the AV responsible?

Considering the autonomous and therefore unpredictable nature of machine learning algorithms, the manufacturer is not entirely in control of the actions that the algorithm takes. The AV makes decisions without human intervention based on rules that are not completely pre-determined by the manufacturer, but instead modified by the algorithm itself, both in development and on the road. As the algorithm learns from experience and adapts to the environment in which it operates, it becomes impossible for the manufacturer to accurately predict what actions the system might take while in use. By learning from the operating environment, the possibility for unpredicted or unintended decision-making increases and the manufacturer gradually loses control over the exact actions that the algorithm might take (Sparrow 2007). We are thus confronted with a responsibility gap, in which neither the user nor the manufacturer of the AV can be considered to have sufficient control over the actions of the vehicle to be assigned responsibility (Matthias 2004).

Now this responsibility gap relies on the assumption that it is unfair to attribute responsibility to someone for outcomes of decisions that are beyond their control (Johnson 2015). Yet in other contexts, people are already held responsible regardless of control over the outcomes, e.g. in strict liability law (Santoro et al 2008). It is therefore unclear why the manufacturer of the algorithm cannot be held responsible for its actions. Similarly, others argue that despite the lack of control over the behavior of algorithms, engineers ought to be responsible for their creations (Nagenborg et al 2008). Indeed, while coders of learning algorithms do not define every decision-making rule, they do write the initial code that determines how the algorithm learns. However, even if we accept that the manufacturer of the AV ought to be assigned responsibility, it is unclear how the demands for accountability can be met. Algorithms are created by large teams of programmers and often not created from scratch but built from packages of pre-existing code that others have written (Danaher et al 2017, p. 5). Moreover, the AV is not controlled by a single algorithm but by a combination of multiple algorithms. Here, the problem of many hands occurs (Thompson 1980), which complicates the identification of the agents that can bear the demands for accountability. The problem of opacity further increases the complexity, as it is inherently difficult to determine the relevant parts of the code that contributed to the outcome.

To generalize from the AV example, fully automated algorithmic decision-making systems can generate significant obstacles for democratic accountability. Neither those who implement nor those who manufacture the algorithm have full control over the possible outcomes. This could lead to a situation in which no one is willing to bear the responsibility for the decisions that these systems make. In the case of AVs, some companies are already actively lobbying governments to be absolved of liability (Grieman 2018). It becomes apparent that society has yet to work out the terms of responsibility when it comes to decisions that fully automated systems make. This is problematic for democratic accountability, since the agents that can bear the demands for accountability remain, at least to some extent, unidentifiable. Those affected by the decisions of automated systems are in no position to demand justifications and impose sanctions. In fully automated algorithmic decision-making, there is a lack of perceptible agents who can be held accountable.


5. Conclusion

To sum up, in this paper I aimed to assess the emergence of algorithmic decision-making in terms of the impact on democratic accountability. My analysis consisted of three main parts. First, I articulated my expanded conception of democratic accountability as central to any notion of democracy by drawing on the all affected principle. As such, democratic accountability entails that decision-makers ought to be answerable to and punishable by those who are affected. I outlined how democratic accountability becomes central to maintaining the legitimacy of the democratic system through the elements of answerability and empowerment. Second, I argued that systems of algorithmic decision-making are not perfect and can produce unfair or inefficient outcomes, bringing out the need for accountability. Third, I analyzed three concerns closely associated with the current functioning of algorithmic decision-making, i.e. hiddenness, opacity and automation. My analysis of these three concerns brought forth how algorithmic decision-making, and specifically machine learning, can create novel obstacles for democratic accountability. Altogether, the inaccessible, incomprehensible and autonomous nature of algorithmic decision-making seriously limits the effectiveness of the elements of answerability and empowerment. It is therefore currently unclear how, or even if, the demands for democratic accountability that arise from algorithmic decision-making can be met.

While the existing literature has already called for more accountability in algorithmic decision-making, it has remained underdeveloped why we ought to care about accountability and what exactly it requires. This paper complements the literature by drawing from democratic theory to outline the importance and the requirements of accountability in the democratic context, while simultaneously linking this to the obstacles that arise from the current functioning of algorithmic decision-making. The implication of my argument is that within the democratic context, the increasing reliance on algorithmic decision-making is undesirable due to its current functioning. As public institutions start to rely on algorithms to complement or even substitute human decision-making, democratic accountability is impaired. Considering the crucial function of accountability in any notion of democracy, algorithmic decision-making in public institutions forms a challenge for maintaining the legitimacy of the democratic system.

I want to reaffirm that my analysis does not mean to imply that algorithmic decision-making should be avoided entirely. In this paper, I merely aimed to understand the implications of algorithmic decision-making for democratic accountability. My analysis has shown that the increasing reliance on algorithms in public decision-making processes creates serious obstacles for democratic accountability. While recognizing these obstacles is an important first step, it is critical that research focuses on possible solutions that can help to overcome them, especially since algorithmic decision-making is already widely relied upon to solve increasingly complex problems that cannot readily be addressed without these algorithms. It therefore seems unreasonable to expect that society suddenly abstains from the use of algorithmic decision-making. Yet if we accept that algorithms increasingly complement or substitute existing decision-making processes in the democratic context, finding ways to restore democratic accountability needs to be prioritized.

Solutions need to come from a multi-disciplinary effort in which researchers with a technical background are connected to those from the social sciences. This ensures that possible solutions are technologically feasible as well as desirable in terms of their social, political, ethical and legal implications. The emergence of algorithmic decision-making cannot be regarded as only a technical subject. I hope that my paper serves as a reminder for the social sciences that algorithmic decision-making is no longer a futuristic topic but the reality of today. More social scientists need to join the debate around algorithmic decision-making.
