The Role of Ethics in the use of Algorithms


Academic year: 2021



The ethical challenges of algorithm-use and how governmental organizations cope with them

University of Groningen

MSc Strategic Innovation Management

Master Thesis

Iris Hurenkamp

S2943190

Dr. E. Smailhodzic

AP M. Hanisch

18-01-2021

12936 words

Abstract

Governmental institutions increasingly make use of algorithms in conducting their public affairs. While the ethical challenges arising as a result are expected to increase, and government officials indicate they require more knowledge and insights on these issues, the current literature is mainly focused on the corporate sphere and lacks insights on such challenges and potential coping actions in governmental institutions. Therefore, in this thesis I aim to shed light on the ethical challenges and coping actions involved when using algorithms in governmental institutions. Based on interviews with government officials in a Dutch municipality, complemented with municipality documents and online records, this thesis contributes to the extant literature on the topic of algorithms by offering new ethical insights. It finds that government officials face six ethical challenges when using algorithms: (1) a risk-averse culture, (2) the role of a governmental institution, (3) expectations of governmental institutions, (4) being an actor in the governmental environment, (5) decisions based on the outcomes of algorithms and (6) being in the implementation phase. In addition, this thesis identifies five actions to cope with several of the underlying factors associated with these ethical challenges: (1) working with pilots and demos containing algorithms, (2) implementation of innovation-enhancing functions, (3) work groups focused on the algorithm transition, (4) creating and providing better insights on algorithms, and (5) the ethical data assistant. Practically, this thesis recommends that managing officials assess the ethical challenges and underlying factors faced by their institution according to the proposed model, and subsequently implement the appropriate coping actions.


Introduction

Recently, the Dutch tax authorities were accused of causing financial problems for many families by withholding allowances as a result of a failed 'fraud hunt' performed by an algorithm (Financieel Dagblad, 2020a). Since the outcomes of the algorithm conflicted with acting in the best interest of citizens, the main duty of government officials, an ethically challenging tradeoff was created (Chapman, 2019). By choosing to act according to the algorithm, a significant group of Dutch citizens was unfairly disadvantaged, thereby contributing to the social unrest surrounding the current debate on algorithm use by governmental institutions (Financieel Dagblad, 2020b). In today's world almost everything that we do can be observed, analyzed and stored, often without our knowledge. This pile of data can then be assigned explicit meaning through algorithmic processing. An algorithm can be described as ''a finite, abstract, effective, compound control structure, imperatively given, accomplishing a given purpose under given provisions'' (Hill, 2015, p. 47), meaning that algorithms are not something you can actually touch: they are a set of predefined rules that turn data into outcomes within certain boundaries set by their developers. This information revolution is fundamentally changing society and is becoming increasingly interwoven with our economy (Mayer-Schönberger and Cukier, 2013). Businesses increasingly use algorithms to recognize complex patterns in the enormous data sets they gather, making decisions and predictions for purposes such as targeting and personalizing products (Newell and Marabelli, 2015). Recently, governmental institutions have started implementing algorithms to conduct public affairs, and their use is starting to become commonplace in the governance of our society (Brauneis and Goodman, 2018; Coglianese and Lehr, 2017). The main push for this adoption comes from large corporations promoting their new technologies and services
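To make the definition above concrete, the following sketch illustrates, in a deliberately simplified and entirely hypothetical form, how an algorithm is a set of predefined rules turning input data into an outcome within boundaries fixed by its developers. The function name, the rule, and the thresholds are invented for illustration only and correspond to no real government system:

```python
# Hypothetical sketch: an algorithm as a set of predefined rules that turn
# data into an outcome within boundaries set by its developers.

def flag_for_review(claimed_allowance: float, reported_income: float) -> bool:
    """Decide whether an allowance claim is flagged for manual review.

    The thresholds below are arbitrary illustrative boundaries chosen by a
    (hypothetical) developer; the rule encodes no judgement about individual
    circumstances.
    """
    INCOME_CEILING = 30_000.0   # boundary set by the developer
    ALLOWANCE_CAP = 5_000.0     # boundary set by the developer
    return claimed_allowance > ALLOWANCE_CAP and reported_income > INCOME_CEILING

# The same input data always yields the same outcome, regardless of context:
print(flag_for_review(6_000.0, 35_000.0))  # True  -> flagged
print(flag_for_review(6_000.0, 20_000.0))  # False -> not flagged
```

Because the rules are fixed in advance, the outcome is deterministic; it is precisely this rigidity, applied to individual citizens, that gives rise to the ethical tradeoffs this thesis examines.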


Since this study specifically considers the role of ethics in governmental institutions, a more appropriate definition is provided by O'Faircheallaigh, Wanna, and Weller (1999), who state that government ethics is ''the disposition to distinguish right from wrong and to pursue the right course of action, on the basis that officials have as their first duty to serve all citizens rather than their administrative superiors or their own consciences and are guided by values that reflect societal expectations regarding the appropriate role for government'' (p. 225). From this definition it follows that a challenge in governmental institutions can be considered ethical when it interferes with pursuing the right actions serving citizens. Officials in a management position should be prepared for such challenges by being able to recognize areas of ethical concern and to act accordingly (Chapman, 2019). By doing so, a dual advantage can be obtained: identifying and leveraging new socially acceptable opportunities whilst simultaneously anticipating potential challenges created by the use of algorithms (Floridi et al., 2018).

Thus, despite the importance of addressing ethical challenges and coping actions in governmental organizations using algorithms, little attention has been paid to these topics, and no studies have been found that uncover such challenges and coping actions specific to governmental institutions. This leads to the following research question:

‘‘What are the ethical challenges in using algorithms for governmental institutions, and how do they deal with them?’’

This thesis considers the ethics of algorithm usage within governmental institutions. Although previous research did not uncover specific ethical challenges, this study finds that governmental institutions face six types of ethical challenges when using algorithms, all involving a tradeoff between acting in the best interest of citizens and: (1) a risk-averse culture, (2) the role of a governmental institution, (3) expectations of governmental institutions, (4) being an actor in the governmental environment, (5) decision making according to algorithm outcomes and (6) being in the implementation phase. Moreover, current research has not considered potential coping actions in relation to these challenges. Therefore, this thesis also identifies five actions used by officials to cope with several of the underlying factors of the identified ethical challenges resulting from using algorithms: (1) working with pilots and demos containing algorithms, (2) implementation of innovation-enhancing functions, (3) work groups focused on the algorithm transition, (4) creating and providing better insights on algorithms, and (5) the ethical data assistant. Several of these actions address the underlying factors of multiple ethical challenges, whereas others help to cope with only one underlying factor, thereby alleviating a specific ethical challenge. Whether a coping action is appropriate thus depends on the underlying factors present that are associated with a specific ethical challenge. These findings are brought together into a model proposing specific actions, allowing managing officials to assess which specific factors of ethical challenges they encounter and to implement the relevant coping action accordingly.


contribution to the literature. Moreover, the findings and proposed model open up new avenues for future research, for which suggestions are proposed. Second, by combining the topic of ethics with algorithms in governmental institutions, an interdisciplinary contribution is made to the literature on both algorithms and ethics. A compelling contribution is made to these streams in the literature by providing the results of a context-specific case. And third, by proposing a model which managing officials can adopt to assess which specific factors of ethical challenges they encounter and to implement relevant coping actions accordingly, practical guidance is offered and a dual advantage could be obtained: anticipating the ethical challenges whilst identifying and leveraging opportunities (Floridi et al., 2018).

The remainder of this study is organized as follows. The next section reviews relevant literature addressing the concepts of algorithm usage and its application in the governmental sphere, followed by government ethics. I then discuss the case study method and the results, before turning to a discussion of the findings and concluding remarks.

Literature review

To be able to understand the ethical challenges faced by governmental institutions using algorithms and their coping actions, I first define some key terms. In the following sub-sections, the literature on the topics of algorithms and ethics, and both their applications in governmental spheres, is reviewed to provide an overview of the existing literature relevant to this study.

Algorithms

In the academic and public debate on digitalization, the term 'algorithm' has become one of the central buzzwords of the past decade. However, despite its popularity, definitions and meanings differ across contexts


Literature recognizes the significant impact of algorithms, which are becoming a key logic in governing flows of information, and their ubiquity in society is growing (Annany, 2016; Olhede and Wolfe, 2018). A new kind of information revolution is happening: algorithms have become central to the ways communities and economies are run and governed (Janssen and Kuk, 2016; Janssen and Van den Hoven, 2015). Examples of algorithms present in our daily lives are recommendation systems for which music to listen to, suggestions for routes to avoid traffic, and computational advertising (Latzer and Just, 2020; Mittelstadt et al., 2016). Corporations improve their profit margins significantly by using algorithms to recognize patterns in the enormous amounts of data they gather and to make decisions and predictions based upon them, for example by personalizing products and services for clients and customers (Newell and Marabelli, 2015). Governmental institutions are not lagging behind and apply algorithms increasingly, thereby moving them to the heart of the governance of our society (Brauneis and Goodman, 2018; Oswald, 2018).

Algorithms in governmental institutions

Historically, governments have mainly employed the 'clinical prediction' approach, based on applying the accumulated wisdom of officials to particular situations (Meehl, 1954; Sarbin, 1943). Accordingly, officials developed a sense for the likely consequences of their administrative decisions (Brauneis and Goodman, 2018). Governments have long used these methods alongside the contrasting, less frequently used 'actuarial prediction': a statistical or algorithmic prediction or judgement (Grove, Zald, Lebow, Snitz and Nelson, 2000). However, a significant increase in computing power and networking has led to an increasing application of actuarial judgement (Brauneis and Goodman, 2018). Moreover, governmental institutions are increasingly facing a push to adopt algorithms from corporate life (Kitchin, 2015). Corporations such as CISCO, IBM and Oracle are fueling this shift by exerting pressure on governments through lobbying, with the aim of driving the adoption of their new technologies and services (ibid.). Even though a partial force to adopt algorithms might come from corporations, their execution is different for governmental institutions. In governments, algorithms execute part of the public body's legal functions and duties, such as governance, accountability and transparency, and accordingly rest on a legal basis, differing from corporations (Waller and Waller, 2020). It is thus important to make a distinction between algorithm use in business and governmental contexts.


I will introduce this concept of ethics in relation to algorithms, and algorithm usage by governmental institutions in particular, functioning as a basis to uncover the ethical challenges faced by governmental institutions implementing algorithms.

Ethics

The appropriateness of a definition of ethics is context-specific (Moon and Bonny, 2011). Since this thesis takes into account the systems in which algorithms are implemented (Hill, 2015), it is essential to discuss ethics specifically in a governmental context.

Government ethics

Literature acknowledges the necessity of ethics for the quality of democracies and their administration (e.g. Cooper, 2006; Lawton and Doig, 2006). However, it is important to consider ethics in governmental institutions separately. The main reason lies in values that differ from those of businesses, arising from the fact that citizens should be able to have confidence in these institutions protecting public interests and creating feelings of mutual obligation and respect in society (Frederickson, 2005; Needham, 2006). Moreover, governments are created and defined by law, perform legal functions relating to obligations, entitlements and duties of care towards vulnerable citizens, and are overall more public-regarding, which sculpts these different standards (Frederickson, 2005) and makes ethical behavior one of the principal means by which they maintain accountability (O'Faircheallaigh et al., 1999). However, stressing these values too much could cause their relative importance to become the subject of dispute, since it changes over time. Moreover, these values are the result of societal preferences and perceptions regarding the role of governmental institutions (Chapman, 2019). A definition I will follow in this thesis takes these remarks into account by stating that government ethics is ''the disposition to distinguish right from


parties, society and governmental institutions; funding of parties and politicians; and discourse on public values on policy (e.g. Haworth and Manzi, 1999; Jorgensen, 1999).

In conclusion, substantial research has been conducted on ethics in governments. However, literature combining this subject with the concept of algorithms is lacking (Coglianese and Lehr, 2017). Moreover, the literature focuses mainly on how these ethics are shaped rather than on the ethical challenges resulting from them. To be able to uncover the ethical challenges faced by governments using algorithms and their potential coping actions, I will first review relevant literature on ethics and algorithms in the next section.

Ethics and algorithms

Algorithms are not ethically neutral, and the body of academic research on the ethics of algorithms has grown significantly in recent years (e.g. Hoffman, Roberts, Wolf and Woods, 2018; Lee, 2018; Shin and Park, 2019; Tsamados et al., 2020). For example, one potential issue raised is that algorithms may recognize patterns that do not actually exist (Boyd and Crawford, 2012). The main challenge resulting from this is that underlying problems may go unrecognized and unfair decisions may be made (Olhede and Wolfe, 2018). Another cause of potential issues is a lack of transparency, created for example when users are unable to interpret the algorithms (Buhmann, Paßmann and Fieseler, 2019). Tsamados et al. (2020) outline in their work how this could lead to diminished accountability. Moreover, there is the potential of misguided evidence leading to an unwanted bias, a deviation from the standard that can occur in all stages of algorithm use (Danks and London, 2017) and is often caused by the data input used to train the algorithm to recognize patterns (Shah, 2018). A further challenge is the issue of discrimination resulting from unfair outcomes of the algorithm (Tsamados et al., 2020). If not prevented, this could lead to the reinforcement of existing inequalities (Wolkenstein, Jox and Friedrich, 2018). Another issue mentioned is the chance of challenges to autonomy and privacy arising as a result of the algorithm's transformative effects, by impacting the choices its users make (Floridi and Cowls, 2019). In addition, a challenge recognized in the literature is the issue of moral responsibility resulting from traceability issues, because it is difficult to trace back the responsibility for choices made by, or assisted by, algorithms (Floridi, 2012).

These concerns have been raised in the literature; however, rigorous research on them is lacking (Coglianese and Lehr, 2017; Henman, 2019). Moreover, they have not been researched within specific contexts. Based on the significant difference between business and government ethics, it is likely that these concerns exclude ethical challenges arising from government-specific sources, or that some are not applicable at all. Therefore, it is important to conduct research specifically within a governmental context.

Ethical challenges in governments


dual advantage by identifying and leveraging new socially acceptable opportunities in combination with the anticipation of potential challenges (ibid.). By identifying these challenges in governmental institutions, insights are provided that help optimize the chances for such a dual advantage. The topic of ethics in governmental institutions regarding algorithm usage, however, has escaped sustained analysis, and the limited research available focuses mainly on the impact on citizens rather than on governmental institutions themselves (Coglianese and Lehr, 2017; Peeters and Schuilenberg, 2018). However, the lack of research within this context should not lead to the misconception that governmental institutions are immune to ethical challenges resulting from algorithms. On the contrary, empirical evidence confirms that virtually all officials claim to encounter ethical dilemmas at work (Bowman and Knox, 2008). This thesis explores ethical challenges in governmental institutions specifically when using algorithms. Moreover, it analyses how governments are currently coping with such challenges.

Methodology

Research Design

Thus far, the ethical challenges faced by governmental institutions using algorithms and the associated coping actions have not been explored. To do so, a qualitative case-study approach is adopted to focus on the dynamics within the single setting of a municipality (Eisenhardt, 1989). Qualitative research can be defined as 'research that involves analyzing and interpreting texts and interviews in order to discover meaningful patterns descriptive of a particular phenomenon' (Auerbach and Silverstein, 2003, p. 3). Since the aim of this thesis is to conduct exploratory research on a concept that has escaped sustained analysis thus far, a case study is appropriate (Eisenhardt, 1989). Single case studies are able to provide novel and rich theoretical insights, and by conducting interviews the unique aspects of the case could be explored using the officials as sources of knowledge (Guest, Bunce and Johnson, 2006; Yin, 2014). Specifically, semi-structured interviews were conducted using an interview guide, providing freedom in the formulation of questions, follow-up strategies and sequencing (Schmidt, 2004). The interviews centered on the experience of officials with algorithms. Interviewees were asked questions about, among other things, their general experience with algorithms, challenges faced when working with them, and ethical challenges and coping actions. The full interview protocol can be found in Appendix A.

Local governments are a type of governmental institution increasingly using algorithms to conduct their public affairs (Janssen and Kuk, 2016). Moreover, officials from municipalities claim to encounter ethical dilemmas at work (Bowman and Knox, 2008). Since this research attempts to uncover ethical challenges and coping actions specifically related to algorithm usage, a municipality is thus an appropriate research context.

Data Collection


that algorithms affect their daily activities. This resulted in a broad range of officials, from developers of algorithms to officials working only with their outputs. They served as key informants for this thesis, since their insights helped to provide an answer to the research question. Examples of algorithms present in the municipality are: a model predicting dangerous traffic situations and a system estimating the values for local property taxes. By interviewing people involved in all stages of algorithm usage, a comprehensive overview of challenges could be provided. Table 1 shows the interviewees' functions, their relation to algorithms and the duration of each interview.

The interview data were collected in October, November and December 2020. All interviews were conducted through video calls in the interviewee's native language, to ensure all participants could express themselves clearly without facing a language barrier. All interviews were recorded and transcribed, for which each interviewee provided oral consent. Moreover, notes were taken during each of the interviews. Recording, transcribing and taking notes improved the reliability of the data collection (Sandelowski, 1994). To ensure reliability, the extent to which the findings are reproducible, the interview guide is provided in Appendix A and the research process is described in detail (Bloor and Wood, 2006). Moreover, the reliability of the analysis was improved through discussions of the findings with another researcher (Silverman, 2004), thereby resolving ambiguities. To ensure that the conclusions correctly represent the collected data (internal validity) and are generalizable outside the specific study context (external validity), saturation and triangulation were ensured (Bloor and Wood, 2006). Since this thesis is concerned with developing a theory rather than testing one, the sample size was not determined in advance. Data saturation was reached when additional interviews did not provide significant new insights and new participants were not producing any new data adding new concepts to the findings (Auerbach and Silverstein, 2003). Triangulation, systematically examining the same research topic through different research methods (Bloor and Wood, 2006), was achieved by supplementing the interview data with seven official government documents, accessible via the municipality's website, on the municipality's privacy policy, its political structure, three projects using algorithms, and two experience stories of officials working with algorithms. Moreover, I utilized insights published on the webpage of the research and business intelligence department of the municipality, which provides scientifically established insights in the form of dashboards and monitors.

Data Analysis

This thesis follows the theoretical coding approach of Strauss (1987), who distinguishes between open, axial and selective coding. During the first, open coding step of the process of uncovering the challenges, themes were derived from the transcripts, which led to a total of 237 open codes. For example, the phrase ''People are not really motivated to work with it [algorithms]'' (Interviewee 3) was labeled no motivation to innovate. During the second step, the axial coding, categories were formed by linking these themes. An example of this is the combination of the open codes threat to take over jobs and automation of government officials' tasks into the axial code job replacement. In total this resulted in 26 axial codes. Several open codes fit into multiple axial codes and were therefore included in both. As a final step the axial codes were categorized, resulting in six selective codes. Table 2 provides an overview of the results of this analysis. After identifying the ethical challenges, all interviews were examined once more to uncover the coping actions present in the organization. For each identified ethical challenge, at least one current attempt to cope with its underlying factors was identified. The identified coping actions are discussed in the findings as well and are included in the proposed model depicted in figure 1.
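The three-step coding hierarchy described above can be sketched as a simple data structure. Only the labels quoted in the text (no motivation to innovate, threat to take over jobs, automation of government officials' tasks, job replacement) are taken from the analysis itself; the other axial label and all mappings between levels are invented placeholders, not a reproduction of table 2:

```python
from collections import defaultdict

# Open codes (derived from transcript fragments) grouped into axial codes.
# An open code may belong to several axial codes, as noted in the text.
open_to_axial = {
    "threat to take over jobs": ["job replacement"],
    "automation of government officials' tasks": ["job replacement"],
    "no motivation to innovate": ["lack of innovation drive"],  # placeholder axial code
}

# Axial codes categorized into selective codes (the six ethical challenges).
# These two mappings are illustrative placeholders only.
axial_to_selective = {
    "job replacement": "the role of a governmental institution",
    "lack of innovation drive": "a risk-averse culture",
}

# Roll the open codes up to the selective-code level.
selective_groups = defaultdict(set)
for open_code, axial_codes in open_to_axial.items():
    for axial_code in axial_codes:
        selective_groups[axial_to_selective[axial_code]].add(open_code)

for selective_code, open_codes in sorted(selective_groups.items()):
    print(f"{selective_code}: {sorted(open_codes)}")
```

In the actual analysis this rolling-up was done manually over 237 open codes, 26 axial codes and 6 selective codes; the sketch merely shows the shape of the hierarchy.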

Findings

The goal of this thesis is to identify the ethical challenges faced when governmental institutions use algorithms and to uncover their potential coping actions. Each of the interviewees comes into contact with algorithms or their outcomes in their daily professional tasks at the municipality. The application of algorithms within governmental institutions such as the municipality manifests itself in a shift toward actuarial prediction approaches (Brauneis and Goodman, 2018). Most interviewees acknowledge the increased use of algorithms and expect this trend to continue in the future: ''I think that there are many possibilities, small and big, and the municipality is currently discovering their [algorithms] opportunities.'' (Interviewee 5)


within the municipality. Inevitably, it also leads to several ethical challenges. Six of these challenges were identified, each regarding the tradeoff between acting in the best interest of citizens versus: (1) the risk-averse culture, (2) the role of a governmental institution, (3) expectations of governmental institutions, (4) being an actor in the governmental environment, (5) decision making according to the outcomes of algorithms, and (6) being in the implementation phase. Officials have figured out several ways to cope with them, at least partially, by alleviating several underlying factors: (1) working with pilots and demos containing algorithms, (2) implementation of innovation-enhancing functions, (3) work groups focused on the algorithm transition, (4) creating and providing better insights on algorithms, and (5) the ethical data assistant. Each of these challenges and coping actions is discussed in detail in the following sections.

Ethical challenges faced when using algorithms

Balancing a risk-averse culture versus acting in the best interest of citizens

The first ethical challenge is the tradeoff officials face when making decisions on algorithm use: acting according to the municipality's risk-averse culture versus acting in the best interest of citizens. The factors involved in the tradeoff can have contradicting outcomes, which creates ethically challenging decisions for officials. The next section discusses how this risk aversion displays itself in the municipality and what the main source of the ethical challenge is. Subsequently, each of the underlying factors contributing to this challenge is discussed. The risk aversion causes the risks associated with algorithm usage to be valued more highly, thereby pushing the factors of the ethical tradeoff further apart and challenging decision making on algorithms even more. It results in strict procedures that need to be followed with regard to algorithm use:

‘‘A business would immediately start implementing algorithms if they think it could provide profits, the municipality to the contrary is careful and wants to install safety procedures before implementing algorithms.’’ (Interviewee 1)

The municipality states in its policy that such procedures are implemented because of its accountability duty towards residents, involving the need to demonstrate how it safeguards their interests; hence the outlined procedures.

The main result of this risk aversion, and thus the main source of the ethical challenge, is that it prevents officials from learning from mistakes with regard to algorithms. This could cause the organization to end up in a vicious cycle:

‘‘The cycle of not knowing what we can do with algorithms, therefore not doing it, thus not finding out what we can do, and therefore still not doing it’’ (Interviewee 7)

(13)

Several interviewees stated that their colleagues often do not take responsibility for project implementations involving algorithms, nor feel motivated to do so: ''I feel that my colleagues within the municipality do not address it when they notice others not acting according to their promises. I see this at all levels within the organization and I think it really harms improvement [of algorithm usage].'' (Interviewee 2)

‘’I am happy if we make a start with it [using algorithms], for now I would already be satisfied with that.’’ (Interviewee 1).

Receiving little to no response to proactive behavior creates a negative feedback loop, resulting in a reinforcement of the negative cycle of missing out on opportunities to learn and improve, thereby further strengthening the ethical challenge with regard to the tradeoff between perceived risks and potential benefits for residents.

The second underlying factor is the set of priorities characterizing the risk-averse municipality, which causes little time to be spent thinking about the ethical consequences of algorithms. Interviewees stated that ethical discussions on algorithms are not a priority in their departments, leading to missed opportunities to create and spread insight with regard to the potential benefits and risks of algorithms.

Accordingly, it becomes challenging to make informed decisions on the tradeoff between the potential risks and benefits associated with algorithm usage, thereby strengthening the ethical challenge.

Balancing the role of a governmental institution versus acting in the best interest of citizens

The second ethical challenge is the tradeoff that officials need to make between acting according to their governmental role and acting in the best interest of citizens regarding algorithm use, since these two factors often contradict each other. In many cases, acting according to their governmental role conflicts with applying algorithms in such a way that they benefit citizens most. In the next sections I will discuss the underlying factors creating this ethical challenge.

Without an ethical challenge present, the wellbeing of citizens should be the first priority in decisions made on algorithms. However, the majority of the officials mentioned being distracted by the new possibilities enabled by algorithms, thereby sometimes forgetting their main duty: ''The municipality should wonder whether their decisions will make society better. You can use algorithms for everything, but is that what you want? What cons does it have for your citizens? I really think that question should be asked more.'' (Interviewee 4)

Moreover, the findings indicate that the organization should not exclude anyone. Thus, even though an algorithm might be beneficial to a majority, if it excludes a small group it is not an ethical solution according to government ethics. When officials are distracted by the potential benefits algorithms seemingly have to offer, these benefits could be overvalued while making the tradeoff involving the governmental role, diminishing its importance and contributing to the ethical challenge.


implemented as a result. The inclusivity towards employees then influences the tradeoff involving the governmental role, potentially putting more emphasis on the side of the municipality's role as a governmental institution and thus contributing to the ethical challenge.

By being inclusive to both its citizens and its employees, the municipality reflects the society it represents. However, to be truly representative, current trends should also be taken into account. Nowadays, society is increasingly characterized by innovation, and the applications of algorithms are rising. Therefore, a representative government needs to discover algorithmic opportunities as well:

‘‘In a world where almost all corporations are using algorithms, you have the responsibility as a municipality to form an opinion on this. You need to be able to say something about it, make up your mind about what you want to do with it, and what the parties in your city are allowed to do with it. If you are ignorant regarding the developments you are not able to protect your citizens’’ (Interviewee 7)

Since being representative does not always contribute to acting in the best interest of all citizens, this factor contributes to the ethical challenge.

Another obligation is the transparency duty, emphasized throughout governmental documents. Officials indicated that transparency towards citizens is incredibly important within the organization. However, as more algorithms are implemented, remaining transparent towards society becomes increasingly difficult: ‘‘Municipalities across the country use different systems containing different algorithms causing transparency issues.’’ (Interviewee 10)

Moreover, with the increased algorithm-use within society, the public debate on potential negative consequences regarding safety is growing louder. Officials recognize these worries in their encounters with citizens, causing challenges with regard to algorithms:

‘‘The government should recognize this fear and should handle it in a decent way. It is better to remain transparent, which causes some delay in processes, but still be able to explain what you did to handle their [citizens’] data in a safe way.’’ (Interviewee 3)

The transparency duty regarding algorithms thus challenges the tradeoff, because remaining transparent takes increasing effort while citizens’ demand for transparency grows.

A final contributor to the tradeoff is the fear among officials that algorithms will take over jobs. In regard to the inclusivity of the municipality mentioned earlier, it challenges the municipality to convince its officials that it wants to retain them: ‘‘I don’t believe that people are not able to add value, but I do think that organizations are sometimes not capable of extracting value from their employees. The municipality definitely has the intention to do so, however it is a challenge.’’ (Interviewee 9)

Exerting effort to retain officials, which is part of its governmental role, intensifies the challenging tradeoff between this role and acting in the best interest of its residents.

Balancing expectations of governmental institutions versus acting in the best interest of citizens

The third ethical challenge is the tradeoff between the expectations citizens have of governmental institutions and acting in the best interest of citizens with regard to algorithms, which could contradict. Three underlying factors contribute to the inaccurate expectations citizens have of their government’s algorithm use, dispersing the factors of the tradeoff. Each of them is discussed below.

First, the interviewees mention the negative bias citizens have towards algorithms. This bias prevents citizens from being properly informed and negatively influences residents’ image of algorithms and their municipality.

Secondly, publicity is influencing the image of citizens, thereby influencing the ethical challenge:

‘‘I could develop an algorithm with the best intentions, but when the media picks it up and makes up a story around it you have no control how it will be perceived.’’ (Interviewee 7)

By receiving negative publicity regarding algorithms, the image residents hold is influenced negatively. Accordingly, the two factors of the tradeoff could become more dispersed, increasing the difficulty of making the tradeoff. Interviewee 7 describes the effect of this lack of control as follows:

‘‘You want to retain control over the story and make sure it comes out in a way that actually represents reality, including the good intentions behind it. You want to be able to explain yourself if necessary, which results in us not making our developments public too early in the process.’’

This restrained attitude towards the media influences the content of publicity regarding algorithms and the municipality. Accordingly, this also strengthens the issue of citizens being uninformed on the topic of algorithms within their municipality.

Lastly, the overall high expectations that citizens have of governmental institutions compared to businesses disperses the two factors involved in the ethical tradeoff as well. Officials indicated that they feel that citizens require incredibly high standards from the municipality. Several made the comparison with businesses in their explanation:

‘‘People expect the municipality to handle their data safely, because we are a government body that has the duty to do so, whereas private institutions are mainly associated with obtaining profits.’’ (Interviewee 3)

Another reason for these expectations is the role governmental institutions take in society, putting the interest of its citizens first. This is confirmed by the municipality who included in an official statement that gaining trust and confidence of their citizens is one of their top priorities.

In many cases the actual laws that corporations and governmental institutions have to abide by are the same. However, because citizens hold governmental institutions to higher standards, they assume the government will outperform private corporations, creating unattainably high expectations. This moves the factors of expectations and expected benefit apart from each other, challenging the ethical tradeoff.

Balancing being an actor in the governmental environment versus acting in the best interest of citizens

The fourth ethical challenge is created by three major characteristics of the governmental environment that contradict with acting in the best interest of citizens in the area of algorithms. In the next sections these underlying factors are discussed.

The first characteristic is the bureaucracy of the organization, arising because of the relatively high number of regulations and procedures put in place to ensure proper governance. Mainly algorithm-developers face the challenges this imposes:

‘‘Sometimes you have to watch out when you have to deal with bureaucracy already early in the project [of developing an algorithm] because it is important to develop a prototype quickly and to see what it can bring us (...) otherwise chances increase that your project gets killed.’’ (Interviewee 8)

Officials have to abide by the rules and procedures of their organization when using algorithms, even though these can constrain and delay development, affecting the potential benefit to citizens. Bureaucracy thereby pulls both factors involved in the tradeoff further apart, increasing the ethical challenge associated with implementing algorithms.

Moreover, the political system creates challenges, since officials face changing councilors with different political agendas. All council members are democratically elected by the citizens. In turn, the councilors decide on the priority of algorithms within their policy area, affecting the decision making of officials employed in that cluster.

‘‘We work in a political environment. Sometimes a councilor wants something, and no is not an option. As an advisor I always try to provide them with insights, but in the end they are the ones responsible and you have to act upon their decisions.’’ (Interviewee 4)

Thus, officials have to follow the decisions of the current councilors. However, these policies could interfere with maximizing potential benefits of citizens by means of algorithm implementation.

Third, the municipality needs to act according to and enforce legislation. Especially its translation into algorithmic applications is perceived as a challenge:

‘‘Lots of things are regulated well by law, but the translation into an algorithmic application is incredibly challenging.’’ (Interviewee 5)

When legislation changes, officials have limited time to adapt the algorithms that interact with it. This challenge also acts in the opposite direction, when officials have ideas on how to improve the application of algorithms and thereby the service towards citizens. Implementing such improvements is not always possible, however, because legislation prohibits it, preventing action in the best interest of the citizens. An example of this is provided by interviewee 10:

‘‘I really think we should start implementing it [new algorithm]. However, these decisions are made on country-level, so the national government would have to make some adaptations in the law first.’’

Thus, the law sometimes prevents algorithm use that could benefit citizens. Thereby the two factors included in the tradeoff become mutually exclusive, contributing to the ethical challenge. Moreover, the main challenge in abiding by the law concerns privacy regulations. Since 2018, all European Union member states have to abide by the same privacy regulations. In response, the municipality developed a specific privacy policy:

‘‘(...) truly privacy-proof, we provided a translation from the law into our core-values.’’ (Privacy Policy Municipality of Rotterdam, 2018)

Compliance with this policy demands a considerable amount of effort from officials. The implementation of new algorithms brings new demands for data and privacy protection, and new ways have to be found to secure the privacy of citizens. Developers expressed their struggles with finding proper data input for the algorithms. All interviewees agreed on the importance of safeguarding the privacy of citizens. However, it becomes challenging when these privacy regulations seem to prohibit algorithms that could benefit those same citizens significantly. Interviewee 4 indicated that developing the algorithm was not the main challenge; making sure the data input meets all requirements was more difficult. He stated:

‘‘Your data really needs a clear link to its purpose. We are really putting a lot of research into this. Gathering the data should have a clear goal and you should go through a lengthy process to prove it’’.

This issue becomes especially prominent when the implementation speed of the algorithm is crucial, since a lengthy process has to be followed to ensure that privacy regulations are met. The tradeoff between privacy and the benefit to citizens thus contributes significantly to the ethical challenge.

Balancing decision making according to algorithm-outcomes versus acting in the best interest of citizens

A fifth ethical challenge is created by officials being required to make a tradeoff between following the outcomes of the algorithms the government uses and acting in a way that is actually best for its citizens. Its main source lies in the fact that the institution is currently not yet prepared to rely fully on the algorithms. The underlying factors contributing to this are discussed below. It is important to note, however, that interviewees stated that they currently never apply the outcomes of the algorithms directly. Nonetheless, several officials indicated that doing so is a future goal, thereby developing a more efficient way of working.

First of all, before such a shift can be implemented, it should be clear who is responsible for actions taken according to the algorithms, which is a challenging point of discussion.

‘‘What if something happens because of a recommendation of our algorithm, who is responsible in that case and how do we handle such an issue?’’ (Interviewee 2)

An answer to such questions should be found before actually using algorithms. Until this is done it is difficult to make a proper tradeoff between the best way of decision making and the interest of citizens.

‘‘I do not think you want to let an algorithm make decisions on data with the quality that we currently have.’’ (Interviewee 7)

The challenge is deciding when data quality is high enough. Moreover, it is an ethical question who is eligible to make such decisions. As long as the answers to these two questions remain unclear, it is challenging to make a well-informed tradeoff.

A fourth challenge associated with data is its collection through external parties, which often do not have the same rigid privacy and security processes in place as the municipality. Therefore, it becomes challenging to guarantee the privacy and security of citizens’ data:

‘‘We have to assess the data provided by external partners more thoroughly’’ (Interviewee 9)

Conversely, external data creates challenges when external parties do not provide consent to use their data for all purposes, even though it could be beneficial in serving citizens. In such cases decisions are made according to the requirements of external parties instead of in the best interest of citizens, an example of an outcome of an ethically challenging tradeoff.

A fifth underlying factor is the knowledge level regarding algorithms within the organization. The majority of interviewees working with algorithm outputs indicated that they do not understand how the algorithms work, or notice that their colleagues have difficulty understanding them. According to the developers it is no issue when these officials do not know all technical details; however, they should know how to implement them properly. This contributes to the ethical challenge: algorithms that could benefit citizens are often available, but their potential is not used optimally because of the lack of knowledge on how to do so.

The final contributor is the unintended side effects caused when algorithms are implemented. Officials are often caught up in their daily activities, which can create a tunnel vision on the outcomes of algorithms, causing them to forget to pay attention to potential side effects. An example of such a situation was provided by one of the interviewees, describing the implementation of an algorithm monitoring felonies within certain areas of the municipality:

‘‘The outcomes of the monitor showed that the fear of burglary increased, but this did not match the actual crime rates. In the end we found out that our awareness program, of which the monitor was a part, increased the feeling of unsafety among the residents’’ (Interviewee 4)

From this example it becomes apparent that potential side-effects should not be overlooked. It is thus a challenge to continuously balance what you want to achieve with the algorithm against its potential side effects, and to make the tradeoff for that specific moment.

Balancing being in the algorithm-implementation phase versus acting in the best interest of citizens

The final ethical challenge is the contradiction between the algorithm-implementation phase the municipality currently finds itself in and using the algorithms in the actual best interest of citizens. The main source of this lies in the uncertainty about where algorithms are able to provide the optimal benefit to citizens. This is caused by the underlying factors discussed below.

The first contributor is the lack of required knowledge associated with the algorithms within the municipality:

‘‘(...) organization. However, when you start to do so, the amount of knowledge you receive grows so fast that it becomes increasingly difficult to make sure the municipality as a whole stays involved’’ (Interviewee 9)

Second, interviewees state that the municipality is in a ‘hype-phase’ with regard to algorithms: it acknowledges their potential but does not yet apply them widely throughout the organization: ‘‘We are looking at how we can make our algorithms even smarter and more efficient, but in my opinion there are very few applications yet. At the moment we are mainly experimenting, not actually using them’’ (Interviewee 3). This results in a challenging search for proper use-cases for the algorithms:

‘‘The municipality states that it is being progressive. But I think it is very difficult to find a proper use-case and know how to put an algorithm in there and actually optimize a process, which is the phase the municipality is currently in.’’ (Interviewee 7)

Third, some matters are perceived as more urgent than others. As mentioned earlier, officials are often caught up in their day-to-day activities, and algorithms often have no priority. As a consequence, many decisions in the area of algorithms are postponed until a critical level of urgency is reached. Under such circumstances, the associated time pressure makes it challenging to decide on algorithm implementations and the relative importance of specific adoptions. Such decisions, typical for an organization still in its developing phase regarding algorithm implementation, are difficult to make on short notice.

Coping with ethical challenges when implementing algorithms

Currently, these identified ethical challenges are alleviated by coping actions that weaken the effect of several of their underlying factors. By doing so, the effect of the main source is reduced, thereby mitigating the ethical challenges. In the next sections these coping actions are discussed. Figure 1 proposes a model that aids in identifying the appropriate coping actions related to the six ethical challenges.

Working with pilots and demos containing algorithms

The municipality is increasingly putting pilots and demos of projects containing algorithms into place, alleviating two identified ethical challenges.

First of all, by making use of pilots and demos, resistance against algorithm implementation is decreased. Officials are now able to get familiar with the applications first, thereby gaining insights on their potential benefits:

‘‘When you are able to show a demo of an algorithm you can really show how it adds value and explain how it is helpful to the organization’’ (Interviewee 3)

‘‘(...) you implement them permanently’’ (Interviewee 2)

Thus, these insights provide information on the previously unknown side-effects of an algorithm. Moreover, such demos and pilots create understanding of the balance between the outcomes of the algorithms, human insights, and other available data:

‘‘If the algorithm [part of a pilot] points in a direction that colleagues with expertise doubt, we have discussions on whether this is acceptable’’ (Interviewee 2)

Additionally, when pilot results are first run by an expert official, better-informed discussions can be held on whether the associated data quality is good enough, or on what still needs to be done to achieve this:

‘‘Right now it is our intention to check first whether it is correct [algorithm outcomes], and eventually you want to go to a model where you can trust purely on the outcomes.’’ (Interviewee 4)

Since these three factors underlie the main source of the ethical challenge involving decision making according to algorithms, the use of pilots and demos can be considered a coping action for this specific ethical challenge.

Implementation of innovation enhancing functions

From statements of the municipality and the interviews it appeared that multiple innovation enhancing functions have been put into place. This serves as a coping action by alleviating several underlying factors, weakening a number of main sources associated with the identified ethical challenges.

The first example of such an innovation enhancing function is found in the data science team focused on data-driven working, which executes projects for all departments of the municipality:

‘‘I try to set up projects [with algorithms], get finance for them, find others who can help with it, and just perform really well.’’ (Interviewee 6)

By creating this function, immediate proactivity with regard to algorithms was showcased and algorithms were prioritized throughout the whole organization. By diminishing the underlying factors of the main source, namely not gaining knowledge on how algorithms can serve citizens optimally, the ethical challenge involving the municipality’s risk-averse culture is mitigated.

Second, the implementation of advisory functions serves as a coping action:

‘‘Sometimes people are so stuck in their daily tasks that they don’t see how algorithms could help them. I can help them to make clear what the value of the algorithms could be and how it can help to gain insights making their jobs easier.’’ (Interviewee 5)

Additionally, by hiring officials with specific knowledge of algorithms for these functions, knowledge is brought in-house at the municipality, which thereby becomes able to develop its own algorithms and thus decide on their required data input. Hereby the challenges surrounding the involvement of external parties are mitigated, decreasing the unpreparedness to act directly according to the outcomes of algorithms.

A final example of such functions are privacy and security officers:

‘‘We discussed everything for a long time with our privacy- and security- colleagues’’ (Interviewee 2)

Both functions help manage the risks surrounding algorithm implementation by providing expert insights. Moreover, they assist the departments in the municipality’s challenging duty to act in a transparent manner, and advise on topics regarding legislation, privacy in particular. By doing so, it becomes easier to fulfill the municipality’s transparency duty and to act according to legislation, privacy legislation in particular, thereby diminishing the ethical challenges involving the role of governmental institutions and being an actor in the governmental environment.

Work groups focused on algorithm transition

In the past few years the municipality has implemented work groups focused on making departments’ transition towards working more with algorithms smoother. By doing so, several underlying factors of the identified ethical challenges are mitigated, thereby alleviating the associated main causes and thus the ethical challenges.

First, by focusing on the social aspects of algorithm implementation, resistance against algorithms is decreased, thereby alleviating the ethical challenge involving the municipality’s risk aversion. Topics discussed in the groups include:

‘‘All officials have valuable knowledge that is still able to add value after implementing algorithms (…) the intention to retain that knowledge is definitely there’’ (Interviewee 9)

‘‘We really acknowledge that there should be acceptance among all colleagues within the department before a real cultural change can happen’’ (Interviewee 10)

Moreover, by aiming for acceptance among all colleagues, these work groups decrease the problems created by the inclusivity principle, thereby alleviating the ethical challenge involving its governmental role.

‘‘The workgroup is focused on the total package: how to deal with the transition from a technical aspect, but also from the social viewpoint.’’ (Interviewee 10)

By also focusing on the technical aspects, the overall knowledge level on algorithms is increased, which alleviates both the underlying factor of the lack of implementation knowledge and the limited required knowledge associated with being in the implementation phase. Thereby the ethical challenges involving decision making according to the algorithms and being in the algorithm-implementation phase are eased.

Creating and providing better insights on algorithms

The municipality is actively trying to provide officials with better insights to base their decisions on. Doing so serves as a coping action by decreasing the negative effect of several underlying factors involved in the identified ethical challenges.

underlying reasons explaining their opinions. Insights like this help to understand how public opinions develop and where citizens obtain their information. These insights can serve as a coping action in dealing with the underlying factors of the inaccurate information citizens have of algorithms and in countering their unrealistically high expectations. By doing so, the inaccurate image citizens have of algorithms and their municipality can be adjusted positively, thereby alleviating the associated ethical challenge.

Second, by conducting such research it becomes clear where algorithms are causing unintended side effects and where they could potentially provide the most benefit for citizens:

‘‘Our goal here is to have insight in the side-effects and to be able to ask the question: what are we aiming for and what are possible side-effects, and what is more important at the moment?’’ (Interviewee 5)

Accordingly, officials can make better-informed decisions that take these side-effects into account, and are better prepared to act directly according to the outcomes of the algorithms.

Lastly, the municipality is continuously trying to create organization-wide data insights:

‘‘Every process using data is getting increasingly straightened out and should go via our team’’ (Interviewee 3)

By doing so, insights on the overall data quality present in the organization are created and the institution becomes better prepared to act directly according to the outcomes of algorithms, alleviating the associated ethical challenge. This is not only happening internally, but also towards citizens:

‘‘The municipality is actively trying to make all their systems fit so that citizens gain more insights in which data is available on them’’ (Interviewee 4)

Accordingly, the underlying factor associated with their transparency duty is alleviated, creating less conflict when making decisions on using algorithms involving the role of governmental institutions.

Ethical data assistant

The final identified coping action is the application of the ethical data assistant, a tool used to discuss the ethics involved when implementing algorithms.

‘‘The ethical data assistant helps to map out what the project exactly does, what you could potentially do with all the data used by the algorithm, what kind of potential side-effects all the options could cause, and start a discussion on whether you want to actually implement it or not.’’ (Interviewee 5)

By providing better insights into potential side-effects and discussing potential uses of the gathered data, the municipality becomes better prepared to act directly according to the outcomes of algorithms, thereby alleviating the ethical challenge involving decision making according to these algorithms.

Second, by applying a tool involving a clear procedure, it becomes easier for the institution to remain transparent:

‘‘You would rather have a process take a little longer when you are able to be transparent on what you did to handle all that data properly’’ (Interviewee 4)

being a governmental institution with regards to algorithms.

Discussion

This study set out to answer the following research question: ‘What are the ethical challenges in using algorithms for governmental institutions, and how do they deal with them?’. By answering this question, three main contributions are made to the extant literature. First, it fills the identified research gap by uncovering six ethical challenges that officials in governmental institutions face when using algorithms and five coping actions to deal with several of their underlying factors. Each ethical challenge involves a tradeoff regarding acting in the best interest of citizens. The factors that make up the ethically challenging tradeoffs are: (1) a risk-averse culture, (2) the role of governmental institutions, (3) expectations of governmental institutions, (4) being an actor in the governmental environment, (5) decisions based on the outcomes of algorithms and (6) being in the implementation phase. The five actions identified to cope with several underlying factors of these challenges, thereby alleviating them, are: (1) working with pilots and demos containing algorithms, (2) implementation of innovation enhancing functions, (3) work groups focused on algorithm transition, (4) creating and providing better insights on algorithms, and (5) the ethical data assistant. Second, by adding an ethical viewpoint to the discussion of algorithm use within governmental institutions, a significant contribution to the extant literature on algorithms is made and the academic debate is broadened. Third, the proposed model offers managing officials practical guidance when facing these challenges. By implementing coping actions matching the observed underlying factors of the ethical challenges, their impacts can be mitigated. Moreover, doing so can result in a dual advantage when ethical challenges are not only anticipated but opportunities are leveraged as well (Floridi et al., 2018).

Moreover, I identified three different underlying factors instead, which can be observed in figure 1. Three coping actions alleviating these underlying factors were identified. Whereas the implementation of innovation enhancing functions affects both factors, implementing pilots and demos containing algorithms and applying work groups focused on algorithm transition only have an effect on the resistance against algorithms and the associated proactivity towards them. Managing officials thus need to pay specific attention to which of these underlying factors is present when using algorithms, to make sure the right coping action is applied. In sum, this study thus shows how risk aversion in governmental organizations can create ethical challenges in the specific context of algorithm use.

Through identifying the second ethical challenge, this study contributes to the literature by uncovering specific aspects that managing officials should pay attention to when implementing algorithms, which can be observed in figure 1. As mentioned above, the underlying factor regarding the government’s transparency duty has been recognized in earlier research on ethics and algorithms (Buhmann et al., 2019), in which it is related to diminished accountability (Tsamados et al., 2020). Since accountability is an essential aspect of governments’ ethical behavior (O’Faircheallaigh et al., 1999), the results of this thesis confirm previous research by showing that transparency can be a source of an ethical challenge, in this case by conflicting with serving citizens optimally when using algorithms. Furthermore, this thesis makes both an academic and a practical contribution by proposing two coping actions to downplay the negative effects of this underlying factor. Moreover, three other underlying factors contributing to the ethical challenge were identified that were thus far undiscussed in previous literature. Unfortunately, for two of those factors no coping actions were identified.

this thesis found several underlying factors in line with prior research on ethical challenges arising from algorithm implementation (Tsamados et al., 2020). However, the underlying factor of unintended side-effects was left undiscussed in earlier work on ethical consequences of algorithms (e.g. Tsamados et al., 2020). Here it becomes apparent that challenges resulting from algorithms are not self-evident and are context specific. An interesting finding is that all identified coping actions seem to affect this ethical challenge by alleviating one or multiple underlying factors. Managing officials should therefore pay close attention to which underlying factors they face in order to implement the appropriate coping action.

Lastly, one of the underlying factors, having too little knowledge on advanced algorithms, does not result in the ethical challenge of a lack of transparency, as argued by previous research (e.g. Buhmann et al., 2019; Floridi and Cowls, 2019; Wolkenstein et al., 2018). Instead, this thesis finds that it contributes to the uncertainty about where algorithms can serve citizens optimally, and thus to the ethical challenge involving being in the algorithm-implementation phase. This finding confirms that the ethical challenges regarding algorithm use are context dependent, especially for governmental institutions. As can be observed in figure 1, two coping actions were identified mitigating two of the three underlying factors. It is interesting to note that this thesis found a coping action addressing the lack of required knowledge of specific algorithms, namely work groups focused on algorithm transition, which could potentially alleviate the lack of transparency in non-governmental contexts, since it was identified as an underlying factor of transparency challenges in earlier research (Buhmann et al., 2019). However, this thesis did not find such a relation.

Overall, a compelling finding is that each of the coping actions seems to affect only specific underlying factors of an ethical challenge, and does not influence all of them. Moreover, no coping action is specific to a single ethical challenge, and for some underlying factors no coping actions were identified at all. Furthermore, when comparing the findings of this thesis with the concerns raised in earlier research on ethics and algorithms (e.g. Buhmann et al., 2019; Hoffman et al., 2018; Lee, 2018), the results both confirm and contradict previous work. The findings confirm the discussed concerns by identifying several similar underlying factors leading to ethically challenging situations, such as transparency, privacy, and responsibility issues (Buhmann et al., 2019; Floridi and Cowls, 2019; Floridi, 2012). The contribution of this thesis here lies in the identification of these issues as underlying factors related to specific ethical challenges. However, the majority of the issues raised in earlier research were not found in this thesis: only five out of 22 underlying factors corresponded with the earlier raised concerns. A significant contribution of this study to the current body of research on algorithm ethics is thus that challenges resulting from algorithms are context-specific and not self-evident.

The first coping action, working with pilots and demos containing algorithms, helps alleviate the ethical challenges associated with a risk-averse culture and decision-making according to algorithm outcomes.

The second coping action, the implementation of innovation enhancing functions, is in line with earlier research on innovation management. This research states that to enhance innovation, of which algorithm use is an example, it is important to set overall goals while simultaneously allowing procedural autonomy, and to have clear planning, feedback, and communication (Amabile and Gryskiewicz, 1987). By implementing these specific functions, these prerequisites are fulfilled, contributing to an algorithm-promoting environment and thereby alleviating the ethical challenge.

The third coping action identified is the implementation of work groups aimed at algorithm transition. This is in line with earlier research on the effectiveness of such groups. Amabile and Gryskiewicz (1987) state, for example, that the ability to constitute effective work groups can enhance innovation. The findings of this thesis confirm these arguments for the specific innovation of algorithms within a governmental context.

The fourth coping action concerns governmental institutions actively trying to provide their officials with better insights to base their decisions on. In prior literature such insights are acknowledged to foster innovative applications, such as algorithms (Marshall, Mueck and Shockley, 2015). However, they have not previously been associated with alleviating the related ethical challenge. In establishing this link, this thesis contributes to the existing literature.

The final coping action is the application of an ethical data assistant, a tool used to discuss the ethics involved when using algorithms. Prior research on the use of such tools revealed that they improve ethical decision-making and help officials feel better prepared for ethical dilemmas (Stenmark, Riley and Kreitler, 2020). This thesis contributes to the literature by confirming these findings through empirical research in the specific context of governments using algorithms.

Overall, this thesis addresses the identified research gap and proposes a model offering practical guidance for managing officials. Specifically, this thesis contributes to the existing literature by identifying six ethical challenges faced by governments using algorithms. In response, officials implemented several coping actions, five of which were identified.

Limitations and Avenues for Future Research

Regarding the coping actions in the findings, each of them seems to affect only specific underlying factors of an ethical challenge, and does not influence all of them. Future research is required to find out why this is the case, and to identify coping actions for the underlying factors that remained without one in this thesis. Moreover, future research could focus on which coping action is most effective for which underlying factor, since each coping action seems to affect multiple factors.

Managerial implications

In addition to its theoretical implications, this thesis provides significant managerial contributions. Managers within governmental institutions facing increasing algorithm usage should be aware of the ethical challenges resulting from these algorithms. To cope with them, they should encourage the implementation of appropriate actions across their departments using the proposed model depicted in figure 1. Depending on the underlying factors reflected in specific situations associated with the identified ethical challenges, this thesis suggests that managers should consider focusing more on pilots and demos before actually starting to use an algorithm, creating innovation enhancing functions within their departments, implementing a work group focused on algorithm transition, creating and providing better insights on the specific algorithms relevant for their departments, or using an ethical data assistant to address the ethical issues involved with algorithm usage.

Conclusion

This thesis has identified the ethical challenges governmental institutions face when using algorithms. Moreover, the coping actions applied by officials in response to these challenges have been uncovered. This research thereby addresses the lack of existing literature on the role of ethics in the use of algorithms within governmental organizations.
