

A Hippocratic Oath for Programmers

Jonne J.C. Maas
Bachelor Thesis
Credits: 18 EC
Student number: 10719407
Bachelor Artificial Intelligence

Faculty of Science
University of Amsterdam
Science Park 904
1098 XH Amsterdam

Supervisor

Dhr. Dr. Ing. R.A.M. (Robert) van Rooij
Logic and Cognition Institute
Faculty of Science
University of Amsterdam
Science Park 105
1090 GE Amsterdam

June 30, 2017

I am indebted to Jim Stolze for helping me search for a subject and interviewees. My gratitude also to all interviewees who shared their thoughts, beliefs and experiences with me.

I am thankful to my supervisor, Robert van Rooij, for the comments he shared with me to improve my thesis.


Summary

In this paper I attempt to answer the question of whether introducing a Hippocratic oath for programmers might reduce the negative consequences of algorithms. I first analyze the use of a Hippocratic oath in the medical, financial and scientific sectors. I then look at the value of a promise. Lastly, I interview experts in programming to investigate their opinion on such an oath. Research shows that people have a tendency to keep their promises, although the evidence is conflicting on whether this is due to intrinsic motivation or external factors. According to the programmers interviewed, the oath would be more effective if sanctions were attached. However, realizing an oath will bring several impediments. Based on the analysis of the value of a promise, I conclude that an oath could reduce negative consequences, although it might be more effective if programmers were monitored and sanctions for violating the oath were imposed. I end the paper with suggestions for further research.


Contents

1 Introduction 4

2 Method 5

2.1 Literature research . . . 5

2.2 Empirical research . . . 6

2.2.1 Support for choice in interview . . . 6

2.2.2 Support for choice in programmers . . . 6

3 Results 7

3.1 The use of an oath in other disciplines . . . 7

3.1.1 The Hippocratic oath in Science . . . 8

3.1.2 The Hippocratic oath in Medicine . . . 8

3.1.3 The Hippocratic oath in Finance . . . 9

3.1.4 Relevance in artificial intelligence . . . 9

3.1.5 Objections . . . 10

3.2 The effects of promises in general . . . 11

3.2.1 Relevance of a promise . . . 11

3.2.2 Keeping a promise . . . 11

3.3 The effects for programmers . . . 13

4 Evaluation of results 14

4.1 General Discussion . . . 16

5 Conclusion 18

5.1 Future work . . . 18


1 Introduction

Technological inventions appear at an exponential rate, contributing numerous improvements to our everyday life. Transferring money to someone else’s bank account has become a one-minute task, finding an address can be done while on the road, and many more practicalities have been invented. However, the improvement in technology comes at a cost. Our choices in food, clothing and daily activities are increasingly decided for us by algorithms. One would expect an algorithm to be fully objective: algorithms decide based on their input data, so no personal bias should attach to their output. Unfortunately, real life works differently. Both the algorithms themselves and their input are created by humans, perpetuating human errors and biases. Regulation of algorithms is a current and important topic, raised by numerous scientists and governments, and is therefore essential to investigate (Maedche, 2017; O’Neil, 2016; Muller, 2017). In order to limit the negative impacts of artificial intelligence, the European Economic and Social Committee has released a report advising the institution of measures against negative externalities, including the introduction of an ethical code (Muller, 2017).

Cathy O’Neil, data scientist and author of Weapons of Math Destruction, mentions an example that illuminates the problem with these algorithms. In the 1970s, St. George’s Hospital Medical School received such a high number of applications that the admissions office decided to use an algorithm to make a first selection of resumes. The algorithm saved the administration time by sorting all of the applications and discarding those that surely would not be considered by the hospital staff. The algorithm was received rather well and remained in use at the hospital. However, the algorithm was based on information received during the 1980s, in which women and men with an unusual or non-American name would not be considered by the hospital. Women were assumed to be nurses, not doctors, and applicants with an unusual name were assumed not to comprehend the English language. Last year, a lawsuit was filed against the hospital stating that every year 60% of the applicants were discarded directly based on gender or race (O’Neil, 2016). In her book, O’Neil highlights multiple problems algorithms may yield. Generally, people do not question an algorithm’s output, as it is thought to be objective. In the case of the hospital, the algorithm’s training data was badly outdated and the hospital staff had not considered updating the algorithm. Every year, the admissions office ended up with enough noteworthy applicants, and thus the algorithm was considered to be doing its job. The applicants were disadvantaged without realizing the true reason behind their rejection. This lack of awareness is a serious problem caused by the use of algorithms, as algorithms are becoming increasingly opaque every day. A denial of a loan based on a zip code, a rejection of a job application based on gender, and a higher cost for health insurance based on a higher age are all cases that have a significant impact on people’s lives.

O’Neil proposes the introduction of a Hippocratic oath for programmers. Her proposal follows similar ones for the financial market. After the collapse of the market in 2008, financial engineers Emanuel Derman and Paul Wilmott wrote the Financial Modelers’ Manifesto, including a Hippocratic Oath for all workers in the financing sector, in the hope of increasing responsibility in risk management (Derman & Wilmott, 2009). While the banking and financing world was too late to prevent a financial crisis, introducing a Hippocratic Oath in the technological sector might prevent a crisis caused by negative effects of algorithms in time. This paper will attempt to answer the question of to what extent the introduction of a Hippocratic Oath could prevent negative consequences caused by algorithms. It will seek an answer by exploring the following subquestions:

• What is the use of a Hippocratic oath in other fields?

• What is the value of a promise?

• What is the estimated effect of a Hippocratic oath for programmers?

I start from the assumption that the use of a Hippocratic oath in other fields contributes positively to awareness of the consequences of one’s work. I expect the making of a promise to be considered valuable and to have a significant impact on someone’s behaviour. My other assumption about promises is that failing to keep a promise will result in a feeling of guilt. I assume programmers to be relatively unconcerned about any negative effect their algorithm might produce. Programmers are expected to consider the information behind (personal) data as irrelevant, i.e. they use data simply as a tool for their work and do not consider the effects on people. Surprised by the acknowledgement of the problematic ethics behind their algorithms, programmers might hold a reluctant attitude towards a Hippocratic oath at first. However, after an evaluation of such an oath, programmers might gain a more positive attitude towards it. The overall hypothesis is that the institution of a Hippocratic oath might reduce negative consequences caused by algorithms, as people have a tendency to honor their promises and programmers would be consciously and explicitly making a promise when pledging an oath. In addition, the oath might create awareness among programmers.

Literature research will be conducted to answer the first two subquestions. The last subquestion I attempt to resolve by conducting interviews with programmers dealing with personal data. They are the producers of the algorithms and quite possibly have interesting observations on the proposal of introducing an oath for programming. I start by explaining the method in section 2, after which section 3 illustrates the results of my research. In section 4 I evaluate and discuss my results. Section 5 includes a summary of the research with my overall conclusion and suggestions for future work.

2 Method

2.1 Literature research

I will conduct literature research on the Hippocratic oath and the value of a promise. I will analyze the use of Hippocratic oaths in different fields and shed light on negative aspects of the use of such an oath. I then explore the influence of a promise on a person by investigating the value of a promise and the motivation for keeping a promise2.

2.2 Empirical research

The aim of the empirical part of my research is to obtain more information about experts’ insights into the programming world, in order to make a valid statement on the effects a Hippocratic oath might have on programmers themselves. I examine the effects a Hippocratic oath would provoke in programmers by conducting semi-structured interviews. Multiple programmers were sent a questionnaire regarding a Hippocratic oath for programmers and their view on the consequences of algorithms. The questions can be found in Appendix A. Additional programmers, considered able to share valuable and useful insights, were introduced by other participants. In table 1 the interviewees are identified by their affiliation. The programmers were asked questions concerning their own experiences and opinions, but did not delve ‘deeply into [...] personal matters’ (DiCicco-Bloom & Crabtree, 2006, p.315).

To support the scientific methodological approach, the chosen method is justified in the following subsections. Firstly, the choice of interviews is substantiated; secondly, the choice of programmers is discussed.

2.2.1 Support for choice in interview

Qualitative research was considered more useful than quantitative research for the question this study attempts to clarify. In general, the questions demanded long, in-depth answers. For this reason, it was desirable to gather a small group of programmers who could give this detailed information in an interview, rather than a larger group of programmers. An interview is useful to ‘yield rich insights into people’s [...] experiences, opinions, values, aspirations, attitudes and feelings’ (May, 2001, p. 120). However, as the interviews would contain principally equivalent questions for every interviewee, I decided to conduct a semi-structured interview rather than an open interview. For answers considered to be unclear, follow-up questions were asked in order to reach an unambiguous answer, allowing new and unexpected information to be shared by the interviewees. The interview results elucidate experts’ beliefs in the programming world, reinforcing or refuting the initial assumptions of this research.

2.2.2 Support for choice in programmers

The questions of the interview deal with scenarios that arise in the participants’ daily lives. According to Key Informant Interviews (2016), this means they primarily shared information from their recent experiences. Interviewing a selection of programmers was important to obtain a meaningful idea of the introduction of a Hippocratic oath, because

2 In this paper, when I talk about a promise I mean the agreement between two agents where one states he will perform a certain act or deed and the other one expects the first agent to truly perform this act.


programmers are familiar with the development of algorithms. It was also beneficial for the strength of their response that the interviewees manage data concerning people rather than objects, as personal data more directly influences an individual.

All interviewees work at a company rather than a university. The reason for this choice is that, generally, universities are more strictly bound to conformance with ethical rules than companies, which would make their programmers more aware of their choices. As the majority of programmers do not work for a university, programmers working at companies are more representative of beliefs on the ethical consequences of algorithms than programmers at universities.

As shown in table 1, the number of programmers interviewed is small. In addition, all the programmers were initially provided by the same person. This could mean that all interviewees share a similar view on certain situations, resulting in a biased opinion on the matter of a Hippocratic oath. In the selection’s favour, however, the interviewees are employed by various firms and followed different educational paths. They therefore still represent an overview of the programming community.

Identifier   Affiliation
I-1          Senior consultant
I-2          Studying MSc AI
I-3          Data Scientist
I-4          Co-founder & Chief science officer
I-5          Data Scientist
I-6          Senior Data Scientist

Table 1

3 Results

Firstly, I discuss the Hippocratic oath in different sectors, including any objections raised against the oath. I also elaborate on the relevance of an oath in the field of artificial intelligence. Secondly, I analyze the value of a promise and the motivation behind keeping a promise. Lastly, the responses of the interviewees are presented.

3.1 The use of an oath in other disciplines

The Hippocratic oath is named after the Greek doctor Hippocrates and arose several decades after his lifetime (Tyson, 2001). Originally, the code was written as an ethical code of conduct for the medical profession. However, the term ‘Hippocratic Oath’ nowadays signifies more than merely an ethical code for doctors and is used for numerous professions, for example law and of course medicine, where oaths are taken to ensure the making of ethical decisions. Several professions are outlined below with their use of a Hippocratic oath. Although the medical sector still leads the debate on the use of an oath, interesting remarks can be made about the scientific and financial sectors as well.

3.1.1 The Hippocratic oath in Science

In the science sector the formulation of a Hippocratic oath has been encouraged by several scientists, including Joseph Rotblat and David King. In 1995, Rotblat won a Nobel Peace Prize along with the Pugwash Conferences on Science and World Affairs ‘for their efforts to diminish the part played by nuclear arms in international politics and, in the longer run, to eliminate such arms’ (The Nobel Peace Prize 1995, 2014). Rotblat strongly encouraged scientists to recognize the ethical consequences their work might have. He argued that ‘for pure science, they say that the scientist’s only obligation is to make the results of research known to the public. What the public does with them is its business, not that of the scientist. This amoral attitude is in my opinion actually immoral, because it eschews personal responsibility for the likely consequences of one’s actions’ (Rotblat, 1999). Rotblat deemed it valuable for scientists, in particular for ‘new entrants’, to ‘be made aware of their social and moral responsibilities’ and to ‘reflect on the wider consequences of their intended field of work before embarking on a career in academia or industry.’ He is supported by Nancy Jones (2007), who argues that the boundary between science and society is becoming more diluted: ‘society has demanded growing accountability from the scientific community as implications of life science research rise in influence’ (Jones, 2007, p. 25). Rotblat proposed the institution of ethical codes, ‘including the monitoring of research projects for possible harm to society’, by obligating scientists to pledge a Hippocratic oath. He stated that ‘the main value of such an oath might be symbolic’, but that it would stimulate young scientists to reflect on their work.
David King, former Government Chief Scientific Adviser in the United Kingdom, introduced a universal ethical code for scientists he compares to a Hippocratic oath in order to reinforce ‘trust between scientists and society’ (Ethical Code for Scientists, 2007). Scientists and institutions are encouraged to follow the guidelines proposed. However, the code is not mandatory.

3.1.2 The Hippocratic oath in Medicine

Rotblat compared an oath for scientists with the oath pledged in the field of medicine. The original oath, established in the fifth century, is barely in use anymore (Tyson, 2001; Orr et al., 1997). A study conducted in 1993 by Orr et al. reports a tremendous decrease in forthcoming doctors’ attestation of the core fundamentals stated in the initial code. Most of the core fundamentals mentioned are no longer applicable in the current day and age. For example, the code requires doctors not to take someone’s life, even if requested to do so. Furthermore, the code commands the rejection of a woman’s request if she desires an abortion. Presently, euthanasia and abortion are more common than they were in the time the ethical code was written. Although the Hippocratic oath has changed, it is still used by future doctors to swear they will do all in their power for a fair and ethical treatment of their patients. Moreover, the oath is seen as a formality rather than a legally binding contract, even though pledging the oath is common in many countries. For this reason, the value of the oath is questioned (Tyson, 2001). In section 3.1.5 I elaborate on this topic.

According to Nathanson (2003), the oath taken by future doctors invigorates ‘the public trust in medical practice’, since doctors are ‘bound by a set of rules, of behavioural norms that go far beyond simple questions of manners and etiquette.’ She argues that ‘these norms, a set of ethical principles laid out in the Hippocratic Oath, should enable patients to expect specific standards of practice from their doctors.’ Nevertheless, the pledging of an oath by doctors is not necessarily mandatory, and commonly no sanctions are attached (Tyson, 2001).

3.1.3 The Hippocratic oath in Finance

The oath in the finance sector is of far more recent date. As mentioned in the introduction, Emanuel Derman and Paul Wilmott introduced the Financial Modelers’ Manifesto after the financial crisis of 2007. The financial crisis was triggered, among other things, by an irresponsible and overconfident estimate of the use of mathematics in controlling risks for investors (O’Neil, 2016). Derman and Wilmott included a Hippocratic oath in their Manifesto in response to the eagerness for fame and wealth found among financial modellers, which was partly responsible for the global crisis. The swearer of the oath roughly declares to realize the consequences of his work, to respect the limits of mathematics and to not let himself be ‘overly impressed’. He states his understanding of the effects of his work on ‘society and the economy’, many of which are incomprehensible even to himself.

3.1.4 Relevance in artificial intelligence

Accepting an algorithm’s training data at face value can also induce implicit discrimination, as another of O’Neil’s examples shows. A company investigated what factors influence employees to quit their job. One of these factors was their zip code. People who had a longer commute were more likely to quit their job than those with a shorter commute, meaning people with a long commute would be less beneficial to hire. However, people with a longer commute were also more likely to be poorer and non-white than those with a shorter commute. Therefore, the company decided not to focus on zip codes when hiring new people (O’Neil, 2016). The algorithm simply correlates two factors, zip codes and resignations. This investigation into a more profitable means of recruiting demonstrates the ease with which algorithms can contribute to discrimination. An oath might encourage programmers to recognize these implicit consequences and motivate them to reduce these effects as well as they can.
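The mechanism can be made concrete with a small sketch. The numbers below are entirely hypothetical (they are not O’Neil’s data); they merely show how filtering on a seemingly neutral feature, commute time, can disproportionately remove applicants from low-income areas even though income is never an explicit input.

```python
# Hypothetical records: (commute_minutes, quit_within_a_year, low_income_area).
# Invented for illustration only.
records = [
    (15, False, False), (20, False, False), (25, False, False),
    (30, False, True),  (55, True,  True),  (60, True,  True),
    (70, True,  True),  (75, True,  False),
]

def quit_rate(rows):
    return sum(r[1] for r in rows) / len(rows)

long_commute = [r for r in records if r[0] > 45]
short_commute = [r for r in records if r[0] <= 45]

# The "useful" correlation the model picks up: long commutes predict quitting.
print(quit_rate(long_commute), quit_rate(short_commute))   # 1.0 0.0

# But selecting only short commutes also halves the share of applicants
# from low-income areas, even though income was never an input feature.
share_before = sum(r[2] for r in records) / len(records)
share_after = sum(r[2] for r in short_commute) / len(short_commute)
print(share_before, share_after)                           # 0.5 0.25
```

Dropping the feature, as the company in O’Neil’s example did, is the simplest remedy; an oath would ask programmers to look for such proxies before deploying a model.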

Winner (1978) states that ‘algorithms are designed to be and prized for being functionally automatic, to act when triggered without any regular human intervention or oversight’ (as cited in Gillespie et al., 2014). Gillespie explains that algorithms must be capable of managing data automatically if they are designed as such. However, the examples mentioned indicate a possibility that the automatic management of data by algorithms may influence part of society in a negative way. For this reason, it is important that these possibilities are prevented as well as possible during the development of an algorithm.

Currently, neural networks are one of the most common types of algorithms in use (Ng, 2012). Neural networks consist of hidden layers, in which massive amounts of mathematics occur that are often not understandable to the programmers themselves. The hidden layers are, so to speak, ‘black boxes’ that cannot be controlled or monitored. The black box problem plays a part in the risk artificial intelligence imposes. Chris Nicholson, expert in deep learning, explains this as follows: ‘People understand the linear algebra behind deep learning. But the models it produces are less human-readable. They’re machine-readable. They can retrieve very accurate results, but we can’t always explain, on an individual basis, what led them to those accurate results’ (as cited in Metz, 2016). Although programmers might not completely understand the black box problem, the behaviour of a neural network can be changed by adjusting its parameters.
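As a purely illustrative sketch (a toy network, not a claim about any production system), the forward pass below shows why a hidden layer reads as a black box: every weight is plainly visible, yet no single number explains the output, while adjusting the parameters still changes the behaviour.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy network: 2 inputs -> 3 hidden units -> 1 output.
# Every weight is visible, yet none of them individually "explains"
# the final decision -- the black box problem in miniature.
W_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
W_out = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W_hidden]
    return sigmoid(sum(w * h for w, h in zip(W_out, hidden)))

x = [0.8, 0.3]
before = forward(x)

# The programmer can still steer behaviour by adjusting parameters,
# without ever understanding what each hidden unit "means".
W_out = [2 * w for w in W_out]
after = forward(x)
print(before, after)
```

The point is not the arithmetic but the asymmetry: the parameters are fully inspectable and adjustable, while the reason for any individual output remains opaque.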

One could argue that a Hippocratic oath should be pledged by data scientists rather than programmers, as the gathering of data is usually done by data scientists, while programmers simply try to solve a problem with the given data. However, it is essential that those who develop the code are aware of the possibility of negative effects, especially given the ‘black box’ issue, since developers have the opportunity to change or compute additional features to reduce the negative consequences caused by algorithms. Furthermore, this reduction might be accomplished by spreading awareness among programmers of the implications data and code might cause. For these reasons, the focus in this research lies with programmers rather than with data scientists3.

3.1.5 Objections

There are also objections against the use of a Hippocratic oath. In the field of medicine the debate on pledging an oath principally concerns the relevance of the oath (Colvin, 2003; Tyson, 2001). Colvin (2003) argues that the ‘validity of an oath depends on the idea that it is freely entered into’. However, the potency of an oath ‘depends on the knowledge that action will be taken if the oath is broken.’ In some pernicious dictatorships a death penalty can be imposed for defying the rules stated in the oath, but in Western societies an infringement of the rules ordinarily leads to a mere admonition (Colvin, 2003). So ‘if there is no penalty for non-adherence, why bother to adhere?’ (Colvin, 2003).

Another critique of a Hippocratic oath in the programming world is given by O’Neil, who argues that the Hippocratic oath disregards the pressure employers place on their programmers. Employers demand a project be finished within a time frame that is simply not feasible for programmers (O’Neil, 2016). However, David King argues that an oath could be used to settle a disagreement between an employer and employee, by regarding the rules stated in the oath as a valid reason not to accept the employer’s demands (as cited in Ghosh, 2007). This implies that the oath needs to be backed in the legal domain.

3 After I presented this paper, a remark was made by a student who argued it is not a programmer’s responsibility to be concerned with the consequences of algorithms. This comment emphasizes the purpose of this section, in which I clarify why this paper regards programmers.


Furthermore, O’Neil argues that the development of mathematical models is too concerned with ‘profit, efficiency and default rates’ (O’Neil, 2016, p. X). She proposes representing ‘various ethnicities or income levels [...] within groups of voters or consumers’ or clarifying ‘cases in which people in certain zip codes pay twice the average for certain services.’ This representation shows the inequality caused by algorithms. Yet the black box problem might impede any such clarifications.

3.2 The effects of promises in general

A Hippocratic oath can be seen as a promise to follow certain rules. In this section I examine the literature on promises. In section 3.2.1 the value of a promise is explored. In section 3.2.2 I discuss several empirical studies regarding reasons why a person might keep his promise. Based on this evidence, I discuss whether the making of a promise motivates someone to keep that promise and thus whether the use of an oath leads to a reduction in the negative effects of algorithms.

3.2.1 Relevance of a promise

Understanding the value of a promise is important for understanding the value of a potential Hippocratic oath. If promises can be discarded in the blink of an eye, people might not regard promises highly and therefore may not respect an oath. In this section, I examine several studies exploring the value and use of a promise.

An important study on promises was conducted by Charness & Dufwenberg (2006). Participants were split into group A, the decision makers (principals), and group B, the accepting group (agents). Part of group B was allowed to send a message to A before A made a decision. When this was the case, Charness & Dufwenberg (2006) noticed significantly more cooperation. They found that ‘promises (or statements of intent) sent from agents to principals enhance trust, cooperation, and efficiency’ (p. 1597). Gneezy & Epley (2014) conducted an experiment on exceeding, keeping and breaking a promise. Their evidence suggests that exceeding a promise is not valued more than simply keeping a promise. However, breaking a promise was considered ‘more negatively than fair traits’ (p. 7). Almenberg et al. (2011) conducted an experiment to investigate whether a person would be rewarded for being generous or punished for being selfish by a third party. Their findings are in line with the research conducted by Gneezy & Epley. The results of Almenberg et al. also show that ‘participants punished others for being selfish, but rewarded them equally for being fair or generous’ (p. 2). Gneezy & Epley (2014) argue that these results may indicate that the value of a promise in a social system is to discourage selfishness and to ‘enable cooperation among unrelated individuals’ (p. 7).

3.2.2 Keeping a promise

So what motivates a person to keep a promise? Is this based on external factors or more on an intrinsic sense of moral values? Understanding the reason for keeping or breaking a promise sheds light on the effect of promises on a person and on the probability that someone will keep his word.

The results of Charness & Dufwenberg (2006) indicate that communication is an important factor in the ‘degree of guilt’ experienced by the person making the promise. ‘The more the agent believes his principal expects to be helped, the more likely the agent is to actually help’ (p. 1596). Christoph Vanberg (2008) extended the study of Charness & Dufwenberg and investigated whether promises themselves have an independent effect on behavior. The research of Charness & Dufwenberg did not distinguish between the ‘commitment-based and expectation-based explanations’ for keeping a promise (Vanberg, 2008, p. 1469). The commitment-based and expectation-based explanations concern the effect on behaviour caused by promises directly and indirectly, respectively. The commitment-based explanation claims that behaviour is affected directly, ‘because people have a preference for keeping their word’. According to this explanation, the feeling of guilt comes from ‘behavior that is inconsistent with contractual or moral obligations concerning what “should” be done’ (Vanberg, 2008, p. 1477, my emphasis). The expectation-based explanation is based on the theory of guilt aversion, which implies ‘that guilt is triggered by behavior that leads to an outcome which is inconsistent with others’ empirical expectations concerning what will happen’ (Vanberg, 2008, p. 1477, original emphasis). Thus, in contrast with the commitment-based explanation, the expectation-based explanation ‘says that promises affect behavior indirectly, because they lead to changes in second-order beliefs, which in turn affect behavior’ (Vanberg, 2008, p. 1476)4. Vanberg’s research shows that a participant’s ‘own promise’ substantially influences his behaviour toward the truster5. However, when promises were made by others, changes in

second-order belief occurred while no change in behaviour occurred. This contrasts with studies indicating that promises work via the expectation-based explanation, whereby a change in behaviour occurs due to a change in second-order belief. Based on this evidence, Vanberg concludes that ‘promises [...] have an independent effect on behavior, as suggested by the commitment-based account’ (Vanberg, 2008, p. 1476). These findings demonstrate that people have an intrinsic urge to keep their word.

Vanberg’s study hinges on the inclusion of the concept of ‘guilt’ in his conclusion. Breaking a promise would make someone feel guilty because he would be letting down the truster. Charness & Dufwenberg (2006) give an example of the expectation-based explanation in their discussion of the concept of ‘tipping’. In the U.S. a waiter makes a living primarily thanks to tips received from customers, which is why the tips usually amount to quite a high percentage, namely 15% (Charness & Dufwenberg, 2006). Because it is conventional to tip and because waiters have a hard time surviving without tips, people tend to feel morally obliged to give a tip. If not, they would disregard social conventions and feel guilty towards the waiter (assuming the waiter did a proper job). The commitment-based explanation, however, claims that ‘a promise creates a contractual obligation toward the person the promise is made’ and that a person keeps

4 A second-order belief is ‘one’s beliefs about what other people believe’, whereas a first-order belief is one’s belief about a concept or action (Shoemaker, 2009, p. 41).

5 The truster is the person the promise is made to; the promise-maker is called the trustee. Above I mentioned a principal and an agent, who could be named the truster and trustee respectively.


his promise in order to avoid any negative emotions, i.e. guilt, should he break the promise (Vanberg, 2008, p. 1476, original emphasis).

Research conducted by Deck et al. (2011) contrasts with Vanberg’s findings. Deck et al. wondered whether the results of Charness & Dufwenberg (2006) and Vanberg (2008) were due to the participants’ knowledge of being watched. For this reason, Deck et al. conducted an experiment where the participants were not aware they could be observed by the experimenter, in order to eliminate external influences on their behaviour, such as ‘shame or reputational concerns’ (p. 3). Deck et al. (2011, p. 14) conclude that the sending of messages is not effective in increasing cooperation between the truster and trustee when behavior is not observable, ‘suggesting the effectiveness of promises is due to reputational concerns that can be enforced by an observer.’

Bacharach et al. (2007) conducted an experimental study to research whether ‘trust responsiveness’, the ‘tendency for trustees to fulfill trust because they believe they are trusted’, is accurate (p. 373). They also investigated the interaction hypothesis, which assumes that ‘sympathy, respect and other pro attitudes strengthen trust responsiveness’ (p. 359). The evidence suggests, among other things, that trust responsiveness exists and that the interaction hypothesis is true. Beck et al. (2010) found similar results in their study. Like Vanberg, they found that a trustee feels guilty when breaking a truster’s trust. In addition, Beck et al. found that the ‘increased trust of the consumer increases the amount of guilt an expert feels if he breaks his promise’ (p. 26). This shows that the more trust a truster puts in a trustee, the guiltier the trustee feels when he breaks that trust. This increased guilt leads to ‘consumer-friendlier behaviour’ (p. 26).

It is common to consider breaking a promise similar to breaching a contract (Wilkinson-Ryan & Baron, 2009). Research has shown that a person who has made an explicit promise is likely to keep it, ‘even without threat of sanctions’ (Wilkinson-Ryan & Baron, 2009, p. 409). Wilkinson-Ryan (2010) conducted an experiment to investigate which factors would motivate people to break a promise. She found that ‘when parties understand the possibility of breach and its consequences to be a part of the contractual agreement, they are more willing to breach, and less likely to find the breach morally offensive’ (p. 671). This indicates that people prefer to remain ‘within the framework of normative moral behavior’ (p. 671) even while breaching a contract.

In short, a promise is considered relevant for managing social systems and for increasing trust, cooperation and efficiency among people. Research shows that people have a tendency to keep their promise, although there is conflicting evidence on whether this is due to an intrinsic motivation or to external factors. Furthermore, Wilkinson-Ryan (2010) found that the threat of sanctions does not by definition influence a person’s motivation to keep his promise, which indicates that sanctions might not necessarily be effective.

3.3 The effects for programmers

A collection of the answers given by the interviewees can be seen in Figure 1 below. In Appendix B a summary of each interview is outlined, illustrating the interviewees’ opinions and, where applicable, remarkable quotes. The strategy used for analyzing the interviews is dubbed the ‘editing approach’, as the text segments have been reviewed and identified (DiCicco-Bloom & Crabtree, 2006, p. 318).

Figure 1: Collection of the interviewees’ answers

4 Evaluation of results

Section 3.1 has shown the reasons oaths were introduced in different fields and the use of these oaths. In medicine the oath is the result of a tradition, while the oath in the financial sector is the result of a crisis. The oath in the science sector is the result of a decreased distance between society and science. This last reason is meaningful, as algorithms and society are also closely connected. For this reason, the introduction of an oath could attempt to bridge the gap between pure mathematics, models and programming on the one hand, and the effects the algorithms have on society on the other. Several scientists argue that scientists should not just take their work and self-regulation into consideration, but should also include the consequences of their work for society in general.

In the medical sector, opinions differ on the use of a Hippocratic oath. Although some believe it is important for doctors to pledge an oath, to be properly introduced into the doctors’ world and to create awareness of one’s responsibility, others argue an oath is unnecessary as it does not have any consequences if the rules are violated. In both the science field and the medicine field the oath can be considered to enhance society’s trust (Ethical Code for Scientists, 2007; Nathanson, 2003).

In the financial sector the oath is also a response to the eagerness of financial modelers. There is a great deal of eagerness in the programming world as well: the will to improve technology and earn money seems greater than the will to improve society. An oath could remind the technological sector of the importance of the latter.

Section 3.2 analyzes the relevance of promises. Promises are considered important for regulating social systems and for the improvement of trust, cooperation and efficiency. Research has also illustrated that if a person believes he is trusted, he is more likely to fulfill a truster’s trust. Research by Vanberg and by Charness & Dufwenberg has indicated that the motives for fulfilling a promise are intrinsic. However, research by Deck et al. shows that the commitment to a promise might be influenced by external factors, such as reputational concerns. Nevertheless, the studies mentioned above all indicate that a person is motivated to keep his promise, whether this is due to external factors or to intrinsic motivation. Therefore, an oath might still be effective, as the introduction of a Hippocratic oath for programmers could influence society’s perception of algorithms positively and encourage programmers to consider the rules the oath states.

Section 3.3 concerns the responses of the interviewees. The interview was limited in the number of questions, in order to encourage the interviewees to answer the questions that were asked in as much detail as possible. In general, this was indeed the case. Most questions were answered with noticeable consideration, improving my ability to draw a well-founded conclusion.

The opinions on responsibility are slightly varied. I-1 considers the programmer most responsible. I-6 considers the person who thought of the idea the most responsible, which is in most cases the person who pays the programmer. I-2 and I-3 view the user as most responsible and I-4 and I-5 do not believe one particular sector is responsible, but the developer, distributor and user all share part of the responsibility. The opinions on the importance of the information behind the data, any effect an oath might have on the programming style and on the manner of realizing an oath are quite similar.

All programmers consider the information important. For I-4 the importance depends on the type of project, because the data is sometimes anonymized; for other projects, however, the information is relevant to improve the programming. I-2 believes the programming style will lose its creativity if an oath were put into place, while the other interviewees believe it will not have any effect. According to I-1 and I-6, the oath would change the style of programming only if violating it were punishable.

Except for I-5, none of the interviewees had heard of the Code of Ethics. I-5 states that he would only be willing to follow the rules of an oath if he agrees with them. I-2 explicitly mentioned that he would not be interested in following the rules of an oath, because it would decrease his creativity. A concern could thus be that a Hippocratic oath would not be taken seriously. This also shows in the answers the interviewees gave when asked what would be a good implementation of the oath. Although most interviewees doubt an implementation is feasible, a possibility they offer is sanctioning a disregard of the rules.

The overall attitude towards the oath is more negative than positive. The majority of interviewees indicated that they would not want to harm society with their algorithms. I-3 and I-4 mentioned that their loyalty lies somewhat more with their employer; however, they would not be part of a project they disagree with. This could indicate that an oath might be unnecessary, as programmers decide for themselves whether or not they find a project acceptable. I-3 argues that everyone can learn to program, which makes the regulation of algorithms difficult and the administration of a Hippocratic oath problematic. The responses given by the interviewees indicate that an oath for programmers might not affect the extent of negative consequences caused by algorithms, as long as violating the oath is not sanctioned. This is supported by Colvin (2003), although research by Wilkinson-Ryan shows that a person is likely to keep a promise even when no sanctions follow if the promise is broken. Sanctioning the oath would raise many complications, including questions such as who takes responsibility for checking all algorithms, who ensures all programmers pledge an oath, and so forth. Hence, realizing the oath will be troublesome.

Based on these evaluations, I presume an introduction of a Hippocratic oath would be most valuable under the following conditions. The first condition is based on the results of Deck et al., who have shown in their research that keeping a promise might be influenced by external factors. Thus, for an oath to be fully effective, programmers would have to be monitored in order to check whether they keep their promise.

My second condition is based on the answers given by the interviewees. Pledging an oath would not directly decrease negative consequences caused by algorithms; in order to reduce these consequences, programmers would need to feel the urge to follow the rules. Programmers differ in the importance they attach to the consequences an algorithm entails. Although they are concerned about the information behind the data they work with and would not want to harm society with their algorithms, the prevailing attitude towards a Hippocratic oath is nonetheless that such an oath would have little to no use if no sanctions were imposed for violating its rules. For this reason, the original idea of a Hippocratic oath for programmers might not make a significant difference. However, if sanctions were attached to neglecting the oath, it might have a positive influence on reducing negative consequences caused by algorithms. Even so, monitoring and sanctioning the oath will generate several impediments, as mentioned before.

4.1 General Discussion

Multiple components of this research deserve a more detailed evaluation. In the first place, a remark can be made on the incomplete answering of questions by several programmers. These interviewees were snowballed through another interviewee, and therefore it was more difficult to reach some of them. I was able to obtain answers to the questions that demanded a follow-up. The remaining unanswered questions concerned the realization of the oath, which was an addition to the main research rather than a principal component. For this reason, I decided to leave them unanswered.

In hindsight, the third question (‘Do you believe the responsibility of the effects of an algorithm lie with the programmer of the algorithm, the distributor of the algorithm or the user of the algorithm?’) might have steered the answers of the participants by suggesting three possibilities. This is based on the answers of I-4 and I-5. However, I-6 did mention a possibility not suggested in the question, which is why I consider the question still valid. The fourth question (‘As a programmer, do you believe loyalty towards your employer or loyalty towards society is more important? [...]’) could have been phrased less directly. A better question might have been whether they saw any conflicts between employers and society, and, if so, whether they, as programmers, considered themselves involved in this conflict. I still received individualized answers with great depth to this question, highlighting the interviewees’ personal opinions and beliefs.

A small remark can be made on the fact that I-1 did not code personal data, but rather objective data. Although I mentioned in section 2.3 that it would be beneficial to interview programmers working with personal data, I did not consider it a requirement. The answers are meant to give an overview of the opinions of programmers on a Hippocratic oath, which has been accomplished.

Furthermore, DiCicco-Bloom & Crabtree (2006, p. 315-316) argue that ‘qualitative data analysis ideally occurs concurrently with data collection’, in order to develop a satisfying insight into the research questions. This ‘process of data collection and analysis eventually leads to a point in the data collection where no new categories or themes emerge. This is referred to as saturation, signalling that data collection is complete.’ However, due to the constraints of this paper I was not able to get in touch with all the programmers I approached, resulting in a limited number of interviewees. I believe I could have gained a deeper understanding of programmers’ beliefs about a Hippocratic oath, which is why I do not consider my results to have reached the saturation level. Considering the data received from the interviewees in combination with the information gathered from the literature research, I believe I have nevertheless acquired a valuable insight into the potential benefit of a Hippocratic oath for programmers. Moreover, I believe I have taken a first step for others interested in the ethics of technology to continue investigating the possibility of reducing negative consequences caused by algorithms, concerning discrimination, privacy and freedom of speech.

A remark can also be made on the literature review of this research. An interesting inclusion would have been an empirical study on whether the Hippocratic oath makes a difference in the sectors analyzed in section 3.1. Unfortunately, I was not able to find studies showing any evidence of possible effects. Section 3.2 highlights several findings on promises. More research has been conducted in this specific area; however, not all studies could be included due to the limitations of this paper. In addition, I-4 mentions the inevitable bias existing in society. The constraints of the study allowed only a limited literature review. For this reason, a full discussion of the issue of bias exceeds the scope of the paper, although it is an important topic.

Lastly, in this research the field of medicine has been used for information on the Hippocratic oath and its use in that sector. This was done in order to provide a baseline for the value of a Hippocratic oath for programmers. One important aspect of the medical field has not been considered: in general, people become doctors because they are motivated to help people. However, this is not necessarily the case for programmers. Programmers enjoy solving problems, but might not be interested in spending time helping others. As Antonio Damasio mentions in his book Descartes’ Error: ‘We all know creative scientists and artists whose social sense is a disgrace, and who regularly harm themselves and others with their behavior’ (Damasio, 1994, p. 169). As I-3 mentioned in his interview, doctors need several years of schooling before they are allowed to practice medicine. Programmers, on the contrary, are able to learn their skills on the internet and can find a good job without a diploma. Reducing negative consequences caused by algorithms might only be truly possible if a diploma in programming demanded a track of studies similar to medicine, to emphasize the responsibilities of programmers. I-3 already objects to this idea with the argument that there is a lack of well-skilled programmers: if only licensed programmers were allowed to work, even fewer programmers would be available. In turn, it is becoming more common for children to learn to program in school. Although currently the number of competent programmers might be low, in fifteen years this could be different.

5 Conclusion

This paper has examined whether a Hippocratic oath for programmers would reduce negative consequences caused by algorithms, by exploring the use of such an oath in different sectors and investigating the value of a promise. In addition, this research has analyzed the responses of several programming experts on the introduction of a Hippocratic oath. I conclude that a Hippocratic oath for programmers might be effective, based on the analysis of promises given in section 3.2. However, according to the results shown in sections 3.1 and 3.3, the oath might be more effective if sanctions were attached for programmers who do not adhere to their promise. Although the research of Deck et al. indicates that an oath might be more beneficial if those pledging it are monitored on whether they fulfill their promise, this necessary control is still far from being realized.

Although mankind is curious about concepts we have not created ourselves, like the brain and the universe, humanity lacks curiosity about its own creations. There is much we do not know about technology, which calls for caution. Thinking ahead is positive and has brought many improvements; however, understanding the risks our creations entail is a subject that needs to be investigated further.

Hannah Arendt, famous for her theory on the banality of evil, argues that a shortfall in rationality can be a cause of negative consequences rather than an explicit will to be evil. She believes Eichmann, a Nazi official who organized the deportation of millions of Jews during World War II, to be neither ‘monstrous nor demonic’. The ‘only specific characteristic one could detect in his past as well as in his behavior during the trial and the preceding police examination was something entirely negative: it was not stupidity but a curious, quite authentic inability to think’ (Arendt, 1971, p. 417). As seen in Figure 1, programmers feel loyal towards society and therefore presumably do not wish to commit disastrous deeds. Similar to Eichmann, however, their acts could be considered unethical since they, as far as this study shows, do not regard ethical consequences as an essential aspect of their work.

5.1 Future work

Based on the results of the interviews, an interesting approach for future research could be to explore possible sanctions for disregarding a Hippocratic oath and to investigate which sanctions would encourage programmers to obey the rules such an oath states. Several interesting possibilities have been proposed, including a system similar to the one used for violating traffic laws in, for instance, the United Kingdom. This proposal could be combined with the thought of scanning all code for important violations in order to satisfy the second condition I stated above. However, it might be sufficient to scan only code that can lead to severe consequences, such as being ignored for a job offer or continuously being denied a loan.
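As a purely hypothetical illustration of what such a code scan might look like, the sketch below flags lines where a branching statement depends on a protected attribute. The attribute list, the matching rule and the example loan function are my own assumptions for the sake of illustration; they are not part of this research, nor of any real enforcement tool.

```python
import re

# Hypothetical list of protected attributes an oath might cover.
# The names and the crude matching rule are illustrative assumptions only.
PROTECTED_ATTRIBUTES = ["gender", "race", "age", "religion"]

def scan_source(source: str) -> list:
    """Return (line number, attribute) pairs where a protected
    attribute appears inside a branching statement."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        # Only flag lines that both branch and mention an attribute:
        # a naive proxy for "a decision depends on this attribute".
        if re.match(r"\s*(if|elif|while)\b", line):
            for attr in PROTECTED_ATTRIBUTES:
                if attr in line:
                    findings.append((lineno, attr))
    return findings

# A hypothetical snippet that such a scan would object to.
example = """def approve_loan(applicant):
    if applicant.income < 20000:
        return False
    if applicant.gender == 'female':
        return False
    return True
"""

for lineno, attr in scan_source(example):
    print(f"line {lineno}: decision depends on '{attr}'")
```

A real scanner would of course need far more sophistication (data-flow analysis, handling of renamed variables), which is precisely why the interviewees doubt the feasibility of such monitoring.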

Another suggestion for further research could be to explore a person’s motivation for keeping a promise. The results of the studies discussed in section 3.2 provide conflicting evidence. It could be useful to know whether the motivation behind someone’s decision to keep a promise originates more internally or more externally. Such research could either support the findings of Deck et al., whose evidence indicates that motivation is partly influenced by external factors, or provide new evidence defending the results of Vanberg and Charness & Dufwenberg.

Moreover, based on the findings of Wilkinson-Ryan, it would be interesting to investigate whether someone would keep his promise if no severe sanctions were attached. So far, not much is known about the correlation between keeping a promise and sanctions. Numerous experiments have been done on game theory and sanctions; however, these are not fully applicable to this particular topic, as those sanctions usually involve a reduction in profit for the player rather than a legal sanction.

Árdal (1968, p. 234) argues the following in his paper: ‘Since the promiser raises the expectations in the promisee by the promise, and in most cases intentionally does this, he is responsible if he fails to do what he has promised to do.’ This citation raises interesting questions about responsibility. The issue of responsibility in artificial intelligence is subjective and ethical. This is also illustrated in Figure 1, where the programmers themselves differ in opinion on who should be held responsible. If promisers are considered to be more responsible, programmers pledging an oath could be held responsible for the consequences of their algorithms. This would be an intriguing topic to explore, because if it were the case, a Hippocratic oath for programmers would have more consequences than just an attempt to decrease negative effects.

References

Almenberg, J., Dreber, A., Apicella, C., & Rand, D. (2011). Third party reward and punishment: Group size, efficiency and public goods. In N. Palmetti et al. (Eds.), Psychology of punishment (chap. 4).

Árdal, P. S. (1968). And that’s a promise. The Philosophical Quarterly (1950-), 18 (72), 225–237.

Arendt, H. (1971). Thinking and moral considerations: A lecture. Social Research, 417–446.

Bacharach, M., Guerra, G., & Zizzo, D. (2007). The self-fulfilling property of trust: An experimental study. Theory and Decision, 63 (4), 349-388.

Beck, A., Kerschbamer, R., Qiu, J., & Sutter, M. (2010). Guilt from promise-breaking and trust in markets for expert services: Theory and experiment.


Charness, G., & Dufwenberg, M. (2006). Promises and partnership. Econometrica, 74 (6), 1579-1601.

Colvin, B. (2003). Why we do not need a Hippocratic oath. Medical Education, 37 (12), 1125–1126.

Damasio, A. R. (1994). Descartes’ error: Emotion, reason, and the human brain. New York: Avon Books.

Deck, C., Servátka, M., & Tucker, S. (2011). Do people keep socially unverifiable promises?

Derman, E., & Wilmott, P. (2009, February). The financial modelers’ manifesto. Risk Management.

DiCicco-Bloom, B., & Crabtree, B. (2006). The qualitative research interview. Medical Education, 40 (4), 314-321.

Ethical code for scientists. (2007). Retrieved 2017-06-06, from https://www.liverpool.ac.uk/media/livacuk/researchintegrity/Government Office for Science Ethical Code for Scientists.pdf

Ghosh, P. (2007, September 12). UK science head backs ethics code. Retrieved 2017-06-06, from http://news.bbc.co.uk/2/hi/science/nature/6990868.stm

Gillespie, T., Boczkowski, P., & Foot, K. (2014). The relevance of algorithms. Media Technologies, 167–194.

Gneezy, A., & Epley, N. (2014). Worth keeping but not exceeding: Asymmetric consequences of breaking versus exceeding promises. Social Psychological and Personality Science.

Jones, N. (2007). A code of ethics for the life sciences. Science and Engineering Ethics, 13 (1), 25-43.

Key informant interviews. (2016). University Lecture.

Maedche, A. (2017). Interview with Prof. Jeroen van den Hoven on “Why do ethics and values matter in business and information systems engineering?”. Business & Information Systems Engineering, 1–4.

May, T. (2001). Social research: Issues, methods and process (3rd ed.). Open University Press.

Metz, C. (2016). AI is transforming Google search. The rest of the web is next. WIRED Magazine.

Muller, C. (2017, May). The consequences of artificial intelligence on the (digital) single market, production, consumption, employment and society (Opinion No. INT/806). European Economic and Social Committee.


Nathanson, V. (2003). Why we need a new Hippocratic oath. Medical Education, 37 (12), 1123–1124.

Ng, A. (2012, April 8). Neural networks. Retrieved 2017-04-20, from https://www.coursera.org/learn/machine-learningsyllabus

The Nobel Peace Prize 1995. (2014). Retrieved 2017-05-06, from https://www.nobelprize.org/nobelprizes/peace/laureates/1995/

O’Neil, C. (2016). Weapons of math destruction. Crown.

Orr, R. D., Pang, N., Pellegrino, E. D., & Siegler, M. (1997). Use of the Hippocratic oath: A review of twentieth-century practice and a content analysis of oaths administered in medical schools in the U.S. and Canada in 1993. The Journal of Clinical Ethics, 8 (Winter), 377–388.

Rotblat, J. (1999, November 19). A Hippocratic oath for scientists. Science, 286 (5444), 1475.

Shoemaker, S. (2009). Self-intimation and second order belief. Erkenntnis, 71 (1), 35–51.

Tyson, P. (2001). The Hippocratic oath today. Retrieved 2017-05-18, from http://www.pbs.org/wgbh/nova/body/hippocratic-oath-today.html

Vanberg, C. (2008). Why do people keep their promises? An experimental test of two explanations. Econometrica, 76 (6), 1467-1480.

Wilkinson-Ryan, T. (2010). Do liquidated damages encourage breach? A psychological experiment. Michigan Law Review, 633–671.

Wilkinson-Ryan, T., & Baron, J. (2009). Moral judgment and moral heuristics in breach of contract. Journal of Empirical Legal Studies, 6 (2), 405–423.


Appendix A

A Hippocratic Oath for Programming

Background

Technology is advancing rapidly, enabling quicker and easier decision making. Unfortunately, this decision making often serves the benefit of a company or government rather than society. Many people unknowingly suffer from discrimination based on age, gender and race. For this reason I am interested to discover whether there is a possibility to reduce this discrimination. Cathy O’Neil proposes in her book Weapons of Math Destruction that modelers of mathematical models should pledge a Hippocratic Oath, as doctors do. This oath for programmers would focus on the possible misuses and misinterpretations that might follow from the programmers’ algorithms. For my bachelor thesis I would like to research whether this oath would indeed decrease the amount of discrimination caused by algorithms. As the Hippocratic Oath would affect programmers, I considered it appropriate to ask programmers themselves for their opinion. Below I have set out a number of questions concerning the installation of the oath and its effect.

• Do you work for a university or for a company?

• Do you consider any negative consequences your algorithm could entail?

• Do you believe the responsibility of the effects of an algorithm lie with the programmer of the algorithm, the distributor of the algorithm or the user of the algorithm?

• As a programmer, do you believe loyalty towards your employer or loyalty towards society is more important? Loyalty towards employer here means fulfilling exactly the desires the employer has, regardless of the consequences for society. Loyalty towards society implies purposefully dismissing certain requests of the employer in order to prevent negative consequences for certain people.

• Do you view the data you work with purely as a tool for you to do your assignment, or does the information behind the data play a role in the way you write your code?

• There exists a Programming Code of Ethics that programmers should follow (http://www.gammadyne.com/ethics.htm). However, this code does not contain any rules for discrimination. As a programmer, are you aware of this Code of Ethics? If yes, are you consciously trying to follow the rules?

• Do you think such an oath would make a significant difference in the programming style of programmers?


• According to you, what topics are most important for such an oath to cover, assuming the use of an oath is feasible?

• Do you believe you would have a positive attitude towards the use of a Hippocratic Oath?

The following questions concern the realization of the oath in practice. If you have time left, perhaps you can shed some light on the benefits and drawbacks of the oath.

• Do you believe a programmer’s oath could be realized?

• What type of implications will the oath entail?

• What impediments will the oath entail?

• What resources are necessary for the programmers in order for the oath to work in practice?


Appendix B

I-1

I-1 programs several algorithms per week, most of them ‘harmless’. The question of whether he considers any negative consequences caused by the algorithms does not fully apply, as his algorithms are not concerned with decisions regarding personal data. Nonetheless, he believes algorithms are not discriminating: an algorithm’s output is based purely on former behaviour. He doubts there is any danger in these algorithms.

I-1 believes that ‘a programmer is responsible for the effects when the algorithm is used with the intended data, in the appropriate context. With different data however or in another context, a user (or distributor) can abuse an algorithm with negative effects.’ He also believes that his loyalty lies more with society than with his employer. I-1 considers the information behind the data ‘very important for a successful project.’ This is called ‘Domain Knowledge’ of the customer, and a meaningful result cannot be provided without it. He is not aware of the Code of Ethics for programmers. I-1 thinks a Hippocratic oath would only have an effect if breaking the rules were punishable. He has mixed feelings towards such an oath. For him, it ‘depends on the contents. Some “rules” are obvious and I agree. Some others could be too strict for me... So mixed feelings probably.’ He deems a realization of an oath possible through automatic code analysis to see whether the oath is followed.

I-2

I-2 absolutely does not believe responsibility for an algorithm’s output should lie with its developer, but that ‘the user of the algorithm should hold the responsibility of the actions it performs in the users care.’ For him, freedom of creation outweighs any negative consequences an algorithm could entail. He would never deny anyone the right to publish something, even if that tool can be used for bad purposes.

For I-2 as well, loyalty lies with society: ‘I would probably act against my employers wishes if my personal threshold for what I believe is bad/right for society is reached.’ I-2 argues that data ‘definitely play a role in the way I write my code.’ Nevertheless, he programs his code ‘as generic as possible’. I-2 is not aware of the Code of Ethics for programmers, which states not to commit fraud, not to take someone else’s code, etc., and he ‘would never abide to such a code.’ He does believe a programmer’s oath would make a difference in programming style: I-2 believes that the need to obey laws will do away with a programmer’s creativity. For this reason he does not have a positive attitude towards a Hippocratic oath. He considers himself a ‘strong believer in a society where freedom of information exists, and with freedom of information, comes freedom of creation. Freedom of use, however, is a completely different thing, as this could cause harm or be misused etc.’ He is also not convinced a programmer’s oath could be realized, as the evaluation of the results would be too time-consuming. If this turned out to be possible, I-2 believes the realization could be an option. ‘However, most programmers do not program in industry. Most programmers are so called hobbyhackers and who checks these peoples work? [...] So, I don’t believe it is possible to realize a programmer’s oath.’


I-3

Responsibility-wise, I-3 leans towards the user. He gives an analogy from nuclear physics: research in radioactivity is acceptable, ‘building a nuclear warhead is not great but not outright bad’, but the use of a nuclear warhead is wrong. I-3 considers it most important whether an algorithm correctly does what he wants it to do. He argues that ‘if we think long enough, we could find a way to use any tool for nefarious purposes’. He nonetheless believes it important to consider consequences ‘on the individual (project) basis’: for an algorithm predicting the demand for chocolate ice-cream in a shop, the ‘negative consequences would probably be less important than in the case of an algorithm that should autonomously detect brain tumours in MRI scans.’ He believes loyalty lies with the employer, but would not work for someone whose beliefs conflict with his own. He thinks that the information behind the data enables him to ‘pursue specific ways to program’, so he considers the information important. He has not heard of the Code of Ethics, but considers it ‘a good idea’. He wonders whether an oath would make any significant difference in programming style, as he does not know any people programming for nefarious purposes. I-3 argues that in order to work as a doctor, you need to be in possession of a degree and have pledged the oath; one can, however, teach oneself to program, so there is no way to force anyone to pledge an oath. For this reason he considers an oath difficult to realize. There already ‘seems to be lack of CS graduates as it is, so companies are employing self taught people; with an obligatory oath, the pool of employees would shrink a lot. Also you would need to limit the access to programming tools, which would be next to impossible given the prevalence of open source and the internet.’ In order for the oath to work in practice, I-3 believes limits and regulations for programmers would be necessary. According to I-3, however, this is ‘the exact opposite of what programmer’s culture is about’.

I-4

I-4 argues that one can never know what one’s algorithm can or will be used for, in both good and bad ways, which makes thinking about an algorithm’s negative consequences a difficult topic. He believes the responsibility for an algorithm lies with the programmer, the distributor and the user of the algorithm. He argues that in principle loyalty lies with a company, but that he could never work for a company if he did not agree with its intentions. I-4 argues that they sometimes have to find causal relationships, for which the information behind data is important. However, they can do a lot without that information, and ‘especially personal information is sensitive and often anonymized’.

He states that ‘it is good manner to be aware and follow the guidelines. However, even with most perfect guidelines it is inevitable to overcome biases. Society is biased and you will find these in data science everywhere. It’s a big problem as many take results for granted even if it’s not that hard to see bias is present.’ I-4 does not believe an oath ‘would make a big difference compared to a socially aware programmer’, as it is difficult to estimate what is done with code after a programmer has finished it. I-4 believes it is important not to be constricted by the rules of an oath, for security reasons. However, if he agreed with the guidelines of an oath, he would not mind signing it. He does not believe an oath could be realized, and thinks that if it were, the programmer’s landscape might become less open or divided, and research ‘might be hampered’. Nevertheless, if something had to be put in place, it could be described in legal documents.

I-5

I-5 has a simple answer to the question whether he considers negative consequences caused by his algorithms: yes, he does. The responsibility for negative effects caused by algorithms lies with the programmer, the distributor and the user of the algorithm. He believes that ‘if you, as a programmer, think something will cause problems for certain layers of society, you shouldn’t go through with this.’ According to I-5, the information behind the data generally ‘plays a big role’. He has heard of the Code of Ethics for programmers, but does not always manage to abide by its rules, and argues that the code is ‘mainly abided by universities’. He does not think a Hippocratic oath would make a difference in programming style, as he believes ‘the ball lies in the superiors and company’s court’. According to I-5, ‘the bigger companies wouldn’t like it’, and he therefore does not believe an oath could be realized. He states that an oath would be hard to regulate and that ‘the governmental organisations are far from ready to regulate or even control such an oath’. Asked what resources would be useful for an oath to work in practice, he mentions clear guidance and supervision. The topics the oath should cover would be similar to the rights we now have concerning privacy and freedom of speech.

I-6

I-6 believes the person who came up with the idea behind the algorithm in the first place (usually the person who is paying the programmer) should be responsible for its effects. He tries to focus on ‘positive consequences that would, on the whole, outweigh the negatives.’ Concerning his loyalty he remarks: ‘It’s always a tradeoff here. The benefit of a free market economy is that one can simply leave their employer should they see large conflicts between the employer’s desires and the needs of society as a whole.’ I-6 believes that ‘the information behind the data plays a huge role in the way we write code to deal with it, as this knowledge enables us to perform meaningful feature engineering.’ I-6 was not aware of the Code of Ethics for programmers, but considers it reasonable. He thinks a Hippocratic oath could make a significant difference in programmers’ programming style if it were backed up by European or US law and it were easy for programmers to be prosecuted. I-6 has a positive attitude towards the oath. For the oath to be realized, he believes it should be backed up by European or US law and have a clear set of guidelines. A resource to help facilitate the oath could be some kind of directory of programmers linked to GitHub/Bitbucket: ‘Any transgression that was reported would result in black marks on one’s Github/bitbucket profile (similar to points on a UK driving licence).’
