Tilburg University
A vulnerability analysis
Krupiy, Tetyana

Published in: Computer Law and Security Review
DOI: 10.1016/j.clsr.2020.105429
Publication date: 2020
Document version: Publisher's PDF, also known as Version of Record

Citation for published version (APA):
Krupiy, T. (2020). A vulnerability analysis: Theorising the impact of artificial intelligence decision-making processes on individuals, society and human diversity from a social justice perspective. Computer Law and Security Review, 38(September), 1-25. [105429]. https://doi.org/10.1016/j.clsr.2020.105429
A vulnerability analysis: Theorising the impact of artificial intelligence decision-making processes on individuals, society and human diversity from a social justice perspective

Tetyana (Tanya) Krupiy
Tilburg University, Montesquieu Building, Room 808, Prof. Cobbenhagenlaan 221, Tilburg, North Brabant, 5037 DE, the Netherlands
article info

Keywords: Artificial intelligence; Data science; Decision-making process; Social justice; Human diversity; Vulnerability theory; Feminism; Queer legal theory; Critical disability theory
abstract
The article examines a number of ways in which the use of artificial intelligence technologies to predict the performance of individuals and to reach decisions concerning the entitlement of individuals to positive decisions impacts individuals and society. It analyses the effects using a social justice lens. Particular attention is paid to the experiences of individuals who have historically experienced disadvantage and discrimination. The article uses the university admissions process where the university utilises a fully automated decision-making process to evaluate the capability or suitability of the candidate as a case study. The article posits that the artificial intelligence decision-making process should be viewed as an institution that reconfigures the relationships between individuals, and between individuals and institutions. Artificial intelligence decision-making processes have institutional elements embedded within them that result in their operation disadvantaging groups who have historically experienced discrimination. Depending on the manner in which an artificial intelligence decision-making process is designed, it can produce solidarity or segregation between groups in society. There is a potential for the operation of artificial intelligence decision-making processes to fail to reflect the lived experiences of individuals and as a result to undermine the protection of human diversity. Some of these effects are linked to the creation of an ableist culture and to the resurrection of eugenics-type discourses. It is concluded that one of the contexts in which human beings should reach decisions is where the decision involves representing and evaluating the capabilities of an individual. The legislature should respond accordingly by identifying contexts in which it is mandatory to employ human decision-makers and by enacting the relevant legislation.
© 2020 Tetyana (Tanya) Krupiy. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
E-mail address: t.krupiy@uvt.nl. https://doi.org/10.1016/j.clsr.2020.105429
Erica Curtis, a former admissions evaluator at Brown University in the United States, has noted that she evaluated each student's application consisting of standardised test scores, the transcript, the personal statement, and multiple supplemental essays within a twelve-minute time frame.1 Arguably, this is a very short period of time within which an admissions officer can evaluate the applicant's personality and academic qualities holistically.2 The time constraints create a possibility that the admissions officer may fail to detect the applicants' capabilities or how societal barriers diminished their ability to realise their potential. Another concern with human decision-making is that the decision-maker may act arbitrarily in the course of exercising discretion3 by putting different weight on comparable attributes that cannot be measured. What is more, an admissions officer could treat applicants on an unequal basis due to being influenced by conscious or unconscious biases.4 Advances in artificial intelligence (hereinafter AI) technology give rise to a discussion whether organisations should use AI systems to select applicants for admission to university.5 Technology companies market AI systems with a capability to predict the candidates' performance and to follow a decision-making procedure as possessing the capacity to eliminate bias and to improve decision-making.6 The computer science community is now working on embedding values, such as fairness, into the AI decision-making procedure.7 Daniel Greene and colleagues view the focus on achieving fairness by incorporating values into the design of the system as short-sighted.8 The attention on how to embed fairness into the decision-making procedure of a technical system side-lines the discussion of how the employment of AI decision-making processes impacts on achieving social goals, such as social justice and 'equitable human flourishing.'9 Virginia Eubanks's work underscores the importance of investigating how the use of AI decision-making processes impacts individuals and society. Her interviews with affected individuals who applied to access state benefits in the state of Indiana in the United States10 demonstrate that the employment of AI decision-making processes can lead to the deepening of inequality,11 to social sorting12 and to social division.13 The enquiry is particularly pertinent given the fact that not all sources report adverse outcomes. The British Universities and Colleges Admissions Service asserts that in its pilot project an algorithmic process selected the same pool of applicants to be admitted to universities as admissions officers; the organisation did not reveal the algorithm's design and operation procedure.14

The present paper explores some of the hitherto unresolved longstanding societal problems and new issues the employment of AI decision-making processes raises. It contributes to existing literature by proposing that an AI decision-making process should be understood as an institution. The AI decision-making process reconfigures relationships between individuals as well as between individuals and institutions. The paper examines some of the values and types of institutional arrangements the employment of AI decision-making processes embeds into society. This issue is significant. The Council of Europe Committee of Ministers stated that when data-driven technologies operate 'at scale' their operation prioritises certain values over others.15 The assertion of the Council of Europe Committee of Ministers that data-driven technologies reconfigure the environment in which individuals process information16 should be extended to encompass

Acknowledgments: I would like to thank Professor Corien Prins for her feedback on the draft version of this article. I am grateful to Atieno Samandari, Stu Marvel, Professor Martha Albertson Fineman, Professor Nicole Morris and Professor Paul Myers for their feedback on a presentation which formed the foundation for this article. Additionally, I wish to thank scholars who asked stimulating questions during the Ethics of Data Science: Addressing the Future Use and Misuse of Our Data Conference, the BIAS in Artificial Intelligence and Neuroscience Transdisciplinary Conference, and the Media & Space: The Regulation of Digital Platforms, New Media & Technologies Symposium where I presented my ongoing work.

1 Joel Butterly, '7 Admissions Officers Share the Things They Never Tell Applicants' (Insider Inc., 2018) <https://www.businessinsider.com/7-things-college-admissions-officers-wishevery-applicant-knew-2018-2?international=true&r=US&IR=T> accessed 26 June 2019
2 Ibid
3 Mark Bovens and Stavros Zouridis, 'From Street-Level to System-Level Bureaucracies: How Information and Communication Technology is Transforming Administrative Discretion and Constitutional Control' (2002) 62 Public Administration Review 174, 181
4 Josh Wood, '"The Wolf of Racial Bias": the Admissions Lawsuit Rocking Harvard' The Guardian (London 18 October 2018) <https://www.theguardian.com/education/2018/oct/18/harvard-affirmative-action-trial-asian-american-students> accessed 10 March 2019
5 Moritz Hardt, How Big Data is Unfair: Understanding Unintended Sources of Unfairness in Data Driven Decision-making (Medium Corporation 2014)
6 Ekta Dokania, 'Can AI Help Humans Overcome Bias?' The Seattle Globalist (Seattle 22 May 2019) <https://www.seattleglobalist.com/2019/05/22/can-ai-help-humans-overcome-bias/83957> accessed 3 March 2019
7 Aditya Krishna Menon and Robert C Williamson, 'The Cost of Fairness in Binary Classification' (2018) 81 Proceedings of Machine Learning Research 1, 10
8 Daniel Greene, Anna Lauren Hoffman and Luke Stark, Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning (The Proceedings of the 52nd Hawaii International Conference on System Sciences, Hawaii, 2019) 2122
9 Ibid
10 Virginia Eubanks, Automating Inequality: How High-tech Tools Profile, Police, and Punish the Poor (St Martin's Press 2018) 10
11 Ibid 204
12 Ibid 122
13 Ibid 196-97
14 Ben Jordan, Minimising the Risks of Unconscious Bias in University Admissions: 2017 Update on Progress (Universities and Colleges Admissions Service 2017) 11
15 Council of Europe Committee of Ministers, 'Declaration by the Committee of Ministers on the Manipulative Capabilities of Algorithmic Processes Decl(13/02/2019)1' (1337th meeting of the Ministers' Deputies, Council of Europe 2019) <https://search.coe.int/cm/pages/result_details.aspx?objectid=090000168092dd4b> 15 February 2019
the relationships individuals have with each other and with the institutions. The article examines some of the types of social transformations that the use of AI decision-making processes across domains will accentuate. While the design of AI decision-making processes will shape whether their operation gives rise to solidarity or segregation, there is a potential for these systems to adversely affect individuals who have historically experienced discrimination, disadvantage, disempowerment and marginalisation. The provisions in international human rights treaties prohibiting discrimination provide a non-exhaustive list of individuals who experience discrimination, exclusion, oppression, disempowerment and disadvantage.17 The characteristics such individuals possess include sex, gender identity, sexual orientation, age, ethnicity, race, colour, descent, language, religion, political or other opinion, national or social origin, property, birth and disability amongst others.18

The university admissions process serves as a case study for contextualising the discussion in the present paper. One of the reasons for using a case study for focusing the discussion is that an evaluation of any technology needs to be context specific. Jane Bailey and Valerie Steeves observe that technology is neither good nor bad.19 Everything depends on how developers design a technology, how the law regulates it and what values the developers embed into the technology.20 One may add to this observation that how individuals use the technology and for what purpose matters too. Clearly, it is possible to use AI technology to advance societal objectives. Bruce D Haynes and Sebastian Benthall propose that computer scientists should develop AI systems that detect racial segregation in society.21 This information can then be used to detect similar treatment of individuals.22 Since individuals have disparate opportunities as a result of living in segregated areas within the same city,23 the use of AI technologies to remedy segregation would contribute to the attainment of social justice.
This article focuses on uncovering a number of adverse impacts the use of an AI decision-making system is likely to have both on individuals and society from the perspective of advancing social justice. It is confined to scrutinising the context where educational institutions automate the process of the selection of students by employing AI decision-making processes. The selection criteria could include performance on examinations, extra-curricular activities, personal statements, samples of student work and so on. While the article uses examples from a number of countries, the findings can be extended to all universities that use a variety of criteria to judge the merit of individuals. The analysis does not include within its scope AI systems that allocate students to universities based on the students' preferences for a study programme without reference to the merit criteria. An example of the university admissions processes beyond the scope of this paper is that of the French state universities other than grandes écoles.24

17 Convention for the Protection of Human Rights and Fundamental Freedoms art 14; African Charter on Human and Peoples' Rights art 2; American Convention of Human Rights art 1; International Covenant on Civil and Political Rights (adopted 16 December 1966, entered into force 23 March 1976) 999 UNTS 171 art 26; International Convention on Economic, Social and Cultural Rights (adopted 16 December 1966, entered into force 3 January 1976) 993 UNTS 3 art 2.2
18 Ibid; Convention on the Rights of Persons with Disabilities (adopted 13 December 2006, entry into force 3 May 2008) 2515 UNTS 3 art 5(2); Identoba and Others v Georgia App No 73235/12 (ECtHR, 12 May 2015), para 96.
19 Jane Bailey and Valerie Steeves, 'Introduction: Cyber-Utopia? Getting Beyond the Binary Notion of Technology as Good or Bad for Girls' in Jane Bailey and Valerie Steeves (eds), eGirls, eCitizens: Putting Technology, Theory and Policy into Dialogue with Girls' and Young Women's Voices (University of Ottawa Press 2015) 5
20 Ibid
21 Sebastian Benthall and Bruce D Haynes, Racial Categories in Machine Learning (Association for Computing Machinery 2019) 9
22 Ibid 8
23 Ibid 7
The algorithm allocates places at French state universities to students according to the student's highest preference for a programme and according to whether a student lives within the district where the university is located; a random procedure is used to break the ties.25 For reasons of space it is beyond the scope of the present enquiry to consider the beneficial uses to which a variety of AI technologies may be put.

Section 1 maintains that it is more meaningful to talk of an AI decision-making process rather than an AI decision-making system. It defines the elements comprising an AI decision-making process for the purpose of situating the discussion. Section 2 introduces Martha Albertson Fineman's vulnerability theory26 as a theoretical framework for examining some of the ways in which the use of the AI decision-making processes will affect individuals and society from the perspective of social justice. Section 3 investigates some of the types of values that the operation of AI decision-making processes gives rise to. The discussion draws on the vulnerability theory to illustrate some of the ways in which these processes reconfigure social and institutional relationships in which individuals are embedded.27 It scrutinises how the employment of AI decision-making processes impacts on how society understands lived human experience and human diversity. It is concluded that one of the contexts in which it is desirable to preserve human decision-making processes is where the decision concerns evaluating the capability of the individuals for the purpose of determining their entitlement to resources. Automated decision-making should be avoided where the decision involves representing individuals in geometric space. The university admissions process is an example where the decision-maker evaluates the capabilities of individuals by assessing their skills and personal qualities. The present work is designed to be a starting point for further scholarly exploration of how the use of AI decision-making processes reconfigures societal arrangements and produces society-wide effects. Greater scholarly attention is needed to address the question in what contexts the legislature should require human decision-making.

24 Lucien Frys and Christian Staat, 'University Admission Practices - France' (Matching in Practice, 2016) <http://www.matching-in-practice.eu/university-admission-practices-france> accessed 1 August 2019
25 Ibid
26 Martha Albertson Fineman, 'Equality and Difference – the Restrained State' (2015) 66 Alabama Law Review 609, 614
27 Martha Albertson Fineman, 'Equality, Autonomy and the
1. A definition of an artificial intelligence decision-making process
An evaluation of AI-based decision-making processes necessitates understanding what AI is, how it functions and what elements comprise the decision-making process. The decision to frame the discussion in terms of an AI decision-making process as opposed to an AI decision-making system is intentional. One of the reasons for this choice is that AI technology is evolving. For this reason, it is more meaningful to focus on the types of procedures and processes that underlie present AI technologies rather than on how computer scientists design such systems. The evolving nature of AI systems is illustrated by the fact that multiple definitions of artificial intelligence exist and the definitions have been evolving over time.28 One of the reasons why it is difficult to define the term AI stems from the fact that it is unclear what society means by the term intelligent.29 According to John McCarthy, 'the problem is that we cannot characterise in general what kind of computational procedures we want to call intelligent.'30 Given that AI as a discipline is a social phenomenon shaped by individuals, Bao Sheng Loe and colleagues recommend that the definition of AI be continuously updated.31 A present common understanding of an AI system is that it autonomously learns from being exposed to its environment and makes changes to its model of the external environment based on the sensed changes in the environment.32

It is more fruitful to understand the term AI in terms of how a particular system is designed and operates rather than by reference to the term intelligence. Ig Snellen argues that intelligence is a metaphor in the context of technical systems because human beings do the thinking in the course of creating the system's architecture.33 Similarly, the Dutch Raad van State (the Council of State)34 maintains that it is misleading to call AI decision-making systems self-learning because they do not understand reality.35 The processes underlying the construction and operation of AI systems will be examined to show why the term intelligence should be understood as having a specialist meaning in the context of an AI system. The discussion will demonstrate that it is more fruitful to talk of an AI decision-making process rather than an AI decision-making system.

28 Bao Sheng Loe and others, The Facets of Artificial Intelligence: A Framework to Track the Evolution of AI (International Joint Conferences on Artificial Intelligence Organization, Stockholm, 2018) 5180
29 Max Vetzo, Janneke Gerards and Remco Nehmelman, Algoritmes en Grondrechten (Boom Juridisch 2018) 41
30 John McCarthy, 'What is Artificial Intelligence? Basic Questions' (Stanford University, 2007) <http://jmc.stanford.edu/artificial-intelligence/what-is-ai/index.html> accessed 13 May 2019
31 Loe and others, The Facets of Artificial Intelligence: A Framework to Track the Evolution of AI 5186
32 Vetzo, Gerards and Nehmelman, Algoritmes en Grondrechten 41
33 Ignatius Theodorus Maria Snellen, 'Het Automatiseren van Beschikkingen Bestuurskundig Beschouwd' in Hans Franken and others (eds), Beschikken en Automatiseren, Preadviezen Voor de Vereniging voor Administratief Recht (Samsom HD Tjeenk Willink 1993) 55, quoted in Beppie Margreet Alize van Eck, 'Geautomatiseerde Ketenbesluiten & Rechtsbescherming: Een Onderzoek Naar de Praktijk van Geautomatiseerde Ketenbesluiten Over een Financieel Belang in Relatie Tot Rechtsbescherming' (PhD thesis, Tilburg University 2018) 193
Computer scientists draw on data science techniques when creating AI systems.36 When one understands the design and operation of AI systems it emerges that these systems are not intelligent in the sense in which societies attribute the term intelligence to human beings. Computer scientists begin the development of an AI system by formulating a problem for which they aim to generate useful knowledge.37 Computer scientists then prepare data by converting it into a format an AI system can process.38 The end result the computer scientists strive to achieve will influence how they manipulate and label the data.39 The next step is to use the data to create a model of the external environment that captures the object of interest,40 such as a student's predicted examination grades. The model locates patterns in the data by detecting correlations between pieces of data.41 The model identifies what pieces of information are related to each other.42 In the unsupervised learning process computer scientists let the system search for patterns; the system allocates individuals into groups based on shared characteristics.43 In the supervised learning process the computer scientists formulate a criterion and the AI system sorts individuals into groups based on their likelihood of fulfilling that criterion.44 The model the system generates allows the user to predict that an individual belongs to a particular group of people with shared characteristics.45 The AI system predicts an individual's performance based on the performance of individuals whom it treats as having similar characteristics to the individual in question.46 It applies a decision-making procedure for determining whether an individual is entitled to a positive decision.47

34 The Council of State advises the government and parliament on legislation and governance. Raad van State, 'The Council of State' (Raad van State 2019) <https://www.raadvanstate.nl/talen/artikel> accessed 25 July 2019
35 Raad van State, Advies W04.18.0230/I: Ongevraagd Advies Over de Effecten van de Digitalisering Voor de Rechtsstatelijke Verhoudingen (Raad van State 2018) 9
36 Rosaria Silipo, 'What's in a Name? Artificial Intelligence or Data Science?' (BetaNews Inc, 2019) <https://betanews.com/2019/02/05/artificial-intelligence-or-data-science> accessed 14 May 2019
37 Foster Provost and Tom Fawcett, Data Science for Business (O'Reilly Media Inc 2013) 19
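The contrast described above between unsupervised grouping and supervised prediction can be sketched in a few lines of Python. This is a minimal illustration, not the procedure of any actual admissions system: the similarity threshold, the nearest-neighbour rule and all scores and labels are invented.

```python
# A minimal sketch contrasting the two learning set-ups described in the text.
# All data, thresholds and labels are invented for illustration.

def group_by_similarity(scores, threshold=10):
    """Unsupervised flavour: allocate individuals into groups of shared
    characteristics, here by clustering one-dimensional scores that lie
    within `threshold` of a group's first member."""
    groups = []
    for score in sorted(scores):
        if groups and score - groups[-1][0] <= threshold:
            groups[-1].append(score)
        else:
            groups.append([score])
    return groups

def predict_outcome(candidate, labelled_examples, k=3):
    """Supervised flavour: predict a candidate's outcome from the outcomes
    of the k most similar past individuals (nearest neighbours)."""
    ranked = sorted(labelled_examples, key=lambda ex: abs(ex[0] - candidate))
    nearest = [label for _, label in ranked[:k]]
    return max(set(nearest), key=nearest.count)

# Invented admission scores and past decisions.
past = [(52, "reject"), (58, "reject"), (61, "reject"),
        (74, "admit"), (79, "admit"), (85, "admit")]

print(group_by_similarity([52, 58, 61, 74, 79, 85]))
print(predict_outcome(77, past))  # outcome of the three most similar past applicants
```

The second function mirrors the passage above: the candidate receives whatever outcome the most similar past individuals received, without any understanding of why those individuals succeeded or failed.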
Presently, AI systems lack human intelligence. They do not have the capacity to understand what the correlation between the pieces of data means, whether the correlation has significance and how this correlation corresponds to phenomena in the world. When an AI system finds a correlation between two pieces of data, this does not signify that A causes B.48 In fact, the correlations the AI system detects can be spurious or accidental.49 For instance, there is a high correlation but not causation between ice-cream consumption and shark attacks; both tend to occur during warmer seasons.50 Another reason why AI systems are not intelligent stems from the fact that they cannot independently reflect on what their predictions signify and whether the decision-making procedure produces societally desirable outcomes.
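The ice-cream and shark-attack example lends itself to a toy calculation. In the sketch below, with all figures invented, both series are generated from monthly temperature alone, yet their correlation coefficient comes out close to 1: the correlation is real, but the causal link is not.

```python
# Toy illustration of a spurious correlation: both series are driven by a
# third variable (monthly temperature), not by each other. All numbers are
# invented for illustration.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

temperature   = [3, 5, 9, 14, 18, 22, 25, 24, 20, 14, 8, 4]  # °C per month
ice_cream     = [t * 10 + 20 for t in temperature]            # sales track warmth
shark_attacks = [max(0, t - 10) for t in temperature]         # swimmers track warmth

print(round(pearson(ice_cream, shark_attacks), 2))  # high, yet neither causes the other
```

A system that only detects the pattern has no way of knowing that a third variable, the season, explains both series.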
David Preiss and Robert Sternberg view human beings and technology as having a reciprocal influence.51 Technology transforms human cognitive skills52 as well as the understanding of what is human intelligence.53 Meanwhile, cultural context influences technologies.54 This observation is corroborated by the fact that as AI gains new capabilities to solve problems, society redefines the term human intelligence in order to differentiate between AI and human beings.55 The better approach is to acknowledge that any definition of human and machine intelligence is tentative. There needs to be an awareness of what definition of intelligence one chooses, why and with what consequences. Given that some computer scientists seek to replicate human intelligence in AI,56 there may come a time when the dividing line between 'artificial' and 'human' intelligence is less clear. Society should reflect on the social role the term intelligence has as it continues to refine the meaning of this term.
Currently, different definitions of algorithmic or automated decision systems exist. Definitions framing the subject matter broadly and by reference to an artificial intelligence decision-making process are preferable. The Australian Human Rights Commission defines 'AI-informed decision-making' as 'decision-making which relies wholly or in part on artificial intelligence.'57 By hingeing the definition on the term artificial intelligence and the notion of decision-making, the Australian Human Rights Commission conceives of AI-based decision-making in terms of the computer science techniques underpinning the decision-making process. Provided one gives the term artificial intelligence a holistic interpretation, the definition of AI-informed decision-making can be interpreted as covering all the stages involved in the decision-making processes. What is significant is that the Australian Human Rights Commission uses the term 'AI-informed decision-making' rather than the term decision-making system.

47 Sorelle A Friedler, Carlos Scheidegger and Suresh Venkatasubramanian, 'On the (Im)possibility of Fairness' (2016) 1609.07236v1 arXiv 1, 10
48 Eubanks, Automating Inequality: How High-tech Tools Profile, Police, and Punish the Poor 144
49 Ibid 144-45
50 Ibid 145
51 David Preiss and Robert Sternberg, 'Technologies for Working Intelligence' in David Preiss and Robert Sternberg (eds), Intelligence and Technology: the Impact of Tools on the Nature and Development of Human Abilities (Routledge 2005) 199
52 Ibid
53 Ibid 184-85
54 Ibid 199
55 Chris Smith, 'Introduction', The History of Artificial Intelligence (University of Washington 2006) 4
56 Ben Goertzel and Pei Wang, 'Introduction: Aspects of Artificial General Intelligence' in Ben Goertzel and Pei Wang (eds), Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms Proceedings of the AGI Workshop 2006 (IOS Press 2007) 1
In contrast, the Directive on Automated Decision-Making of the Government of Canada contains a narrower definition because it employs the term an automated decision-making system. It defines an automated decision system as

[A]ny technology that either assists or replaces the judgement of human decision-makers. These systems draw from fields like statistics, linguistics, and computer science, and use techniques such as rules-based systems, regression, predictive analytics, machine learning, deep learning, and neural nets.58
What is common to the definitions of the Australian Human Rights Commission and the Directive on Automated Decision-Making is that they discuss the respective roles of artificial intelligence technology and human beings in the decision-making process. What is more, the definitions centre on the types of computer science techniques involved. One of the reasons why the term automated decision-making system is narrower than the term decision-making process is because it excludes stages that bear on the outcome of the decision-making process but that take place prior to the actual construction of the system. In particular, the decision-making process begins when the computer scientist formulates a problem to be solved using the AI-driven procedure because this stage bears on the outcome of the decision-making process. Solon Barocas and Andrew D Selbst posit that computer scientists exercise subjectivity when they formulate a problem the machine should solve.59 They do this by defining the variable for which the machine makes a prediction.60 How computer scientists define this variable, such as a good employee, shapes what relationships between data the machine finds and therefore its predictions about the suitability of the applicant for the position.61 According to Reuben Binns, if the computer scientist uses a biased variable as a benchmark for the basis on which the AI decision-making process predicts future performance then the outcome will be biased.62 The term decision-making process is preferable because it can be defined to include the stage where the computer scientist formalises the problem to be solved using an AI-driven procedure.

57 Australian Human Rights Commission, 'Decision Making and Artificial Intelligence' (Australian Human Rights Commission, 2019) <https://tech.humanrights.gov.au/decision-making-and-artificial-intelligence> accessed 22 January 2020
58 Government of Canada, 'Directive on Automated Decision-Making' (Government of Canada 1 April 2019) <https://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=32592> accessed 23 January 2019
59 Solon Barocas and Andrew D Selbst, 'Big Data's Disparate Impact' (2016) 104 California Law Review 671
The Council of Europe Committee of Experts uses the term an AI decision-making system and in this respect mirrors the approach of the Directive on Automated Decision-Making. The Council of Europe Committee of Experts lists the types of tasks the AI decision-making system carries out and the processes that comprise the operation of such systems.63 In particular, it defines algorithmic systems as applications that 'perform one or more tasks such as gathering, combining, cleaning, sorting and classifying data, as well as selection, prioritisation, recommendation and decision-making.'64 The definition of the Council of Europe Committee of Experts is preferable to the definition the Canadian Directive on Automated Decision-Making offers. It focuses on the steps involved in constructing a model that the AI decision-making system uses for making predictions about future performance and the process of producing a decision in respect of an individual. Thus, there is an emphasis on the process leading to the decision rather than on the type of techniques the computer scientists use in order to program AI systems. This aspect makes the nature of AI decision-making systems explicit by listing the steps entailed in producing a decision. This definition makes it easier for the lawmakers and end users to debate the social consequences of using AI decision-making systems.
Another advantage of defining the term in terms of what elements comprise the AI decision-making process is that it provides an understanding of what the system does and how it achieves its objective. On the other hand, the term AI decision-making system is opaque. Little understanding may be gleaned from this term. Society uses the term system to refer to interdependent and interacting elements.65 The fact that AI technology utilises a combination of different elements reveals little about the nature of the tasks it performs and how it performs them. This stems from the fact that the term system draws attention to the physical architecture of the system and what components or elements comprise the system. What is core for understanding decision-making is the process through which one arrives at a decision rather than the fact that various interdependent stages are involved in the decision-making process.
A relevant consideration is that since societal understanding of AI is evolving66 it could be that in fifty years' time the definition of what AI is and how it operates will be very different. For instance, the architecture of AI could have a highly distributed form where it is unclear exactly what its elements are and how its different elements interplay. Given that computer scientists use the knowledge about the human brain as inspiration to create new AI techniques,67 AI could resemble the functioning of a human body very closely. Such developments could make it difficult to speak about the machine as a system.

62 Reuben Binns, 'Imagining Data, Between Laplace's Demon and the Rule of Succession' in Irina Baraliuc and others (eds), Being Profiled: Cogitas Ergo Sum (Amsterdam University Press 2018)
63 Committee of Experts on Human Rights Dimensions of Automated Data Processing and Different Forms of Artificial Intelligence, Draft Recommendation of the Committee of Ministers to Member States on the Human Rights Impacts of Algorithmic Systems (Council of Europe 2019) par 3
64 Ibid
65 Merriam-Webster Incorporated, 'System' (Merriam-Webster Incorporated, 2019) <https://www.merriam-webster.com/dictionary/system> accessed 22 January 2020
66 Loe and others, The Facets of Artificial Intelligence: A Framework to Track the Evolution of AI 5180
Of significance is that there is a parallel between the elements comprising human and the AI decision-making processes. The term AI decision-making system fails to capture this important element. This is because society does not conceive of human beings and their deliberation as a system. However, one can talk about the similarities in the decision-making process human beings engage in and the AI systems carry out because human beings develop the AI decision-making process. Given that human beings exercise their judgement in developing AI decision-making processes, it is not surprising that there can be a degree of similarity between human and AI decision-making processes.
The human decision-making process begins with the framing of the goal that the decision-making procedure is designed to achieve and with the identification of the criteria corresponding to entitlement to a positive decision. For instance, when a university sets up an admissions process, it formulates a set of goals. The goals could be to attract students who possess a particular skill set, who have particular estimated academic performance or who are representative of the population as a whole. The university could aim to mitigate the existence of societally embedded inequalities by placing emphasis on attracting candidates who experience social barriers. The goal the university sets will determine what criteria it chooses as a basis on which the officials should select the students. The admissions criteria will determine whether the human decision-maker considers only the student's grades or additional criteria, such as extracurricular activities, work experience and the applicant's personal circumstances. The decision-making criteria determine what qualities the decision-maker takes into account or ignores.
When computer scientists decide how to formulate the problem and what the AI process predicts, they select the goal for the decision-making process and the criteria that form the basis of the decision-making process.68 When formulating the problem to be solved the computer scientists are in a similar position to human decision-makers who are tasked with developing and applying a decision-making procedure. In both cases the goal to be achieved determines what criteria the decision-makers adopt for selecting a pool of candidates. The difference is that human decision-makers can choose criteria that can be expressed in both quantitative and qualitative terms. On the other hand, computer scientists can select only those criteria as indicators of a good candidate that can be expressed in quantitative terms.69 Examples of quantitative proxies are grades, rankings and number of outputs. Computer scientists need to find quantitative proxies if they want to capture qualitative characteristics.70 Qualitative characteristics refer to multidimensional and multitextured qualities, such as creativity and leadership skills.

67 Zhongzhi Shi, Advanced Artificial Intelligence (World Scientific Publishing 2011) 10 par 1
68 Barocas and Selbst, 'Big Data's Disparate Impact' 678-80
69 Friedler, Scheidegger and Venkatasubramanian, 'On the
When decision-makers use a grade to capture a qualitative characteristic, such as intelligence, they employ a proxy.71
For instance, the standardised admission test for graduate programmes, the Graduate Record Examination, measures analytical, quantitative and verbal skills.72 The scores of this test did not have accurate predictive capacity for how lecturers at Yale University rated the students' analytical, creative and practical skills on a graduate psychology programme.73 Thus, AI decision-making processes should be viewed as mapping proxies onto the model alongside actual characteristics. It is difficult to express qualitative characteristics, such as interpersonal and teamwork skills, numerically because they relate to how individuals interact with each other. While classmates could be asked to rank each other on the metric of a teamworking skill, such responses would be unreliable. Individuals can be influenced by their personal attitudes, by a desire to compete or by unconscious biases. They may lack the distance necessary to reflect on how all team members interacted. Qualitative descriptions of how an individual acted in particular circumstances provide more information about an individual's teamwork skills. The AI decision-making process should be understood as a more limited procedure than a human decision-making process by virtue of its limited capacity to capture qualitative data and the context behind this data.
There is another important difference between the human and the AI decision-making processes. Human decision-makers determine what facets of the person they consider through choosing the decision-making criteria. In contrast, AI decision-making processes create what Luke Stark calls the 'scalable subject.'74 The system purports to model the individual but in fact reflects correlations between characteristics present within a group that may not apply to the individual in question.75 Annamaria Carusi elaborates that the model represents the individual in a reductive way76 and that the approach to representation contains particular values.77 The lack of granular information can result in the model making generalisations that are unfair to the individual.78 To illustrate, since the model groups students based on past examination performance for the purpose of making a prediction about future results, it would group students who performed poorly irrespective of the reason for the low results. This may result in the AI system falsely predicting a low grade for a student whose performance in the past had been influenced by an illness but who recovered later.
70 Provost and Fawcett, Data Science for Business 339
71 Friedler, Scheidegger and Venkatasubramanian, 'On the (Im)possibility of Fairness' 3
72 Educational Testing Service, 'About the GRE General Test' (Educational Testing Service, 2019) <https://www.ets.org/gre/revised_general/about> accessed 22 July 2019
73 Robert J Sternberg and Wendy M Williams, Does the Graduate Record Examination Predict Meaningful Success in Psychology (Yale University 1994), quoted in Robert J Sternberg, 'Myths, Countermyths, and Truths about Intelligence' (1996) 25 Educational Researcher 11, 14
74 Luke Stark, 'Algorithmic Psychometrics and the Scalable Subject' (2018) 48 Social Studies of Science 204, 207
75 Ibid 213
76 Annamaria Carusi, 'Beyond Anonymity: Data as Representation in E-research Ethics' (2008) 37 International Journal of Internet Research Ethics 37, 61
77 Ibid 42
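The illustration above can be rendered as a minimal sketch (hypothetical data and a deliberately crude averaging model, not any system described in the sources): a predictor that sees only past scores assigns near-identical predictions to a student who was ill and has since recovered and to a genuinely struggling student.

```python
# Hypothetical applicants: the "note" field holds context the model never sees.
applicants = [
    {"name": "recovered student", "past_grades": [52, 49], "note": "ill during exams"},
    {"name": "struggling student", "past_grades": [51, 48], "note": ""},
    {"name": "strong student", "past_grades": [90, 93], "note": ""},
]

def predict_future_grade(past_grades):
    # The model's only feature is the past average; the contextual "note"
    # is never encoded, so it cannot influence the prediction.
    return sum(past_grades) / len(past_grades)

for a in applicants:
    print(a["name"], predict_future_grade(a["past_grades"]))
```

Both low-scoring students receive essentially the same low prediction; only a qualitative account of the illness could separate them.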
Patrick Allo comments that the model depicting the patterns in the data represents a proxy for what one is trying to predict rather than the actual state of the world.79 A model that predicted the student's performance on an examination that tested how well the student memorised the material is not a reliable proxy for the student's aptitude. While the model provides indirect information about an individual's memory capacity, it tells little about the student's aptitudes, such as problem-solving capacity and creativity. The differences between human and AI decision-making processes do not preclude considering the automation of decisions in terms of an AI decision-making process. In fact, the term process highlights the fact that human beings construct the decision-making procedure within the machine. What is more, this term makes it possible to demarcate at what point the decision-making commences and ends. Crucially, the definition that focuses on the process rather than the system prevents misrepresentation. The term artificial intelligence system can create a misleading impression that the system has capabilities that correspond to human intelligence or that the decision of computer scientists relating to the variable to be predicted had no impact on the outcome for the applicants.
The question is what elements comprise the AI decision-making process. The proposed definition does not cover situations where a human decision-maker uses the analytics the AI system generates as a sole or partial basis for reaching a decision. By combining definitions of the Council of Europe Committee of Experts and the Australian Human Rights Commission one can arrive at a suitable definition of an AI decision-making process. What needs to be included in addition is the element of human decision-making involved in formulating a problem to be solved and how to construct the system to achieve this goal. The AI decision-making process should be understood as starting with the computer scientist formulating the problem to be solved and the goals to be achieved. It encompasses the collection, cleaning, labelling, aggregation, analysis, manipulation and processing of data. These steps are carried out in order to predict future performance and to produce a decision affecting the right or entitlement of an individual to a positive decision. The definition includes the application of a template for determining whether an individual should be granted a positive decision. This definition is appropriate even though the process of creating a model of the environment as a basis for making a prediction is a separate stage from the decision-making procedure for determining an individual's entitlement to an affirmative decision.80
78 Frederick Schauer, Profiles, Probabilities and Stereotypes (Belknap Press 2006) 3
79 Patrick Allo, 'Mathematical Values and the Epistemology of
A broad definition encompassing all the elements that bear on how the algorithmic process measures and predicts future performance as well as how it produces a decision is needed. This approach ensures adequate protection of individuals. This approach leaves scope for the fact that a computer scientist could embed a variety of decision-making procedures for arriving at a decision. For instance, an AI decision-making process could allocate the resources to individuals with the highest score for predicted performance.81
It could take into account other considerations. To illustrate, Aditya Krishna Menon and Robert C Williamson designed an algorithmic decision-making procedure for an AI system that they argue achieves the best trade-off between accuracy and fairness.82
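The variety of embeddable decision procedures can be sketched as follows (hypothetical data; a generic illustration of a score-maximising rule versus a group-balanced rule, not Menon and Williamson's actual method):

```python
# Hypothetical applicant pool: (identifier, group, predicted performance score)
applicants = [
    ("a1", "g1", 0.95), ("a2", "g1", 0.90), ("a3", "g1", 0.85),
    ("a4", "g2", 0.80), ("a5", "g2", 0.75), ("a6", "g2", 0.70),
]

def select_by_score(pool, k):
    """Allocate purely to the k highest predicted scores."""
    return sorted(pool, key=lambda a: a[2], reverse=True)[:k]

def select_group_balanced(pool, k):
    """Alternative procedure: take the top k//2 candidates from each group."""
    chosen = []
    for group in ("g1", "g2"):
        members = [a for a in pool if a[1] == group]
        chosen += sorted(members, key=lambda a: a[2], reverse=True)[: k // 2]
    return chosen

print(select_by_score(applicants, 4))       # three of four places go to g1
print(select_group_balanced(applicants, 4)) # two places per group
```

The two procedures admit different people from the same predictions, which is the sense in which the designer's choice of procedure, not only the model, shapes outcomes.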
Why is an expansive definition for the term AI decision-making process necessary? The process of mapping the data onto the model and of predicting an individual's performance based on the model has influence on whether the individual will receive a positive decision. The proposed definition is designed to capture the fact that computer scientists make subjective decisions in the course of creating the architecture that enables the AI decision-making process to collect, aggregate and analyse data.83 The choices computer scientists make affect how the AI decision-making process produces decisions and what kind of decision an individual receives.84 Often, the decisions of computer scientists are hidden and reflect a particular understanding of the world.85 For example, computer scientists make assumptions when deciding how to represent a person in a model.86 Since individuals are multidimensional and cannot be described exhaustively, it is in theory possible to create an infinite number of snapshots of the individual depending on what combination of characteristics one inputs. For instance, a candidate can be described as a female with a score of eighty per cent for mathematics and a score of fifty per cent for English language. Alternatively, the same candidate can be designated as a female candidate who is enrolled in a school located in an underfunded district. She learns in an overcrowded classroom due to a shortage of English teachers. Depending on what characteristics one chooses as being relevant for the purpose of generating a model, one can get a different snapshot of the person. What is more, since inequalities are structurally embedded in society, groups will be represented in a distorted manner when mapped onto the model.87 Friedler, Scheidegger and Venkatasubramanian cite the fact that the verbal section of the standardised American university admission test SAT functions differently for African-American individuals.88 It follows that there is a discrepancy
80 Provost and Fawcett, Data Science for Business 25
81 Friedler, Scheidegger and Venkatasubramanian, 'On the (Im)possibility of Fairness' 3
82 Menon and Williamson, 'The Cost of Fairness in Binary Classification' 2
83 Friedler, Scheidegger and Venkatasubramanian, 'On the (Im)possibility of Fairness' 3
84 Ibid
85 Ibid
86 Ibid 6-7
87 Ibid 7
88 Ibid 8; Maria Veronica Santelices and Mark Wilson, 'Unfair Treatment? The Case of Freedle, the SAT, and the Standardisation
between the real world and how the AI decision-making process maps the world onto a geometrical space as part of generating a model of the world.89
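The point about competing snapshots can be made concrete with a small sketch (hypothetical attribute names and values): the same candidate yields different model inputs depending on which characteristics the designer deems relevant.

```python
# One candidate, described by more attributes than any single model would use.
candidate = {
    "gender": "female",
    "maths_score": 80,
    "english_score": 50,
    "district": "underfunded",
    "classroom": "overcrowded",
}

# Snapshot 1: academic scores only.
snapshot_scores = (candidate["gender"], candidate["maths_score"], candidate["english_score"])

# Snapshot 2: socioeconomic context instead of scores.
snapshot_context = (candidate["gender"], candidate["district"], candidate["classroom"])

print(snapshot_scores)   # ('female', 80, 50)
print(snapshot_context)  # ('female', 'underfunded', 'overcrowded')
```

Neither snapshot is the person; each is a choice about which facets of the person the model will see.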
Computer scientists can steer the data analysis process by framing for what metric the AI decision-making process formulates the predictions and by choosing a particular approach to data analysis.90 In the context of AI and human decision-making processes the choice of characteristics to denote merit for the purpose of selecting students shapes whether individuals have an equal opportunity to be admitted to university. Some selection criteria appear neutral but in fact hide the fact that the decision-making procedure creates admission barriers for children from poor socioeconomic backgrounds. For instance, the computer scientist can set a good candidate performance for admissions to a university in terms of excelling at playing a musical instrument, painting, playing professional sports or winning a dance contest. This approach to student admissions resembles how the highest ranked universities in the United States select students.91 Children have unequal access to participation in extracurricular activities. Anna Bull examines how complex factors lead to children from middle-class and upper-class families being more likely to play a musical instrument.92 The reasons include the cost of music lessons and the fact that the approach to teaching music reflects the nature of interactions prevalent in middle-class teaching settings.93 This example shows that the criteria computer scientists embed into the AI decision-making process and the metrics by which the program generates the prediction will shape whether individuals have equal access to university education. Accordingly, the AI decision-making process should be defined to incorporate all stages of system development and operation beginning with formulation of the problem to be solved and ending with a decision output.
2. Introducing the theoretical framework: the vulnerability theory
The values one holds, amongst others, will determine how one evaluates the AI decision-making process. For instance, those who value efficiency will ask questions such as whether the use of the AI decision-making process cuts costs or shortens
Approach to Differential Item Functioning' (2010) 80 Harvard Educational Review 106, 126
89 Friedler, Scheidegger and Venkatasubramanian, 'On the (Im)possibility of Fairness' 7
90 Felix Stalder, 'From Inter-subjectivity to Multi-subjectivity: Knowledge Claims and the Digital Condition' in Irina Baraliuc and others (eds), Being Profiled: Cogitas Ergo Sum (Amsterdam University Press 2018) 136
91 Ilana Kowarski, 'How Colleges Weigh Applicants' Extracurricular Activities' (US News, 2018) <https://www.usnews.com/education/best-colleges/articles/2018-10-25/how-colleges-weigh-applicants-extracurricular-activities> accessed 14 May 2019
92 Anna Bull, 'Reproducing Class? Classical Music Education and Inequality' (Discover Society, 2014) <https://discoversociety.org/2014/11/04/reproducing-class-classical-music-education-and-inequality/> accessed 14 May 2019
the deliberation time.94 Economists define efficiency as 'a situation where each good is produced at the minimum cost and where individual people and firms get the maximum benefit from the use of the resources.'95 Equity is a different type of value in comparison to efficiency.96 Equity concentrates on whether there is fairness and justice.97 Individuals who value equity will ask different types of questions than economists when assessing the desirability of using AI decision-making processes. How one evaluates AI decision-making processes will differ depending on how one defines fairness. Different people have a different understanding of what constitutes fairness.98 Additionally, there is a difference between how scholars99 and the general population define fairness.100 In discussing fairness it is important to acknowledge the value of pluralism and cultural diversity. Respect for individuals is contingent on a recognition of their opinions and value systems. The representation of a plurality of views is conducive to informed discussions about what constitutes a good life. It widens the array of arguments and introduces new vistas from which to assess propositions.
Roger Brownsword argues that to gain legitimacy regulators should adopt instruments that capture common values and concerns while leaving room for local difference.101 The present article uses the vulnerability theory as a lens for evaluating AI decision-making processes because the theory captures how citizens conceive of core components of fairness and justice. Martha Albertson Fineman formulated 'vulnerability theory' as 'an alternative to theories of social justice and responsibility that focus on achieving formal equality.'102 The term 'social justice' focuses on the position of many individuals within a society.103 Traditionally, advocates of social justice called for a just distribution of resources and of the fruit of economic production amongst the individuals.104 What differentiates Fineman's approach to understanding how to advance social justice is that she examines the impact of legally constructed social institutions and relationships on the lives of individuals.105 The vulnerability theory reflects how citizens understand fairness by focusing on the way in which the state constructs relationships between individuals and institutions.106 A study found that individuals use the terms fairness and justice interchangeably to refer to violations of equity and equality.107 Individuals understand fairness to include both how individuals are positioned in relation to other individuals in relationships as well as how individuals are positioned in relation to institutions in society.108 The employment of the vulnerability theory allows one to assess what impact the use of AI decision-making processes has on individuals and society. The present article evaluates a number of ways in which the use of the AI decision-making processes impacts on the individuals and society from the perspective of social justice. It is beyond the scope of this work to evaluate the AI decision-making process from the vantage point of all theories of justice and fairness across cultures. Neither is it possible to examine in a comprehensive manner all the ways in which the cumulative use of the AI decision-making process in different domains will transform society.
94 Vishal Marria, The Future of Artificial Intelligence in the Workplace (Forbes Media LLC 2019)
95 John Sloman, Economics (6th edn, Prentice Hall 2006) 9
96 Ibid 11
97 Ibid
98 Ibid
99 Norman J Finkel, Rom Harré and José-Luis Rodriguez Lopez, 'Commonsense Morality Across Cultures: Notions of Fairness, Justice, Honor and Equity' (2001) 3 Discourse Studies 5, 5
100 Ibid 21
101 Roger Brownsword, 'Regulatory Cosmopolitanism: Clubs, Commons, and Questions of Coherence' (2010) 018/2010 TILT Law & Technology Working Paper 2, 4
102 Nina A Kohn, 'Vulnerability Theory and the Role of Government' (2014) 26 Yale Journal of Law & Feminism 2, 6
103 Martha Albertson Fineman, Vulnerability and Social Justice (Emory University 2019) 1
104 United Nations, Social Justice in an Open World: The Role of the United Nations (United Nations 2006) 7
105 Fineman, Vulnerability and Social Justice 2
106 Ibid
The use of the vulnerability theory approach avoids drawing an arbitrary distinction between the private and the public domains.109 What becomes relevant for the analysis is how the employment of the AI decision-making processes affects the subject of the decision-making procedure and society rather than whether the inequity arose from the relationship with the state or with other individuals. In contrast, scholarly writings in political science and philosophy distinguish between public and private domains to demarcate when the state can intervene to regulate.110 This distinction is arguably apparent from how some scholars contrast the terms justice and fairness.111 John Rawls for example defines justice as relating to the institutional arrangements and practices that define rights, duties and offices.112 He defines fairness as relating to the rights of persons arising in the course of their interaction with one another on an equal basis.113 Individuals would agree on rules to ensure that they did not feel they were being taken advantage of in interpersonal relationships.114 The drawing of a distinction between the relationships individuals have with each other and with the institutions for the purpose of assessing the impact of AI decision-making processes is undesirable. Fairness and justice are open-ended terms that society uses as heuristic devices to redress inequities. The content of the terms justice and fairness can be given different meanings115 depending on the context to which individuals apply these terms116 and depending on the society's value system.117 If one is to address inequities comprehensively, one should focus on all sources from which the inequities may arise.
107 Finkel, Harré and Lopez, 'Commonsense Morality Across Cultures: Notions of Fairness, Justice, Honor and Equity' 21
108 Ibid
109 Martha Albertson Fineman, 'Injury in the Unresponsive State: Writing the Vulnerable Subject into Neo-Liberal Legal Culture' in Anne Bloom, David M Engel and Michael McCann (eds), Injury and Injustice: The Cultural Politics of Harm and Redress (Cambridge University Press 2018) 19
110 See for instance the work of Max Weber, Isaiah Berlin, Jürgen Habermas, Richard Rorty, Michael Walzer and John Stuart Mill. Raymond Geuss, Public Goods, Private Goods (Princeton University Press 2003) 10
111 Finkel, Harré and Lopez, 'Commonsense Morality Across Cultures: Notions of Fairness, Justice, Honor and Equity' 5
112 John Rawls, 'Justice as Fairness' (1958) 67 The Philosophical Review 164, 164
113 Ibid 178
114 Ibid
115 Manuel Velasquez and Claire Andre, 'Justice and Fairness'
The vulnerability theory demonstrates why it is important to focus both on how the operation of the AI decision-making process constructs relationships between individuals, and between individuals and institutions in analysing the impact of these systems from the perspective of social justice. According to the vulnerability theory, individuals are situated in different economic, social, cultural and institutional relationships.118 These relationships cannot be clearly demarcated as being either private or public.119 The position of the individual within these relationships determines whether the institutional arrangements create opportunities or impediments.120 These institutions form a system that determines the resilience of the individual.121 The term resilience refers to the individual's ability to recover from life's setbacks and to take advantage of opportunities.122 There are five types of resources that the institutions provide that are crucial for human flourishing.123 First, material goods determine the individuals' quality of life and allow them to accumulate additional resources.124 Second, individuals derive support from social networks.125 Third, human assets, such as education and employment, place the individuals in a position to develop their capabilities.126 Fourth, individuals benefit from having access to existential and aesthetic resources, such as religion, culture and art.127 Fifth, individuals need ecological assets, such as the natural environment, to maintain physical well-being.128 Access to interpersonal resources of support, such as family129 and social networks, constitutes relationships that one typically views as private. In practice, such private relationships cannot be separated from relationships with the state.130 For instance, laws prohibiting harassment, bullying and discrimination play a crucial role in creating inclusive spaces where individuals can engage in interpersonal relationships.
Consequently, it is artificial to distinguish between justice and fairness based on whether the relationship is public or private in nature.
116 Michael Adler, 'Fairness in Context' (2006) 33 Journal of Law and Society 615, 638
117 Kenneth A Rasinski and Leslie A Scott, 'Culture, Values, and Beliefs About Economic Justice' (1990) 4 Social Justice Research 307, 320
118 Fineman, 'Equality, Autonomy and the Vulnerable Subject in Law and Politics' 22
119 Fineman, 'Injury in the Unresponsive State: Writing the Vulnerable Subject into Neo-Liberal Legal Culture' 19
120 Fineman, 'Equality, Autonomy and the Vulnerable Subject in Law and Politics' 23
121 Ibid 22
122 Fineman, 'Equality and Difference – the Restrained State' 622-23
123 Fineman, 'Equality, Autonomy and the Vulnerable Subject in Law and Politics' 22-23
124 Ibid 22
125 Ibid 23
126 Ibid
127 Ibid
128 Ibid
129 Margaret Thornton, 'The Cartography of Public and Private' in Margaret Thornton (ed), Public and Private: Feminist Legal Debates (Oxford University Press 1995) 2
130 Fineman, 'Injury in the Unresponsive State: Writing the Vulnerable Subject into Neo-Liberal Legal Culture' 19
For the purpose of this article, it appears desirable to use the vulnerability theory rather than theories of fairness, which focus on the treatment of an individual, for the purpose of evaluating the AI decision-making processes. The school allocation system in New York City is a case study that illustrates how a focus on the impact of the employment of the AI decision-making process on the individual can result in failing to detect both sources of social injustice and unfairness for the individual. Currently, the authorities in New York City use an algorithm to place children into high schools.131 Children provide a list of twelve school choices to the authorities.132 The algorithm allocates children to a school by selecting a pool of candidates with the highest grade.133 This means that in practice each school will have students within a particular grade range. Schools that are in the highest demand will have a pool of students with top grades. Schools with a lesser demand will have a pool of students who have grades in the mid or low range. This approach to using an algorithmic process to allocate children to schools results in segregation. Children with high grades study in different buildings and are geographically separated from the children with low grades. This finding should be viewed against the backdrop that schools worldwide have racial and socioeconomic segregation.134 James A Allen expresses a broader concern that the use of AI decision-making processes perpetuates and reinforces existing segregation.135
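The segregating effect of the simplified allocation described above can be sketched as follows (hypothetical students and capacities; the actual New York mechanism is more elaborate than this grade-cutoff caricature):

```python
# Schools in descending order of demand, each with two hypothetical places.
schools = [("HighDemand", 2), ("MidDemand", 2), ("LowDemand", 2)]
students = [("S1", 95), ("S2", 91), ("S3", 85), ("S4", 78), ("S5", 70), ("S6", 62)]

# Each school in turn admits the highest-graded students still unplaced,
# so the pool is partitioned into non-overlapping grade bands.
placements = {}
pool = sorted(students, key=lambda s: s[1], reverse=True)
for name, capacity in schools:
    placements[name] = pool[:capacity]
    pool = pool[capacity:]

for name, admitted in placements.items():
    print(name, admitted)
```

Every school ends up with a contiguous grade band: the high-demand school takes grades 91-95 and the low-demand school grades 62-70, reproducing the geographical separation by grade that the article describes.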
The focus on whether the selection procedure the AI decision-making process utilised is fair for a particular student in terms of merit occludes the wider social justice concerns. When thirteen-year-old Jimmy (not his real name) voiced his opposition to being rejected by five of his top preference schools, he was told that his grade of eighty-five did not qualify him for admission.136 The cut-off point for admission to those schools was a grade of ninety.137 A focus on whether Jimmy performed better in comparison to another student precludes a more in-depth enquiry. The grade has an appearance of being an objective marker that measures the students'
131 Alvin Roth, 'Why New York City's High School Admissions Process Only Works Most of the Time' (Chalkbeat, 2015) <https://www.chalkbeat.org/posts/ny/2015/07/02/why-new-york-citys-high-school-admissions-process-only-works-most-of-the-time> accessed 15 May 2019
132 Ibid
133 Ibid
134 Thomas Toch, 'The Lottery That's Revolutionizing D.C. Schools' (The Washington Post, 2019) <https://www.washingtonpost.com/news/magazine/wp/2019/03/20/feature/the-lottery-thats-revolutionizing-ddd-c-schools> accessed 15 May 2019
135 James A Allen, 'The Color of Algorithms: An Analysis and Proposed Research Agenda for Deterring Algorithmic Redlining' (2019) 46(2) Fordham Urban Law Journal 219, 234
136 Alvin Roth, 'Why New York City's High School Admissions Process Only Works Most of the Time' (Chalkbeat, 2015) <https://www.chalkbeat.org/posts/ny/2015/07/02/why-new-york-citys-high-school-admissions-process-only-works-most-of-the-time> accessed 15 May 2019