
A thesis submitted to the University of Amsterdam for the degree of MA Philosophy.

William James Victor Gopal

Poietic Responsibilities, the Infosphere, and the Epistemic Environment:

Conceptualising the Harms of Misinformation

(Image produced by DALL-E – a generative AI. Input terms were the title of this thesis).

Word Count = 24,926

Supervised by: Federica Russo

Second Reader: Aybüke Özgün


Acknowledgements:

Writing a thesis is stereotypically a thankless task; late nights in the library, a ream of skipped plans because you’ve “really got to get this done”, little headspace to think of much else, all for sixty or so pages of dense academic prose only a handful of people will read. Thankfully, this has not been my experience. The researching and writing of this thesis provided me with some of the most intellectually, socially, and emotionally rewarding months of my life. This, of course, is all down to the people I have been blessed enough to have surrounding me during this time, all of whom are exemplars of wisdom, care, joy, and solidarity. First and foremost, I would like to extend my gratitude to Federica Russo for supervising this thesis and for the wisdom, support, and guidance you have provided throughout the process - I cannot sing your praises highly enough nor articulate just how much of an impact you have had upon my philosophical development. Thank you. I would also like to extend my thanks to the close friendships forged during this master’s degree - both within the halls of the University and in many cafés (though, let’s keep it real – I’m talking about Staalmeesters). Alessandro, Alex, Betsy, Jan, Juliane, Julius, Leonora, Martijn, Miriam, Nich, Yvo, and Zina – from the bottom of my heart, thank you. I hope to continue learning from each of you in the future and will forever cherish our time together. Annareet, thank you for everything – I’m at a loss for words. Mum, Dad – none of this would have been possible without you both – thank you for sharing your virtues of curiosity, humility, and sincerity with me; I am truly blessed to be your son. Finally, this thesis is dedicated to the memory of my grandmother, Lulu Gopal - I would have loved to have been able to sit and makan with you, hearing which parts of this thesis you would, of course, disagree with.


Abstract:

In this thesis, I conceptualise the harms of misinformation in two ways: first, as an injury to the epistemic environment and, second, as engendering entropy within the infosphere. The pars destruens of this thesis is that analytic socio-ecological epistemology cannot fully capture the harms and actors involved in misinformation practices. It fails on two grounds: (i) its anthropocentricity and (ii) its overly synchronic approach to knowledge practices. The pars construens of this thesis is the suggestion that recognising and developing our poietic responsibilities can aid in understanding agents' responsibilities regarding misinformation. In making this claim, I carve out a set of responsibilities that agents possess in virtue of having poietic capacities. To argue for these claims, this thesis is structured as follows. In §1, I introduce the social context of misinformation and offer empirical grounding for a philosophical analysis of misinformation. In §2, I provide a unified characterisation of "epistemic environment" – a nascent concept in analytic social epistemology deployed to understand the harms of misinformation. I suggest that such an approach is flawed due to its synchronicity and anthropocentrism. In §3, I provide motivation for adopting an approach informed by the philosophy of information when analysing the harms and responsibilities associated with misinformation. In doing so, I attempt to fix the shortcomings of "epistemic environments" by noting that an informational approach can aid in understanding epistemic practices as being performed by non-human agents and as diachronic. In the latter half of §3, I introduce the notion of level of abstraction (LoA) to aid in analysing misinformation's harm. I build on the view within the philosophy of information that misinformation is false information mistakenly considered to be true by informational agents due to an ill-adopted LoA. I do so by explaining why agents may adopt this LoA. Then, I note that producing and sharing misinformation incurs a moral and epistemic harm when analysed from an informational approach. Finally, in §4, I introduce the notion of poietic responsibilities. I suggest that poietic agents, in virtue of their creative capacities, possess a distinct class of responsibilities that can aid in understanding misinformation.

Table of Contents

Acknowledgements
Abstract
§1: Introduction
§1.1: Social Context & Empirical Grounding
§1.1.1: Key Definitions
§1.2: Philosophical Approaches to the Harms of Misinformation: Analytic Social Epistemology and the Philosophy of Information
§1.3: Research Question, Methodology, and Structure
§2: Analytic Socio-Ecological-Epistemological Analyses of Misinformation: The Corruption of an Epistemic Environment
§2.1: Existing Definitions of "Epistemic Environment"
§2.1.1: A Unified Account of Epistemic Environment
§2.2: The Corruption of an Epistemic Environment – Wherein Lies the Harm?
§2.3: Who is Responsible for the Corruption of the Epistemic Environment?
§2.4: Limitations of Analytic Socio-Ecological Epistemology
§2.4.1: Anthropocentrism and the Instrumentalist Bias in Analytic Socio-Ecological Epistemology
§2.4.2: The Synchronicity of Analytic Social Epistemology: An Idealisation of Social Epistemic Practices
§3: (Mis)Information Ethics: The Infosphere, Poiesis, and Entropy
§3.1: Information Ethics and the Infosphere
§3.2: Poiesis, Poietic Agents, and Constructionist Ethics
§3.3: Introducing Levels of Abstraction
§3.4: Misinformation: A Mistake in Level of Abstraction and a Source of Entropy in the Infosphere
§4: Poietic Responsibility
§4.1: Situating Poietic Responsibilities within Information Ethics: With Great (Poietic) Power, Comes Great (Poietic) Responsibility
§4.1.1: Introducing the Structure of Poietic Responsibilities
§4.2: The Poietic Responsibilities
§4.2.1: Responsibility of Care
§4.2.2: Care and Misinformation
§4.2.1: Responsibility of Future Use and Misinformation
§4.2.3: Responsibility of Artefactual Autonomy: Against Automating Entropy Within the Infosphere
§4.2.4: Artefactual Autonomy and Misinformation
§4.2.5: Responsibility of Process
§4.2.6: Process and Misinformation
§4.3: Creative Responsibilities: Poietic Responsibility or Responsible Innovation?
§5: Conclusion
Bibliography
Appendix – Tables & Figures


§1: Introduction

§1.1: Social Context & Empirical Grounding

The object of analysis of this thesis is misinformation and the variety of actors involved in its production and dissemination. I provide a conceptualisation of the harms of misinformation from the frameworks of analytic social epistemology and the philosophy of information. Many of us find ourselves in the unique position of having lived through multiple "once-in-a-lifetime" crises in the past decade - the dissolution of European unity (Brexit), the re-emergence of far-right, authoritarian, populist regimes (Trump, Bolsonaro, Modi, Duterte, and Johnson), the COVID-19 Pandemic and its associated global recession, unprecedented levels of political polarisation, and the Russian invasion of Ukraine - all amidst the backdrop of the climate crisis. Considering this, it is not an exaggeration to claim that we live in an age of crises. Moreover, an additional crisis runs throughout the others, providing a bedrock for further troubles. Albeit almost glib to remark, we live in an epistemic crisis characterised by 'fake news', 'alternative facts', a proliferation of conspiracy theories, and an atmosphere of 'post-truth'. Our epistemic climate is one characterised by the rapid spread of mis- and disinformation. To begin, let me first substantiate the sweeping narrative just presented.

During the Brexit campaign, the British electorate was regularly presented with the false claim that the UK was sending £350 million to the EU every week and that, if Britain were to leave the EU, this money would be spent on the National Health Service (NHS) (Henley, 2016). Misinformation of this kind had a definitive impact on the British public's "decision" to leave the EU (Grice, 2017) and points to a broader concern that misinformation poses a severe threat to due democratic processes. Similar worries were expressed regarding Trump's victory in the 2016 US Presidential Election (Allcott & Gentzkow, 2016). Misinformation can also have deadly effects. South African President Thabo Mbeki suggested that garlic and lemon juice were a viable and preferable HIV treatment in comparison to Western antiretroviral drugs, resulting in over 300,000 preventable deaths (McIntyre, 2018, 10). During the COVID-19 Pandemic, the World Health Organisation declared that we are amid an 'infodemic' (an overabundance of (mis)information about COVID-19, which impacts correct belief formation), leading to significant effects on the uptake of, and adherence to, necessary public health measures (Tangcharoensathien et al., 2020, 2; WHO, 2021).

It is thus evident that misinformation can significantly harm our politics and democracies (Woolley & Howard, 2018; Pomerantsev, 2019; Woolley, 2020; Colomina et al., 2021), interpersonal relationships (Watt, 2020), and public health (WHO, 2021).

Various social factors contributing to this epistemic crisis have been noted, such as the decline of traditional media (McIntyre, 2019, 64-87), an erosion of trust in experts and institutions (Nichols, 2017), and group polarisation (Klintman, 2019). Whilst each has a degree of truth, and a deep understanding of this phenomenon must address the relations between these factors, I focus predominantly on the role of digital technologies in producing and disseminating misinformation. As Dutilh Novaes and de Ridder (2021, 174) note, there is nothing new about the practices and presence of misinformation within our political, personal, and scientific lives. What is new is the emergence of digital technologies that provide the means and infrastructure required for the rapid production and dissemination of misinformation. This is illustrated, though not exclusively, by three kinds of technology analysed in this thesis: (i) social bots, (ii) social networking sites, and (iii) generative AI.


Bots are entities which operate within a digital space controlled by software rather than humans (Ferrara, 2020, 14). This software can include 'more or less sophisticated' versions of artificial intelligence (Ferrara, 2020, 14). For the purposes of this thesis, I will not take a stance on the debate over how to define artificial intelligence. Social bots (hereafter bots) are those that 'automatically produce content and interact with humans on social media' under the guise of being natural (human) users (Gorwa & Guilbeault, 2020, 232). Note that there is nothing inherently tying bots to misinformation; there can be benign bots. For example, @racooonshourly is a bot on Twitter which posts a picture of a raccoon every hour. Bots can function in coordination with each other, forming bot-networks which can spread misinformation rapidly by reposting, liking, or mentioning each other, thereby increasing the likelihood of human users doing the same, thus "gaming" a social-networking site's recommendation algorithm (Woolley, 2020, 89; Himelein-Wachowiak, 2021, 4). Bots, and their associated networks, played a significant role in spreading misinformation during the Brexit campaign, the 2016 US Presidential Election, the 2017 French Presidential Election, and the 2018 US Midterms, as well as in promoting COVID-19 conspiracy theories (Grice, 2017; Benkler et al., 2018; Ferrara, 2019; 2020). Regarding the scope of the issue, it has been estimated that 33% of the top sharers of low-credibility content are bots (Shao et al., 2018, 3).1

Moreover, as the content produced by bots is often indistinguishable from that produced by human users (Alfari et al., 2016, 333) and platforms' automated detection of bot-generated content is semi-reliable at best (Søe, 2018, 311-312), the infrastructure of social networking sites enables bot networks to spread misinformation. There are additional infrastructural features which contribute to the spread of misinformation. Suggestion algorithms favour content which captures a user's attention (Eyal, 2014, 14; Williams, 2018, 33) and aligns with and reinforces users' political, social, and moral values (Pariser, 2011; Nguyen, 2020), thus allowing misinformation to spread faster than accurate information online (Vosoughi et al., 2018). Therefore, social networking sites have enabled the rapid and broad dissemination of misinformation online (Giansiracusa, 2021; McIntyre, 2019, 105-108). Generative AI, such as GPT-3, a natural-language generator (NLG) (Heaven, 2020a; Knight, 2021), and deepfake algorithms (Smith & Manstead, 2020), further contribute to the creation of misinformation as they can rapidly and automatically generate content that the average human user cannot distinguish from human-produced content (Giansiracusa, 2021, 17-66). For example, during the Russian invasion of Ukraine, deepfake videos of Zelensky urging Ukrainians to lay down arms circulated on Facebook and YouTube (Milmo & Sauer, 2022).2

Abstracting from the above case studies, I take the following schema to be a paradigmatic case of producing and sharing misinformation online. For the sake of simplicity and continuity, I restrict my analysis to Twitter, though nothing philosophical hinges upon this. X produces a false news report, M, stating that-p, which is contrary to the accepted, and true, claim that not-p. X produces M by either using generative AI technology or independently. X shares M online by posting it on Twitter. M is widely shared by users (humans and bots). M eventually gains enough traction that a collective distributor of claims (e.g., a newspaper) shares M. From this schema, one can identify the following actors in the production and dissemination of misinformation (Table 1).

1 This is a rough estimation, as knowing how many active bots exist runs into issues due to the lack of a ground truth (Martini et al., 2021, 2).

2 For a technical overview of how generative AI produces misinformation, see Giansiracusa (2021, 30-35).

Actor: Definition:

(i)(a) Human 1: Producer of the false news report (inputs into GPT-3).
(i)(b) Bot: Built upon/uses the GPT-3 engine (Heaven, 2020b).
(ii) GPT-3: An NLG which generates the text used in the false news report.
(iii) OpenAI: Group actor developing GPT-3.
(iv) Bot network: Influential in gaming the algorithm to make the news report a "Trending Topic".
(v) Bot developers: Humans who created the code used for (i)(b), (ii), (iv), (v), and (ix).
(vi) Human(s) 2, 3, 4, …, n: Individuals interacting with the false story (likes, re-tweets, scrolling past, etc.).
(vii) Poor-quality newspaper: Group agent which mistakenly shares the false report.
(viii) Twitter: Group actor responsible for creating the platform.
(ix) Twitter algorithm: An algorithm designed to show and recommend content to users.

(Table 1).
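Read as a pipeline, the schema and Table 1 describe an amplification loop: bots seed shares, the recommendation algorithm boosts exposure, and human sharing probability rises until a trending threshold attracts a collective distributor. The following Python simulation is a toy illustration of that loop only; every population size, probability, and threshold in it is an invented assumption for illustration, not an empirical estimate.

```python
import random

random.seed(42)

N_HUMANS, N_BOTS = 1000, 50    # assumed population sizes (illustrative only)
BASE_SHARE_PROB = 0.02         # assumed chance a human shares M unprompted
BOOST_PER_EXPOSURE = 0.01      # assumed extra chance per amplified exposure
TRENDING_THRESHOLD = 60        # assumed shares needed before a "newspaper" picks M up

def run_cascade(with_bots: bool) -> int:
    """Count total shares of a false report M after one round of amplification."""
    bot_shares = N_BOTS if with_bots else 0   # bots (i)(b)/(iv) re-post automatically
    shares = bot_shares
    for _ in range(N_HUMANS):
        # Bot activity "games" the recommendation algorithm (ix): more exposures,
        # higher probability that each human user (vi) shares M as well.
        p = BASE_SHARE_PROB + BOOST_PER_EXPOSURE * (bot_shares / 10)
        if random.random() < min(p, 1.0):
            shares += 1
    return shares

for with_bots in (False, True):
    total = run_cascade(with_bots)
    print(f"bots={with_bots}: {total} shares, trending={total >= TRENDING_THRESHOLD}")
```

On these assumed numbers, the report crosses the trending threshold only when bot amplification is switched on, mirroring the schema's path from the bot network (iv) to the collective distributor (vii).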

§1.1.1: Key Definitions

Many of the terms used throughout this thesis have contested definitions with rich areas of philosophical debate. For brevity's sake, I will adopt the following definitions. Artefacts are understood to be human-made objects, either material or immaterial (Coeckelbergh, 2019, 5). For example, books, word-processing software, and bots are artefacts. Technology is understood broadly, both as a device that is used (a screwdriver) and as a system (a social networking site) (Coeckelbergh, 2019, 5). Information and communication technologies (ITs) allow users to interact in a digital space. The term "onlife" denotes the idea that the ubiquity of ITs has blurred the distinction between online and offline: we are neither within a 'digital-online' world nor an 'analogue-offline' world, but 'onlife' (The Onlife Initiative, 2015, 8). Semantic information is defined as follows. X is semantic information iff:

(i) x consists of data (understood as a lack of uniformity between two variables);
(ii) the data are well-formed (formed correctly according to a syntax which 'determines the form, construction, composition, or structure of something' (Floridi, 2011, 84));
(iii) the well-formed data are meaningful (the data must comply with the meanings of a given system);
(iv) x is true (the veridicality thesis) (Floridi, 2011, 104).

Note here that the broad definition of syntax and meaning entails that semantic information is not only linguistic; a map or infographic also counts as semantic information. Whilst semantic information is the focus of this thesis, Floridi (2010, 60-87) also provides definitions of biological and physical information, which have a bearing on his ethics as outlined in §3.1. For clarity's sake, when I use "information", I am referring to semantic information.
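To fix ideas, the four conditions, and the sense (developed below) in which misinformation satisfies (i)-(iii) whilst failing (iv), can be rendered as a small predicate. The following Python sketch is my own illustrative encoding, not Floridi's formalism; the names Candidate, is_semantic_information, and is_misinformation are assumptions.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Candidate:
    """A candidate item x, judged against Floridi's four conditions."""
    data: list[str]        # (i) constituent data (lack of uniformity between variables)
    well_formed: bool      # (ii) complies with the relevant syntax
    meaningful: bool       # (iii) complies with the meanings of the given system
    true_of_world: bool    # (iv) the veridicality thesis

def is_semantic_information(x: Candidate) -> bool:
    # The four conditions are individually necessary and jointly sufficient.
    return bool(x.data) and x.well_formed and x.meaningful and x.true_of_world

def is_misinformation(x: Candidate) -> bool:
    # Misinformation as pseudo-information: well-formed, meaningful, but false.
    return bool(x.data) and x.well_formed and x.meaningful and not x.true_of_world

# A false news report passes (i)-(iii) but fails (iv):
report = Candidate(data=["that-p"], well_formed=True, meaningful=True, true_of_world=False)
assert not is_semantic_information(report)
assert is_misinformation(report)
```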


Much ink has been spilt over which definition of misinformation ought to be adopted, based on whether its extension captures intuitive intensions and on the bearing a commitment to the veridicality thesis has upon these definitions.3 For the purposes of this thesis, I adopt Floridi's (2011, 104) definition of misinformation. For Floridi (2011, 104), misinformation is a kind of pseudo-information, as it is false information. Note that this is not a contradiction in terms if one accepts that "information" can be used as a 'synecdoche to refer to both "information" and "misinformation"' (Floridi, 2011, 104). That is, misinformation (understood as false information) specifies that 'the contents in question do not conform to the situation they purport to model' (Floridi, 2011, 104) – compare how one might say that "Jane is a fake friend". As such, misinformation plays a 'deleterious' (Floridi, 2011, 260) role for an agent insofar as it provides them with no informative content. I reject, however, Floridi's (2011, 260) suggestion that misinformation makes 'no worthwhile difference' to an agent's 'representation of the world'. The reasoning for this rejection, and its relevance, are provided in §3.2.

3 The main point of contention within this definition is whether information is necessarily true or not; that is, whether one commits to the veridicality thesis. See Fetzer (2004) for a definition of information which drops the veridicality thesis. See Fallis (2016) for an excellent overview of the definitional debate surrounding misinformation within the philosophy of information. For debates about a species of misinformation (fake news) within analytic social epistemology, see Gelfert (2018), Mukerji (2018), Habgood-Coote (2019), Jaster & Lanius (2021), and Croce & Piazza (2021a).

§1.2: Philosophical Approaches to the Harms of Misinformation: Analytic Social Epistemology and the Philosophy of Information

Given that the practices of producing and sharing misinformation are intertwined with our social practices (Vosoughi et al., 2018; Klintman, 2019), a fitting framework is social epistemology. Social epistemology can be broadly distinguished along two lines – analytic social epistemology (ASE) and critical social epistemology (CSE) (Collin, 2019, 29-31). CSE is informed by the sociology of knowledge and centres issues of power (Fuller, 1988; Collins, 2019, 34-35). On the other hand, ASE is broadly concerned with describing the social practices pertaining to knowledge – specifically, how social practices aid and contribute to agents acquiring and sharing knowledge. Knowledge is understood as justified, true belief (JTB), or some variant thereof with a (semi-)satisfactory solution to the Gettier Problem. ASE is thus concerned with the social practices of belief acquisition, revision, and justification. Such practices include testimony (Coady, 1973; Fricker, 2008), dependence (Hardwig, 1985), trust (McCraw, 2015), and group justification (Lackey, 2014). ASE is thus a rich and diverse field. In what follows, I review how different strands of ASE have discussed the issue of misinformation and digital technologies. In doing so, I identify a nascent approach within the field that I call analytic socio-ecological epistemology, which I contribute to and challenge in §2.

One strand of thought applies debates surrounding testimony, communicative intention, and epistemic norms to ITs, often understood as social-media sites, and describes how practices of misinformation are facilitated on these platforms (Rini, 2017; Alfano & Sullivan, 2021; Marin, 2021). A second examines whether virtue (and vice) epistemology is a sufficient framework to describe our epistemic use of information technologies and whether practices of misinformation can be combatted by cultivating epistemic virtues, or whether such virtues are corrupted by misinformation (Heersmink, 2018; Smart, 2018a; Schwengerer, 2020; Priest, 2021). A third strand uses methods adopted from game theory and network analysis to examine how misinformation, and concomitant false beliefs, spread through a network of knowers (O'Connor & Weatherall, 2019; Sullivan et al., 2020). A fourth approach examines how digital technologies alter the justificatory status of beliefs and, given their role in recommending misinformation, whether individual knowers can be blamed for forming false beliefs (Millar & Record, 2013; 2017; Millar, 2019; 2021). A fifth focuses on the conceptual analysis of 'fake news' (understood as a branch of misinformation) (Gelfert, 2018; Mukerji, 2018; Habgood-Coote, 2019; Jaster & Lanius, 2021; Croce & Piazza, 2021a). Finally, the last strand examines how best to combat the consumption of misinformation from either structural or individualist perspectives (Croce & Piazza, 2021b).

When discussing the harms of misinformation, a common thread running through social epistemological approaches is that the principal harm is one inflicted on individual knowers by inducing false belief, thus undermining the decision-making necessary for a well-functioning democracy (Priest, 2018; Smart, 2018a, Sullivan et al., 2020; Croce & Piazza, 2021b). A nascent approach, yet to be fully developed, examines the issue of misinformation from an environmental perspective. Theorists (Ryan, 2018; Blake-Turner, 2020; Levy, 2020; Rini, 2020; de Ridder, 2021) suggest that misinformation degrades our “epistemic environment” and that this constitutes a harm. There is, however, little agreement as to what constitutes an “epistemic environment”, its corruption, who is responsible for this corruption, and how such degradation can be understood as a harm. For clarity’s sake, I will call this approach analytic socio-ecological epistemology – I also use the term analytic socio-eco epistemology for flow and readability. As such, this thesis contributes to a gap within analytic socio-eco epistemology by clearing up existing conceptual confusion and providing a unified account of the concept of “epistemic environment” in §2.

Whilst an overview of the current state of the art in information ethics is provided in §3-5, I will briefly note the gap within the literature I contribute to. Information ethics develops an ethical theory based upon an informational ontology. In brief, it is a patient-oriented approach to ethics, in which the patients deserving of moral respect are informational entities (Floridi, 2013, 53-101). Floridi (2013, 168) and Russo (2012; 2018; 2022, §9.5.2)4 note that ITs have endowed agents (broadly construed) with an increase in ontological and creative-enhancing power (think of the ease of (re)producing a computer file), which has bestowed a greater sense of responsibility upon humans. Currently, this responsibility is understood to be one of care toward informational entities (Floridi, 2013, 74-75; Russo, 2018, 13). I contribute to this debate by delineating further responsibilities derived from an agent's creative capacities and applying them to practices of misinformation. In doing so, I further contribute to a gap within information ethics, as it has yet to discuss the ethics of misinformation in a rigorous, philosophical manner. I say rigorous as Floridi (2016) has provided a journalistic analysis of misinformation for The Guardian; its philosophical detail, however, is understandably lacking.

§1.3: Research Question, Methodology, and Structure

Having outlined the relevant social and brief philosophical context, I now introduce the research questions this thesis addresses.

RQ1: What are the harms associated with misinformation?

RQ2: Are they to be understood as moral harms, epistemic harms, or both?

RQ3: Which framework can best explain who (or what) is responsible for these harms?

RQ4: What kind of responsibilities do agents possess to prevent or rectify such harms?

The hierarchy between these questions is as follows: RQ2-RQ4 follow from RQ1, and answers to them provide a richer and deeper understanding of the harms of misinformation (RQ1).

4 This thesis was written whilst Russo's (2022) Techno-Scientific Practices: An Informational Approach was undergoing publication; as such, I reference by subsection rather than page number so the diligent reader can still find the content in the fully published manuscript.

I also wish briefly to acknowledge the broader metaphilosophical motivation of this project. In comparing the approaches and frameworks of analytic socio-ecological epistemology and the philosophy of information, I hope to contribute toward, and motivate, a cross-pollination between analytic and continental philosophy. This is done by illustrating where analytic socio-ecological epistemology can benefit from adopting concepts from the philosophy of information and philosophy of technology, and vice versa. Furthermore, whilst Floridi's philosophy draws heavily upon analytic methods, he explicitly characterises his project as transgressing analytic philosophy in The Philosophy of Information (Floridi, 2011, 20-23) and notes a continuity of topics between analytic and continental philosophy. Methodologically, this thesis takes empirical work on the practices of misinformation seriously. That is, frameworks, concepts, and theories are evaluated both on a conceptual level (logical consistency, clarity, etc.) and on whether they can sufficiently account for the actual practices of producing and disseminating misinformation.

The structure of this thesis is as follows. In §2, I provide a critical review of work in analytic socio- ecological epistemology and a unified account of “epistemic environment”. I then suggest that such an approach is lacking based on its anthropocentrism and the assumption that our epistemic practices are synchronic. Then, in §3, I introduce information ethics and argue that the framework allows the harms of misinformation to be understood to be epistemic and moral. I also provide reasons for not jettisoning an analytic socio-ecological approach altogether. In doing so, I introduce the notion of levels of abstraction (LoA). Finally, in §4, I delineate a distinct class of responsibilities an agent possesses in virtue of their creative capacities, which identifies how they may prevent and rectify the harms of misinformation. The original contribution of this thesis is thus threefold. I (i) provide a unified characterisation of “epistemic environment”, (ii) compare the frameworks of analytic socio-ecological epistemology and the philosophy of information, and (iii) contribute to a gap within the philosophy of information by introducing the notion of poietic responsibilities.

§2: Analytic Socio-Ecological Epistemological Analyses of Misinformation: The Corruption of an Epistemic Environment

In this chapter, I critically review work in analytic socio-eco epistemology. In §2.1, I provide a unified characterisation of "epistemic environment". In §2.2, I specify the harms associated with misinformation from this approach. In §2.3, I interrogate whether these accounts can accommodate the actors outlined in the paradigmatic example of misinformation practices offered in §1. Then, in §2.4, I suggest that such an approach is lacking based on its anthropocentrism (§2.4.1) and the assumption that our epistemic practices are synchronic (§2.4.2).

§2.1: Existing Definitions of “Epistemic Environment”

In this section, I reconstruct various uses of the concept "epistemic environment" within analytic socio-eco epistemology. This section is structured as follows. First, I draw attention to (dis)continuities between accounts and provide a novel taxonomy of existing conceptualisations. I argue that accounts can be considered as regulatory, conducive, monadic, or pluralist conceptualisations of "epistemic environment". Then I highlight who, and what, constitutes an epistemic environment. Finally, I provide a unified characterisation and highlight missing features from current accounts.

Blake-Turner (2020, 9) suggests that Levy (2018) contains the first discussion of an epistemic environment. This, to my knowledge, is mistaken. Goldberg (2016, 8) utilises the term in establishing his ideal research programme for analytic social epistemology. Furthermore, whilst the term is often deployed within analytic social epistemology, an environmental, or ecological, approach to epistemology ought to be credited primarily to Code's (2006) work. A commonality of analytic socio-ecological accounts (Goldberg, 2016, 10; Ryan, 2018, 99; Blake-Turner, 2020, 9-10; Rini, 2020, 2; Levy, 2021, 2; de Ridder, 2021, 12) is that an epistemic environment is a space in which epistemic agents engage in epistemic activity. That is, the space in which agents share, produce, and acquire information (in a non-technical sense) to achieve positive epistemic states – knowledge, belief, justification, understanding, and so on. Existing accounts are thus doxastic-centric insofar as the latter states are understood to be dependent on belief.

Writ large, accounts emphasise either the regulatory or the conducive function of an epistemic environment. Regulatory accounts emphasise how an epistemic environment regulates the social practices of sharing knowledge, be it through epistemic norms and their institutional codification (Goldberg, 2016, 14-17) or the standard evidential practices associated with photographs and recordings (Rini, 2020, 3-5). Conducive accounts emphasise how an epistemic environment allows for the acquisition of epistemic states such as justification, warrant, credibility, knowledge, and understanding (Ryan, 2018, 99; Blake-Turner, 2020, 9-10; Levy, 2021, 2; de Ridder, 2021, 13). Ryan (2018, 98-99) claims that the epistemic environment 'determine[s]' whether an epistemic agent is in an epistemically favourable position. That is, whether they are in a suitable position to form justified, warranted, or credible beliefs, knowledge, or understanding. The epistemic environment aids the agent by influencing how the agent may act, providing affordances for epistemic agents to acquire positive epistemic statuses. Blake-Turner (2020, 9-10) also suggests that the ease of gaining positive epistemic states depends on whether there are suitable informational and conceptual resources for epistemic agents to gain knowledge about x. It is on these grounds that Blake-Turner's (2020, 9-10) account can be classified as a conducive account. De Ridder (2021, 12-13) endorses a similar account to Ryan (2018) and Blake-Turner (2020) insofar as the function of the epistemic environment is that of facilitating the obtainment of positive epistemic states.

Accounts also differ in fixing the scope of the concept "epistemic environment", either implying that there is one epistemic environment (Goldberg, 2016; Rini, 2020; Levy, 2021; de Ridder, 2021) – hereafter monadic accounts – or that there are many (Ryan, 2018; Blake-Turner, 2020) – hereafter pluralist accounts. Note that adopting a monadic or pluralist account does not commit one to endorsing a regulatory or conducive account. For example, Levy (2021) and Goldberg (2016) use the term epistemic environment in a singular sense yet disagree on its function, respectively endorsing a conducive and a regulatory account.

A summary of existing definitions and where they belong within this taxonomy is found in Table 2.


Author: Blake-Turner (2020)
Concept: Epistemic Environment: 'The circumstances, resources, and other factors of an epistemic community that determine whether one of its members is in a position to gain positive epistemic statuses' (Blake-Turner, 2020, 9-10).
Function: Conducive
Scope: Pluralist

Author: Rini (2020)
Concept: Epistemic Backdrop: The norms and practices which regulate our practices of testimony based on the use of photographs and recordings. Photographs and recordings regulate our epistemic practice in two ways: (1) as an acute corrective, insofar as they provide agents with the ability to 'check the record' to resolve disputes as to whether-p (Rini, 2020, 3-5); (2) as a form of passive regulation: one may adhere more closely to testimonial norms insofar as one is aware that a recording of one's testimony may be used as a form of acute correction (Rini, 2020, 3-5).
Function: Regulatory
Scope: Monadic

Author: Levy (2021)
Concept: Epistemic Landscape: A source of information with a high degree of credibility is a peak of the epistemic landscape; the contrary holds for its troughs. Peaks are central to the proper functioning of epistemic practices, as belonging to a peak generates higher-order evidence: one possesses evidence about the strength and character of the evidence one possesses. For example, a newspaper renowned for its high-quality investigative journalism belongs to a peak within the epistemic landscape and generates higher-order evidence (Levy, 2021, 7-11).
Function: Conducive
Scope: Monadic

Author: Ryan (2018)
Concept: Epistemic Environmentalism: Just as our ecosystem is constituted by the interrelations and dependencies between organisms, so is our epistemic environment. It is constituted by the 'interconnections and interdependencies' (Ryan, 2018, 99) between different epistemic agents and the physical environment, which afford the possibility of gaining knowledge.
Function: Conducive
Scope: Pluralist

Author: Goldberg (2016)
Concept: Epistemic Environment: A structure of idealised epistemic norms (such as the norm of assertion) and the institutions in which they are codified, which in turn regulate social epistemic practice (Goldberg, 2016, 14-17).
Function: Regulatory
Scope: Monadic

Author: De Ridder (2021)
Concept: Epistemic Environment: The 'totality of information sources [an agent] typically interacts with or easily could have interacted with' (de Ridder, 2021, 13), including the physical environment, print and visual media, social media, websites, scientific instruments, and so forth, 'all qualified to include nearby possibilities' (my emphasis).
Function: Conducive
Scope: Monadic

(Table 2)

Conceptual confusion emerges as some monadic accounts differ in the terms used to denote "epistemic environment" – notably, Rini's (2020, 2-5) use of 'epistemic backdrop' and Levy's (2021, 7) term 'epistemic landscape'. Rini (2020, 12-13) falsely equivocates between the terms "epistemic environment" and "epistemic backdrop". She suggests the epistemic environment 'in which [recording technologies] function in… is an epistemic backdrop' (Rini, 2020, 1). Furthermore, Rini's (2020) account is characterised by a lack of reference to previous work on epistemic environments. As such, it is difficult to assess whether Rini (2020) sees the epistemic backdrop as a constitutive element of a broader epistemic environment or whether the epistemic backdrop is the epistemic environment itself. The latter formulation is too narrow. Were one to read Rini's (2020) account in this way, the implication would be that only practices of testimony, and by extension the evidential practices associated with photographs and audio-visual recordings, constitute the epistemic environment. As such, it is better to read Rini (2020, 1-5) as providing an explanation of the function of artefacts – photographs, video, and audio recordings – within an epistemic environment. They constitute and regulate the epistemic backdrop, which functions as a regulatory aspect of the epistemic environment.

Whilst united in endorsing the existence of multiple epistemic environments, pluralist accounts (Ryan, 2018; Blake-Turner, 2020) fix the scope of "epistemic environment" differently. Blake-Turner (2020, 9-10) claims that an epistemic environment is always relative to a specific epistemic community. An epistemic community is understood to be a collection of agents that engage in practices of knowledge sharing. A community may be as large as the sum of all epistemic agents but may also be more finely grained, including research groups, a small reading group, or an investigative body. Thus, if multiple epistemic communities exist, multiple epistemic environments also exist.

Whilst beyond the scope of this thesis, this implicitly commits Blake-Turner (2020, 10) to a non-summativist position in the debate within collective epistemology. Summativists maintain that the epistemic practices and states of group agents can be reduced to those of individual members. Non-summativists claim that epistemic practices and states take place at the level of the group (Lackey, 2014, 2-3). For the purposes of this thesis, I adopt the latter and maintain that groups form a supra-individual epistemic agent. Further evidence that Blake-Turner (2020, 10) endorses the existence of multiple epistemic environments is found in footnote thirteen, in which he provides a brief interpretation of the Cartesian demon as an epistemic environment in which rational but false beliefs are easy to form. Ryan (2018, 109) offers a conceptualisation that bears a strong resemblance to how we think of our geo-environment and claims that 'just as we can distinguish the (geo)environment from localised (geo-)environments, the same is true of epistemic environments'. Thus, the difference between monadic and pluralist accounts raises the question of the structure of multiple epistemic environments and the role of the broadest epistemic environment – this is answered in §2.1.1.

A final point of difference within current definitions rests on who, or what, inhabits, alters, and constitutes an epistemic environment. According to Ryan (2018, 99), ‘beings with cognitive capacities’, institutions, and mass distributors inhabit, alter, and constitute an epistemic environment. Trivially, humans occupy the epistemic environment in virtue of possessing cognitive capacities.5 Group epistemic agents in the form of institutions, such as universities and thinktanks, also occupy and constitute the epistemic environment.6 Newspapers, media outlets, government information bodies, and conscious-raising groups are also included within Ryan’s (2018, 99) definition insofar as they are mass distributors of claims. This is also the case for Goldberg (2016, 14-17), Levy (2021, 7-11), and Blake-Turner (2020, 9-10). Levy’s (2021, 7-11) account focuses predominantly on the institutional aspects of the epistemic environment and delineates institutions' specific functions.

Ryan (2018, 99) also notes that cognitive scaffolding and extensions constitute an epistemic environment. His account thus dovetails with theories of extended cognition and mind (Clark & Chalmers, 1998) and extended knowledge (Pritchard, 2016; Carter et al., 2018a). That is, artefacts used for epistemic purposes – think of the classic example of Otto's notebook (Clark & Chalmers, 1998, 10) or, more contemporaneously, Otto's brain-computer interface (Schwengerer, 2021, 314-315) – constitute the epistemic environment. They are not, however, granted epistemic agency, a matter explored further in §2.3. The same can be said of Blake-Turner's (2020, 10) inclusion of 'technological resources' and Rini's (2020, 2-4) discussion of photographs and recordings. Consequently, these notions of "epistemic environment" capture a large array of entities which constitute epistemic environments: individuals, collective epistemic agents, artefacts, and technologies. This is a strength of such accounts, as they identify the complex network of actors involved in creating and disseminating misinformation (see the list of actors outlined in §1). Whether these accounts allow that non-human actors can influence the quality of an epistemic environment is the focus of §2.3.

5 Non-human animals with cognitive capacities might also occupy and constitute the epistemic environment. Think of the canary in the coal mine and the miners' dependence upon the canary to know that the mine is safe. This, however, is controversial and beyond the scope of this thesis.

6 Again, note that Ryan (2018, 109) maintains a non-summativist position. Goldberg’s (2016, 14-18) account also implies that humans and collective agents occupy and engage in the upkeep of the epistemic environment.


§2.1.1 A Unified Account of Epistemic Environment

Recall that those working on the topic of epistemic environments have predominantly done so to provide a concept that makes sense of the practices associated with sharing misinformation online. In doing so, many (de Ridder, 2021, 13; Blake-Turner, 2020, 9; Levy, 2021, 7; Rini, 2020, 4) have suggested that a complete conceptualisation lies beyond the scope of their focus, claiming to have explicated only a few important features. Let us take stock and summarise the features highlighted in the existing literature. Those qualifying as epistemic agents, including collective agents (Ryan, 2018, 109; Levy, 2021, 11; Blake-Turner, 2020, 9), constitute and can alter the quality of an epistemic environment. On this view, technologies and artefacts constitute an epistemic environment but cannot alter its quality unless used by the aforementioned epistemic agents (Goldberg, 2016, 8; Ryan, 2018, 99; Blake-Turner, 2020, 9-10; Levy, 2021, 14-15). That some technologies can autonomously alter the epistemic environment without being used by humans is discussed in §3.1. An epistemic environment may be of better or worse quality depending upon whether the available informational and conceptual resources provide the affordances for acquiring positive epistemic states (Blake-Turner, 2020, 10; de Ridder, 2021, 13) and whether it sufficiently regulates epistemic practices (Goldberg, 2016, 14-17; Rini, 2020, 3-5). A better-quality epistemic environment aids in the overall achievement of positive epistemic statuses and the acquisition, sharing, and production of knowledge via the well-functioning of its conducive and regulatory aspects. In what follows, I provide a unified characterisation of epistemic environment and highlight three features, overlooked within the existing literature, which aid in understanding the practices of misinformation: epistemic environments are (i) nested, (ii) attentional environments, and (iii) extended through time.

First, to illustrate the nested quality of epistemic environments, suppose that you are writing a research paper. You take the train to the nearest city and do some light reading on your Kindle in a busy carriage. Let us call this EE1. EE1 is constituted by the literature available on your Kindle, the physical environment surrounding you, and the non-aggregative group of people on the train. Whilst it is possible to provide a more finely grained analysis of this example – the relations between epistemic agents and artefacts in each carriage could constitute further epistemic environments – I remain at a coarser level to convey the broader notion of nested epistemic environments. Unable to concentrate, you open your phone and load your Twitter feed, thereby entering EE2 whilst remaining in EE1. That is, your access to informational sources and possible interlocutors expands.

As epistemic environments can be nested and more finely grained, there is a most general epistemic environment, which contains the totality of our informational and conceptual resources, our epistemic institutions, and the epistemic community construed most broadly.7 This sense of the term is captured by de Ridder's (2021, 13) broad definition in virtue of his modal condition. To continue the environmentalist analogy, I shall call this sense of the term our epistemic climate. In pressing this distinction, I provide a pluralist account of "epistemic environment" yet retain the possibility of deploying the term as is done in monadic accounts. That is, on my account, one can delineate the more finely grained epistemic environment undergoing analysis or adopt the term "epistemic climate" where required. A visual representation of the nested nature of epistemic environments and the epistemic climate is found in Figure 1.

7 Note that this concept is similar to the infosphere (Floridi, 2013, 6). A full discussion of the relationship between the infosphere and epistemic environments is given in §3.2.


(Figure 1: nested epistemic environments EE1, EE2, …, EEn contained within the epistemic climate (EC).)
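This nesting relation can be made concrete with a small sketch. The following Python model is my own illustrative construction (the class and attribute names are assumptions, not the thesis's formalism); it encodes the idea that entering a nested environment expands, rather than replaces, the resources available from its enclosing environments.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class EpistemicEnvironment:
    """A toy model of nesting: each environment may sit inside a parent one."""
    name: str
    resources: set[str] = field(default_factory=set)  # informational resources afforded
    parent: EpistemicEnvironment | None = None

    def available_resources(self) -> set[str]:
        # An agent in a nested environment retains access to the enclosing
        # environments' resources: entering EE2 expands, not replaces, EE1.
        inherited = self.parent.available_resources() if self.parent else set()
        return inherited | self.resources

# The epistemic climate (EC) is the outermost environment.
ec = EpistemicEnvironment("EC", {"totality of informational resources"})
ee1 = EpistemicEnvironment("EE1 (train carriage)", {"Kindle library", "co-passengers"}, parent=ec)
ee2 = EpistemicEnvironment("EE2 (Twitter feed)", {"tweets", "possible interlocutors"}, parent=ee1)

print(ee2.available_resources())  # includes EE1's and EC's resources as well
```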

Second, as alluded to in the above example, epistemic environments are also attentional environments. Features of our epistemic environment, such as information sources and other epistemic agents, demand our attention if we are to achieve various positive epistemic states. Concerning attention, I have in mind what Williams (2018, 56-58) calls the 'daylight' of our attention. That is, the type of attention we pay to certain things in order to successfully engage in some activity and achieve a goal. When paying attention to one phenomenon, one 'pays attention' by not paying attention to other phenomena (Williams, 2018, 45). Thus, by paying attention to one phenomenon, A, one forgoes the possibility of acquiring positive epistemic states regarding other phenomena, B, C, D, and so forth. That epistemic environments are attentional environments, which is a conducive function, also indicates what constitutes the quality of an epistemic environment.

Various objects within our epistemic environment demand our attention in different ways. Take, for example, the epistemic environment of a library, in which you are using your laptop to read several journal articles. The print books within the library do not demand one's attention in the same way as the push notifications of the e-mail client on your laptop. That digital technologies are designed to capture users' attention has already been acknowledged (Schüll, 2012, 97; Ward, 2013, 344; Eyal, 2014, 14; Williams, 2018, 35; Hanin, 2021, 397). As such, if objects within an epistemic, and therefore attentional, environment demand our attention in ways which unduly distract us from achieving positive epistemic states, then the quality of such an environment is lesser. Undue distraction can be understood as an unintentional direction of attention away from achieving one's goals. This can occur through design features such as push notifications (Eyal, 2014), infinite scroll (Ahuvia, 2013), and content recommendation (Giansiracusa, 2021, 68-69). This aids in understanding how misinformation functions within online epistemic environments. Features of misinformation often encourage users to consume and share it amongst their peers by capturing their attention. This is done by including highly emotional content (Williams, 2018, 33-34; Ecker et al., 2022, 15) and attention-grabbing headlines (Bernecker, Flowerree, & Grundmann, 2021, 6). Moreover, given that social networking sites prioritise capturing users' attention (more time on a site = more clicks = more profit), content recommendation algorithms often expose users to misinformation to achieve this goal (Bozdag, 2013; Nichols, 2017; Giansiracusa, 2021). Thus, the design of online epistemic environments privileges directing users' attention away from achieving their epistemic goals.

Finally, that epistemic environments extend through time is an aspect missed within most existing accounts and only implicitly acknowledged by Rini (2020, 3-5). The relevance of the overly synchronic nature of analytic socio-ecological epistemology regarding misinformation is explored fully in §2.4.2. Rini (2020, 3-5) implicitly acknowledges the diachronic nature of the epistemic backdrop insofar as she notes that recordings offer a corrective function. That is, one can see whether x occurred in the past by, in the present, checking a recording. To see how epistemic environments extend through time, take the following example. Suppose that a pair of roommates are in the practice of leaving notes to inform each other of their whereabouts throughout the day. In the morning, one roommate, R, scribbles a note telling the other, T, that they will be out all evening and to pay the electricity bill. R alters their shared epistemic environment by producing and leaving an artefact that extends through time. Within this artefact is R's temporally deferred testimony and instruction. When T wakes up and enters this epistemic environment, T now has the affordance of gaining knowledge of R's whereabouts and that T needs to pay the bill. Thus, epistemic environments impact epistemic agents both at the specific time at which they engage in epistemic practices and in their own and others' future (epistemic) actions.

As such, I suggest that the following is an adequate characterisation of epistemic environment, suitable for analysing the production, dissemination, and consumption of misinformation. An epistemic environment is an onlife space (recall from §1.1.1 that "onlife" denotes the blurring of the "digital" and the "analogue") in which epistemic agents engage in epistemic activity, and it possesses the following features (a toy encoding of how these features interact follows the list):

(1) There can be better or worse epistemic environments. (Implied in all accounts)
(2) Epistemic agents are embedded within epistemic environments. (Implied in all accounts)
(3) Epistemic agents constitute, and can alter how good an epistemic environment is by influencing its norms and information resources.
(4) Norms structure and constitute our epistemic environment. (Regulatory function)
(5) Information sources, in the form of other epistemic agents, artefacts, and technologies which constitute an epistemic environment, afford the possibilities for the acquisition of positive epistemic states. (Conducive function)
(6) Epistemic environments are nested. (My contribution)
(7) Epistemic environments are attentional environments. (My contribution)
(8) Epistemic environments extend through time. (My contribution)
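To make the interplay of the conducive (5), regulatory (4), and attentional (7) features concrete, here is a minimal sketch in Python. It is my own toy construction under assumed names and weightings (Resource, environment_quality, the 0.25 distraction penalty), not a measure proposed in the literature; it merely illustrates how an environment's quality could co-vary with its resources, norms, and distractions.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    content: str
    is_misinformation: bool
    attention_grabbing: bool   # e.g. push notifications, emotive headlines

def environment_quality(resources: list[Resource], norm_compliance: float) -> float:
    """Toy quality score combining the conducive and regulatory functions.

    Conducive: the share of resources that are genuine information.
    Regulatory: how well epistemic norms are upheld (0.0-1.0).
    Attentional: unduly attention-grabbing items drag the score down.
    """
    if not resources:
        return norm_compliance
    genuine = sum(not r.is_misinformation for r in resources) / len(resources)
    distraction = sum(r.attention_grabbing for r in resources) / len(resources)
    return (genuine + norm_compliance) / 2 - 0.25 * distraction

library = [Resource("journal article", False, False)] * 9 + \
          [Resource("push notification", False, True)]
feed = [Resource("viral false report", True, True)] * 6 + \
       [Resource("news item", False, False)] * 4

print(environment_quality(library, norm_compliance=0.9))  # higher-quality EE
print(environment_quality(feed, norm_compliance=0.5))     # degraded EE
```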

As the characterisation of "epistemic environment" I provide does not turn on considering technologies and artefacts as epistemic agents, it remains acceptable to one who wishes to maintain an anthropocentric account of epistemic agency, as outlined in §2.3. Having established the principal features of "epistemic environment" and "epistemic climate", I now turn to explicating what constitutes the corruption of an epistemic environment. That is, when agents worsen (intentionally or not) the quality of the epistemic environment by producing and sharing misinformation.

§2.2: The Corruption of an Epistemic Environment – Wherein Lies the Harm?

A commonality between all accounts (Blake-Turner, 2020; Levy, 2020; Rini, 2020; Goldberg, 2021; de Ridder, 2021) is that misinformation is considered to worsen the quality of an epistemic environment. However, as in §2.1, there is a distinct lack of theoretical unity regarding how misinformation corrupts an epistemic environment, other than that it occurs when misinformation is shared. This section provides conceptual clarification regarding what the corruption of an epistemic environment is and examines how it constitutes a harm. The section is structured as follows. First, I outline the accounts of the degradation of the epistemic environment concomitant to those discussed in §2.1. In doing so, I show how the epistemic environment itself is altered and how agents within the environment are changed. Following this, I suggest additional ways the epistemic environment may be corrupted based on the additional features I outlined in §2.1. I now turn to the first portion of this section.

Recall from §2.1 that I characterised existing accounts as doxastic-centric. From this, it follows that a well-functioning epistemic environment aids in acquiring true beliefs while avoiding false ones. Correspondingly, a poorly functioning epistemic environment negatively affects the acquisition of true beliefs and makes acquiring false beliefs easier (Ryan, 2018; Blake-Turner, 2020; Levy, 2021; de Ridder, 2021). On regulatory accounts, a poorly functioning epistemic environment is one in which epistemic norms have been eroded (Goldberg, 2016; Rini, 2020). Thus, misinformation inflicts 'a harm that impacts the system within which we form beliefs, gain knowledge, provide further testimony, and so on' (Ryan, 2018, 103), thereby negatively impacting our epistemic practices and subsequent states (Blake-Turner, 2020; Levy, 2020; Rini, 2020; Goldberg, 2021; de Ridder, 2021).

Regarding RQ1, which asked what the harms of misinformation are, analytic socio-eco epistemological accounts frame the primary harm of misinformation as bringing about false beliefs in individual knowers because the regulatory and conducive functions of an epistemic environment are degraded. Blake-Turner (2020, 13) neatly highlights three ways in which a corrupted epistemic environment incurs this harm. These are as follows:

(1) inhabiting an environment rich in misinformation, which makes the acquisition of false beliefs easier;
(2) a weakening of the status of regulatory epistemic institutions and practices; and
(3) the instilling of bad epistemic habits in agents.

How each account discussed in §2.1 falls into this taxonomy is presented in Table 3.

Blake-Turner (2020)
Concept: Degradation of the epistemic environment (EE).
How misinformation corrupts an epistemic environment: It introduces (ir)relevant alternatives to previously justified claims. On the relevant-alternatives framework, S knows that-p iff S can rule out the relevant alternatives to that-p (stated formally below the table). Misinformation introduces a greater number of alternatives that individuals must discount in order to know that-p.
Harm according to Blake-Turner’s (2020, 13) taxonomy: (1)-(3).
Function of epistemic environment impaired: Conducive.

Rini (2020)
Concept: Crisis in the epistemic backdrop.
How misinformation corrupts an epistemic environment: Misinformation, in the form of deepfake videos, causes the testimonial standing of images and recordings to fail. Agents can no longer go to recordings or images to verify narratives or statements. The deliberation as to whether x is true or doctored, and the overall loss of trust in images and recordings, is an epistemic crisis.
Harm according to Blake-Turner’s (2020, 13) taxonomy: (2) Weakens the status of regulatory epistemic institutions and practices. Deepfake misinformation dissolves the corrective and regulatory functions recordings play in our epistemic practices, thus impairing the regulatory function of an epistemic environment.
Function of epistemic environment impaired: Regulatory.

Levy (2021)
Concept: Flattening of the epistemic landscape.
How misinformation corrupts an epistemic environment: A cumulative loss of higher-order evidence within the epistemic landscape leads to sources of information being assigned the same degree of credibility and thus being equally suspect. For example, it is no longer enough to assume that a piece of information from nytimes.com is reliable; it could easily be from ny-times.com (a fake-news website).
Harm according to Blake-Turner’s (2020, 13) taxonomy: (2) Weakens the status of regulatory epistemic institutions and practices.
Function of epistemic environment impaired: Regulatory.

Ryan (2018)
Concept: Epistemic pollution.
How misinformation corrupts an epistemic environment: Misinformation weakens the interrelations and interdependencies between agents, which gives individuals reason to lower the degree of trust they place in others and in epistemic institutions.
Harm according to Blake-Turner’s (2020, 13) taxonomy: (2) Weakens the status of regulatory epistemic institutions and practices.
Function of epistemic environment impaired: Regulatory.

De Ridder (2021)
Concept: Epistemic pollution.
How misinformation corrupts an epistemic environment: Misinformation provides misleading defeaters. A misleading defeater is a belief which undermines the justificatory status of other beliefs, either by contradicting them (a rebutting defeater) or by discrediting their grounds for justification (an undercutting defeater). From misleading defeaters, one can infer false beliefs. This reduces understanding insofar as understanding is predicated upon identifying dependency relations between beliefs.
Harm according to Blake-Turner’s (2020, 13) taxonomy: (1) Agents inhabit an environment rich in misinformation, which negatively impacts the conducive function of an epistemic environment.
Function of epistemic environment impaired: Conducive.

(Table 3).
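The relevant-alternatives schema in Blake-Turner’s row of Table 3 can be stated more formally. The rendering below is my own gloss; the set notation is not Blake-Turner’s:

\[
K_{S}p \iff \forall q \in R(p) : S \text{ can rule out } q
\]

where \(K_{S}p\) reads ‘S knows that-p’ and \(R(p)\) denotes the set of relevant alternatives to \(p\). On this rendering, the harm misinformation does to individual knowers is to enlarge \(R(p)\): the more alternatives enter \(R(p)\), the more an agent must rule out before \(K_{S}p\) holds, and so knowledge becomes harder to attain.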

Regarding (3) – the claim that using the technologies and artefacts which constitute a degraded epistemic environment can instil bad epistemic habits in agents – this is acknowledged by Blake-Turner (2019, 8) and de Ridder (2021, 13). They note that the quality of an epistemic environment depends upon its resources, which include artefacts and technologies.8 If, under normal circumstances, an artefact or technology reliably provides truth-conducive, or minimally justification-conducive, evidence (the epistemic function of an artefact/technology), but due to the presence of misinformation becomes consistently defective, then it can instil bad epistemic habits in its users. Consistently defective here means that it falls short of producing positive epistemic states in its users. Misinformation can affect the quality of the epistemic function of ITs as follows. If an IT is an algorithm or uses machine learning, and its input consists of misinformation, its output will also include instances of misinformation.9 That is, if there is a greater amount of misinformation present in an epistemic environment, then ITs may be more likely to output misinformation – to borrow a phrase from computer science, garbage in, garbage out.
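To make this mechanism concrete, consider the following toy sketch. It is my own illustration and is not drawn from any of the cited authors: a deliberately crude bigram text generator stands in for a far more sophisticated NLG such as GPT-3, and the corpus, including the single false claim injected into it, is wholly hypothetical.

    # A toy illustration of "garbage in, garbage out" for a text-generating IT.
    # A crude bigram (Markov chain) generator, not a model of GPT-3; the corpus
    # and the false claim within it are hypothetical.
    import random

    corpus = [
        "the vaccine is safe and effective",
        "the vaccine is safe and effective",
        "the vaccine causes severe harm",  # misinformation present in the input
    ]

    # Build a bigram table: each word maps to the words observed to follow it.
    transitions = {}
    for sentence in corpus:
        words = sentence.split()
        for current, nxt in zip(words, words[1:]):
            transitions.setdefault(current, []).append(nxt)

    def generate(start, length=5, seed=0):
        """Sample a sentence by repeatedly drawing a next word from the table."""
        random.seed(seed)
        out = [start]
        for _ in range(length):
            options = transitions.get(out[-1])
            if not options:
                break
            out.append(random.choice(options))
        return " ".join(out)

    # Because the false claim sits in the training data, a share of the
    # generator's outputs reproduce it verbatim.
    samples = [generate("the", seed=s) for s in range(20)]
    polluted = sum("causes severe harm" in s for s in samples)
    print(f"{polluted}/20 samples reproduce the misinformation")

The generator does not evaluate its inputs; roughly, whatever proportion of the corpus is misinformation is the proportion one should expect to resurface in its outputs.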

That consistently defective technology can lead to bad epistemic habits is exemplified in Millar and Record’s (2013, 126) suggestion that, due to personalisation and bias in results, search engines raise the standards for justified belief. They argue this by focusing on how the use of technology influences the doxastic states of users, thus adopting an instrumentalist10 approach to technology. Individuals must also be justified in a supporting belief that the information yielded by a search engine has not been affected by these factors. If users do not possess this supporting belief – which Millar and Record (2013, 126) argue is often the case, since these technologies are black boxes – then users may claim to have knowledge when they do not, which is a bad epistemic habit. Thus, an epistemically defective technology worsens the conducive function of an epistemic environment. Moreover, if an epistemic environment contains consistently defective artefacts and technologies, then, through their use, these artefacts and technologies also corrupt the epistemic environment by eroding regulatory norms: agents would regularly flout the norm of assertion, asserting that-p on the basis that they think they know that-p when they are not justified in that belief – a further bad habit of the user. However, what is still lacking is an explanation of how technologies and artefacts might corrupt the epistemic environment beyond their use by human agents.
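The personalisation point admits of an equally minimal sketch. The following is entirely my own illustration, not Millar and Record’s, and the documents and profile labels are hypothetical; the point is only that the same query yields different ‘evidence’ for different users, on grounds invisible to them.

    # A toy model of a personalised search engine: ranking depends on the
    # user's profile rather than on the reliability of the sources.
    # All documents and profile labels are hypothetical.

    DOCUMENTS = [
        {"title": "Vaccine trial shows strong safety profile", "leaning": "mainstream"},
        {"title": "Hidden dangers of the new vaccine EXPOSED", "leaning": "fringe"},
    ]

    def personalised_search(query, profile):
        """Rank documents by fit with the user's profile; in this toy the
        query's content is ignored and the criterion is opaque to the user."""
        return sorted(
            DOCUMENTS,
            key=lambda doc: doc["leaning"] == profile["prefers"],
            reverse=True,  # documents matching the profile rank first
        )

    alice = {"prefers": "mainstream"}
    bob = {"prefers": "fringe"}

    # Same query, different top result; nothing in the results themselves
    # reveals that each user's profile shaped what they were shown.
    print(personalised_search("is the vaccine safe", alice)[0]["title"])
    print(personalised_search("is the vaccine safe", bob)[0]["title"])

Unless a user has independent grounds for the supporting belief that no such profile-keyed ranking occurred, they cannot, on Millar and Record’s view, treat the results as straightforwardly justification-conferring.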

8 Approaches beyond the ecological ones discussed so far utilise virtue and vice epistemology to illustrate how misinformation negatively influences our epistemic practices, such as Priest’s (2018) suggestion that the abundance of misinformation, which often reaffirms one’s pre-existing partisan beliefs, results in epistemic laziness. For brevity’s sake, such accounts are not expanded upon here.

9 See Giansiracusa (2021, 119-171) and Noble (2018) for, respectively, a technical overview and a case study.

10 This is developed in §2.4.1.


This shortcoming is the focus of §2.4.1. Furthermore, that ITs themselves produce and generate misinformation – an aspect which analytic socio-eco epistemology cannot capture – is discussed in §4.2.3.

§2.3: Who is responsible for the corruption of the epistemic environment?

This section addresses the question of who is responsible for the corruption and upkeep of epistemic environments. I argue that one ought to adopt a permissive account of epistemic agency in order to adequately account for the role ITs play in facilitating the dissemination of misinformation, as shown in §1.1 – that is, their role in degrading the epistemic environment.

To argue for this claim, I first gather the notions of epistemic agency utilised in analytic socio-ecological epistemology and suggest that such a view forecloses the possibility of holding non-human actors accountable. In doing so, I illustrate the close conceptual ties between epistemic agency and epistemic responsibility.

Recall the standard example and list of actors offered in §1.1 in Table 1.

Actor: Definition:

(i)(a) Human 1: Producer of the false news report (inputs into GPT-3).
(i)(b) Bot: Built upon/uses the GPT-3 engine (Heaven, 2020b).
(ii) GPT-3: An NLG which generates the text used in the false news report.
(iii) OpenAI: Group actor developing GPT-3.
(iv) Bot network: Influential in gaming the algorithm to make the news report a “Trending Topic”.
(v) Bot developers: Humans who created the code used for (i.b), (ii), (iv), (v), and (ix).
(vi) Human(s) 2, 3, 4, ..., n: Individuals interacting with the false story (likes, re-tweets, scrolling past, etc.).
(vii) Poor-quality newspaper: Group agent which mistakenly shares the false report.
(viii) Twitter: Group actor responsible for creating the platform.
(ix) Twitter algorithm: An algorithm designed to show and recommend content to users.

(Table 1)

Table 1 illustrates the complex entanglement of actors within the epistemic socio-technical systems (Simon, 2015, 145) associated with misinformation. A socio-technical system is a network of artefacts, social practices, the interrelations between humans and artefacts, and systems of knowledge (Coeckelbergh, 2020a, 243). This entanglement leads to the following question: which actor(s) ought to be held responsible, or accountable, for corrupting the epistemic environment?11

11 Note that Nissenbaum (1997) frames the difficulty of allocating responsibility across diffuse and distributed networks as the ‘problem of many hands’. That is, given the multiplicity and entanglement of actors involved in a moral action, it is difficult to attribute responsibility and blame. See also Nyholm (2018a; 2018b) and Coeckelbergh (2020b, 140-141).


To be held responsible or accountable, one must possess agency – specifically, in this context, epistemic agency, insofar as it is epistemic actions (providing reasons, sharing and acquiring information) which influence the quality of the epistemic environment. What, then, counts as an epistemic agent on the accounts encountered in §2.1 and §2.2?

Goldberg (2016, 9-10) offers a paradigmatic account of epistemic agency within analytic social epistemology, on which, for x to be an epistemic agent, x must:

(a) be reasonably ascribed knowledge and epistemic states (belief, justification, understanding, etc.); and

(b) be engaged in the social practices of acquiring, producing, and sharing knowledge.
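These two conditions can be restated compactly; the notation below is my own and not Goldberg’s:

\[
\mathrm{EA}(x) \iff \mathrm{Asc}(x) \land \mathrm{Eng}(x)
\]

where \(\mathrm{EA}(x)\) reads ‘x is an epistemic agent’, \(\mathrm{Asc}(x)\) abbreviates condition (a), and \(\mathrm{Eng}(x)\) abbreviates condition (b). The conjunctive form is what matters in the discussion that follows: an actor that fails either conjunct – as non-human actors are standardly taken to fail (a) – is thereby excluded from epistemic agency altogether.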

On Goldberg’s (2016, 9-10) account, individual humans are epistemic agents, thereby accounting for actors (i) and (vi) in Table 1. Groups of agents and collectives might also be considered agents if one maintains a nonreductionist account of the ontological status of groups (Pettit & List, 2011; Collins, 2019).12 Moreover, Reider (2016, x) and Levy (2021, 7-8) note that epistemic agency should not be understood as overly individualistic but rather as distributed/emergent13 within communities and groups. Think of a couple’s joint effort in remembering and retelling a story, each filling in the gaps forgotten by the other. A non-individualistic view is also adopted by Ryan (2018, 101), Blake-Turner (2020, 13), and Levy (2021, 5). Consequently, on such accounts of epistemic agency, one can count the actors specified in (i.a), (iii), and (v)-(viii) as epistemic agents, and thus as responsible for the corruption of the epistemic environment (see Table 4).

Whilst Ryan (2018, 99) and Blake-Turner (2020, 10) recognise that technologies and artefacts constitute an epistemic environment, they do not go so far as to suggest that these actors can be held accountable for influencing its quality. This is indicative of a broader disciplinary trend.

Goldberg (2016, 10) argues that social epistemological analyses, including the accounts outlined in §2.2, are focused on understanding ‘the epistemic significance of other minds’ (my emphasis) – that is, on understanding how other human epistemic agents change the epistemic environment through social epistemic practices. Such an account of epistemic agency, however, falls short of capturing the actualities of the practices of sharing misinformation, insofar as actors (ii) (GPT-3), (iv) (the bot network), and (ix) (the Twitter algorithm) are excluded on the grounds that they do not possess the mental states required for achieving epistemic states. See Table 4 for what counts as an epistemic agent, on a standard analytic social epistemological account, among the list of actors outlined in §1.1.

12 For the purposes of brevity, I assume that it is plausible to assign groups and collectives moral agency, but I provide a brief explanation as to why they ought also to be understood as possessing epistemic agency.

13 Note here that Reider (2016, x) and Levy (2021, 7-8) do not commit themselves to either a summativist or non-summativist position in the debate in collective epistemology (Lackey, 2014, 2-4). Which position one adopts determines whether one uses the term “distributed” or “emergent”.
