
Having outlined the harm that misinformation enacts upon the infosphere, this chapter establishes a novel class of responsibilities that poietic agents possess and ought to enact when engaged in acts of poiesis. That is, I argue that poietic agents possess, to differing degrees, poietic responsibilities. I agree broadly with the approach of information ethics and contribute to the field by introducing the notion of poietic responsibility. In §4.1, I position the notion of poietic responsibility within contemporary work in the philosophy of information. Then, in §4.1.1, I briefly characterise the structure of poietic responsibilities. In §4.2.1-6, I argue that poietic agents possess four poietic responsibilities: the responsibilities of care, future use, artefactual autonomy, and process. In doing so, I detail their relation to misinformation, thus answering RQ4, which asked what responsibilities an agent might possess to prevent or rectify the harms of misinformation. Finally, in §4.3, I introduce a concept closely related to poietic responsibility – responsible innovation. I do this for two reasons: first, to distinguish the two concepts from each other and so clear up any conceptual confusion for future work; second, to illustrate the generality of poietic responsibility, as I suggest that the notion of responsible innovation can be subsumed under poietic responsibility.

§4.1: Situating Poietic Responsibilities within Information Ethics: With Great (Poietic) Power Comes Great (Poietic) Responsibility

In this section, I outline the conceptual space in which “poietic responsibilities” are developed and contextualise the concept within the philosophy of information and information ethics. Floridi (2013, 169) and Russo (2012, 4) note that digital environments are ‘poietically enabling’ environments. That is, digital technologies provide an interface which mediates between the individual and digital areas of the infosphere. Digital technologies enable individuals to interact with this area of the infosphere, and, as a result, these technologies and environments provide the partnership required for poietic activity. For example, the interface of a laptop’s keyboard allows an individual to type the words of a tweet that they will send, which, in turn, will alter the infosphere. In this example, there is the initial act of poiesis – writing the content of the tweet – which, once posted on Twitter, becomes an act of ecopoiesis. The scale at which digital technologies have increased our poietic power can be exemplified by the consequences of a collection of tweets which can be understood as acts of ecopoiesis, sociopoiesis, and egopoiesis.

Gruzd and Mai (2020, 2) perform a social network analysis to trace the origins of the #FilmYourHospital COVID-19 conspiracy theory. The conspiracy theory suggested that COVID-19 was a hoax: ‘if hospital parking lots and waiting rooms [are] empty’ (Gruzd & Mai, 2020, 2), then the reported high rate of hospitalisations must be false and the pandemic not real (sociopoiesis). In their analysis, they note that the ‘rise of the #FilmYourHospital conspiracy’ could be traced to a ‘single tweet’ that went viral after being shared by a former US Republican Congressional candidate and amplified by bots (Gruzd & Mai, 2020, 6-8). That the latter occurred draws attention to the fact that the architecture of digital environments plays a role in enabling our poietic capacities, insofar as “recommendation” and “trending” algorithms contributed to an alteration of the infosphere. Or, to adapt Russo’s (2022, §9.4.3) notions of poietic agency and coproduction, artificial epistemic agents (algorithms) and individuals coproduced the semantic artefacts (the tokens of each shared/viewed video) which contributed to ecopoiesis, sociopoiesis, and egopoiesis. That bots amplified this tweet raises the question of who, or what, exactly is responsible for exercising poietic power. Whilst one cannot know the number of people convinced by the #FilmYourHospital conspiracy theory (egopoiesis), this example illustrates that the proliferation of access to digital technologies has resulted in an enormous ‘increase [in] the ontic and epistemic power of human agents’ (Floridi, 2013, 53). That is, a cursory tweet from a relatively unknown account (Gruzd & Mai, 2020, 3) can significantly impact the state of the infosphere (ecopoiesis).
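The kind of tracing Gruzd and Mai perform can be glossed with a toy sketch. The following is not their method or data; it is a minimal illustration, over an invented retweet network, of how one can walk a cascade back to its root tweet:

```python
# A toy illustration (not Gruzd and Mai's actual method or data) of
# tracing a viral cascade back to its origin: model retweets as a
# directed graph and walk each node back to the tweet with no parent.
import networkx as nx

# Invented cascade: each edge points from a retweet to the tweet it shares.
cascade = nx.DiGraph([
    ("retweet_3", "retweet_1"),
    ("retweet_2", "original_tweet"),
    ("retweet_1", "original_tweet"),
])

def origin(graph: nx.DiGraph, node: str) -> str:
    """Follow 'retweet of' edges until a tweet with no parent is found."""
    parents = list(graph.successors(node))
    return node if not parents else origin(graph, parents[0])

print(origin(cascade, "retweet_3"))  # original_tweet
```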

Floridi (2013, 168) notes that, with our increasingly powerful poietic capacities, as endowed by digital technologies, we possess greater ‘duties and responsibilities’ to the infosphere on a collective scale. Humanity thus has ‘a responsibility towards the infosphere… both present and future’ (Russo, 2018, 16). Simon (2015, 153) likewise recognises that, with an increase in our productive epistemo-ontological powers, we are endowed with greater epistemic, and thus also moral, responsibilities. Recall from §3.3.1 that Russo (2022, §9) noted that human and artificial poietic agents co-produce knowledge in a partnership. She emphatically suggests that human and artificial epistemic agents do not have equal degrees of poietic capacity, agency, and autonomy (Russo, 2022, §9.5.2). Rather, human and artificial epistemic agents are engaged in a partnership in which knowledge is co-produced; that is, technologies require human epistemic agents to interpret, (mutually) guide inquiry, and… knowledge. That is, ‘the role of human and of artificial epistemic agents in the process of knowledge production is not totally symmetric because, after all, we human agents remain – and should remain – in the driver’s seat’ (Russo, 2022, §9.5.2). From this, Russo (2022, §9.5.2) claims that ‘we are still the agents that ultimately carry responsibility – a responsibility that is at once epistemic and moral’ (my emphasis). As such, the partnership between humans and artificial poietic agents incurs additional responsibilities for humans, grounded in our increasing poietic capacities (Floridi, 2013, 168; Russo, 2022, §9.5.2).

A similar position is adopted by Simon (2015, 153), albeit not couched in terms of poiesis or homo poieticus; instead, she claims that, as our ‘epistemic practices are productive and different practices produce different phenomena’, the notion of epistemic responsibility ‘gets a whole different flavour’. In acknowledging the creative and productive nature of our epistemic actions, Simon’s (2015, 153) account chimes well with the notion of poiesis. Moreover, in closely aligning herself with Barad (2007), Simon (2015, 153) claims that, because our epistemic actions are productive, insofar as they create new phenomena and knowledge, our knowledge-practices always imply ‘issues of ethics and politics’. This further reaffirms the marriage of epistemology and ethics: yet again, an epistemic act, in the form of producing and sharing knowledge, is also an ethical act.

As I have argued (in §2.4.2 and §3.2), recognising that epistemology and ethics are strongly intertwined is important when conceptualising the harms of producing misinformation, as producing and sharing misinformation significantly affects an individual’s and a group’s epistemic wellbeing by corrupting the epistemic environment (§2.2) and causing entropy within the infosphere (§3.4). Whilst Russo (2022, 169) makes clear that she is not providing a ‘full-blown account of an epistemology-cum-ethics’, I begin to contribute to such a project by delineating a class of responsibilities stemming from an agent’s poietic capacities and the partnership between human and artificial poietic agents. That is, I introduce the concept of poietic responsibilities. Whilst information ethics does not provide a rule-based ethics, nor does it aim to do so (Floridi, 2013, 163), I suggest that, if fulfilled, poietic responsibilities guide how a moral agent can act to combat the entropy that misinformation introduces into the infosphere.32

32 See also Vallor (2016, 17-23) for an argument regarding the limitations of rule-based ethics surrounding the ethics of digital technologies.

§4.1.1 Introducing the Structure of Poietic Responsibilities

In this section, I briefly outline the structure of poietic responsibilities. First, I designate the sense of the term “responsibility” in the class of “poietic responsibilities” and suggest that each poietic responsibility captures different senses. Second, I outline which agents possess poietic responsibilities. Third, I suggest that poietic responsibilities are both epistemic and moral responsibilities, drawing upon the idea of a productive epistemic responsibility noted in §2.3. I now turn to the first of these.

Responsibilities can be either forward-looking or backward-looking (van de Poel, 2015, 39; van de Poel & Sand, 2018, 5-6). Forward-looking responsibilities are those in which one ought to see to it that some action, or state of affairs, obtains. Backward-looking responsibilities refer to something that has already occurred; think here of assuming responsibility, that is, adopting a reactive attitude toward an error in one’s ways. Van de Poel (2015, 38-39; van de Poel & Sand, 2018, 4-5) classifies nine senses of the term “responsibility” in line with this distinction:

1. ‘Responsibility-as-cause’ – “The rain is responsible for the flood”33

2. ‘Responsibility-as-task’ – “The moderator is responsible for keeping the discussion focused”

3. ‘Responsibility-as-authority’ – “The chief is responsible for the safety of the tribe.”

4. ‘Responsibility-as-capacity’ – The ability to reflect upon one’s actions and assume responsibility.

5. ‘Responsibility-as-virtue’ – The disposition to take on responsibility

6. ‘Responsibility-as-(moral)-obligation’ – “I am responsible not to plagiarise”; that is, to see that x does (not) occur.

7. ‘Responsibility-as-accountability’ – the (moral) obligation to account for what you did or what happened…

8. ‘Responsibility-as-blameworthiness’ – John is responsible for setting the fire insofar as he is deserving of blame.

9. ‘Responsibility-as-liability’ – John is responsible for the damage caused by the fire in the sense that he ought to pay for the damages.

33 Note that this is the sense of responsibility associated with “accountability” discussed in §2.3 and §3.1.

Van de Poel (2015, 39-40) suggests that (5)-(6) are forward-looking responsibilities, whereas (7)-(9) are backward-looking. He excludes (1)-(4) as being primarily descriptive senses of the term – a point with which I disagree, insofar as one’s role (2) and authority (3) can incur additional reason to adopt (5)-(9). As Smith and Niker (2021, 619-620) argue, the role that social media companies play within society entails a requirement to accept additional duties regarding the regulation of content; that is, a reason to accept responsibility-as-moral-obligation.34 I draw upon van de Poel (2015) and van de Poel and Sand (2018) to denote how each poietic responsibility captures a different sense of the term “responsibility”. Whilst the way in which each poietic responsibility encapsulates the different senses of “responsibility” (van de Poel, 2015, 38-39; van de Poel & Sand, 2018, 4-5) is developed over the course of this chapter, it is worth taking stock of what I am building towards, which is represented in Table 6.

34 Note that a similar argument can be made from Confucian role-ethics and role-responsibilities (Zhu, 2020).

| Poietic Responsibility | Sense of Responsibility (van de Poel, 2015, 38-39; van de Poel & Sand, 2018, 4-5) | Forward- or Backward-Looking |
| --- | --- | --- |
| Responsibility of Care | Responsibility-as-virtue | Forward-looking |
| Responsibility of Future Use | Responsibility-as-virtue and capacity; responsibility-as-(moral)-obligation rooted in responsibility-as-authority | Forward- and backward-looking |
| Responsibility of Process | Responsibility-as-obligation | Forward-looking |
| Responsibility of Artefactual Autonomy | Responsibility-as-task and authority | Forward- and backward-looking |

(Table 6).

Regarding the question of who, or what, can possess poietic responsibilities, I broadly agree with Simon (2015, 155) and Russo (2022, §9.5.2) on attributions of responsibility: technologies are neither isolated actors, nor do they emerge from a vacuum. As noted in §3.1, Simon (2015, 155) suggests that a ‘technical artefact [as understood] in isolation cannot be made responsible. For socio-technical compounds, the possibility of attributing responsibility would still be given’ (my emphasis). Moreover, Russo (2022, §9.5.1) places pressure on Floridi’s (2016, 25-34) notion of third-order technologies, that is, those technologies for which human beings are no longer required for ‘dependence and interaction’ and which ‘have the possibility and power to interact among themselves, without humans’ (Russo, 2022, §9.5.1) (my emphasis). She does so by noting that there exists a partnership (recall the notion of the co-production of knowledge introduced in §4.1) between third-order technologies and humans, claiming that ‘even the most advanced autonomous technologies cannot operate entirely on their own’. Think, for example, of GPT-3. Whilst alarming in its poietic power, it still requires a prompt from a human epistemic agent; that is, it requires a partnership. It requires a partnership on the user’s side, but also during development, insofar as developers must decide exactly what data to train the deep learning algorithm on. The technologies at the fore of this thesis (GPT-3, bots, and social networking sites) cannot be understood as isolated, wholly autonomous moral and epistemic agents. Thus, whilst Floridi (2013, 62) claims that ‘ontological power brings with it new moral’35 (and, I would add, epistemic) responsibilities, and artificial poietic agents possess these powers, humans cannot shirk their poietic responsibilities.
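The partnership point can be made concrete for GPT-3. The sketch below uses the legacy (pre-1.0) openai Python client, with an illustrative engine name and a placeholder key; the point is simply that the artificial poietic agent produces nothing until a human epistemic agent contributes a prompt:

```python
# A minimal sketch of the human-artificial partnership: the model
# generates no semantic artefact without a human-authored prompt.
# Legacy (pre-1.0) openai client; engine name and key are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical credential

def co_produce(prompt: str) -> str:
    """Co-production: a human prompt plus an artificial completion."""
    if not prompt.strip():
        raise ValueError("No human contribution, no act of poiesis.")
    response = openai.Completion.create(
        engine="text-davinci-003",  # GPT-3-era engine, illustrative
        prompt=prompt,
        max_tokens=50,
    )
    return response["choices"][0]["text"]
```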

What I am suggesting is that artificial agents can and do possess poietic responsibilities, albeit in a less demanding manner: they are accountable for their poietic output. In doing so, I am not diminishing our (human) poietic responsibilities, but rather attempting to bring our greater responsibilities into sharper focus. Thus, the answer to the question of who, or what, possesses poietic responsibility is: all poietic agents. If the agent is an artificial poietic agent, its poietic responsibility is diffused. That is, artificial agents can be held accountable for violating a poietic responsibility, whilst their progenitors have a stronger obligation to fulfil their own poietic responsibilities. That there exists a poietically responsible human behind every artificial poietic agent can be represented as in Figure 3.

(Figure 3: The diffusion of poietic responsibility. A third-order technology (poietically responsible-as-accountability) interacts with other technologies and with the users and prompters who engage it, whilst its designers and users remain poietically responsible.)

35 My emphasis.
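A toy data structure can make the diffusion depicted in Figure 3 concrete: for any artificial poietic agent, one can always look up the human progenitors who remain poietically responsible. The agent names and links below are invented purely for illustration:

```python
# Toy rendering of Figure 3: artificial poietic agents are accountable,
# but behind each stands at least one poietically responsible human.
# Agent names and links are invented for illustration.
progenitors: dict[str, list[str]] = {
    "third-order technology": ["designers", "users/prompters"],
    "bot": ["bot developers", "deploying user"],
}

def responsible_humans(artificial_agent: str) -> list[str]:
    """The artificial agent is accountable for its poietic output;
    the humans returned here remain responsible for it."""
    return progenitors.get(artificial_agent, [])

print(responsible_humans("bot"))  # ['bot developers', 'deploying user']
```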

This aids in arriving at the following allocation of (poietic) responsibilities (Table 7) amongst the actors involved in paradigmatic cases of misinformation, thereby providing an answer to RQ3 (the question of the best framework for understanding responsibility in practices of misinformation) and RQ4 (the question of what responsibilities agents possess to rectify or prevent the harms of misinformation).

| Actor | Poietic agent (moral & epistemic agent)? (Y/N; H = human, A = artificial) | Can be held accountable for acts of poiesis? (Y/N) | Can be held responsible for acts of poiesis? (Y/N) | Additional responsibilities in virtue of the partnership between human and artificial poietic agents? (Y/N) | Increased poietic power resulting in greater responsibilities toward the infosphere? (Y/N) |
| --- | --- | --- | --- | --- | --- |
| (i) (a) Human 1 | Y & H | Y | Y | Y | Y |
| (ii) (b) Bot | Y & A | Y | N | N | N |
| (iii) GPT-3 | Y & A | Y | N | N | N |
| (iv) OpenAI | Y & A | Y | Y | Y | Y |
| (v) Bot network | Y & A | Y | N | N | N |
| (vi) Bot developers | Y & H | Y | N | Y | Y |
| (vii) Human(s) 2, 3, 4, n | Y & H | Y | Y | Y | Y |
| (viii) Poor-quality newspaper | Y & A | Y | Y | Y | Y |
| (ix) Twitter | Y & A | Y | Y | Y | Y |
| (x) Twitter algorithm | Y & A | Y | N | N | N |

(Table 7).

The structure of poietic responsibilities is as follows. In brief, all poietic responsibilities are both moral and epistemic responsibilities. This is due to the blurring of the distinction between ethical and epistemological acts in information ethics and to the dual function of poiesis: poiesis alters and constructs the moral situation of agents and produces knowledge in the form of semantic artefacts. Russo (2022, §9.5.2) suggests that, when engaging in acts of poiesis whilst in a partnership with technologies, we (humans) possess ‘a responsibility that is at once epistemic and moral’.

Floridi (2013, 70-77) acknowledges that one’s ‘first duty is epistemic: whenever possible, we must try to understand before acting’. That is, we must first understand and recognise whether our actions may (or may not) contribute to the flourishing of an informational entity, and whether we ought (or ought not) to contribute to this flourishing, thus attempting to predict the moral outcome of our actions. It is, pithily, an antithesis to the current approach adopted by Silicon Valley: instead of the imperative to “move fast, break things”, we ought to “slow down, tend to things”.

The epistemic component of poietic responsibilities pertains to the necessity of predicting the consequences and results of one’s poietic actions regarding how the informational entity produced might affect the infosphere (information-as-target) or be used by others (information-as-product).

This is the “predictive” aspect of epistemic responsibility within a poietic responsibility. Recall from §2.3 that Simon’s (2015) account highlighted the productive component of epistemic responsibility; this is the “constructionist” component. The moral components of poietic responsibilities are fourfold. In addition to possessing the aforementioned epistemic responsibility, agents also possess one or more (inclusively) of the responsibilities of care, future use, process, and artefactual autonomy. Each of these possesses a further, differentiated structure. The structure of, and the relationship between, poietic responsibilities may be represented as in Figure 4.

(Figure 4: The structure of poietic responsibilities. Poietic responsibilities divide into an epistemic responsibility, with predictive and constructionist (productive) components, and moral responsibilities: care (responsibility-as-virtue; forward-looking), future use (responsibility-as-virtue, capacity, and obligation rooted in authority; forward- and backward-looking), process (responsibility-as-obligation; forward-looking), and artefactual autonomy (responsibility-as-task and authority; forward- and backward-looking).)

§4.2: The Poietic Responsibilities

This section provides an account of poietic responsibilities by sketching their structure and content. It does not, however, provide a complete account of poietic responsibilities, due to the limitations of space and the wide scope of such a project. I outline the contents of each responsibility in turn whilst applying them to issues of misinformation. In doing so, I draw attention to pre-existing interventions regarding misinformation and to how a recognition of each poietic responsibility can provide grounds for other ameliorative practices.

§4.2.1: Responsibility of Care

Turning to the specific content of each poietic responsibility, following Floridi (2013, 75) and Russo (2022, §9.5.2), I suggest that poietic agents possess the responsibility of care. Whilst present within information ethics, a full development of the responsibility of care is found in neither Floridi’s (2013, 75) nor Russo’s (2022, §9.5.2) work. Floridi (2011, 23) notes that ‘poiesis emerges as being more primordial than care’, thereby implying that acts of care emerge from acts of poiesis. This kind of poietic care occurs when ‘an agent cares for the patient of her action when her behaviour enhances the possibilities that the patient may come to achieve whatever is good for it’ (Floridi, 2013, 75). Engaging in a relation of care toward informational entities requires a shift in perspective: from conceiving of the social, and thus also the ethical-political, realm as being constituted by individual objects analysed by examining ‘specific actions or interactions’ (Floridi, 2020, 10) to a relational approach. A relational approach involves adopting the perspective that individual “things” are, instead, constituted by bundles of relations (in the logico-mathematical sense, as in unary, binary, ternary, n-ary relations) which, in turn, relate to other bundles (Floridi, 2020, 9). As such, if one shifts toward a perspective which privileges the relationality, specifically a relation of care, between informational entities, fulfilling the poietic responsibility of care becomes viable. Consequently, fulfilling, and acknowledging the existence of, our poietic responsibilities requires a prior epistemo-ethical act: one must recognise the relata which constitute and connect informational entities and which, when taken in relation to one another, constitute the ethical and political realm.
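To gloss the logico-mathematical sense of “relation” at work here, consider a toy formalisation (my own illustrative construction, not Floridi’s): an informational “thing”, such as a tweet, modelled as nothing over and above a bundle of n-ary relations to other bundles.

```python
# A toy formalisation (illustrative only) of the relational approach:
# an entity is modelled as a bundle of n-ary relations to other bundles.
from typing import NamedTuple

class Relation(NamedTuple):
    name: str      # the relation, e.g. "authored-by"
    relata: tuple  # the entities it connects (unary, binary, ..., n-ary)

# On this view, the tweet just *is* this bundle of relations:
tweet = {
    Relation("authored-by", ("user_1",)),          # relates tweet and author
    Relation("replies-to", ("tweet_0",)),          # relates tweet and tweet
    Relation("shared-by", ("user_2", "user_3")),   # n-ary: tweet and sharers
}
# Caring for the entity then means maintaining the relations that
# constitute it, rather than tending an isolated object.
```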

The recognition of the ethical and political realm as equivalent to the informational realm is, unsurprisingly, controversial. Whilst I have neither the space nor the scope to provide a full defence of information ethics within this thesis, I will briefly address the most relevant objections pertaining to the notion of poietic responsibility: specifically, that information ethics requires a ‘conversion experience’ (Floridi, 2013, 316) or a “take it or leave it” approach (Stahl, 2008, 106). Stahl (2008, 106) argues that ‘at some point [the agent] needs to accept the intrinsic moral value of information entities qua information entities or not’; if one does not accept the ontological foundation of information ethics, then it cannot be used as an ethical framework. This chimes well with the objection that information ethics requires a ‘conversion experience’ (Floridi, 2013, 316), in which one must see ‘the whole project of one’s life’ from the perspective of respecting informational entities, and that ‘they will not be converted unless one starts’ from the position of information ethics. Floridi (2013, 316) bites the bullet and accepts that information ethics ‘resonates with spiritual positions’, which he interprets as a kind of ‘poietic ethics’ that attempts to ‘overcome the polarisation between the self and the other… agent and the environment… [and] the informational entities and their infosphere’. More concretely, Floridi (2013, 316) suggests that ‘there is clearly a way forward for information ethics, which finds in virtue ethics an ally’ (my emphasis). Tentatively, one might argue that one such virtue is the recognition of a relational, informational approach toward the ethical and political realm. Russo (2022, §9.5.2) also notes that the marriage between information ethics and virtue ethics and epistemology promises a fruitful avenue for further research; one such avenue would be reading Vallor (2016) from an informational, or rather poietic, approach. An initial path for future research would be to draw upon virtue theory and existing work on care ethics (Gary, 2022) to identify a constellation of traits which, when cultivated and exercised wisely, would aid in fulfilling the poietic responsibility of care.

Whilst a discussion of the relation between poietic responsibilities and the virtues is beyond the scope of this thesis, as I do not intend, nor attempt, to offer a full account of poietic responsibilities, I suggest the following. Following van de Poel’s (2015, 38-40) taxonomy, the responsibility of care can be understood as a responsibility-as-virtue, which is a forward-looking responsibility. That is, one possesses the virtue, understood here in both the moral and the epistemic sense, of regularly engaging in relations of care with informational entities.

How a poietic agent cares for the infosphere and informational objects (the patients of care) is as follows. Recall that Floridi (2013, 71) claims that ‘the flourishing of informational entities as well as of the whole infosphere ought to be promoted by preserving, cultivating, and enriching their well-being’. Whilst these notions are not fully developed within information ethics, I interpret preservation, cultivation, and enrichment as follows. Preservation refers to ensuring that the informational content of an entity is not corrupted or degraded, and to ensuring the continuing existence of informational entities.

This relationship of care is rooted in ensuring that the informational object remains accessible for future inforgs. That is, there is a sense in which the poietic agent continually exercises their poietic capacities to preserve the entity’s informational content. It is part of our responsibilities toward the infosphere to be able to identify which informational entities deserve greater attention and care at the expense of others. Thus, Floridi (2013, 315) notes that our actions of care will often fall short of reducing entropy in the infosphere, as we necessarily have to destroy some informational entities. This is our ‘tragic predicament’ (Floridi, 2013, 315), resolved only by interpreting a morally good life as a balancing act in which one does more good than evil.
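In a digital setting, the preservation component of care admits of a simple illustration (a sketch only; real digital preservation involves formats, replication, and much else): record a digest of an entity’s informational content so that corruption or degradation can later be detected.

```python
# A minimal, digital illustration of preservation as care: record a
# cryptographic digest of an informational entity so that corruption
# or degradation of its content can be detected later.
import hashlib

def fingerprint(content: bytes) -> str:
    """Digest characterising the entity's informational content."""
    return hashlib.sha256(content).hexdigest()

original = b"The informational content of the entity."
stored_digest = fingerprint(original)

def is_preserved(current: bytes) -> bool:
    """True iff the content has not been corrupted or degraded."""
    return fingerprint(current) == stored_digest

assert is_preserved(original)
assert not is_preserved(b"The informational content, corrupted.")
```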

That is, Floridi (2013, 72-73) presents goodness and evil as being, respectively, non-monotonic and monotonic. In short, good actions, ones which leave the infosphere in a better condition than before, can turn out to be ‘less morally good and sometimes even morally wrong, unintentionally, depending on how things develop’ (Floridi, 2013, 72). That is, actions intended to preserve the status of the infosphere may, depending on ‘what new state the infosphere enters into, as a consequence of the [poietic] process in question’ (Floridi, 2013, 72), contribute to entropy, and thus to evil. For example, one might fail to preserve an informational entity despite one’s best efforts and thus cause entropy within the infosphere (this is noted in §4.2.6 in the discussion of informational sustainability). According to Floridi (2013, 73), goodness is ‘resilient’ in two senses: (i) fault-tolerance and (ii) error-recovery. Regarding fault-tolerance, acts of goodness possess the ability to ‘keep the level of entropy within the infosphere steady despite the occurrence of several negative processes’ (Floridi, 2013, 73, my emphasis). That is, one cannot wholly rid the infosphere of entropy; one can only attempt to temper its spread. Such tempering is found in the error-recovery associated with goodness, whereby goodness can ‘resume or restore the previous entropic state of the infosphere, erasing or compensating any new entropy that may have been generated by processes affecting it’ (Floridi, 2013, 72). Thus, whilst the poietic responsibility of care, and its class, is directed toward goodness, its being exercised does not entail absolute moral goodness, only that it may guide the moral agent toward doing more good than evil.

The cultivation of an informational entity as an act of care can be understood as contributing additional informational content to an entity (if required). What remains vague, however, is how to cash out the ‘flourishing’ of an informational object (Floridi, 2013, 71). Floridi (2013, 77) suggests that to contribute to the flourishing of an informational entity is to contribute to ‘what an informational entity should be and become… determined by the good qualities in, or that may pertain to, that informational entity’. When speaking of informational entities that are humans, Floridi’s (2013, 77) use of the term ‘flourishing’ chimes well with virtue ethics and epistemology. As aforementioned, Russo (2022, §9.5.2) notes that a development of information ethics drawing upon virtue ethics and epistemology would be an interesting line of inquiry to further develop. However, what constitutes the flourishing of informational entities understood as artefacts or technologies remains less clear. Bynum (2006, 169-171), drawing upon the work of Norbert Wiener, suggests that ‘information processing machines’ (Bynum, 2006, 170), understood contemporaneously as digital technologies or artificial poietic agents, aid us in ordering and structuring information insofar as they are adept at processing it. When they flourish, they aid in gathering, storing, sorting, and accessing other informational entities, bringing about order and structure in the infosphere and consequently reducing entropy. Thus, when digital technologies or artificial poietic agents are consistently defective (as outlined in §2.2), generating or suggesting misinformation, they engender entropy within the infosphere. How else does the responsibility of care pertain to the issue of misinformation?

§4.2.2: Care and Misinformation

Recall from §3.2 that I argued that the epistemic climate (the broadest scope of the “epistemic environment”) is dependent upon the wellbeing of the infosphere. As such, to exercise care for the infosphere and the informational entities that constitute it is to care, indirectly, for the epistemic climate. That is, by ensuring that informational entities are preserved, cultivated, and directed toward flourishing, the responsible, caring poietic agent also ensures the quality of the epistemic environment. Regarding misinformation, the responsibility of care suggests that responsible poietic agents will not knowingly share or produce misinformation, in virtue of caring for the infosphere. This does not mean, however, that they will never do so. Recall the argument given by Floridi (2013, 72) about the non-monotonic nature of goodness: that agents will inevitably share or produce misinformation does not foreclose them from restoring the entropy caused by their actions. For example, suppose one retweets a piece of (mis)information. In doing so, one has brought about entropy within the infosphere, thus violating the null law of information ethics – ‘entropy ought not to be caused in the infosphere’ (Floridi, 2013, 62). However, on the grounds of caring for the infosphere, one can eliminate the entropy caused either by destroying the (mis)informational entity or by issuing a corrective. Consequently, recognising the responsibility of care relates to the issue of misinformation by appealing to a justified optimism: whilst misinformation spreads rapidly, and in great quantity, the non-monotonic nature of goodness allows the responsible poietic agent to rectify the entropy it causes by removing misinformation, thus caring for the infosphere.
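The retweet-then-corrective case can be rendered as a toy accounting model (the numbers and interface are invented purely for illustration): an entropic act of poiesis is compensated by error-recovery, restoring the infosphere’s prior state.

```python
# A toy accounting model of error-recovery: an entropic act can be
# compensated by restoring the infosphere's prior state. Invented
# numbers and interface; purely illustrative of Floridi's idea.
class Infosphere:
    def __init__(self) -> None:
        self.entropy = 0.0
        self._history: list[float] = []

    def act(self, delta: float) -> None:
        """A poietic act: positive delta increases entropy (evil)."""
        self._history.append(self.entropy)
        self.entropy += delta

    def recover(self) -> None:
        """Error-recovery: restore the previous entropic state."""
        if self._history:
            self.entropy = self._history.pop()

infosphere = Infosphere()
infosphere.act(+1.0)   # retweeting misinformation causes entropy
infosphere.recover()   # deleting it or issuing a corrective restores
assert infosphere.entropy == 0.0
```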

§4.2.3: Responsibility of Future Use and Misinformation

The responsibility of future use refers to a poietic agent’s responsibility to anticipate the future use of the results of poietic activity and a willingness to intervene if the use of a (semantic) artefact engenders entropy within the infosphere. This relation between the outcome of exercising creative capacities and future action has been noted by Noonan and Gardner (2014, 92), albeit with a focus on a narrower, professionalised understanding of creativity (artists and scientists). Noonan and Gardner (2014, 93-95) introduce the context of ‘post-creative development’, which denotes how events and actions from various social actors can influence the reception of a created object. That is, a creator produces an object, O, at time T1; then, at times T2-n, there exists the period of post-creative development in which other actors can influence the reception and use of O. A key example here is Wakefield et al.’s (1998) paper, which suggested that vaccines cause autism and which, in the period of post-creative development, was used to justify anti-vaccination sentiment. Noonan and Gardner (2014, 96-98) suggest that creators can remain passive or active regarding the realities of the post-development context. To remain passive is not to intervene, whereas to be active is to intervene in the following manner. First, creators identify whether the post-creative development is a threat to societal wellbeing; second, they assess relevant, responsive options, such as publishing a statement on their position on the development (Noonan & Gardner, 2014, 110-112). Then, depending on whether they are an accredited professional, they follow a code of conduct or, if they are not, consult social networks and engage in ethical self-reflection. Finally, they act according to their decision (Noonan & Gardner, 2014, 112).

Expanding this notion to poietic responsibility, Noonan and Gardner’s (2014, 95-110) account chimes well with the diachronicity of information ethics. Insofar as poietic agents remain responsible for the outcomes of their poietic actions, the responsibility of future use is primarily forward-looking: poietic agents must ensure that their creations do not engender entropy within the infosphere. Applied to misinformation, this pertains directly to current anticipatory practices of “prebunking” possible misinformation, rather than debunking existing misinformation (Ecker et al., 2022, 20-21). Prebunking is the practice of exposing agents to what misinformation about a certain topic may look like, its possible sources, and how it exploits the psychosocial mechanisms which engender belief (Roozenbeek et al., 2020; Ecker et al., 2022, 20-21). This responsibility is also backward-looking, on the grounds that, if one fails to fulfil the responsibility of future use, understood as an obligation that some state of affairs does not occur, then one can retroactively be held responsible, in the sense of being blameworthy or accountable (depending on whether the moral agent is artificial).

In understanding the responsibility of future use as an ongoing type of vigilance, that is, as a forward-looking responsibility, one can see a close tie to the responsibility of care. In recognising their responsibility of future use and actively intervening in the post-creative development context (Noonan & Gardner, 2014, 112), poietically responsible agents possess the responsibility of care toward the created informational entity. In cruder terms, the responsibility of care “kicks in”, or becomes increasingly urgent, once the condition of the post-creative development context (Noonan & Gardner, 2014, 112) requires active intervention from the poietic agent.

§4.2.4: Responsibility of Artefactual Autonomy: Against Automating Entropy Within the Infosphere

The poietic responsibility of artefactual autonomy is as follows: if an act of poiesis results in the production of an artificial agent, then its producer possesses a responsibility (understood as responsibility-as-task and authority) to limit the artificial agent’s autonomy so as to ensure that it does not cause entropy in the infosphere. A key example is Microsoft’s Tay AI which, trained on data harvested from Twitter, began to generate, and tweet, hate speech and misinformation within sixteen hours. Note here that issues of black-box AI and artificial agents come to the fore. Whilst programmers may not know the processes which lead to an artificial agent producing entropy in the infosphere, and thus do not satisfy the epistemic condition for moral responsibility, they do possess the epistemic responsibility of trying to predict the outcome of an artificial agent’s actions. As such, the programmers of Tay can be held responsible for not fulfilling their epistemic responsibility. That Tay was shut down sixteen hours after its launch, and after tweeting more than 96,000 times, is suggestive of a poietic agent’s responsibility of artefactual autonomy (Lee, 2016). Thus, Microsoft (the collective agent) partially recognised and fulfilled the responsibility of artefactual autonomy by shutting Tay down.
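What limiting an artificial agent’s autonomy might look like in code can be sketched as follows. This is not Microsoft’s architecture; the blocklist check stands in for whatever content classifier a real system would use, and the kill switch mirrors the decision to shut Tay down.

```python
# A sketch of the responsibility of artefactual autonomy: bound a
# generative agent's autonomy with a pre-publication check and a kill
# switch. The blocklist stands in for a real content classifier.
from typing import Callable, Optional

BLOCKLIST = {"hoax", "example_slur"}  # placeholder for a classifier

class GuardedAgent:
    def __init__(self, generate: Callable[[str], str]) -> None:
        self.generate = generate  # the underlying artificial agent
        self.enabled = True       # kill switch (cf. shutting Tay down)

    def post(self, prompt: str) -> Optional[str]:
        if not self.enabled:
            return None
        draft = self.generate(prompt)
        if any(term in draft.lower() for term in BLOCKLIST):
            self.enabled = False  # halt rather than automate entropy
            return None
        return draft

agent = GuardedAgent(lambda p: f"reply to: {p}")
print(agent.post("hello"))  # posts normally until the guard trips
```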

§4.2.5: Artefactual Autonomy and Misinformation

That entropy is brought about in the infosphere by artificial agents is highlighted by the relation between the spread of misinformation and artificial agents. This has been noted during the COVID-19 pandemic: an estimated 20-30% of misinformation surrounding the virus was either produced or shared by Twitter bots, and 66% of known bots, which are more prone to spreading misinformation, posted about COVID-19 (Himelein-Wachowiak et al., 2021, 5).36 Bots are also ‘particularly active in amplifying [low-quality] content in the very early spreading moments, before an article goes viral… [and] target influential users through replies and mentions’ (Shao et al., 2018, 3). From the perspective of information ethics, the use of bots to generate and disseminate misinformation is an automation of entropy within the infosphere. This dovetails with Russo’s (2022, §9.4) claim that semantic artefacts (which encapsulate knowledge) are co-produced in the partnership between artificial and human poietic agents. Thus, artificial poietic agents can be designed in such a way as to co-produce misinformation, thereby causing entropy within the infosphere. Furthermore, if entropy within the infosphere is automated, this also results in the degradation of the epistemic climate (the broadest construal of the “epistemic environment”) and of concurrent epistemic environments. This is because the informational entities which constitute the infosphere also constitute the epistemic climate and epistemic environments (recall from §3.1 the suggestion that the relation between epistemic environments and the infosphere is one of supervenience). As such, to shirk the responsibility of artefactual autonomy and to allow for the automation of entropy is to inflict a harm upon both the epistemic climate and the infosphere.

36 This, however, is only a rough estimate, given that methods of detecting bots on social media are only moderately reliable and different detection tools yield different results (Martini et al., 2021).
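Footnote 36’s point about unreliable detection is easy to see from the inside. Below is a deliberately naive, invented bot-scoring heuristic; because every tool weights different signals and thresholds, different tools can classify the same account differently.

```python
# A deliberately naive, invented bot-likelihood heuristic, shown only
# to illustrate why detection tools disagree: each tool weights
# different behavioural signals and thresholds.
def bot_score(posts_per_day: float, account_age_days: int,
              followers: int, following: int) -> float:
    score = 0.0
    if posts_per_day > 50:                    # hyperactive posting
        score += 0.5
    if account_age_days < 30:                 # very new account
        score += 0.25
    if following > 10 * max(followers, 1):    # lopsided follow ratio
        score += 0.25
    return score  # another tool's weights would yield another verdict

print(bot_score(posts_per_day=120, account_age_days=7,
                followers=3, following=900))  # 1.0: likely automated
```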

Whilst the question of who ought to be held responsible (in the sense of blameworthiness) for the automation of entropy within the infosphere is beyond the scope of this thesis, an initial answer could be made by appealing to the idea of diffused responsibility I established in §4.1 (Figure 3). However, a complete answer would depend on the position one takes in the debate on the responsibility gap in AI (Nyholm, 2018; Danaher, 2022), which is further complicated insofar as generative AI disrupts notions of authorship (Kaminski, 2017; Acosta, 2012). Such a project would be a promising avenue for further research. What does fall within the remit of this thesis, however, is the acknowledgement that, when acts of poiesis are directed toward the production of artificial agents, those producing them possess an additional responsibility-as-task and responsibility-as-authority to ensure that artificial agents are not causally accountable for bringing about entropy. That is, in virtue of occupying the role of “creator”/programmer, the agent must ensure that what is created does not engender entropy in the infosphere. One might object that this responsibility amounts to nothing more than “thou shalt not make bots that spread misinformation” – an empty imperative, given that the motivations of those producing bots are precisely to spread misinformation and disinformation (Kollanyi, 2016; Shao et al., 2018; Howard et al., 2018; Pomerantsev, 2019; Ferrara, 2019).

However, I suggest that platforms (understood as collective agents) such as GitHub and Twitter (insofar as the latter is the platform on which entropy is automated) possess a greater degree of the responsibility of artefactual autonomy because they provide the means to the means of (bot) production.

That is, designers of ‘poietically enabling environments’ (Floridi, 2013, 161) also guide their inhabitants toward certain modes of poiesis. Think, for example, of Twitter’s 280-character limit for tweets: users have their poietic capacities constrained by the design of this environment, which bears upon how informational entities are produced, and thus also upon the state of the infosphere.

That designers possess this double power, not only of poiesis but also of guiding other poietic agents’ acts of poiesis, implies that they possess a higher degree of the responsibility of artefactual autonomy. This chimes well with Floridi’s (2013, 272-274) notion of infra-ethics, in which he notes that the active structuring of environments can aid, or hinder, good moral action. Yet, given that the production of informational entities, via poiesis, is also an epistemic act, to guide actions of poiesis is also to guide our epistemological practices. As such, the structuring of poietic environments can also aid, or hinder, good epistemic action; there is thus also scope to understand the architecture of these environments as an issue of infra-epistemology (to adapt Floridi’s (2013, 272-274) term).37 As digital environments are ‘poietically enabling environments’ (Floridi, 2013, 161), and digital environments are also epistemic environments (recall §2.1.1), some epistemic environments are designed. From this, it follows that designers of such environments have a responsibility to ensure that they are designed in a way that does not negatively alter epistemic practices.38 Furthermore, within our current information systems there exist ‘obligatory passage points’ (Simon, 2015, 154), which afford their owners a significant amount of poietically enabling power. As the code on which bots run is often open-source and shared widely on GitHub and other popular repositories for code (Kollanyi, 2016; Millimaggi & Daniel, 2019), these platforms, understood as passage points, have the responsibility-as-authority to ensure that they do not enable users to flout their responsibility of artefactual autonomy. How, then, in the process of creating these environments, do designers remain responsible? And, on a general level, how do other poietic agents responsibly produce informational entities?

§4.2.6: Responsibility of Process

The responsibility of process is a poietic agent's duty to ensure that the production of informational entities (via poiesis) is done sustainably. Sustainability, understood informationally, consists in retaining the informational content, or ontological status, of the entities that contribute to the production of a new informational entity. That is, in producing informational entity z, other informational entities, w, x, and y, may have to be incorporated or destroyed to allow for the emergence of z. Thus, on the RPT model (Floridi, 2013, 21-22), the responsibility of process is concerned with information-as-product. To take the example of printing books, various other informational entities, such as trees and the chemical compounds required for ink, must either be destroyed or incorporated into the final product. That this is a permissible act of engendering entropy within the infosphere is justified because, for Floridi (2013, 315), what matters is the cumulative amount of entropy generated. That is, one might argue that the generation of an informational entity in the form of a semantic artefact incurs a net increase in the informational content of the infosphere. Note here that the discussion of the non-monotonic nature of goodness within the infosphere (Floridi, 2013, 72-73) and the overridability of the respect owed to an informational entity applies to this issue. The responsibility of process distinctly emerges within contexts in which the destruction or incorporation of one informational entity for the generation of another concerns entities of equal, or similar, value. When engaged in poietic processes, agents ought to predict whether the process of poiesis will be (informationally) sustainable, ensure, if necessary, that it is, and attempt to preserve informational entities when engaged in potentially destructive acts of poiesis.
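The book-printing case can be put as a toy calculation (the valuation function and numbers are invented; nothing in information ethics fixes them): what the responsibility of process tracks is the cumulative balance of informational content.

```python
# A toy net-change calculation for the responsibility of process:
# producing z may require destroying or incorporating w, x, and y;
# what matters is the cumulative balance. Values are invented.
def net_informational_change(produced: dict, destroyed: dict) -> float:
    """Positive result: the poietic process enriches the infosphere."""
    return sum(produced.values()) - sum(destroyed.values())

# Printing a book: trees and ink compounds are consumed; a semantic
# artefact comes into existence.
delta = net_informational_change(
    produced={"book (semantic artefact)": 10.0},
    destroyed={"trees": 4.0, "ink compounds": 1.0},
)
print(delta)  # 5.0: a net increase, hence a permissible process
```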

That this dimension of informational sustainability is distinct from, and cannot be reduced to, the responsibility of care is rooted in the different senses of the word "responsibility". Recall that the responsibility of care is best understood as a "responsibility-as-virtue" (van de Poel, 2015, 38; van de Poel & Sand, 2018, 6-8), which may be characterised by a yet-to-be-specified constellation of traits. The responsibility of process, however, is best understood as a "responsibility-as-obligation" (van de Poel, 2015, 38; van de Poel & Sand, 2018, 15). That is, when one engages in the process of poiesis, one has the obligation to ensure that said process is informationally sustainable insofar as the amount of entropy (unintentionally) caused by the poietic act is as minimal as possible. If this is the case, then an agent still possesses the responsibility of care insofar as goodness is resilient

37 Note that this chimes with the argument made in §2.4.1 calling for a recognition that epistemological analyses of technology ought to examine how technologies significantly alter the horizons of our epistemic practices.

38 Note that this argument would benefit from direct engagement with the literature on value-sensitive design (van den Hoven & Manders-Huits, 2017; Reijers & Gordijn, 2019; Jacobs et al., 2021), albeit, due to the limitations of space, I cannot develop it further here. It does, however, remain a fruitful avenue for future research.