EVERYTHING IS UNDER CONTROL:
The function of trust arguments in public deliberations about pursuing a risky course of action.
Jayna Swartzman-Brosky 13312766
University of Amsterdam
Graduate School of Humanities
MA Communication & Information Studies
Master thesis
Word Count: 19,909
Supervisor: dr. Corina Andone
Second reader: dr. Jean Wagemans
Table of Contents
List of Abbreviations
1. Introduction
1.1 Context: Public Faith in Technical Experts & Institutional Policy Makers
1.2 Research Question
1.3 Critical Discussion
1.4 Case-Study: The Kemeny Commission Report on Three Mile Island
1.5 Methodology
2. Theoretical Framework
2.1 Risk Theory: Sociology of Risk
2.1.1 Overview of Risk Theory
2.1.2 Risk as the Domain of Experts and Authorities
2.2 Risk as the Subject of Public Deliberation
2.2.1 Risk from the Perspective of Crisis
2.2.2 Risk in the Court of Public Opinion: Experts on Trial
2.3 Risk Communication
2.3.1 Risk Communication and Consensus
2.3.2 Normative Considerations in Risk Communication
2.4 Accountability in Risk Communication
2.5 Conclusion
3. Analysis of Trust Arguments in Risk Discourse
3.1 Risk Deliberation between Experts and Laypeople
3.2 Appeals to Authority to Convince Laypeople
3.2.1 Overview of Authority Argumentation
3.2.2 Practical Wisdom: Professional Expertise Meets Normative Evaluation
3.3 Trust Strategies in Risk Deliberation
3.3.1 Trustworthiness as a Standpoint in Organizational Communication
3.3.2 Assessing Trust Argumentation
3.3.3 Leveling Moves: Transparency as a Trust Claim
3.4 Conclusion
4. Discussion of The Kemeny Commission Report
4.1 Background on the Public Debate over Nuclear Regulation
4.1.1 A Brief History of Risk Discourse on Nuclear Energy in the U.S.
4.1.2 Description of Data: The Kemeny Commission Report as a Critical Discussion
4.2 Data Analysis of Trust Argumentation in The Kemeny Commission Report
4.2.1 Pragma-Dialectical Analysis of Trust in Each Stage of a Critical Discussion
4.2.2 Trust Claims Rejecting the Credibility of the NRC in the Argumentation Stage
4.2.3 Trust Argumentation and Ethos: The Kemeny Commission
4.3 Conclusion
5. Conclusion
5.1 Overview of Findings
5.2 Limitations and Implications for Future Research
6. List of References
Appendix A
Appendix B
Appendix C
List of Abbreviations
AEC - Atomic Energy Commission
DOE - Department of Energy
MetEd - Metropolitan Edison
NRC - Nuclear Regulatory Commission
TMI - Three Mile Island
U.S. - United States of America
1. Introduction

1.1 Context: Public Faith in Technical Experts & Institutional Policy Makers
It is tempting to believe that we are witnessing an unprecedented erosion of popular faith in the experts and institutional authorities we have entrusted to protect public interests through informed policy recommendations and decision-making. One could point to protests against mask-mandate guidelines proposed by the U.S. Centers for Disease Control and Prevention during the coronavirus outbreak, or to climate denial despite overwhelming scientific consensus on the urgent threats of man-made global warming. This cynicism extends even to the democratic process. During the violent insurrection at the U.S. Capitol over perceived, but unproven, voter fraud in the 2020 U.S. presidential election, the faithlessness of segments of the electorate in their government institutions was apparent. Each of these instances presents qualitative evidence of mass delusion and a general refusal to accept advice from the very institutions we have entrusted to provide it.
Indeed, reactionary counter-claims to expert opinion seem to take up more space in the media than concurring views. Swelling public skepticism of technically and scientifically deduced risks to human and planetary health does obstruct the policy and behavioral changes needed to meet the challenges of our time. However, these protests are neither new nor born from a vacuum (Trumpian or otherwise) during the mid-naughts. This
“contemporary irrationality” (Beck, 1992, 10) is symptomatic of what Ulrich Beck calls
“reflexive modernity”, and it is a portent of public discourse within the “risk society” (ibid.).
In view of the compounding technological advances reshaping traditional social
categories—family, work, leisure, etc. (ibid.), the “risk society” is preoccupied with safety and the distribution of risk (Weimer, 2017, 11), as well as the accompanying opportunities and threats that our iteration-obsessed modernity presents. Scientific innovations in energy, communication, and medicine promise incredible rewards. But they also come with disclaimers. The extent to which we embrace accelerating technologies and advance along the path of modernity is subject to political will and public deliberation.
In addition to Beck’s social theory of risk under the run-away aegis of modernity, there are also historical-political influences to consider. The aftermath of Watergate and
Vietnam in the 1970s seeded the deterioration of public trust in expert opinion (Zaretsky, 2017) as much as the hyper-advancement of technology did. The U.S. has yet to recover from the public relations failures caused by the opacity and myopia of public authorities in their attempts to evade the frustrating and messy process of public evaluation during these highly visible episodes of institutional hubris (ibid.). It is no wonder that, following poor performance in the face of dire scenario forecasts such as a pandemic or melting ice caps, these institutions struggle to enjoin the public to confront harms that are hypothetical but risks that are very real.
1.2 Research Question
These conditions prompt my research aim for this thesis: How are arguments from authority evaluated as trustworthy in risk discourses advanced by experts and public authorities?
Specifically, I will consider how authority arguments are interpreted and received in the public domain after expert credibility has been undermined by a crisis. Once a crisis has occurred, how do authorities reassure the public that future risk-activities are acceptable and that risk-regulations are sufficient? I expect that my research will indicate an imbalance of bilateral trust between the public and risk managers, implying a missing component in the trust-repair strategies formulated for risk communication. I propose that, in addition to ability, integrity, and benevolence, the key element of transparency may be added to defend the trustworthiness of institutional authorities.
The saying goes, “you can’t argue with the facts.” This is the basic tenet behind the
popular neo-conservative pundit Ben Shapiro’s tagline “Facts don’t care about your feelings,”
which is pinned to the top of his Twitter feed (Shapiro, Feb 5, 2016). He often cites empirical studies from economists and evolutionary psychologists to endorse his claims for traditional family roles and the dissolution of the welfare state. On the other end of the political
spectrum, Greenpeace cites expert authority as the conclusive position on global warming, stating on their website, “There’s no more debating if climate change is a reality. Scientists agree: the world is getting warmer and human activity is largely responsible” (Greenpeace, 2021). However, anyone who has ever read an op-ed or Google-searched the “facts” of any issue knows that, in fact, one can argue with facts. Galileo knew this; Donald Trump knows this. And any scholar of argumentation and rhetoric knows this well. If facts are subject to
interpretation, the question is then, who do we trust to interpret and translate them into correct action?
1.3 Critical Discussion
The question of whom we trust to interpret facts has implications for risk communication, wherein “the technical information (the message) is secondary to the real goal of the communicator: ‘Have faith; we are in charge’” (Plough & Krimsky, 1987, 7). Risk communication is typically “expert driven” (Plough & Krimsky, 1987, 5), with emphasis on quantifiable assessments to inform policy decisions related to public health and environmental protection (ibid.). Traditionally, risk communication featured a top-down distribution of information from authorities to laypeople. However, “quantitative models of risk including comparative risk assessment disregard the many value issues embedded in risk analysis” (ibid.). In the 1960s, following public disillusionment with the U.S. military-industrial complex that produced such technological horrors as Agent Orange, napalm, and atomic testing in the Southwest (Zaretsky, 2017), there was “no public consensus that the government can conduct this broad social management of risk in a fair and equitable manner” (Plough & Krimsky, 1987, 6). Additionally, the growing ecological consciousness of the 1970s prompted a slew of exposés on the environmental tolls of industrial production (Zaretsky, 2017, 183-184)1. From this fresh public awareness, “inevitably, conflicts arose between the rational quantitative approach to risk assessment and public perceptions of risk” (Plough & Krimsky, 1987, 6).
Foundational to the framework of risk communication is the discursive gap between scientific experts’ quantitative analysis and the general public’s qualitative evaluation of risk (ibid.). This tension directly connects to the argument patterns and critical questions that Walton (2002) formulated, and that Wagemans (2011) revised, to test the strength of authority argumentation. The tests primarily relate to the macro-contextual and normative evaluations of the source’s expertise and the validity of their assertion (ibid.). The critical questions needed to test authority argumentation align with the ethotic argumentation used in
1 Both Beck and Zaretsky describe how books like Rachel Carson’s Silent Spring and Paul Ehrlich’s The Population Bomb exposed the public to the harms caused by products of industry such as pesticides, pollution, and unequal distribution of resources (Zaretsky, 2017; Beck, 1992).
trust-repair strategies in crisis communication to reestablish claims of “ability, integrity, and benevolence” (Palmieri & Musi, 2020, 274). For expert claims to be received as trustworthy, the source must be able to demonstrate the characteristics of a positive ethos: wisdom
(phronesis); virtue (arete); goodwill (eunoia) (ibid.). Additionally, authority claims must be able to anticipate criticism or counterclaims in the form of refutations from detractors
(Palmieri & Musi, 2020, 277). Expanding on crisis response strategies and image repair in the rhetorical arena (Benoit, 1995; Coombs, 1995), Palmieri and Musi (2020) considered trust-repair as a form of argumentation in the domain of crisis communication. In crisis communication, a speaker responds to the impacts of a past or ongoing event (Coombs, 1995). For this thesis, I explore how an assessment of trust-strategies can be applied to the domain of risk communication, wherein the impacts of a course of action are hypothetical and the event has not yet occurred.
In argumentation theory, citing facts or claims from technical or scientific experts as material evidence to support a position is referred to as an appeal to expert opinion, or an argument from authority (Huenemann, 2004; Kutrovátz, 2012; Walton, 1997; Wagemans, 2011; Wierda, 2015; Andone & Hernández, 2019). Walton, for one, recognized that the vulnerability of authority argumentation is interrelated with the ignorance, or lack of expertise, of the layperson, who is forced to depend on the expert’s premises to reach a resolution on a topic beyond the addressee’s scope of knowledge (Walton, 2002). Wagemans points to Huenemann’s definition of the expert as “someone who is epistemically responsible for a particular domain of knowledge” (Wagemans, 2011, 331; Huenemann, 2004, 250) to underscore that truth is often delegated to experts under normative standards. Wagemans asserts that, in order to evaluate the strength of arguments from authority, “instead of
characterizing an expert in terms of epistemic responsibilities, it is more appropriate to do so in terms of epistemic qualities” (Wagemans, 2011, 331, italics in original). Kutrovátz defines experts as “people who have, or who are attributed by others, an outstanding knowledge and understanding of a certain subject or field” (Kutrovátz, 2011, 2). Inherent in the definition of expert opinion is the credibility of the expert and the truth-value of the premises advanced by them (Walton, 1997, 2002; Wagemans, 2011; Kutrovátz, 2011).
When Kutrovátz references the attribution of expertise by others, we must recognize that expertise is conferred and not self-evident. This test of authority is the type of exchange Walton refers to when he says, “rational thinking outside of science is both possible and
necessary and this type of thinking meets normative standards of adequacy of good and reliable reasoning” (Walton, 1997, 25). Arguments from authority are premises advanced in the complex speech act of argumentation, which requires that two parties are engaged in the resolution of a difference of opinion on the merits (van Eemeren & Snoeck Henkemans, 2017, 1). In the context of risk communication, the speaker, typically a scientific,
technological, or government authority, aims to convince their audience—the public—of the hazards or benefits of adopting a course of action by advancing arguments that are both reasonable and effective. Risk communication is primarily concerned with making decisions about a future course of action (Kessler, 2008, 865; de Vries & Fanning, 2017, 21). In this sense, risk communication is a deliberative activity in the domain of policy-making.
Policy-makers often adopt claims made by technical experts and scientists to defend their position on whether or not to pursue certain actions that will impact the public (Andone &
Hernández, 2019, 196). On issues where outcomes are uncertain or untested, “political decisions on risky matters have often been legitimated by pointing at scientific evidence” (Andone & Hernández, 2019, 197). The strength of these arguments depends upon the beliefs, attitudes, and perceptions of the public being addressed (Rothman & Salovey, 1997, 4). To achieve reasonableness, the claims made must be logically sound, based on evidence and normative standards of rationality. To achieve effectiveness, the audience’s experiences, feelings, and observations must be anticipated and addressed; normative values must be accounted for (van Eemeren, 2010).
1.4 Case-Study: The Kemeny Commission Report on Three Mile Island

To illustrate my research, I will explore the communication disaster created by a consortium of government and commercial utility actors following an accident at the nuclear power plant on Three Mile Island (TMI) in Middletown, PA (World Nuclear Association, 2020; U.S.
Nuclear Regulatory Commission, 2018; Zaretsky, 2017). On March 28, 1979, Nuclear Reactor II at TMI, run by Metropolitan Edison (MetEd) and overseen by the Nuclear Regulatory Commission (NRC), experienced a partial meltdown due to a substantial leak of nuclear coolant (ibid.). There were no immediate or eventual physical or environmental damages to the surrounding area outside of the facility, yet it is considered the most consequential commercial nuclear accident in U.S. history (ibid.). Despite the absence of expert evidence for physical injury or collateral destruction, a large segment of the population
believed the accident caused irreparable harm and that the power plant posed an unequivocal threat to the region and its residents (ibid.). This popular perception was due not to a failure of the technology, but to the failure of the communication strategy to respond effectively to public confusion during and after the crisis. These early failures plagued government and utility efforts to engage the public in a risk dialogue favorable to reopening the plant in the years that followed (Zaretsky, 2017). One could say that the crisis was primarily rhetorical—the only thing physically damaged was the interior core of one of the reactors. The risk of reopening the plant was primarily rhetorical too—after all, risk is inherently hypothetical. But the consequence was very real: the demise of nuclear energy as a viable clean energy alternative in the U.S. for nearly 30 years due to public disavowal (ibid.).
Following the accident at the TMI nuclear power plant, President Carter mandated an executive commission to investigate the incident in response to public fears about nuclear energy. The commission was also tasked with advancing recommendations to improve nuclear safety and oversight in the U.S. The aim was to position the commission as a trustworthy authority on the matter of public safety policy. Instead of nuclear scientists, President Carter convened a panel of experts in law, policy, organizational management, sociology, ethics, public health, and other branches of applied science and technology outside of the nuclear industry (Kemeny et al., 1979). The report was met with mixed reactions from pro- and anti-nuclear advocates, with each side interpreting it as validation of their own position (Lanouette, 1980). Ultimately, however, public fear overrode the economic benefits of nuclear power as a clean, cheap, and even safe energy alternative. Shareholders divested from nuclear power, rendering the entire industry dead in the water for decades (Zaretsky, 2017). The Kemeny Commission’s damning assessment of the NRC’s trustworthiness may have been the nail in the industry’s coffin.
1.5 Methodology

I will begin my examination with a discussion of the theoretical framework of risk
communication in chapter 2. The first section will include an overview of risk theory within the context of discourse studies and be expanded into the topic of risk communication in
democratic deliberation. In the second section, this overview of risk communication will be related to the discursive relationship between the public, risk producers, and risk managers.
Because I am primarily concerned with the dialogic aspects of risk communication as both an argumentative and persuasive speech act, I will discuss how crisis communication can inform the dialectic function of risk communication when the source is an institutional authority.
Special attention will be given to the utility of image-repair for evaluating credibility claims.
Chapter 3 will be dedicated to the phenomenon of trust argumentation. This will begin with a description of the role authority argumentation plays in risk communication, followed by a discussion of how the public receives and responds to authority claims on issues of public safety. The last section of this chapter will turn to how trust arguments can operate as a bridge between the public and policy experts where appeals to authority fail. My theory will finally be illustrated in chapter 4 with the example of Three Mile Island. In the first section, I will provide a brief historical background of the TMI incident and the respective discursive goals of the parties involved. The case-study will focus on the Preface and Overview sections of the Kemeny Commission Report, which I will analyze according to the pragma-dialectical model of an ideal critical discussion. I will identify the ways in which the report’s authors advance claims against the trustworthiness of the NRC, as well as to defend the
trustworthiness of the Commission itself. Throughout I will incorporate insights from behavioral psychology, risk management, decision analysis, social studies, and political theory. I will conclude my thesis with a summary of the insights generated from my investigation and the implications for evaluating and producing adaptable and responsive risk-messaging in an era of accelerated and unpredictable possibility.
2. Theoretical Framework
In order to understand the utility of trust argumentation in risk discourse, it is first necessary to review the foundations of risk theory advanced by the social philosopher Ulrich Beck (2.1.1).
Risk theory offers a framework to understand the sociological stakes involved in
deliberations about whether or not to accept risk-producing enterprises that pose potential harms and/or benefits to human well-being and the environment (2.1.1). I will focus on risk decisions generated at the institutional level, wherein those responsible for producing and regulating risk-based activities, i.e. commercial industries or government regulatory agencies, defend the decision to pursue or curtail risks (2.1.2). After I have established risk theory as a basis for assessing the sociological dimensions of risk discourse, I will explain how risk moves from the domain of experts into the realm of public deliberation (2.2). I will describe how the public’s relationship to risk and risk managers is influenced by past crises (2.2.1).
Because my thesis is particularly concerned with the functions of trust claims in risk deliberation, I will discuss how expert accountability comes into play in the court of public opinion (2.3). Specifically, I will discuss the aim of consensus communication (2.3.1) and the normative dimensions that should be incorporated into risk communication (2.3.2). In Section 2.4 I will demonstrate how crisis communication and risk communication intersect through accountability strategies. While my thesis concentrates on the argumentation strategies used to establish trustworthiness, crisis communication theory provides a useful jumping-off point for considering communication strategies around institutional accountability for addressing public concerns and anxieties over the handling of controversial decisions. Finally, I will summarize my theoretical insights in section 2.5.
2.1. Risk Theory: Sociology of Risk
2.1.1 Overview of Risk Theory
Ulrich Beck introduced the ‘Risk Society’ in his book of the same name to characterize a shift in social consciousness from wealth distribution to risk distribution (1983). Where the Wealth Society is primarily preoccupied with generating opportunities for the production of material benefits, the Risk Society is preoccupied with managing safety, security, and certainty (Beck, 1983; Giddens, 1993; Weimer, 2016). Beck describes the Risk Society as a
new world order primarily concerned with managing the potentially deleterious side-effects caused by industrial modernization. Risk theory is particularly concerned with the
aggregation of invisible harms, such as nuclear radiation, chemical toxins, pollution, genetic modifications, novel disease, etc., created by rapidly advancing technologies to eliminate scarcity (Beck, 1983; Giddens, 1992; Weimer, 2016). Giddens refined the definition of risk in modernity as manufactured risks, which is a more useful lexical term for my purposes.
The opportunities and dangers presented by industrialization and technology in the form of risk illustrate a society “increasingly preoccupied with the future” (Weimer, 2016, 12). Risk assessment is “a systematic way of dealing with hazards and insecurities induced and introduced by modernity itself” (Beck, 1992, 21). In public discourse, risks present a bright side and a dark side. On the bright side, technologies suggest the opportunities of pursuing the new and unknown (Weimer, 2016). By their very existence, risks are “an empowering technique that allows for rational decision-making even in the face of an uncertain future” (Weimer, 2016, 11). On the dark side are the potential harms embedded in risk that are “not truly controllable or fully measurable” (Weimer, 2016, 12). The social dilemma around manufactured risks is whether or not we should accept them (Kasperson, 1983, 15).
2.1.2 Risk as the Domain of Experts and Authorities
Historically, decisions about whether or not to accept risks associated with technological advancement were the responsibility of science and technology experts working for corporations and/or government research programs (Weimer, 2016; Lundgren & McMakin, 2009; Reith, 2004). Risk production is the domain of industrial companies, utilities, and research institutions such as Metropolitan Edison (in the case of TMI), Lockheed Martin, or the U.S. military. Risk regulation is the domain of regulatory authorities such as the Nuclear Regulatory Commission or a local municipal council. Both risk producers and regulators fall under the category of risk managers, or those who are accountable for assessing and
controlling risks. This designation extends to the scientists and technology experts employed by these organizations, who are also perceived as agents accountable for producing and assessing the risks associated with their area of specialization. In this thesis, I am primarily concerned with risk communication generated from risk managers representing the
organizations that are responsible for producing and regulating risks.
In deliberations over the acceptability of various risks, such as the opening of a nuclear power plant or allowing genetically modified foods onto the market, risk managers use risk assessment models to weigh the benefits and costs of pursuing certain manufactured risks (Beck, 1983; Giddens, 1993; Kasperson, 1983). However, “we often don't know what the risks are, let alone how to calculate them accurately in terms of probability tables” (Giddens, 1993, 3). These risks, introduced by new industrial processes and arrived at by technical experts, “require the sensory organs of science” in order to be perceived and interpreted as potential threats at all (Beck, 1992, 163). However, when compounded with each other, across the globe and over generations, the invisible side-effects of modern industry evade measurement and present unforeseeable consequences (Beck, 1983). It is nearly impossible to identify and assess all of the risks posed by industrial by-products against the infinite flux of environmental conditions presented by our dynamic reality (shifting climates, chemical reactions or interactions, latent effects, etc.) (ibid.). Still, the discourse around risk
management takes a vertical, or top-down direction, from risk managers (the experts) to risk receivers—the public.
While government and commercial actors plot the efficacy and consequences of industrial interventions using quantitative methods such as statistical analysis, technical evaluations, and specialized testing procedures, the public is left to grapple with “measures of calculations” (Reith, 2004, 385). Deliberation occurs within the silos of science, technology, and governance, and only trickles down to the deliberative democratic sphere of politics after a course of action has already been rubber-stamped or when a general risk has been realized as a specific crisis or harm. At that point, “risks confront us at the individual level (existentially and otherwise), forcing us to make decisions in the awareness of incomplete knowledge” (de Vries & Fanning, 2017, 20).
2.2 Risk as the Subject of Public Deliberation

2.2.1 Risk from the Perspective of Crisis
Science’s perceived “monopoly on truth” (Beck, 1992, 71) seeds friction and distrust among laypersons who are excluded from the specialized knowledge production that risk managers use in their decision-making processes (Beck, 1983; de Vries & Fanning, 2017). There are
consequences to lopsided discussions that weigh technical evidence over social values. As Beck explains, “Results of measurements, unburdened by a single evaluative word or even the smallest normative exclamation mark… proceeding with the utmost objectivity in a linguistic desert of figures… can contain a political explosive power never reached by the most apocalyptic formulations of social scientists, philosophers, or moralists” (Beck, 1992, 82). In other words, when risk assessments derive the value of protecting human life and the environment from complicated predictive models and abstract statistical calculations, a crisis of credibility erupts if harms eventually do occur. The failure to protect the public from certain technological or industrial harms is interpreted by the public as either betrayal or incompetence on the part of the risk manager (Coombs & Holladay, 2010; Kasperson & Gray, 1982; Kasperson & Slovic, 1988). Public distrust is directed towards the technical experts and institutional authorities responsible for mitigating risks, not towards the technology itself.
The public’s suspicions of risk-generating actions are often disregarded as paranoid or uninformed. However, research has shown that public response to risk is typically affected when risk models and assessments disregard the psychological and social characteristics of hazards (Slovic, 1993, 675). Risks related to new technologies or scientific innovation are
“risks that are inherently probabilistic and unpredictable, and thus generate exigencies for decision-making that have both factual (what is the likelihood of harm?) and normative (how acceptable ought that likelihood of harm be to us?) components” (Majdik & Keith, 2011, 372). Risk assessments are generally oriented towards material proofs (equipment standards, statistics, etc.), but often fail to incorporate the values and beliefs that influence deliberative discussions about the advantages or disadvantages of pursuing a major risk activity. The normative dimensions of risk are often revealed once a crisis occurs and the threats presented by a risk are realized as actual harms (Beck, 1992; Coombs & Holladay, 2010; Kasperson & Gray, 1982; Kasperson & Slovic, 1988).
Such was the situation presented by Three Mile Island, wherein the public debate over the risks of pursuing nuclear power only advanced once a nuclear accident occurred (Zaretsky, 2017; Kasperson, 1983). When Nuclear Reactor Unit II at the TMI power plant suffered a near-meltdown, residents in the surrounding area received conflicting messages from MetEd, the NRC, and major news broadcasters warning of the potential release of dangerous levels of radiation and the possibility of an atomic explosion (Zaretsky, 2017;
Kemeny et al., 1979). The Governor of Pennsylvania advised pregnant women and small children within a 20-mile radius to evacuate the area to avoid exposure to toxic plutonium that could potentially cause unpredictable birth defects, reproductive harms, and even generational genetic mutations (ibid.). Needless to say, the combination of uncertainty and the potentially catastrophic magnitude of the situation ignited fear, anxiety, and outrage. This was compounded by the fact that the municipality had made no preparations for such a disaster: no stocks of iodine were available, and local medical professionals had no training in how to treat radiation poisoning or burns (ibid.). Poor risk assessment had turned into an unimaginable crisis. Although the accident was eventually contained and no physical, medical, or environmental damage was ever found, the social damage was done. The safety and future of nuclear power became a pressing public controversy. The question of whether the U.S. should resume a policy in favor of nuclear power entered the rhetorical arena of democratic deliberation. Citizens organized grass-roots anti-nuclear campaigns, and advocacy groups such as the Union of Concerned Scientists submitted op-eds condemning the failures of risk managers to regulate nuclear energy and ensure public safety (Zaretsky, 2017).
Pro-nuclear scientists and advocates wrote op-eds promoting the safety of nuclear power compared to other energy alternatives (Kasperson, 1980; Lanouette, 1980). President Carter organized the Kemeny Commission to resolve the debate by convening a group of neutral panelists to investigate the factors that led to the accident at TMI and to make recommendations for the industry based on their findings. Their suggestions were broadly anticipated to either defend or reject the continuation of nuclear energy policy in the U.S.
2.2.2 Risk in the Court of Public Opinion: Experts on Trial
Once risks are exposed as a crisis, they are subject to public scrutiny. The value, necessity, and consequences of risk enter the domain of democratic deliberation, typically the jurisdiction of policy-making (Kessler, 2008, 865). In the last half-century or so, risk
discourse has breached the domain walls of science and has been channeled into an important public debate about how we live alongside these technological risks, and whether we even want to (Weimer, 2016, 10). Risk discourse, in our contemporary, rapidly industrializing, technology-dependent society, is omnipresent because “risks are future oriented:
their effect lies in the future while their possible manifestation in the form of disasters is a permanent presence” (de Vries & Fanning, 2017, 21). The haunting of the present by the future is observable in the critical discussions we have about the vulnerability of democracy to social media, of reefs and forests to carbon emissions, or of our bodies to new vaccines and medical interventions. The past can also provide acute examples of what happens when memory, imagination, and unpredictability collide: the accident at TMI influenced public perceptions of risk related to nuclear energy. It also influenced their perception of the institutional authorities who are responsible for
managing those risks. Though the probability of a catastrophic nuclear event did not change between the incident and its aftermath, the memory of the near-catastrophe at TMI proved that the narrowly probable is still possible, and popular representations of nuclear annihilation illustrated that the possible can have apocalyptic consequences (Zaretsky, 2017). According to Beck, “If people perceive risk as real, they are real as a consequence” (1992, 77). Once the public becomes aware of the consequences, “judgments of appropriate risk levels are
inherently problems of ethics and politics” (Kasperson, 1983, 16). Fear and emotion overcome “measures of calculations” and laypeople demand to be part of the
decision-making process about risks and their regulation. When the public is eventually exposed to the harms posed by technologies that had been previously deemed “acceptable”
by technical experts and institutional authorities, the trustworthiness of those decision-makers becomes a subject of debate. In this sense, “debates over risk are often, at root, debates over the adequacy and credibility of the institutions which manage the risk, and not debates over the actual level of risk” (ibid.).
The asymmetry principle asserts that trust is easy to destroy and difficult to create (Slovic, 1993, 676). When the parties involved in a risk discourse have an indirect
relationship mediated by history and mass media, such as the relationship between the NRC, MetEd, and the communities around Three Mile Island, “the playing field is not level. It is tilted toward distrust” (ibid.). Our attention is rarely drawn to positive or neutral events because they are difficult to quantify or define. Additionally, negative or threatening events are often assigned greater significance and salience (ibid.). For example, we are more likely to remember witnessing a car accident than we are to remember any of the cars that we safely pass in traffic on a daily basis. This extends into risk perception where the properties of a hazard—the scope and severity of the consequences—loom larger than potential benefits, which are more difficult to isolate. Similarly, probabilities are more abstract than the popular images and memories that reside in imagination. Even if the risk of a nuclear explosion is .01%, I cannot visualize .01%, nor can I visualize the safety of 99.99%. But I can visualize blistering bodies and the decimated landscape left in the wake of a nuclear explosion because
I watched the HBO series Chernobyl. To some, even the remote possibility of destruction at that level is intolerable. Among risk managers, “Too little attention is paid to risk perception and experience. The normative explosiveness of risks when they materialise, when risks are determined spatially and temporary. At that moment, risks translate into uncertainty,
insecurity, and broken trust” (de Vries & Fanning, 2017, 24). One of the fears evoked by risk perception is the limited control the public has over the risk activity. The public must therefore decide whether they are willing to abdicate responsibility for their health and safety to another party, whom they do not really know and are not certain they should trust (Slovic, 1993). If this is the case, how do risk managers reassure the public that the worst will not happen, and that if it does, they will be safe or protected?
2.3 Risk Communication
2.3.1 Risk Communication and Consensus
Risk communication is a research framework for analysing and evaluating communications, typically generated from authorities and institutions, regarding the benefits, hazards, and tradeoffs of pursuing a certain course of action, or adopting certain behaviors (Plough &
Krimsky, 1987; Weimer, 2017). Lundgren and McMakin identify risk communication as a
“subset of technical communication” concerned with addressing issues of potential health, safety, and environmental harms presented by various technologies and their practical applications (Lundgren & McMakin, 2009, 2). The emphasis in both risk communication and technical communication is on specialized knowledge. As a form of technical communication, Lundgren and McMakin recognize that risk is typically situated within the domain of technical expertise and scientific decision-making. But technical communication is primarily monological; it is concerned with the provision of technical information (ibid.). Risk communication is more often a discursive practice between the “organization managing the risk and the audience carrying on a dialogue” (Lundgren & McMakin, 2009, 3).
Consensus communication is a function of risk communication wherein all the stakeholders who have an interest in a specific risk activity participate in a discussion about how to manage the associated risks. In the example of Three Mile Island, the stakeholders are MetEd, the NRC, and the local citizens residing near the TMI plant. Consensus communication also extends into the realm of conflict resolution. The Kemeny Commission Report on TMI falls into this category. The commission’s authors were tasked with
conducting an investigation into the conditions that contributed to the accident. Following their investigation, they were to provide recommendations for protecting public safety against the risks associated with nuclear energy. As such, the report represents the attempt to resolve a difference of opinion regarding the merits of pursuing nuclear energy in the wake of the crisis at TMI, taking all stakeholders’ perspectives into account. Given public perception of institutional authority after TMI, the report specifically had to address issues of
accountability in the management of nuclear energy.
2.3.2. Normative Considerations in Risk Communication
Normative arguments derive from common topoi, defined by Aristotle as “assumptions common to all subjects” (Hill, 2003, 64). The topoi for deliberative discourse are related to happiness, “the chief good, the one for the sake of which other goods are chosen” (Hill, 2003, 72). While definitions of happiness vary, happiness as a feeling is the absence of unpleasant feelings such as fear, dread, and worry (ibid.). When risk producers and regulators overlook human factors such as uncertainty, risk tolerance, individual consent, personal control, unfair distribution of costs and benefits, and feelings of doom or dread, community impacts are absent from the decision-making equation (Slovic, 1993). It is not hysteria and ignorance, then, that provoke outrage, but concerns about whether risk calculations factor in the public’s best interests.
This is especially true in risk communication, wherein the audience must make a reasonable assessment of the cost and benefits of pursuing a course of action by reconciling claims provided by experts with their personal experiences and beliefs. The audience’s perceptions of trustworthiness are informed by their personal experiences and attitudes towards both the source of the risk and the risk activity itself. Slovic points out that “the limited effectiveness of risk-communication efforts can be attributed to the lack of trust. If you trust the risk manager, communication is relatively easy. If trust is lacking, no form or process of communication will be satisfactory. Thus, trust is more fundamental to conflict resolution than is risk communication” (1993, 677).
In a sense, the risk manager asks the audience to override their personal doubts and fears in order to accept an expert’s risk assessment. In order to be successful, the risk
communication must provide reasons, in the form of argumentation, for the public to accept the speaker’s assessment above their own pre-existing experience and assumptions. For
instance, locals who lived near the nuclear power plant at TMI had internalized catastrophic images of nuclear devastation, which they had witnessed in news reports of the atomic bombing of Hiroshima. These images inflamed fear about the magnitude of harm from a nuclear incident (Zaretsky, 2017). Since they had already experienced an accident at TMI, local perceptions of the probability of harm were also distorted. The risk of a nuclear accident at the TMI plant remained relatively unchanged before and after the accident. However, the reality of people’s experience meant concerns over the outcome of a hypothetical event (what if...)
overshadowed their concerns over the likelihood of the event. In fact, a catastrophic event was perceived as more likely once the original crisis exposed flaws and failures of nuclear oversight and plant management. These failures damaged perceptions of the trustworthiness of the industry, regulators, and the utility.
Risk perception is often influenced by beliefs or attitudes stemming from personal experience and cultural context. Past crises are often the origin of distrust impeding
acceptance of future risk-based activities. One could say that risks are a form of unrealized crises, and risk aversion is the side-effect of past crises. Attitudes and perceptions about risk are often colored by previous crisis responses, as well. Once a crisis has occurred related to a particular industry or technology, trust is damaged, and efforts must be made to rebuild it over time. Although relatively minor in the scheme of things, the accident at TMI amounts to a poorly managed crisis situation. A primary objective of the Kemeny Commission was to address and correct perceptions of incompetence and arrogance that the public associated with the nuclear industry and regulators, in order to defend nuclear energy as an acceptable risk if properly managed.
2.4 Accountability in Risk Communication
Anticipating disaster is foundational to the purpose of risk assessment. If an organization fails to foresee a potential crisis, it has failed to adequately assess the risk, and has therefore demonstrated an inability to protect the public from harms related to its operation (Benoit, 2014, 184). In order to restore public trust after a crisis, organizations must bolster their ethos by projecting an image of accountability (integrity) and competence (ability) (Mayer et al., 1995). We often associate crisis communication with a direct, intentional act; however,
“Responsibility can appear in many guises: for example, a business can be blamed for acts
that it performed, ordered, encouraged, facilitated, or permitted to occur (or for acts of omission or poorly performed acts that it appears responsible for)” (Benoit, 2014, 177). Risk consequences fall into the category of harms that are permitted to occur. If an organization or regulator determines that a calculated likelihood of harm is acceptable, they essentially approve the possibility of a future crisis. This presumes a certain degree of risk tolerance among those who would absorb the impacts of a crisis situation. However, “acceptable levels” are often determined by internal metrics based on statistical models of potential material impacts.
Internally determined “acceptable levels” are typically not based on public consent
(Kasperson, 1983, 16). Therefore, if a risk is eventually actualized as a harm, the organization that determined the harm was possible, and acceptable, even if only remotely, permitted the harm to occur. Excluding the public from deliberation on “acceptable levels” places accountability for the final decision in the hands of the risk producer. Whether the party associated with the crisis is indeed responsible for the event is irrelevant. When blame is assigned to an
organization for a crisis event, the organization responds to public perception in order to protect their reputation (Benoit, 2014, 178). The same is true for risk communication. The associated risk activity is perceived as a potential threat and the risk manager as the agent responsible for allowing the threat to become a harm.
Since “crisis is a risk manifested” (Coombs & Halliday, 2010, 4), risk communication, like crisis communication, is primarily concerned with the negotiation of accountability: “the ultimate theme featured is the integrity and legitimacy of the organization” (Coombs &
Halliday, 2010, 1), and particularly the organization producing the risks. Risk communication delivered by the organization responsible for managing risks must account for the potential harms suffered by external victims. Risk communication is then “pre-crisis communication”, and must therefore address how risks will be managed to prevent them from becoming harms.
The rationale used to defend risks “must be sustainable against time and scrutiny” (Coombs
& Halliday, 2010, 9). Scientific data and probabilistic models change over time as new information and knowledge are developed (Ceccarelli, 2011, 199). Therefore, the most sustainable defense for pursuing a risk is the credibility and integrity of the organizations in charge of managing it. Successfully defending risk activities requires that the responsible organization anticipate the causal links that can be attributed to its chain of command, should a crisis occur (Kessler, 2008; de Vries & Fanning, 2017). When risk managers’ credibility and integrity are called into question by the public, the crisis “is the result of a failed relationship” (Coombs & Halliday, 2010, 9).
Risk communication must contend with the history of previous crises and popular representations that inform public opinion about adopting a certain risk. In a sense, risk communicators must still address the wounds from traumas that have not yet healed in the public psyche. Similar to crisis communication, risk communicators must demonstrate that they understand the nature of the concerns that are relevant to their audience (Lundgren &
McMakin, 2009; Coombs & Halliday, 2010, 4). They must be able to address collateral or indirect consequences that the proposed risk activity implies (Benoit, 2014, 182).
Additionally, risk communicators should address the magnitude of potential harms or
“perceived severity” of a crisis event should it occur (Benoit, 2014, 182). Will radiation exposure from the plant cause cancer or genetic mutations? If so, how many generations will be affected? What is the fall-out radius of a nuclear blast? Will nearby residents have time to escape before being exposed? Benoit warns that “trying to make a serious problem seem trivial can create a backlash” (Benoit, 2014, 184). Brushing aside stakeholder concerns is perceived as a signal of indifference and lack of compassion. Failing to address the perceived threats behind certain risks makes it appear that the organization cares more about charging ahead than assuring public safety.
In this chapter I described Beck's risk theory as the basis for understanding the issues at stake in democratic deliberations about risk. As new technologies are introduced to enhance the administration and operationalization of resources—energy, information, medicine—on a global scale, governments and policy-makers are tasked with regulating the risks of
embracing these advancements. I explained how risk decisions were traditionally the domain of risk producers and institutional authorities, who rely on technical and scientific evidence to defend the acceptability of certain risk levels for the public. Risk, as a product of perception and manufactured probability, is both socially constructed and institutionally generated (Reith, 2004, 385; de Vries & Fanning, 2017, 19). Risk becomes the subject of democratic deliberation once the public perceives the risk as an actual threat in the form of material harms and consequences. This process is often prompted by a crisis, such as Three Mile Island, that exposes the potential severity and likelihood of the risk realized. The crisis also makes the public aware of their vulnerability to decisions made by risk experts without their
consent. The debate then becomes less concerned with the causes of risk than with those accountable for managing them (Lundgren & McMakin, 2009, 2).
3 Trust-Strategies in Risk Discourse
In the previous chapter, I provided an overview of risk theory, developed by sociologist Ulrich Beck, as a framework for communicating the costs and benefits of accepting major risk activities. Emphasis was placed on the perception of social and environmental risks that are the invisible side-effects of modern industry and technological interference in our daily lives. I then described risk communication in the sphere of democratic deliberation, wherein a critical discussion develops between multiple parties regarding risk policies. Risk
communicators may aim to achieve a variety of outcomes; however, my focus will be on two objectives: 1) persuading the public that those responsible for risk management and assessment are trustworthy, and 2) including public values in the argumentative framework for defending a course of action perceived as high-risk.
In this chapter, I will describe the function of trust-strategies as a tool of reasonable argumentation and develop a theory for reinforcing trust-strategies around issues of risk specifically. Section 3.1 will explain how public attitudes are assimilated into risk
deliberations advanced by the institutions that serve as authorities on risk. I will explain how professional expertise and experiential expertise inform the direction of argumentation (3.1.1). Section 3.2 is dedicated to reviewing authority argumentation and claims to
professional expertise, as the source of risk communication (3.2.1). Then I will consider how public refutations can counterbalance authority claims to achieve phronesis or practical wisdom (3.2.2). Section 3.3 will introduce the concept of trust argumentation as an implicit bridge between authority and public perceptions of institutional credibility. First, I will
review Palmieri and Musi’s concept of trust repair, developed from crisis management theory (3.3.1). The argumentation structure for trust claims is consistent with the extended
pragma-dialectical analysis of how argumentation is advanced in a critical discussion (3.3.2).
Then I will propose how the element of transparency is used to defend organizational trustworthiness in addition to ability, integrity, and benevolence (3.3.3). Section 3.4 will provide an overview of my conclusions.
3.1. Risk deliberation between experts and laypeople
As stated in the previous chapters, there has always been public skepticism of agencies and
organizations responsible for distributing and managing risk. This is particularly endemic in the U.S. where distrust of expertise and authority is a prevalent part of the cultural identity (Hofstadter, 1963). Still, even if risk decisions are finally determined at an institutional level, public support is necessary to avoid controversy, protests, or litigation. Slovic explains,
“Because it is impossible to exclude the public in our uniquely participatory democracy, the response of industry and government to this crisis of confidence has been to turn to the young and still primitive field of risk communication in search of methods to bring experts and laypeople into alignment and make conflicts over technological decisions easier to resolve” (Slovic, 1993, 676).
For organizations responsible for producing or regulating risks, deliberations aimed at achieving public support introduce an exigency. The exigency presents a rhetorical situation or a problem to be resolved through a critical discussion (Bitzer, 1968). A rhetorical arena opens up, within which multiple voices contribute to a difference of opinion on how a
specific event (or potential event) should be interpreted and managed (Frandsen & Johansen, 2017). The responsible organization typically takes the reins of the conversation, advancing authority arguments to defend the proposed course of action as reasonably safe. Authority arguments are claims made by experts in a specialized knowledge area (e.g. nuclear energy, GMOs, etc.) to validate a standpoint (Huenemann, 2004; Kutrovátz, 2012; Walton, 1997; Wagemans, 2011; Wierda, 2015). Expert appeals are often associated with scientific claims in a deliberative process, where the scientist submits and certifies evidence derived through specialized scientific methods to reach a conclusion (Andone & Hernández, 2019).
Risk communicators are instrumental in how we come to terms with larger
generalized risks such as industrial pollution or genetically modified foods, and provide the context for “how risks are conceptualized, identified, measured, and managed” (Kasperson, 1996, 97). Industries and regulators have placed risk assessments at the top of their agenda to assure the public that safety is their first priority (Slovic, 1993, 676). In order to maintain support for a major risk activity, the public must trust that risk managers are both capable and motivated to protect the public from harm. However, “the field of risk assessment has
developed to impart rationality to the management of technological hazards” (ibid.).
Technical remedies and equipment evaluations are often confusing or opaque to laypersons, and fail to directly address the fears and anxieties that inform how individuals process risk decisions. Yet appeals to authority are the hallmarks of risk communication.
3.2 Appeals to authority to convince laypeople
3.2.1 Overview of Authority Argumentation
Walton undertook the first comprehensive accounting of appeals to expertise in 1997. He, and successive researchers (Huenemann, 2004; Kutrovátz, 2012; Wagemans, 2011; Wierda, 2015), primarily focused on authority as “expert opinion”, emphasizing epistemic claims defended on the grounds of an individual’s specialized knowledge within a specific field, validated by professional training and certification (professional expertise) or by consistent personal knowledge (experiential expertise) (Walton, 1997, 2002; Majdik & Keith, 2011;
Burgers, De Graaf, & Callaars, 2012). The perceived authority of the claim’s source warrants the acceptability of the claim as true (Wagemans, 2011).
In risk discourse, professional authority is assigned to institutions charged with producing and regulating risks, and experiential authority is represented by stakeholders who will be impacted by risk outcomes. The authority vested in institutions is based on the
combined expertise of individuals charged with determining policy recommendations. For instance, the NRC was the authority tasked with regulating the nuclear power plant at TMI.
In this role, they are responsible for communicating recommendations to the nuclear industry and for assuring the public that nuclear energy is safe on the basis of their oversight. The truth-claims of the NRC are ensured by the reputation of the scientists who staff the agency. The NRC’s authority is defined by consolidated professional expertise within one institution, whose combined knowledge implicitly warrants the truth-value of its recommendations.
Following the accident at TMI, the NRC’s position as a credible authority for overseeing safety and regulating the energy industry was compromised: the NRC had failed in its obligation to prevent an accident. Therefore, President Carter established the Kemeny Commission to investigate the NRC’s oversight structure and operations, in addition to the technical factors that contributed to the accident. Based on its investigation, the Kemeny Commission made recommendations for mitigating future risks associated with reopening the TMI plant, and with the pursuit of nuclear energy in general, in the form of The Kemeny Commission Report. The commission was composed of a heterogeneous group of energy, ethics, management, policy, and technology experts from outside the nuclear industry. Therefore, the commission’s
institutional authority was underwritten by their combined expertise in assessing risk activities and developing policy recommendations.
Walton recognized that the vulnerability of authority argumentation is interrelated with the ignorance or in-expertise of the layperson, who is forced to depend on the expert’s premises to reach a resolution on a topic beyond their scope of knowledge. The Kemeny Report, as a discussion on risk, responds to the public’s doubts and concerns by advancing claims backed by the commission’s combined expertise.
Many argumentation scholars have interrogated the soundness of authority arguments related to decision-making policies based on professional expertise alone. Wagemans
essentially collapsed the analysis of authority argumentation into a subset of symptomatic argumentation. By doing so, he simplified the evaluative procedure for assessing the relevant sub-arguments necessary to support authority claims. For this reason, I will focus on Wagemans’ argument structure for the purposes of this thesis. This structure accounts for the justificatory force of the explicit arguments from expert appeals, explaining that “argumentation from expert opinion is conceived as argumentation from authority, which is a subtype of symptomatic argumentation” (Wagemans, 2011, 335). Thus, he formulated expert appeals as follows:
1 Opinion O is true or acceptable.
1.1 Opinion O is asserted by expert E.
1.1’ Being asserted by expert E is an indication of being true or acceptable.
The associated general critical question may then be formulated as follows: “Is being asserted by expert E indeed an indication of being true or acceptable?” (Wagemans, 2011, 336).
However, Wagemans’ formulation of authority claims takes for granted one critical question originally proposed by Walton, relating to the reliability of the expert source. This is Walton’s trustworthiness question: “Is E reliable as a source?” (Walton, 2002). In risk
deliberations, such as the one between the federal government and the public about whether to reopen TMI, implicit arguments must be advanced to defend the trustworthiness of the authority defending the risk. Because the NRC betrayed the public’s trust, the Kemeny Commission must implicitly defend their own trustworthiness as a source for the claims advanced in their report in order to persuade the public to accept their recommendations.
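To illustrate how this extended structure applies to the case study, Wagemans’ schema can be instantiated for the Kemeny Commission; the wording of the opinion below is my own illustrative paraphrase, not a quotation from the report:

1 Opinion O (“Nuclear energy can be pursued safely under reformed oversight”) is acceptable.
1.1 Opinion O is asserted by expert E (the Kemeny Commission).
1.1’ Being asserted by the Kemeny Commission is an indication of being acceptable.
1.1’.1 (implicit) The Kemeny Commission is reliable as a source: it is independent of the nuclear industry and combines relevant professional expertise.

The implicit sub-argument 1.1’.1 supplies the answer to Walton’s trustworthiness question that Wagemans’ schema leaves unexpressed.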
3.2.2. Practical Wisdom: professional expertise meets normative evaluation
“Experiential expertise” and “professional expertise” are mutually dependent (Majdik & Keith, 2011). On the one hand, professional or scientific expertise is accepted as a “normative model of inquiry” in which the “unbiased accumulation of evidence and impersonal testing matters most. Any evidence that self-interest, advocacy, etc. are present is a breach of standards” (Walton, 1997, 17). On the other hand, individuals are “experts” on their own values and beliefs. They are also experts on the cultural contexts that inform their attitudes on what harms are tolerable and therefore acceptable. As Walton states, “Rational thinking outside of science is both possible and necessary and this type of thinking meets normative standards of adequacy of good and reliable reasoning” (1997, 25). Further, assimilating the beliefs and values of the affected population “maximizes the available inventional resources that are pertinent to and impinge upon a given problem or situation” (Majdik & Keith, 2011, 376). Expert appeals are fallacious only when they are used to dismiss or obstruct doubts or counterclaims that might have a bearing on the direction of a critical discussion (van Eemeren, Garssen, Snoeck Henkemans, et al., 2014, 97-99).
Recent scholarship on appeals to expertise approaches authority claims as an
intersubjective, dyadic activity, “complicating both simplified separations between sciences and publics, and the indiscriminate collapsing of any boundary between technical experts and lay people” (Majdik & Keith, 2011, 372). Developing this research, Majdik and Keith have proposed a reconsideration of the function of authority arguments as a means for aligning expert opinion and public attitudes. From this approach, both experts and laypersons participate as equal contributors to a public deliberation about risk. The public recognizes their dependence on authorities to make informed decisions based on the relevant information provided by experts. “Practically, the question is not whether expertise should have authority, but what the bounds of that authority should be, and how inclusive they are” (Majdik &
Keith, 2011, 372). The public informs experts about what their concerns are related to a particular risk activity, and experts respond according to the contextual factors raised through public input. Specifically, Majdik and Keith propose the following reorientation to authority arguments:
We think there are two moves that would allow us to reduce the tensions between expertise and democracy. First, understanding expertise not as, at its core,
oriented toward a (specialized) subject matter but toward argument, as a deliberative process, redirects it from a focus on knowledge to a focus on judgment: expert judgments are those backed-up by a certain kind of
argumentation. Second, this process of argumentation is called into being by a problem or exigence. The argumentation that constitutes expertise does not reside in the knowledge or experience of the arguer (thus argumentation is not simply a tool for asserting expertise), but relative to a problem; expertise invokes not a relationship to specialized knowledge but to the ability to respond
appropriately to problems. (2011, 372)
Within this framework, the risk activity presents an exigence to be resolved. The risk communicator, in the role of specialized expert, advances justifications relevant to public perceptions of risk. This formulation remains loyal to the normative merits of scientific input as
“a cumulative buildup of knowledge based on premises that are solidly established and verified by objective evidence and conclusions drawn from these premises only by rigorous logical proof.” (Walton, 1997, 13). At the same time, it reconciles the conflict around authority arguments in democratic discourse:
“It is not just the most extreme proponents of postmodernism who have difficulties with science as representing rational thinking. Daniel Yankelovich argues that what he calls the ‘culture of technological control’ is undermining the ability of the public and the experts in democratic countries to make rational decisions on how to deal with problems they currently confront. The culture of technological control assumes rightly that policy depends on highly specialized knowledge and skills possessed only by scientific and technical experts” (Walton, 1997, 12).
While the concerns of laypeople are essential to informing which standpoints and arguments risk communicators adopt, Majdik and Keith warn that “the bounds of expert authority narrowed to the individual enacting a personal version of expertise defeats the pragmatic function of expertise” (2011, 376). Personal expertise should not negate or replace expert opinion, but should instead be incorporated into the process of expert reasoning. Majdik and Keith
instead propose that authority arguments in risk communication adopt Aristotle’s deliberative mentality of phronesis, wherein “normative considerations are fully entwined with the
factual/technical ones” (2011, 376).
Phronesis is a process of practical wisdom (Majdik & Keith, 2011). It combines professional expertise (the scientific method, empirical study) with normative evaluations (ethics, foresight, fairness, etc.). Prudence and precaution are balanced with ambition and capacity. As an epistemological orientation, phronesis is disposed towards reason and good judgement in the pursuit of happiness-producing enterprises (Aristotle, Nicomachean Ethics, 1140).
When phronesis is assumed within the argumentative logic, two criteria are available to evaluate the strength of expert appeals: “First, whose interests are at stake, and so what norms ought to count? Second, did those with a stake in the resolution of a complex problem or the reduction of harm consider, with reason and from all relevant normative grounds, a set of choices before choosing what action to pursue?” (Majdik & Keith, 2011, 377). This formulation is especially relevant to policy-makers who rely on scientific expertise to defend actions that may have social and political consequences (Andone & Hernández, 2019, 196).
Whereas the schema for authority argumentation describes the implicit justification linking the source’s authority to the soundness of the claim, it does not account for the trustworthiness of the source. Andone and Hernández (2019, 201) test the soundness of pragmatic argumentation employed in political deliberation.
Prescriptive standpoints are typically supported by pragmatic argumentation: “You should do X” (prescriptive standpoint) “to achieve Y” (pragmatic argument). In policy debates, the pragmatic argument is often supported by appeals to authority. When the public lacks the specialized knowledge to scrutinize causal arguments, the appeal to authority provides the material support proving that X leads to Y (ibid.). This is critical to the function of practical counsel that, Majdik and Keith suggest, makes authority arguments necessary in policy debates.
However, authority arguments in policy debates are vulnerable to refutations that can rebut or undercut an organization’s trustworthiness.
3.3. Trust Strategies in Risk Deliberation
3.3.1. Trustworthiness as a standpoint in organizational communication
As the saying goes, “trust is earned, not given”. Paglieri explains that, “For a long time, and mostly in the philosophical literature, trust has tended to be seen as an abdication of critical scrutiny, and thus at odds with argumentation, which is the full exercise of that scrutiny”
(2014, 119). In this sense, trust is similar to faith: one either believes or does not believe in the credibility of the speaker and, by extension, their claims. However, Paglieri disagrees with this interpretation. Instead, he argues that trust has a deliberative function requiring careful reasoning. From this perspective, trust is similar to argumentation: a claim to trustworthiness requires justification (ibid.). Trust develops through an accumulation of exchanges between parties that reinforce it over time.
In crisis management theory, trust is defined as “the willingness of a party to be vulnerable to the actions of another party” (Mayer et al., 1995, 712). This willingness is based on the belief that the acting party will keep the vulnerable party’s best interests in mind when pursuing a certain course of action. Later, Mayer refined this definition of trust as the
“willingness to take a risk”, elaborating that “perceived risk moderates the relationship between trust and risk taking.” The level of trust is measured by the amount of risk that one is willing to take on the advice of another (Mayer et al., 2007, 346). Based on this definition, the relationship between trust, authority, and public perception in risk communication is clear. In risk communication, the public is vulnerable to decisions made by risk producers and regulators, who are assigned the position of authority, or expert, on the nature and probability of the risks involved. The public must perceive that the authority has prioritized the public’s best interests in its risk assessment. As established in the previous section, expert claims must anticipate the public’s doubts, fears, and perceptions about certain risk-based activities.
Building on research in crisis management theory, Palmieri & Musi developed a theory of trust as an argumentative framework. They married rhetorical theory, which emphasizes image repair and reputation (Benoit, 1997; 2014), with management scholarship, which views crises that occur at an organizational level as “a problem of
trustworthiness” (Palmieri & Musi, 2020, 274). After a crisis, organizations “make use of argumentation to justify their self-defensive claims and then persuade the public to (re)-trust the organization” (Palmieri & Musi, 2020, 273). Crisis communicators promote an
interpretation of the crisis event, including the severity and repercussions of the event, in order to influence public perception. The interpretation of events informs how the public will assign accountability, and whether the crisis event can be overcome through reasonable interventions (ibid.). Risk is often perceived as an unrealized crisis. The associated risk
activity is perceived as a potential threat and the risk producer as the agent creating the threat.
The risk manager is entrusted with monitoring and mitigating the threats posed by a risk activity. Therefore, the public assigns accountability to the risk manager.
Aristotle also recognized that trust had to be established in order for argumentation to be effective. He described ethos as a rhetorical means for assuring the audience that the speaker—the source of argumentation—was trustworthy (Palmieri & Musi, 2020, 274). To establish a positive ethos, the speaker must project “practical wisdom (phronesis), virtue (arete), and goodwill (eunoia)” (ibid.). In crisis management scholarship, these characteristics of trustworthiness are translated into ability, integrity, and benevolence (Mayer et al., 2007, 346). The speaker must persuade the audience that the organization embodies these qualities in order to achieve a return to trustworthiness. Therefore, trust-repair claims are equally relevant to risk communication, wherein the risk communicator must convince the public that risk producers and regulators possess the ability, integrity, and benevolence required to
responsibly manage risks.
Palmieri & Musi distinguish ethotic argumentation from authority argumentation. In ethotic argumentation, “the ethos of trustworthiness of a source is taken as a reason to believe a claim” (Palmieri & Musi, 2020, 275). Ethotic arguments are “represented in such a way as to lend credibility to or detract credibility from conclusions which are being drawn” (Brinton, 1986, 246). They are distinct from ethos as a rhetorical device in that ethotic appeals are not
“particularly concerned with the appearance of good character as opposed to the reality”
(Brinton, 1986, 247). That is, beyond performing good character in the style of address, the speaker must prove their credibility within the argumentation itself. Trust claims are then applied as symptomatic sub-argumentation to illustrate good moral character. In risk communication, Aristotle’s topoi of good character (ethos)—phronesis, arete, and eunoia—are translated into the characteristics of organizational trustworthiness: ability, integrity, and benevolence.