
Dutch food safety regulations and public trust: the fipronil egg scare.

31-08-2018

UvA Political Science

European Politics & Policy in Times of Crisis
Outline of the master thesis topic

Joseph Zekhnini (10649921)
joseph.zekhnini@student.uva.nl
Supervisor: Rosa Sanchez Salgado
Second reader: Dimitris Bouris


Contents

1. Introduction
1.1 Context of this study
1.2 Research question
1.3 Social relevance
1.4 Academic relevance
2. Literature review
2.1 Risk
2.1.1 Risk as probability and as social construct
2.1.2 Risk as the result of accepted and applied practices
2.2 Trust
2.2.1 Trust from a sociologist's perspective
2.2.2 Difference between trust and confidence
2.2.3 The dynamics of politics and the effects on trust
2.2.4 Trust as a moral bond of science and trust in numbers
2.2.5 Implications of public distrust
2.3 Risk communication
2.3.1 The deficit model of risk communication
2.3.2 Outrage and stigma
2.3.3 Three phases of risk communication
2.4 Three risk communication strategies used in crisis management
2.4.1 Media effects
2.4.2 Studies on media effects
2.4.3 Media and regulators
2.4.4 Defining the public
3. Theoretical framework
3.1 Trust in the source of risk communication
3.2 The trust-confidence-cooperation (TCC) model of risk communication
3.3 Recommendations on effective risk communication: the pitfalls of communication
3.4 Combining the TCC model and the list of pitfalls
3.4.1 Pitfalls that cannot be used in this analysis
3.4.2 Pitfalls combined
3.4.3 Categorisation of pitfalls within the TCC model as analytical framework
4. Research design
4.1 Introduction
4.2.1 Qualitative analysis
4.2.2 Case study and case selection
4.2.3 Sources and data collection
4.2.4 Drawbacks of content analysis research
4.2.5 Generalisability
4.3 Operationalization of variables
4.3.1 Confidence (trust in competence)
4.3.2 Social trust (trust in motives)
5. Fipronil crisis case introduction
5.1 Institutional context
5.2 Fipronil in eggs
5.2.1 Miracle solution
5.2.2 Contamination
6. Analysis
6.1 Quantitative exploration
6.1.1 Overview of Figure 2
6.1.2 Implications of Figure 2
6.2 Qualitative content analysis
6.2.1 The poultry branch as actor
6.2.2 The NVWA as actor
6.2.3 The NGO foodwatch as actor
6.2.4 Communication after "it's best not to eat eggs until Sunday"
6.2.5 Communication after the big egg recall operations
6.2.6 NVWA risk communication questioned in parliament
7. Conclusions
Bibliography


1. Introduction

1.1 Context of this study

Food safety policy in Europe has proven to be an area in which it is highly problematic for authorities to maintain public trust while at the same time acting preventively (PWC, 2017, p. 3). In the public eye, food crises such as bovine spongiform encephalopathy (BSE), commonly known as mad cow disease, and the fipronil scandal could have been prevented if authorities had done their work properly and had chosen to safeguard public safety instead of following economic interests (Vogel, 2001, pp. 14-15).

Holding authorities accountable for food safety is nothing new. In late medieval times, city councils were responsible for a steady supply of 'good and honest' food. The notion of 'authentic' food, food which could not contain anything wrong, was synonymous with a safe product. This meant, for example, that butchers, with their fresh cuts of meat, were less often accused of tinkering with their product than bakers or beer brewers. When food production chains became more complex and processed foods were introduced to the market, the need for surveillance to guarantee food safety became more prevalent. To this day, consumers still associate 'authentic' with 'safe' and 'tinkering' with 'unsafe'. Nevertheless, old ways of guaranteeing food safety are failing. This does not mean that our daily lives have become more dangerous, but it implies that new questions arise which lead us to re-evaluate risks. These questions, often informed by scientific research, lead to new insights into what could go wrong and what has already gone wrong (De Boer, Willemsen, & Aiking, 2003, p. 4).

Public demand in the European Union for stricter food safety regulation has grown substantially over the past ten years (Vogel, 2001, p. 11). Moreover, contrary to the period before the 1970s, food safety is now treated in media reports not just as a scientific matter but also as a "highly political issue" (Holm & Halkier, 2006, p. 127). Academics such as Knowles et al. (2007) speak of "disembedded trust": a universalistic, institution-based distrust directed at multiple institutions at the same time. It means that supposedly independent experts are doubted on both their independence and their professionalism. The result is an unworkably cynical worldview shared among large segments of society (Knowles, Moody, & McEachern, 2007, p. 56).

The topic of this master's thesis lies within the field of tension between European food safety regulation and public trust in science and political institutions. This field of tension can be seen as a crisis because the public is uncertain about which facts to believe and whom to believe.


1.2 Research question

If we look at the institutional setting within the EU concerning food safety governance, we see that policymaking is dispersed over multiple levels of governance: the European, the national and the regional level. The way in which the public is informed about a particular crisis plays an important role in whether the information is trusted and whether it will be acted upon (Fessenden-Raden, Fitchen, & Heath, 1987, p. 96). More important still is the question of whether government institutions are trusted by the public.

This public trust in institutions has been defined by scholars such as Michael Siegrist and Heinz Gutscher (2005). They theorized a dual-process approach to trust research using the dichotomy of social trust (trust in motives) and confidence (trust in competence). Here, the main distinction between trust founded on morality information (motives) and confidence founded on performance information (competence) allows for more conceptual clarity for scholars who seek to research public trust in institutions.

To stay within the above-mentioned field of tension between European food safety regulation and public trust in science and political institutions, trust has to be examined under specific circumstances. This thesis will examine how public trust was maintained against the backdrop of the 2017 Dutch fipronil crisis.

This leads to the following research question:

How effective was risk communication during the Dutch fipronil crisis at maintaining public trust in the institutions responsible for food safety regulation?

This research question is divided into two sub-questions:

1. How did social trust (trust in motives) contribute to the effectiveness of risk communication during the Dutch fipronil crisis?

2. How did confidence (trust in competence) contribute to the effectiveness of risk communication during the Dutch fipronil crisis?

1.3 Social relevance

The health and safety of contemporary methods of food production have been met with public distrust in Western European countries. Especially over recent decades, terms such as 'food scare' and scepticism of policy measures based on scientific findings have become common. Europeans are worried about a variety of food-related issues such as BSE, genetic modification, food additives and salmonella. The scientific institutions, in this case represented by the experts on food hazards such as risk analysts, producers, government officials, epidemiologists and dieticians, tend to dismiss this public scepticism as exaggerated and irrational. This discrepancy is what Janus Hansen et al. (2003, p. 120) call the 'expert-lay discrepancy' in the assessment of food-related health risks.

This discrepancy can have crippling effects on the workings of democracy because two vital groups in society are having a dialogue of the deaf. Research into food scares can shed light on how, when and why communication seems to fail. It may lead to better management of food crises and help narrow the disparity between science and the public.

1.4 Academic relevance

Investigating cases in which food safety has been compromised can contribute to the relatively recently developed but still growing field of risk communication studies.

Qualitative studies on risk communication can further investigate the complex factors involved in people's ambivalences, perceptions, and responses to risk. Such studies can contribute to understanding how people evaluate factors such as the understandability, believability and credibility of messages. In addition, this avenue of exploration can further enhance our understanding of the complicated roles of social, psychological and cultural effects on risk perception, which are fundamental to the pragmatic planning of risk communication.

Qualitative studies can also contribute to research on public trust. The ideological, ethical and political implications behind risk communication can give insights and inspiration for further exploring causal relations between certain aspects of communication and public trust. Furthermore, case studies on food crises can contextualize the field of tension between individual choice and responsibility, our dependence on science and expert knowledge, and the vague accountability relationships present in modern governance structures.

2. Literature review

This literature review is divided into the themes and aspects of food scares that the research question touches upon. Along the way, important concepts such as crisis management, (public) trust, risk communication and crises will be reviewed.

As stated by Siegrist and Gutscher (2005, p. 14), trust (based on common values) is divergent from confidence (based on experience and evidence). Yet, they do interact depending on the context of the situation. Both trust and confidence can contribute to various forms of cooperative behavior. This is vital when large groups of people are facing a dangerous situation where little is known about the issue in question. Under these circumstances, trust and confidence in authorities and regulators are a prerequisite for the public acceptance of advice and policy measures (Siegrist & Gutscher, 2005, p. 13). Risk moves the public into a vulnerable position, thereby putting both confidence and trust to the test.


Public trust, in general terms, is an important but elusive concept because it is a vital element in understanding how and why the public reacts to risk information in a particular way. This concept is even more important when dealing with a food crisis, since such a crisis poses a direct threat to public health and tensions rise, with people accusing authorities of failing to provide protection (Hansen, Holm, Frewer, Robinson, & Sandøe, 2003, p. 118).

This literature review will first focus on risk by examining how social scientists and risk assessment specialists view this concept. Then literature on trust will be reviewed. The chapter will finish with risk communication, the theme where the concepts of risk and trust meet.

2.1 Risk

The Office for Risk Assessment and Research (Bureau Risicobeoordeling & onderzoek, BuRO) is part of the Netherlands Food and Consumer Product Safety Authority (Nederlandse Voedsel- en Warenautoriteit, NVWA). Its core task is to identify new risks, their magnitudes and their relation to other risks. New scientific insights or a change in public opinion on societal risks can lead to new, additional demands on products (NVWA, 2018). The following two paragraphs will use academic literature to explain how risk and society are related.

2.1.1 Risk as probability and as social construct

The way risk is perceived is culturally dependent. Ideas and conduct concerning risks are shaped by socially and culturally structured impressions and judgements of our surroundings (Sjöberg, Elin Moen, & Rundmo, 2004, p. 7). Children learn how to approach the world around them with a certain trust. It is within this context that people learn what is safe and what is dangerous and how they should behave accordingly (De Boer, Willemsen, & Aiking, 2003, p. 3).

Although the list of definitions of risk is almost endless, scholars seem to agree on two notions: "the likelihood that an individual will experience the effect of danger" and risk as "consisting of the probability of an adverse event and the magnitude of its consequences" (Sjöberg, Elin Moen, & Rundmo, 2004, p. 7). The latter notion is commonly used among risk assessment specialists, who judge risks based on probability calculations. Quantitative risk assessment (QRA) is the science that should tell the public which risks to worry about. As a side note, the fact that it is a quantitative science does not imply that its conclusions are always accurate, as these assessments are sensitive to human error and intentional manipulation (Sandman, 2012, p. 4). Moreover, the latter notion falls short if we want a broader, more encompassing definition of risk that can be used in large-scale societal risk management. Expressing risks in terms of probabilities and consequences does not always seem to be the most fruitful definition.
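
To make the probability-consequence notion concrete, here is a minimal worked example with hypothetical figures (they are illustrative only and not drawn from this thesis or its sources). If an adverse event has an estimated yearly probability p and a monetary consequence C, QRA expresses the risk R as the expected loss:

\[
R = p \times C
\]

For instance, with p = 0.0001 per year (1 in 10,000) and C = €2,000,000 in damages, R amounts to €200 of expected loss per year. Note that two hazards with identical R can still be judged very differently by the public, which is exactly the limitation of this definition discussed above.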

Another element that all risk conceptions share is "a distinction between reality and possibility" (Sjöberg, Elin Moen, & Rundmo, 2004, p. 5). Uncertainty, and how humans respond to situations with unknown outcomes, is also closely related to the concept. This is clearly the case in many theories of behaviour, where uncertainty is seen as a psychological construct because it "exists only in the mind; if a person's knowledge was complete, that person would have no uncertainty" (Sjöberg, Elin Moen, & Rundmo, 2004, p. 7).

Although risk is socially constructed, dangers are real. Risk assessment is part science and at least an equal part subjectivity; both parts consist of social, political, cultural and psychological factors. An important notion from Slovic (1999, p. 699) is that our democratic institutions function in a way that will unavoidably nourish distrust in the risk field. Those who control the definition of risk also control the solution to the problem. All possible solutions depend on a variety of possible definitions of risk. One definition of risk may lead to the most cost-effective or safest solution; another definition, which perhaps incorporates contextual and qualitative aspects, will lead to a whole different arsenal of possible solutions. Defining a risk is therefore an exercise in power (Slovic, 1999, p. 699).

2.1.2 Risk as the result of accepted and applied practices

In the past decades scholars have been busy trying to answer the question "How do people perceive risk?" while easily looking past the question "Why do things go wrong?". In relation to complex societies, the concept of risk is founded on the idea that the public loses something valuable because of (unintended) errors. This is often caused by the expectation of a beneficial effect as long as things do not go wrong. Thus, if a certain decision would not lead to a beneficial effect, there would be no reason to accept the risk that comes with the decision. If we look at disasters and accidents, we may discern that these events have precedents that could indicate the danger of a situation. Often things go wrong when people decide to undertake certain actions on the verge of what is possible, constantly searching for boundaries; or when they do things that they do not quite understand, leaving them oblivious to indications of possible problems; or when people notice something is wrong but continue by pretending it is normal, thereby spreading the problem (De Boer, Willemsen, & Aiking, 2003, pp. 3-4). These cases should not be seen as human error defined as a misunderstanding or miscalculation. Cases that seem to be the result of individual actions often lead to a widely accepted and applied practice. This is demonstrated by the use of bone meal in cattle feed (which caused the BSE disease). This practice developed rapidly, involving numerous individuals and institutes who unknowingly caused a large-scale spread of the use of bone meal and, as we now know, the BSE outbreak that came with it (De Boer, Willemsen, & Aiking, 2003, p. 4).

The model used internationally for setting food standards and for analysing risks consists of three segments: risk assessment, risk management and risk communication. Each of these is independent, with its own capabilities and responsibilities, although there is some overlap between them. Institutions such as the World Health Organisation (WHO) and the 'Expert Consultation on Risk Management and Food Safety' stress the importance of separating risk management from risk assessment, to make sure that assessments are based on scientific facts and not influenced by the pressure of political agendas.

Transparency of the process is also important, which is an argument in favour of more public participation and stakeholder involvement. However, it remains controversial when and to what extent the public and stakeholders should participate in the communication process. Within the current context of democratisation and public consent, early entry of participatory strategies seems to be the new way to go about risk communication. This is illustrated by what the Canadian Food Inspection Agency stated in one of its reports: "Engaging citizens is not merely a fashionable concept for public policy-makers: participatory democratic values have emerged and shaped the way risk communication is done". A key factor in the success of participatory strategies is that they induce trust (Probart, 2003, p. 16). The following paragraphs will review literature on trust.

2.2 Trust

Theories proposed by public administration and democratic theory are contradictory when it comes to prescribing what citizens' stance towards government should be. Some urge that distrust is sensible and trust naïve; others argue that trust is constructive while distrust is detrimental (Van De Walle & Six, 2014, p. 160). These notions stem from political scientists' contributions to the definition and measurement of trust. A commonly used conceptualization to describe public attitudes towards government institutions is provided by the confidence/trust–scepticism–distrust–cynicism–alienation continuum. Here, two interesting things stand out when we relate this continuum to the dual-process approach, trust in motives and trust in competence, used in this thesis.

First, 'distrust' is the middle and most preferable attitude, since distrusting citizens "will voice discontent, participate in the political debate and mobilize themselves against the government of the day" (Van De Walle & Six, 2014, p. 161). The dual-process model used in this thesis does not seem to deal with distrust. This is because the dual model emerged from social psychology and its well-established tradition of differentiating the cognitive (trust in competence) and the affective (trust in motives) processing of information. Specifically, this focus on information processing presents a useful perspective for studying trust in risk communication.

Second, on the trust end of the continuum, the first attitude is called confidence: "Confidence, despite its conditionality, frees individuals from the need of constant monitoring and thus can ultimately take the form of a naïve and unquestioned leap of faith" (Van De Walle & Six, 2014, p. 161). The conceptualization of trust in this thesis uses confidence to describe the judgement based on the record of quality information provided by the risk communicator in the past. This judgement depends on personal experiences with the source and on statements heard from others about the quality of information it has provided in the past (Harvey & Twyman, 2007, p. 5). Before we can speak of a "naïve and unquestioned leap of faith", to which our model refers as 'collaborative behavior', trust in the motives of the source needs to be present. Trust in motives is based on the similarity of values between the risk communicator and the receiver of information (Harvey & Twyman, 2007, p. 6).

2.2.1 Trust from a sociologist's perspective

Around the turn of the twentieth century, an elaborate framework of concepts of trust was provided by Georg Simmel, a cultural sociologist from Germany. Central to this framework was the term confidence, which he used as equivalent in meaning to trust. He defined the nature of trust/confidence by referring to the two sometimes overlapping concepts of faith and knowledge. To Simmel, trust is firstly a ubiquitous faith in the persistent conditions of our existence, and secondly a balance between knowing and not knowing (Zachmann & Østby, 2011, p. 2). This balance between knowing and not knowing is given by the historical context, because science and technology evolve alongside societal changes. In every societal context a conventional dispersion of knowledge unfolds, allowing people to make decisions based on a learned and experienced relation to knowing and trusting. In other words, in a situation where it is unclear how to act, people refer to their colleagues, to traditions and institutions, to the force of public opinion and to the circumstances of the situation to learn precisely what is needed and expected (Zachmann & Østby, 2011, p. 3).

Another scholar who wrote extensively on the concept of trust is the British sociologist Anthony Giddens. For him, trust is an important concept on which he built his theories on the consequences of modernity. Modernity has brought about a whole new dynamic, in which the development of disembedding mechanisms plays a central role. One of these disembedding mechanisms is professional knowledge or expertise, on which we rely almost permanently, in traffic or when we buy our food in the supermarket. Here trust plays a role because we put our faith in the integrity and rightness of principles that lie far beyond the scope of knowledge possessed by the vast majority of people. The tendency to minimize danger relies heavily on trust. Nevertheless, unfortunate accidents still happen.

Therefore, to minimize danger we weigh what counts as an acceptable risk when we make a decision. Such decisions are made only when the risks are acceptably low and when we can be sure that possible dangers are known. The same goes for our food consumption. This illustrates Giddens' definition of safety based on acceptable risk and trust: we consume food every day even though most of us only vaguely know its origin and production process. Consequently, food consumption is at best permeated with trust in the safety of food, or is at least based on the assumption that the risk of getting a disease caused by food is acceptably low (Zachmann & Østby, 2011, p. 3).

2.2.2 Difference between trust and confidence

Trust is also used as a mechanism for reducing social complexity when we make decisions based on incomplete knowledge, for example when we take advice from friends on which sites to visit when we plan to travel to a destination we have never been to. In relation to trust as a means of reducing social complexity, other authors differentiate between trust and confidence. Here, confidence is based on the idea that the outcome of future conduct can be expected to be positive. This outcome is not within our control, and the positive expectation is based on a sense of continuity of past experiences with events and circumstances (Zachmann & Østby, 2011, p. 3). In other words, confidence has a distinct performance criterion to gauge whether the object will behave as predicted.

Trust, in contrast, is not placed in performance; it is placed in the target's freedom to act as the trusting person would in that situation (Siegrist & Gutscher, 2005, p. 37). Furthermore, trust also has to do with systems. The inbuilt control mechanisms, which are based on distrust, are the reason systems are trusted to work. This can be illustrated by trust in systems of knowledge. Systems of knowledge are a set of measures that guide communicative behaviour in such a way that every assertion is bound to a set of rules to secure reliability. Because people trust this inbuilt system and its reliability restrictions, nobody has to be in personal contact with the original producer of certain information. So, in order to secure the reliability of the system, it has institutionalized distrust with inbuilt control mechanisms (Zachmann & Østby, 2011, p. 3). An example of institutionalized distrust related to food safety control: by the 1950s the use of additives and pesticides was growing rapidly. Simultaneously, new pharmaceutical studies resulted in more knowledge on carcinogenic toxic substances, urging food legislators in the United States and Europe to form policies that would ensure public health. This resulted in the food additives amendments of the 1950s, which codified approval procedures. The amendments required food producers to prove that their new additives were safe for consumption. The inbuilt control mechanism is the rule that all new additives were now deemed unsafe until the contrary had been proven. Here, institutionalized distrust was installed to bolster the trustworthiness of industrialized food chains (Zachmann & Østby, 2011, pp. 6-7).

2.2.3 The dynamics of politics and the effects on trust

It is noteworthy that the dynamic between different echelons of governance, such as the Food and Agriculture Organization (FAO), the World Health Organisation (WHO), the International Atomic Energy Agency (IAEA), the United Nations (UN), and (trans)national and regional levels of governance, can make the rules and regulations surrounding food safety rather pliable.


This is illustrated by the tendency among many nations to use irradiation as a food preservation technique. It must be mentioned that this took place at the time of the Cold War, when nations and international actors hot-headedly sought to harness the seemingly endless applications of the 'blessing of the atom'. Nevertheless, food additive regulators decided that irradiation was an additive and thus ruled out the use of this preservation technique. The treatment had to be proven harmless, and so the advocates of national food irradiation programs, with the aid of international institutions like the FAO, WHO and IAEA, conducted wholesomeness studies (Zachmann & Østby, 2011, p. 7). To judge wholesomeness, extensive toxicological feeding studies in mammals have been, and still are, a prerequisite for collecting the data required to prove harmlessness (Elias, 1980, p. 1).

While this was going on, activists and lobbyists managed to convince regulators that food irradiation was a process rather than an additive. For processes, different rules applied, namely those of the science-based trust regime, which relied on the conviction of numbers and on the theory that a human's self-detoxifying potential has a limited capacity. This meant that, under the prerequisite of a dose limit, food irradiation was permitted and that the regime of institutionalized distrust was circumvented to make way for the science-based trust regime (Zachmann & Østby, 2011, p. 7). By drawing a division between a safe amount of irradiation and a risky amount, the FAO, WHO and IAEA sought to build trust in irradiated foods. Yet national legislators did not accept the new internationally approved irradiation standard because they deemed food safety their responsibility as protectors of the nation. To them, the benefits did not outweigh the risks (Zachmann & Østby, 2011, p. 7).

This case demonstrates that the political forces present between several levels of governance can influence food safety regulation. It shows that scientifically justified risk assessments do not always matter to citizens, nor to authorities. Here, a gut feeling of responsibility for the public played as much of a role in risk assessment as the science did.

According to Giddens, trust is built when laypersons meet agents, experts or other representatives of abstract systems such as the food industry. At these so-called 'points of access', faceless abstract systems become more tangible because of face-to-face connections between people. He further states that trust is built through a personified performance of integrity by the agents of abstract systems. An example of such a point of access is a demonstrator of a new food product promoting its adoption at public food markets or in households (Zachmann & Østby, 2011, p. 3).


2.2.4 Trust as a moral bond of science and trust in numbers

The American historian and sociologist of science Steven Shapin states that when we acquire new knowledge, we trust researchers, educators and the sources they have used. According to him, we presuppose a great deal of our empirical knowledge based on the reliability of our sources. He therefore sees this as institutionalized trust. He further states that trust counts as a moral bond of science. So trust is not only the foundation of the relationship between experts and laypersons; science-based expert systems and their internal relations also count on trust as the dynamic that keeps their relations healthy and productive. Nevertheless, trust as the moral bond of science only effectively provides us with a consensus when it is conducive to seeking scientific truth. In order to do so, it was paired with a new epistemological virtue: 'mechanical objectivity' or, the more commonly used term, 'structural objectivity', which implies making knowledge more abstract and detached by using quantification as the frame for a universal language. This language forms the medium by which researchers operate within a global network (Zachmann & Østby, 2011, p. 4).

Simultaneously, quantification serves as a technology of trust and as a method to attain objectivity. Much like algebra, it is an anonymous and institutional form of trust based on the authority of protocols on how to interpret and use numbers. It plays an essential role in food labelling and regulation by relying on average nutrient values or tolerable levels of additives. It is safe to say that the history of science and social theories provide a variety of conceptions of trust (Zachmann & Østby, 2011, p. 4). But what about the other side of the coin?

2.2.5 Implications of public distrust

Heavily distrusting citizens are problematic for governments because, if their main attitude towards government is distrustful, this affects their perceptions and their behaviours. A distrusting attitude towards all government communications and measures leads to an ambience of suspicion, in which citizens resolve uncertainty in their interactions with government with suspicion as their basic attitude. Trusting citizens, by contrast, deal with uncertainty towards government through trust. Distrust increases transaction costs, or makes transactions impossible, whereas trust lowers transaction costs. Furthermore, a lack of trust does not necessarily obstruct implementation, although trust helps governments to implement policies. Distrust does cause problems, however, when governments try to implement policies that are invasive to people's personal lives (Van De Walle & Six, 2014, p. 167).

Research on trust in government shows effects of low trust on rule obedience, payment of taxes and voting behaviour. Most researchers tend to investigate moderate expressions of low trust, for example voting for protest parties or diminishing tax discipline. They found that moderate expressions of low trust can lead to tax evasion, resisting enrolment in government databases or refusing to vote. It is therefore important to know whether citizens display a low level of trust or whether they are blatantly distrustful. Because researchers have a hard time accessing blatantly distrustful citizens, there is little data available on this subject (Van De Walle & Six, 2014, p. 169).

2.3 Risk communication

In the early 1970s, risk communication emerged as a scientific field that sought to solve practical problems with its own unique field of investigation. It merged the theoretical structures of communication, decision science, education, utility theory, psychology and sociology. Initially its field of work was environmental hazards, but it gradually expanded towards economic, health and social risk issues (Probart, 2003, p. 14).

In the following paragraphs, theories on risk communication will be reviewed, starting with the deficit model, which is used to explain why the public refuses to accept advice on risk provided by experts and government officials. Then the terms outrage and stigma will be discussed, followed by a more historical view describing the phases risk communication has gone through.

2.3.1 The deficit model of risk communication

Decades ago it was assumed that experts were 'correct', which brought about the conclusion that laypersons were 'wrong'. This idea also assumed that the public refusal of scientific facts was caused by a lack of knowledge and understanding of the issues. This explanation of the 'expert-lay discrepancy' is referred to as the 'knowledge deficit model' of consumer attitudes. The public perception of hazards related to food was thought to be distorted by a lack of knowledge, and so the problem was seen as an educational one. As a result, the goal was to develop ways to encourage the public to adopt the 'correct' views of the experts (Hansen, Holm, Frewer, Robinson, & Sandøe, 2003, p. 112).

The deficit model forms a foundation for the work of policymakers, food producers and scientific risk assessors. The model's principles are similar to those of the 'public understanding of science' movement, a movement that aims at popularising the rationalistic and reductionist methods of, for example, probabilistic risk assessment and technology acceptance. It is based on the following four assumptions. First, the optimisation of productivity, subject to tolerable levels of risk, is a habitually shared value in modern societies. Second, these levels of risk associated with optimal productivity are widely agreed upon. Third, scientific knowledge is the most desirable basis on which to produce goods and to manage risks. Fourth, if the public does not accept scientific advice given by experts, this is due to insufficient understanding of the scientific reasoning on which the advice is based (Hansen, Holm, Frewer, Robinson, & Sandøe, 2003, p. 112).

The deficit model has been criticized because it fails to take into account the diverse and flexible values people can hold when it comes to safety and production. Some people may accept higher levels of risk if it means a higher production level of goods; others are less risk-tolerant (Hansen, Holm, Frewer, Robinson, & Sandøe, 2003, p. 113). Another, more urgent problem is one that transcends the model: people seem to refuse to trust the messengers spewing scientific facts at them. If the source is not trusted, just stating facts will not be a viable way to get a message across. Social scientists do not fully understand how trust is built and what sustains credibility, but the most logical assumption is best described by a media cliché: "if people do not trust the messenger, they will not trust the message" (Slovic, 1999, p. 699).

Other researchers, such as Brian Wynne (2006, p. 214), speak of the 'public rejection of science'. This rejection is based on the idea that the public rejects scientific information because it is coupled with a certain value-committed policy. Here, scientific facts are seen as a justification for a predetermined policy course.

Policy experts and scientists fail to present values and scientific evidence as separate paradigms. Similarly, it becomes problematic when public issues involving scientific questions are portrayed as purely scientific issues. In other words, the public is not only rejecting measures taken in the name of science, but also the science behind the measures (Wynne, 2006, p. 214).

Another discrepancy between experts and the public is caused by a so-called 'perception filter', which creates a bias between scientific objectivity on the one hand and the way this objectivity is perceived by the public on the other. Theoretically measurable and replicable findings on certain aspects of food, such as nutritional value or toxicity levels, are still subject to scientific limitations such as the precision of the instruments and the validity of the analysis. These concepts can mean completely different things from the perspective of a layperson. In normal daily language the word precision is tied to trueness, while for experts the term precision relates to the reproducibility of an analysis. The same goes for the terminology "measurement uncertainty", because from a consumer's perspective a measurement is by definition certain (Verbeke, Frewer, Scholderer, & De Brabander, 2007, p. 4).

Although in some situations the 'knowledge deficit model' is still relevant, it has become evident to academics that this model is too simplistic and in some cases even irrelevant. This insight has come from cognitive and social psychologists who adopted the 'psychometric' approach, which resulted in an interest in researching the dynamics of risk perception; later, during the 1980s and early 1990s, psychologists focussed on risk communication. Similarly, sociologists took an interest in the consumer's strategic response to risk, which mainly focussed on the ways people respond to risks in their daily lives. More recently, both psychologists and sociologists have done studies on the specific nature of public trust in institutions concerned with risk management (Hansen, Holm, Frewer, Robinson, & Sandøe, 2003, p. 114).


The psychometric approach tries to answer the question of why laypeople and experts evaluate risks differently, and tries to provide an analytical framework to examine the ways in which various risks are judged. Its proponents argue that laypeople perceive risk multi-dimensionally, meaning that they do not evaluate risks on a single scale. Instead, they include a variety of factors such as controllability, dread, equity, uncertainty and risk to future generations. Furthermore, lifestyle hazards (hazards which people feel to be in control of) are evaluated as less risky than technological hazards (hazards over which people feel less in control because their management is taken care of by institutions) (Haselton, Nettle, & Andrews, 2005, p. 730).

Additionally, manmade hazards induce more dread than natural hazards (Hansen, Holm, Frewer, Robinson, & Sandøe, 2003, p. 113). This is an example of what psychologists call a cognitive bias: a systematic pattern of deviation from norm or rationality in perception and thought. People form their own "subjective social reality" from the impressions they get from their environment. This means that people's construction of social reality, not objective observation, shapes their behaviour towards other people (Haselton, Nettle, & Andrews, 2005, p. 733). The biases present in people's day-to-day lives also apply to the perception of risks. One such bias is the 'optimistic bias', which has been studied in relation to health education and lifestyle hazards such as drinking and smoking (hazards which people feel to be in control of). It occurs when people shrug off information about health risks because they perceive their own health risk to be smaller than other people's: the information is seen as directed at others and not relevant to oneself. This is problematic when reaching out to a public that thinks it is immune to certain hazards and will therefore not be responsive to messages (Hansen, Holm, Frewer, Robinson, & Sandøe, 2003).

2.3.2 Outrage and stigma

Inadequate risk communication can have dire consequences for short-term and long-term relations throughout society. Historically, failed risk communication has sometimes led to the escalation of mild public unease into a lingering conflict between regulators, consumers and industry. Other effects of failed risk communication include unwilling and cynical employees, distraction from important managerial obstacles to less important ones, irreversible damage to the credibility of management, expensive and unneeded conflict with government, slow and costly validation processes for projects and, last but not least, needless human suffering due to a strong sentiment of dread (National Research Council, 1989, pp. 17-19).

Risk communication experts use the term 'outrage' to describe this strong sentiment of dread, caused by what the public deems an unacceptable hazard. If a risk or controversy has drawn a degree of dread that spills over to an entire branch or sector, experts use the term 'stigma'. The two stages, outrage and stigma, have a negative effect on problem solving and on the possibility of reaching a compromise, and policy-making becomes antagonistic and polarized (Probart, 2003, p. 15). Why, when and how these stages occur are the questions that occupy researchers, continuously shaping and directing risk communication as a scientific study (Probart, 2003, p. 15). Historically, three phases of risk communication can be distinguished in practice, and to a lesser extent in the scholarly literature.

2.3.3 Three phases of risk communication

The first risk communication endeavours strongly represent the earlier mentioned deficit model. As a direct response to environmental calamities such as chemical spills and oil leaks, the priority was to inform. Risk researchers Doug Powell and Bill Leiss (1997) have distinguished three phases of risk communication. The first phase is mainly concerned with educating the public after an event takes place and is therefore called the education phase. The idea behind this phase is that civilians are worried about risks because they do not sufficiently understand the scientific foundations that underlie issues such as toxic chemicals and radiation. Moreover, people are oblivious to statistical analysis and the laws of probability. This phase assumes that if people were better schooled in certain aspects of the scientific method, they would accept certain risks in consonance with expert judgements (Probart, 2003, p. 14).

Nevertheless, the method of this phase did not solve the public controversy and unease over risk decisions made by regulators. Studies showed that the disputes were not the result of public ignorance, but of the fact that people had different perceptions, beliefs and attitudes towards accepting risks. Researchers therefore concluded that while education is needed, it is not enough to guarantee that civilians will accept risks that are, according to their perception, opaque and too high (Probart, 2003, p. 14).

In the past thirty years researchers have shown that the amount of dread experienced by the public is the result of perceived risk and, more precisely, of certain characteristics that come along with these hazards. The public is willing to tolerate some characteristics of hazards more than others, and this tolerance is unaffected by lethality, morbidity or the likelihood of a hazard occurring. Instead, research shows that the risk characteristics the public deems important are fairness, control, impact on future generations, volition and familiarity (Probart, 2003, p. 15).

Consequently, authorities gradually came to understand that educational programs by themselves were not as effective as expected. So a second phase, which Probart (2003, p. 15) names the marketing or persuasion phase, became fashionable. It relied on public-relations campaigns and became the new way to deal with the public's perceptions of risk, borrowing marketing strategies and techniques as its new method.

Although this phase had some successes, it failed to convince the public to accept risks that they deemed unfair or undesirable (Probart, 2003, p. 15). As stated by Paul Slovic (1999, p. 36), professor of psychology at the University of Oregon and president of Decision Research (a collaborative group of researchers from the United States and other countries who study decision-making related to risks): "Although attention to communication can prevent blunders that exacerbate conflict, there is little evidence that risk communication has made any significant contribution to reducing the gap between technical risk assessments and public perceptions or to facilitating decisions about major sources of risk conflict".

One could ascertain that the first and second phases of risk communication are forms of one-directional, top-down (hierarchically structured) communication, which results in a closed decision-making process in which the public has no say in defining the problem and the solutions. This is detrimental to public trust because the public feels left out and subjected to an opaque policymaking process. Lessons had to be learned from previous mistakes to come up with a new approach as the third phase of risk communication, one centred around making way for more public participation in risk assessment and risk decision-making, making both more egalitarian, relevant and legitimate for the public (Slovic, 1999, p. 699). This 'participatory phase' embraced deliberative decision processes that include mediation, negotiation, oversight committees, shared stakeholder involvement and other forms of public involvement (Probart, 2003, p. 18).

Slovic (1999, p. 697) sees the lack of trust between the public, industry and risk management professionals as the reason why efforts in risk communication have shown so little effect. With trust as the "coinage" of consensus building, and with an increasing loss of trust in regulatory institutions and a polarization of positions, it is hardly surprising that risk regulators and policymakers find it difficult to secure compliance with regulatory decisions. To avoid such controversy, he stresses the importance of three factors: creating opportunities for early and meaningful participation, gaining public trust, and recognizing public perceptions. These factors are even more important for government institutions when they manage a rampant food crisis, as described in the next paragraphs.

2.4 Three risk communication strategies used in crisis management

The way in which information on food safety is transferred to consumers, effectively influencing their behaviour, plays a vital role when governments try to manage a crisis situation. From this perspective, one can distinguish between the varying natures of the decisions that determine the goal of a communication strategy. These decisions can be divided into three types: (1) the containment of a possible crisis; (2) supplying information on risks; and (3) the development of strategies and priorities for government policies. The distinctions between these communication strategies can be found in their goals and their form. Differences become apparent when we look at adjacent activities such as the choice of partners to cooperate with in developing and spreading information. In this context, 'form' means a one-directional or a two-directional stream of communication, where the former can be seen as an example of the above-mentioned deficit model and the latter as a form of active stakeholder involvement (De Boer, Willemsen, & Aiking, 2003, p. 5).

In the Netherlands, the Ministry of Health, Welfare and Sport has the task of communicating with citizens when certain food-related risks are at hand. Physical barriers (distance, the absence of means of communication) can hinder a timely transmission of valuable information. Social and psychological barriers can cause problems when scientific information needs to be translated into appropriate behaviour. The three communication types mentioned above are an attempt to overcome these barriers. Communication dealing with a possible urgent crisis (type 1) will be more effective when the authorities and consumers have already exchanged food safety information at an earlier stage in a manner that contributes to mutual trust. The supply of information on risks (type 2) can contribute to the consumer's awareness of risks tied to certain products, of the measures taken by the authorities to solve the problems and of what the consumer's own role would be. Communication dealing with the development of strategies and priorities for government policies related to food safety (type 3) can help authorities identify and act upon the specific interests of consumer groups. Mutual trust is generated by supplying correct information and by showing a sense of responsibility (De Boer, Willemsen, & Aiking, 2003, p. 5). Supplying correct information can be hindered by so-called media effects when the media act as message conductors.

2.4.1 Media effects

The media are responsible for conveying messages and information to the public. This dispersion of information can take several formats, such as television, magazines, radio, newspapers, social media and internet websites. Televised information has proven to have greater influence than newspapers, radio or magazines, due to the combined impact of moving images, sounds and texts. Even if the content of other media is of higher quality, people still prefer television: they perceive televised information as more professional and therefore more trustworthy (Xie, Wang, Zhang, Li, & Yu, 2011, p. 452).

Media also influence the content of the information they convey. This is referred to as 'framing' a topic. A frame is a socially constructed set of beliefs, ideas and perspectives, held at an individual level or shared among groups of people. It is a mechanism that structures the way we perceive, process and organize our impressions of the world. By reproducing the frame in mass media, a problem is defined along with its solutions and moral judgements. Because media content is produced by journalists, there is always some form of interpretation involved, based on personal views, ideas and referential structures. This subjective process is present from the start of media creation, when topics and themes are selected, and can therefore influence risk perception (Xie, Wang, Zhang, Li, & Yu, 2011, p. 462). Nevertheless, scholars seem to disagree on the precise effects that media can have on the public perception of risk.

2.4.2 Studies on media effects

The earlier mentioned psychometric paradigm posits that media have a strong influence on the public's risk perception. In experts' debates on risk policy, the media are seen as a necessary evil. Media outlets are often blamed for the wrongdoings of others, accused of being irresponsible and of focusing only on the dispersion of negative information, especially when they can report on a risk with high consequences and a low probability (Sjöberg, Elin Moen, & Rundmo, 2004, p. 20). This can lead to what risk researchers call risk amplification: the media and individuals give greater significance to negative events than to positive events, and negative information is judged as more reliable than positive information (de Jonge, et al., 2004, p. 840).

Studies show that perceived causes of death correlate more strongly with media report frequency than with measured mortality statistics. It should be noted that many other explanations can be given for this finding, that no causal effect has been established, and that this study focused on accidents and illnesses, topics frequently researched by risk researchers and sociologists (Xie, Wang, Zhang, Li, & Yu, 2011, p. 477). Nevertheless, suppose the study had been based on data about pesticides or genetically modified foods, for which no statistical data are available and no deaths have been reported in the media. And suppose that the public's perception is permeated with uncertainty and fear. Here, the media would be the first to be seen as the culprit (Sjöberg, Elin Moen, & Rundmo, 2004, p. 21).

On the other hand, research indicates that media coverage of risks and accidents has been rather fair. It also shows that the accusation of being prone to report on small-probability/large-consequence news is false; it is refuted by the frequent reporting on common traffic accidents that have no deadly consequences and involve only a small number of people. Nevertheless, as watchdogs of society, media are interested in conflicts, unfounded opinions, dramas, technological hazards and the prospect of finding someone to blame. Researcher and risk theorist Ortwin Renn (2008, p. 129) states that when society is confronted with an unfamiliar risk, media reports become more salient. Besides the media, there are other channels of information involved in the public's risk perception: television dramas, movies, personal contacts and rumours are also expected to play their part (Sjöberg, Elin Moen, & Rundmo, 2004, p. 21).


2.4.3 Media and regulators

Exposure to information about a risk is one of the most important factors for the perception and acceptance of that risk. Consumers perceive a subject as more important when more media coverage is involved. A lot of media attention can inflate a small risk and plunge society into a crisis. To prevent this from happening, it is important for regulators to monitor the salience of risks among the public (van Wagenberg & Mihaylov, 2012, p. 15).

Regulators are generally invisible in society. The more success they have at maintaining standards, the less visible their work becomes to the public. Society permanently assumes that regulators are functioning well and that its interests are being taken care of. Efforts to do so remain mostly invisible, which causes the public to rely on media coverage to form an image of how regulators perform. The less directly an issue can be observed, the greater the media's influence in shaping this image, because people lack a point of reference (Steen, Klijn, Wijk & Scherpenisse, 2013, p. 4). Furthermore, the repetition of media messages also plays a role in shaping public opinion: repeated narratives can push public opinion in a particular direction (Fombrun, 2004).

There is a field of tension in the relationship between regulators and the media. There are legitimate reasons why regulators might try to avoid the media as much as possible, one of which is having to withhold information due to ongoing criminal investigations. Aside from the negative aspects of media attention, there are also positive aspects for regulators. Media attention can help enforce policy and can, on some occasions, spark a valuable public debate (Schillemans & Jacobs, 2013, p. 135).

2.4.4 Defining the public

Before we continue to investigate how trust and risk communication are related in the theoretical framework, it seems necessary to clarify how 'the public' is defined. First of all, we cannot speak of the public as a homogeneous group: the public is made up of people, and there are huge differences among these people. For practical reasons risks cannot be discussed with them one by one; therefore the risk communication specialist and advisor Peter Sandman (2012, p. 5) speaks of recurring groups of "key" publics. The identity of these publics depends on the risk issue at hand, but he gives the following examples:

1. Industry

2. Regulators (at all levels of governance)

3. Elected officials (at all levels of governance)

4. Activists (at all levels of governance)

6. Neighbours (everyone who is especially impacted by this particular issue)

7. Concerned citizens (everyone who expressed a desire to get involved in this particular issue)

8. Experts (everyone who has specialized knowledge of this particular issue)

9. The media (and through the media, the rest of the public)

These different groups have varying opinions and frames of reference on the risk issue and therefore a wide range of possible communication strategies should be considered.

Vincent Covello (2002, p. 24) uses another definition of the public. Even though regulators, officials and experts are technically also part of the public, he uses 'the general public' to refer to the main recipients of risk communication. According to him: "Risk debates often are interpreted by the general public in two ways: the world is a dangerous place, and risk managers either do not know what they are doing or do not understand what they are supposed to be doing (National Research Council, 1989)."

3 Theoretical framework

3.1 Trust in the source of risk communication

It is safe to say that people trust some sources more than others, and this also holds for risk communication. Studies in this field have shown that people place more trust in consumer organisations, family and friends than in government sources. An example of these findings is a study by Frewer et al. (1995, p. 477) in which interviewees were asked to express their distrust in 15 sources of information. Ranked from least trusted to most trusted, these included: tabloid newspapers, government agencies, friends, pressure groups, quality newspapers, television news and documentaries, consumer groups, medical doctors and university scientists. It should be noted that this ranking depended on the type of risk being examined (such as alcohol use, pesticide residues or smoking) (Harvey & Twyman, 2007, pp. 3-4).

Other studies have used factor analysis to investigate the factors that can influence trust, identifying different features of information sources as determinants. One study identified competence, objectivity, fairness, faith (goodwill) and consistency as factors that determine trust; another found ability, benevolence and integrity to play a role in predicting trust in a source of information.
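As an illustration of the kind of analysis these studies describe, the sketch below runs an exploratory factor analysis on simulated survey data. It is a sketch under stated assumptions only: the respondents, questionnaire items and the two built-in latent factors are invented for illustration and are not taken from the studies cited above.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(seed=1)
n = 200  # simulated respondents

# Two latent factors: one drives competence-related items, the other motive-related items.
competence = rng.normal(size=n)
motives = rng.normal(size=n)


def item(factor):
    """An observed rating: the latent factor plus measurement noise."""
    return factor + rng.normal(scale=0.5, size=n)


# Hypothetical questionnaire items, three per latent factor.
ratings = np.column_stack([
    item(competence),  # "the source is knowledgeable"
    item(competence),  # "the source is consistent"
    item(competence),  # "the source is an expert"
    item(motives),     # "the source is honest"
    item(motives),     # "the source is fair"
    item(motives),     # "the source cares about people like me"
])

fa = FactorAnalysis(n_components=2, random_state=0).fit(ratings)
print(np.round(fa.components_, 2))
# The loadings should cluster on the first three items for one factor and
# on the last three for the other, recovering the two groups of trust factors.
```

The point of the exercise is merely that factor analysis groups correlated items into a small number of underlying dimensions; it is this kind of grouping that yields factor sets such as competence, objectivity and fairness.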

These studies have resulted in a variety of models for analysing the role of trust in communication, and these models can be divided along a dichotomy of two broad groups of factors that determine trust: the first is 'competence', the second concerns the source's motives. A similar distinction appears in the affective information processing theories developed by psychologists. In this field of study, scholars have developed a two-route (affective/cognitive) model of the processing of risk information, in which differences in risk perception are explained in terms of processing via an affective route, a cognitive route, or both combined (Harvey & Twyman, 2007, p. 4).

The cognitive route depends on processing information about the reliability, competence, knowledge and dependability of the source. The affective foundations of trust, by contrast, rely on emotional ties: "People make emotional investments in trust relationships, express genuine care and concern for the welfare of partners, believe in the intrinsic virtue of such relationships, and believe that these sentiments are reciprocated". Other scholars have further generalized and expanded this model (Harvey & Twyman, 2007, p. 4).

However, this model is better suited to explaining how interpersonal trust is built than to explaining how trust develops in an agency tasked with risk communication. The model describes trust relations between professionals and managers, in which frequent personal interactions and personal care play a role; such relations between the public and the government institutions or consumer organizations charged with risk communication are uncommon (Harvey & Twyman, 2007, p. 5). It is therefore necessary to turn to a more suitable model in order to form a theoretical framework that can be used to answer the research question.

3.2 The Trust-confidence-cooperation (TCC) model of risk communication

A more useful dichotomy for modelling trust in risk communication as provided by organizations can be found in the dual-mode model of cooperation, based on the distinction between trust and confidence: the trust-confidence-cooperation (TCC) model of risk communication. Michael Siegrist and Heinz Gutscher (2005) formed this model by combining research on trust, risk perception and risk management conducted by researchers from a wide array of disciplinary backgrounds. The purpose of the TCC model is to improve understanding of the relation between trust and risk perception, and it addresses the question of how individuals come to expect cooperation from others.

Additionally, the model has some empirical backing and it is well specified (Frewer, Howard, Hedderley, & Shepherd, 1995, p. 5). Figure 1 shows its main features.

Figure 1. Trust-confidence-cooperation model of risk communication (Frewer, Howard, Hedderley, & Shepherd, 1995, p. 5)

Trust in the source's motives (social trust) depends on the perceived similarity between the source's values and those of the receiver of the risk information. Attributes such as benevolence, integrity and honesty also play a role. Here, the narrative information given by the source forms the basis for the receiver's judgement. This judgement is targeted at an agent: an entity to which behaviour-controlling beliefs, desires and intentions are attributed.

Trust in the other half of the dichotomy, competence (or confidence), is formed by an assessment of the source's record of information quality. It therefore depends on personal experiences, or on experiences shared by others who have used this source. Here the judgement is targeted at an object, such as the target's perceived abilities, past performance and experience, in order to judge whether a certain future event will occur as expected.

Both social trust and confidence determine the degree to which cooperation between the receiver and the source takes place. According to the theories on which the model is founded, this should lead to cooperative behaviour such as complying with advice and expressing trust in the source (Frewer, Howard, Hedderley, & Shepherd, 1995, p. 5). The figure shows that, within the TCC model, the performance information (past performance) that determines confidence in the source is filtered by the level of social trust. If a person perceives a source's motives as benevolent, he or she will assess poor performance by that source more generously. This filtering process is based on studies providing evidence that social desirability has a higher priority than intellectual desirability. The asymmetry results in the notion that processing related to trust-in-motives can influence processing related to trust-in-competence, but not the other way around (Frewer, Howard, Hedderley, & Shepherd, 1995, p. 6).

The TCC model also shows that the two parts of the dichotomy (trust and confidence) merge into a shared pathway, which means that behaviour should change accordingly. For example, if a person has more trust in a consumer organization than in a corporate organization, he or she should be more inclined to accept risk advice from the former than from the latter. Likewise, if people say they have lost trust in a government agency, they should be less willing to accept advice from that agency than before. These two dynamics in the TCC model are referred to as the asymmetric influence assumption and the final common pathway assumption, respectively (Frewer, Howard, Hedderley, & Shepherd, 1995, p. 6).
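To make these two assumptions concrete, the sketch below expresses them in a few lines of Python. The numeric scales, the linear weighting and the discounting rule are illustrative assumptions; the TCC literature specifies the direction of these effects, not any particular formula.

```python
def confidence(performance_history, social_trust):
    """Trust in competence: past performance filtered by social trust.

    Asymmetric influence assumption: high social trust softens the impact of
    negative performance evidence, while performance never feeds back into
    social trust.
    """
    adjusted = [
        p if p >= 0 else p * (1 - social_trust)  # discount failures when trust is high
        for p in performance_history
    ]
    return sum(adjusted) / len(adjusted)


def cooperation(social_trust, performance_history, weight=0.5):
    """Final common pathway assumption: social trust and confidence merge into
    a single score that drives cooperative behaviour."""
    return weight * social_trust + (1 - weight) * confidence(performance_history, social_trust)


record = [1.0, -0.5, 0.7]  # the same mixed performance record in both cases
print(cooperation(0.8, record))  # benevolent perceived motives: ~0.67
print(cooperation(0.2, record))  # distrusted motives: ~0.32
```

Under high social trust the negative event is discounted and cooperation remains high; under low social trust the identical record yields far less cooperation, which is exactly the asymmetry the model predicts.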

Communication can become problematic, and eventually fail, for multiple reasons. Communicators may be oblivious to why the public responds to risks in a certain way. Presenting only the facts, backed by statistical analysis, is a somewhat stereotypical portrayal of risk communicators; unfortunately, even in this day and age, it is still an accurate one. Experts on the matter may dismiss conflicting reactions to risk information as irrational, even though those reactions play a vital role in the stigma or outrage situation that they seek to resolve.

3.3 Recommendations on effective risk communication: The Pitfalls of communication

Communication can fail for a number of reasons. Communicators may not recognize why people, or specific stakeholders in particular circumstances, respond to risks the way they do. Steeped in statistical analysis and actuarial charts, a risk communicator tends to express risk from the viewpoint of an expert. Much as in the TCC model, a communication failure is more likely to emerge when empathy with the public is lacking. Another factor that undermines effective communication is that communication attempts are often made only after a problem has appeared, when people have already formed their views on who is to blame and on the magnitude of the problem. In the aftermath of a crisis, a spokesperson of the responsible organization (with a real or perceived conflict of interest) is put forward with the duty to explain everything; he or she then proceeds to explain the risk in a school-lecture format instead of a conversation, aggravating the situation. The outrage overshadowing a crisis is usually not the result of the facts of the uncovered risk, but of the way it was communicated; more often it is driven by the measures (or lack of measures) taken to protect the public and by whether or not responsibility is taken for the incident. These and other mistakes can severely harm trust in those who are entrusted with risk communication. In a paper on the practicalities of earning and maintaining credibility and trust, Vincent Covello (2002, p. 24) made a list of recommendations on how to avoid the "pitfalls" of communication.

Table 1: Avoiding the pitfalls of communication

Pitfall | Do | Don't
Jargon | Define technical terms | Use language that the audience does not understand
Humour | Direct it at yourself | Use it in relation to environment, health and safety issues
Negative allegations | Refute the allegation without repeating it | Repeat or refer to them
Negative words/phrases | Use positive or neutral terms | Refer to national problems (e.g., "This is not Love Canal")
Reliance on words | Use visuals to emphasize key points | Rely entirely on words
Temper | Remain calm; use a question or allegation as a springboard to say something positive | Let your feelings interfere with your ability to communicate clearly
Clarity | Ask whether you have made yourself clear | Assume you have been understood
Abstractions | Use examples, stories, and analogies to establish a common understanding | Talk about new or unfamiliar topics without grounding the audience
Nonverbal messages | Be sensitive to the nonverbal messages you are communicating; make them consistent with what you are saying | Allow your body language, your position in the room, or your dress to be inconsistent with your message
Attacks | Attack the issue | Attack the person or organization (e.g., "You're being irrational")
Promises | Promise only what you can deliver | Make promises you can't keep or fail to follow up
Guarantees | Emphasize achievements made and ongoing efforts |
Speculation | Provide information on what is being done | Speculate about worst cases
Money | Refer to the importance you attach to EH&S issues; your moral obligation to public health outweighs financial considerations | Refer to the amount of money spent as a representation of your concern
Organizational identity | Use personal pronouns (I, we) | Take on the identity of a large organization
Blame | Take responsibility for your share of the problem | Try to shift blame or responsibility to others
Off the record | Assume everything you say and do is part of the public record | Make side comments or "confidential" remarks
Risk/benefit/cost comparisons | Discuss risks and benefits in separate communications | Discuss your costs along with risk levels
Risk comparison | Use comparisons to help put risks in perspective | Compare unrelated risks (e.g., "compared to driving drunk while talking on your cell phone, this risk is miniscule")
Health risk numbers | Stress that the true risk is between zero and the worst-case scenario; base actions on federal and state standards rather than on risk numbers | State absolutes or expect the lay public to understand risk numbers
Numbers | Emphasize performance, trends, and achievements | Mention or repeat large negative numbers
Technical details and debates | Focus your remarks on empathy, competence, honesty, and dedication | Provide too much detail or take part in protracted technical debates
Length of presentation | Limit presentations to 15 minutes, if possible | Ramble or fail to plan the time well

Source: (Adler & Kranowitz, 2005, pp. 27-28)
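One way such a table can be put to work analytically is as a coding scheme for statements made during a crisis. The sketch below encodes three pitfalls with hypothetical keyword cues loosely distilled from the "Don't" column; a real coding scheme would be far richer and, in qualitative content analysis, would be applied by a human coder rather than by simple string matching.

```python
# Hypothetical cues per pitfall; invented for illustration only.
PITFALL_CUES = {
    "speculation": ["worst case", "could turn out"],
    "blame": ["their fault", "not our responsibility"],
    "jargon": ["maximum residue limit", "acute reference dose"],
}


def flag_pitfalls(statement):
    """Return the pitfalls whose cues occur in the statement (case-insensitive)."""
    text = statement.lower()
    return [name for name, cues in PITFALL_CUES.items()
            if any(cue in text for cue in cues)]


statement = ("It is not our responsibility; in the worst case the "
             "acute reference dose might be exceeded.")
print(flag_pitfalls(statement))  # ['speculation', 'blame', 'jargon']
```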

Although some of these notions seem obvious, they stem from experience and from past mistakes made by spokespersons and authority figures. Even though not all pitfalls are relevant or practical to use in this thesis (e.g., nonverbal messages), many of them connect with the trust-confidence-cooperation model of risk communication (Figure 1). As explained in paragraph 3.2, the factor 'trust in the source's motives' (social trust) in the TCC model depends on the perceived similarity between the source's values and those of the receiver of the risk information, and on the attributes benevolence, integrity and honesty. One could therefore ascertain that the risk communicator should attune its information so that it matches the public's values. In doing so, pitfalls such as 'blame', and
