
False prophets and Cassandra's curse: The role of credibility in belief updating

Toby D. Pilditch a,b,⁎, Jens K. Madsen b, Ruud Custers a,c

a University College London, Department of Experimental Psychology, 26 Bedford Way, London WC1H 0AP, UK
b University of Oxford, School of Geography and the Environment, South Parks Road, Oxford OX1 3QY, UK
c Utrecht University, Department of Psychology, Heidelberglaan 1, 3584 CS Utrecht, Netherlands

⁎ Corresponding author at: University College London, Department of Experimental Psychology, 26 Bedford Way, London WC1H 0AP, UK. E-mail address: t.pilditch@ucl.ac.uk (T.D. Pilditch).

https://doi.org/10.1016/j.actpsy.2019.102956
Received 2 March 2019; Received in revised form 31 October 2019; Accepted 9 November 2019; Available online 30 November 2019
0001-6918/© 2019 Elsevier B.V. All rights reserved.

A R T I C L E  I N F O

Keywords: Belief updating; Credibility; Confirmation bias; Trust; Heuristics

A B S T R A C T

Information from other sources can be beneficial or detrimental, depending on the veracity of the report. Along with prior beliefs and context, recipients have two main routes to determine veracity: the perceived credibility of the source, and direct evaluation via first-hand evidence, i.e. testing the advice against observation. Using a probabilistic learning paradigm, we look at the interplay of these two factors in the uptake (or rejection) of communicated beliefs, and the subsequent evaluation of the credibility of the communicator in light of this process. Whether the communicated belief is false (Experiment 1) or true (Experiment 2), we show that beliefs are interpreted in light of the perceived credibility of the source, such that beliefs from high trust sources are taken up (hypothesis 1), whilst beliefs from low trust sources are treated with suspicion and potentially rejected – dependent on early evidence experiences (hypothesis 2). Finally, we show that these credibility-led biased interpretations of evidence (whether belief- or suspicion-confirming) lead to further polarization of the perceived credibility of communicators (hypothesis 3). Crucially, this occurs irrespective of the veracity of the communication, such that sources accompanied by a high trust cue not only get away with communicating falsehoods, but see their perceived credibility increase, whilst sources accompanied by low trust cues not only have truthful communications rejected, but have their low trust penalized even further. These findings carry important implications for the consequences of artificially inflating or deflating the credibility of communicators (e.g., politicians or scientists in public debate).

1. Introduction

As humans developed the capacity to communicate, the capacity to convey misinformation also emerged. Whether spread maliciously or unintentionally, the consequences of misinformation can be severe (e.g. spreading the belief that vaccines cause autism). In recent years, the potential to communicate to large audiences has increased exponentially with the advent of mass communication, the Internet, and social media. Given that people can share erroneous beliefs with those around them, beliefs and misinformation may spread to new people, carrying their costs with them rather than dying with the originator. However, some beliefs can be evaluated via repeated first-hand experience. For example, someone's belief in homeopathic medicine may be revised after taking it, as it does nothing for the illness. This raises the question of whether recipients, given the opportunity, can dismiss (or ameliorate the effects of) misinformation, or whether the process of evidence evaluation is ineffective, or even exacerbates (erroneous) adherence.

Whilst senders can freely transmit any belief, recipients do not always unthinkingly accept the claim. Without any further information, communicated beliefs are just propositions without a truth-value (Mitchell, De Houwer, & Lovibond, 2009). In evaluating reports with no further information, perceived source credibility is an important cue for assessing the truth-value of the proposed belief (Briñol & Petty, 2009; Chaiken & Maheswaran, 1994; Petty & Cacioppo, 1984). For instance, people are inclined to dismiss a statement from a drunkard on the street, such as an impending economic crash, but when the country's chief economist makes the same statement, the results are alarmingly different, as many citizens presumably rate her statements as highly credible. As such, source credibility may be (and arguably normatively should be) an important moderator in belief revision.

Credibility is not the only cue available to evaluate reports. A person's prior belief regarding the hypothesis, the surrounding context, available evidence, and even the sequence of information presentation may all contribute to belief uptake. For example, studies suggest the first evidence people encounter in the environment is used to verify the truth-value of a communicated belief (Pilditch & Custers, 2018). If initial evidence confirms the belief, it is adopted and maintained (even if later evidence contradicts it). If it is instead initially disconfirmed, though, the belief is dismissed and abandoned. Critically, in many real-world situations, perceived source credibility and initial evidence not only coexist, but may, as in the case of fake news, contradict each other – with communicated beliefs from credible sources being discredited by initial evidence, or the other way around.

In the current paper, we use an established method (Pilditch & Custers, 2018) to implement and test credibility and experienced evidence against each other, to investigate the dynamics of belief adoption and judgements of source credibility. We show that perceived trustworthiness (a component of credibility, see below) dominates the effect, irrespective of the veracity of the belief communicated and the opportunity to evaluate the belief via first-hand observation. Initial evidence only plays a role in gatekeeping the uptake of beliefs when perceived trust is low. Finally, we find tentative evidence of deleterious cyclical consequences of the dominance of trust over evidence, wherein sources may in fact be perceived as more trustworthy despite providing misinformation.

1.1. Evaluating anonymous sources

Pilditch and Custers (2018; see also Staudinger & Büchel, 2013) demonstrate that beliefs communicated by others influence the integration of objective evidence in a recipient. Given advice regarding one of two lottery machines from an anonymous previous participant, participants were more inclined to adhere to the advice (and trust the source more) when initial evidence confirmed it, even if subsequent evidence contradicted it for an extended period. Conversely, an initial contradiction led to immediate belief abandonment for the duration.

This “gatekeeping” effect of initial evidence fits with previous findings in several neuroscience studies exploring the impact of primacy effects on advice uptake (Decker, Lourenco, Doll, & Hartley, 2015; Doll, Hutchison, & Frank, 2011; Doll, Jacobs, Sanfey, & Frank, 2009; Staudinger & Büchel, 2013). Critically, Pilditch and Custers (2018) demonstrate that communicated beliefs can cause a prolonged bias if the initial evidence corroborates them. Notably, through the provision of counterfactual information, this effect was attributed to a confirmation bias in evidence integration (Hahn & Harris, 2014; Klayman, 1995; MacDougall, 1906) rather than evidence selection (Klayman & Ha, 1987; Lord, Ross, & Lepper, 1979).

To understand these processes better, it is of great interest to explore whether source cues like trustworthiness subsume the gatekeeping role of initial evidence (especially given the established impact of credibility on belief revision, see Section 1.2), and whether this occurs irrespective of the veracity of the belief itself. Given that previous work has considered this gatekeeping effect a product of validating the uncertain (or unknown) truth value of the communicated belief (Pilditch & Custers, 2018), such an influence of trust would fit with this account of belief uptake.

1.2. Source credibility

When assessing a belief against first-hand evidence, characteristics of the source itself may influence the assessment. For example, although both a drunkard and a nurse may provide the same information regarding a health issue, the persuasiveness of the claim differs based on the source; the nurse is more likely to have relevant knowledge (expertise) and a motive to convey it honestly (trustworthiness). As the truth-value of a proposition can in some cases be assessed independently of the source, appeals to authority have traditionally been considered an argument fallacy or a shallow heuristic (see e.g. Briñol & Petty, 2009; Chaiken & Maheswaran, 1994; Petty & Cacioppo, 1984). These accounts consider source cues as capable of providing directional predictions, but given the opportunity for increased consideration, a belief recipient should instead defer to the message content, minimising the influence of source characteristics. It is worth noting that, unlike in the present work (in which the truth-value of evidence is always assumed to be 100%), these models are based on work in which “evidence” or persuasive arguments are either weak or strong (and accordingly have a variable truth-value).

Comparatively, coherence-based accounts argue credibility is essential to reasoning. For instance, we might reasonably reject seemingly strong evidence if the source is discredited (e.g. as the person may have fabricated the evidence). These accounts conceptualise the influence of credibility within the integration of domain expertise, trustworthiness, coherence, and back-up evidence. Such a conceptualization has, for instance, met with success in qualitatively formalising appeals to authority (Walton, 1997, Table 1, p. 102). Bovens and Hartmann (2003) developed this concept further in their proposal of a formal Bayesian model of reliability and coherence (see also Schum, 1981). This work has since provided the formal foundation for a Bayesian source credibility model (Hahn, Harris, & Corner, 2009; Hahn, Oaksford, & Harris, 2012), which operationalizes credibility as an amalgam of expertise and trustworthiness (which are operationally independent). Here, expertise refers to the degree of access to accurate information the source is believed to have for the domain in question, whilst trustworthiness refers to the degree of belief that the source is willing to communicate information as faithfully as possible, to the best of the source's ability. In short, expertise refers to competence whilst trustworthiness refers to intention. The model has found empirical support in argumentation (Harris, Hahn, Madsen, & Hsu, 2015) and political endorsements (Madsen, 2016), capturing essential and predictive characteristics of appeals to authority, given its fit with observed posterior degrees of belief.
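To make this concrete, consider a minimal sketch of such a model. This is our own simplified construction under the definitions above, not the exact model of Bovens and Hartmann (2003) or Hahn et al. (2009): the source is assumed to know the true state with probability equal to its expertise, to report its belief faithfully with probability equal to its trustworthiness, and to guess otherwise.

```python
# A minimal sketch (our own construction, not the authors' model) of how a
# Bayesian source credibility model can combine expertise and trustworthiness.
# Assumptions: expertise e = P(source knows the true state); trustworthiness
# t = P(source reports its belief faithfully); an ignorant source guesses.

def posterior_belief(prior, expertise, trust):
    """P(H | source reports H), treating the report as evidence about H."""
    # Likelihood of the source endorsing H when H is true / false:
    p_rep_given_h = expertise * trust + (1 - expertise) * 0.5
    p_rep_given_not_h = expertise * (1 - trust) + (1 - expertise) * 0.5
    numerator = prior * p_rep_given_h
    return numerator / (numerator + (1 - prior) * p_rep_given_not_h)

# High-trust source: the report raises belief in H.
print(posterior_belief(prior=0.5, expertise=0.9, trust=0.9))  # ~0.86
# Low-trust source: the same report *lowers* belief in H (belief inversion,
# cf. hypothesis 2), since a known liar endorsing H is evidence against it.
print(posterior_belief(prior=0.5, expertise=0.9, trust=0.1))  # ~0.14
```

Note how, under these assumptions, trustworthiness sets the direction of updating while expertise scales its strength, which is the qualitative pattern the hypotheses below rely on.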

Rather than conceptualising credibility as a shallow cue, we are interested in the evaluation of a belief (specifically, a recommended action, rather than an argument) via an evidence integration process. There is potential for cycles of credibility and evidence-based belief updating. Following previous studies, coherence-based models inform our predictions (e.g., Bovens & Hartmann, 2003; Hahn et al., 2009, 2012). This allows not only for graded and specific (rather than solely directional) predictions of the effect of credibility on belief updating, but also for predictions of the impact of updated beliefs on credibility. More precisely, high credibility sources will engender greater adherence to advice, whilst low credibility sources are detrimental to it.

1.3. Hypotheses

Previous work has found that initial evidence in relation to the belief of an unknown source plays a critical role in belief consolidation (Pilditch & Custers, 2018; Staudinger & Büchel, 2013). Initial evidence has therefore been argued to play a pivotal role in shaping not only the perceived validity of the communicated information, but by proxy the credibility of the source as well (Pilditch & Custers, 2018). Extending this argument, we hypothesise that for cues indicating an unreliable source (and thus confidence in the truth of the belief is low), initial evidence will continue to play a role in consolidation/refutation (or “gatekeeping”). However, for an unreliable source – in contrast to a reliable source – this may result in belief inversion (i.e. rather than not knowing the validity of the belief, an audience may actively suspect it of being false).

Conversely, if the source of the belief is indicated as being credible, this credibility cue may act as evidence to inform confidence in the belief being true prior to evidence exposure. Thus, we predict that the role of initial evidence in consolidating/refuting will be subsumed (in line with expectations of the Bayesian source credibility model, see Harris et al., 2015). Furthermore, given this supplanting role of source credibility cues, it may be predicted that cues indicating higher credibility will (irrespective of initial evidence) elicit higher degrees of belief compliance.

Finally, we explore how source credibility is updated in light of first-hand evaluation of the communicated belief. Moreover, we explore how much the veracity of the communicated belief matters. For instance, it is possible that sources perceived to be credible from the outset can lead to a biased evaluation of evidence, such that the belief is not only maintained, but the source is also perceived to be more credible post-evaluation. Such a finding would represent a dangerous unmooring of credibility from veracity, and illustrate a mechanism for the misuse of trust.

2. Experiment 1: False prophets

Experiment 1 assesses how source credibility impacts the uptake and maintenance of a communicated belief when the belief recipient is subsequently exposed to a prolonged period of first-hand evidence. Specifically, Experiment 1 focuses on erroneous beliefs, where the source advises choosing the probabilistically inferior option. Besides following a tried and tested method (Pilditch & Custers, 2018), three reasons motivate the experimental design.

First, if initial evidence, shown to act as a gatekeeper (Pilditch & Custers, 2018), is overlooked or disregarded when the belief comes from a credible source, it suggests initial evidence plays a secondary role to high credibility (hypothesis 1). Comparatively, when beliefs originate from low credibility sources, initial evidence may continue to play a gatekeeping role. This explores the dangers of authority abuse in the spread of misinformation.

Second, the design assesses how low credibility sources are interpreted – specifically, whether low trustworthiness (or rather, distrust) is interpreted as a sign of ulterior motives (Twyman, Harvey, & Harries, 2008). If so, we predict that choices reflect this assumption, such that recipients choose the opposite option to the one indicated by the belief (hypothesis 2). This demonstrates belief processing within a social context and shows how one can use the trustworthiness of a source to “flip” the biasing effect of the belief in either direction (where trust leads to a bias towards, and distrust a bias away from, the belief).

Finally, we explore how experienced evidence not only maintains communicated beliefs, but also updates the credibility of the source of the belief. The perception of a source's credibility may be polarized by this process (hypothesis 3). For instance, highly credible sources may not only produce biased responding that corroborates the communicated belief, but as a result may be considered more credible afterward. Conversely, a low credibility source may provoke suspicion (Priester & Petty, 1995) and belief abandonment, and the thus-confirmed suspicion results in an even lower perception of credibility. How ratings of trust and expertise of the original source of the belief are affected by the combination of a communicated belief and sustained, probabilistic evidence (that fails to support it) has important ramifications for the cyclical, long-term impact of sources.

2.1. Method

Following the outline set out above, Experiment 1 combined a probabilistic evidence integration task (amalgamating learning tasks from Pilditch & Custers, 2018, and Staudinger & Büchel, 2013) using a medical context with a preceding “comment section” that allowed for the manipulation of source credibility factors. As detailed below, participants made choices in two parallel disease-treatment sets of trials, for one of which they had received a belief from a source about one medicine being the better choice. Whether this belief came from a perceivably trustworthy and/or expert source was manipulated between-subjects, along with whether initial evidence supported or undermined the belief. Along with the proportion of choices made in favour of the sub-optimal medicine (central to hypotheses 1 and 2), posterior binary preferences, confidence in that preference, and probability estimates were also measured. Lastly, the trustworthiness and expertise of the source were also rated (relevant to hypothesis 3). All materials and method summaries may be found in Supplementary Materials D.

2.1.1. Participants

Participants were recruited and participated online through MTurk. Those eligible for participation had a 95% or above approval rating from over 500 prior HITs, and could not have participated in previous experiments/pre-tests. Participants believed the purpose of the study was to improve medical decision making. Participants were English speakers between the ages of 18 and 65, located in the United States. Informed consent was obtained from all participants in all experiments.

2.1.2. Procedure and design

2.1.2.1. Instruction and “belief”. Participants were told they would see a number of trials. In each trial, a new patient is described as having one of two diseases. Participants prescribe one of the two medicine options for that patient's disease and see the outcome (patient cured or not). Participants were told that the two medicine options (“Mox” and “Nep” for the “Lannixis” disease, and “Byt” and “Zol” for the “Deswir” disease) all generated either “Cured” or “No Effect” outcomes, with this feedback received by participants immediately after each choice. Unknown to participants, one medicine cured at a 60% rate (the optimal option), and one cured at a 40% rate (the sub-optimal option). Successful cures earned participants points (+3), whilst failures to cure cost them points (−1). Participants were told each patient may react differently to the medicines, and that their job was to discern the overall efficacy of the medicine options. Participants were incentivized by a performance bonus on top of the standard payment, based on the number of points earned (see Supplementary Materials D for full details).

Before starting the trials, participants were given advice from a (fictitious) previous participant regarding one of the two diseases. Which medicines were optimal/sub-optimal was counterbalanced, meaning that the advice was also counterbalanced. Crucially, because the communicated belief pertained to only one of the two diseases that patients (trials) would present with, we could test for a within-subject difference between the choices and judgements for the “control” disease (the disease that did not have a comment section) and the “belief” disease. As the sole difference between the two diseases was the presence or absence of a communicated belief, any subsequent difference could be attributed to this within-subject manipulation.

The manipulated “comment” was constructed to appear to be from a previous MTurk participant (complete with fabricated MTurk ID number), and indicated a directional hypothesis regarding one of the medicines for the disease (“I think the Zol medicine was the most effective”; see Supplementary Materials B for an example screen). The medicine indicated as superior was in fact always (unknown to the participant) the sub-optimal option.

2.1.2.2. Expertise and trust statements. Along with the comment, the previous participant's trustworthiness and expertise were described (each either low or high, and randomly assigned between-subjects). Trust and Expertise were independently manipulated as high or low via the following statements accompanying the belief manipulation:

High [Low] Trust: “The participant below was told they would be paid double if the next participant group performed better [worse] than them.”

High [Low] Expertise: “This participant was asked to make a comment after completing all of the 1000 trials [only 1 trial of 1000].”[1]

To ensure the manipulation of trust and expertise yielded reasonable high/low ratings, pre-tests were conducted prior to Experiment 1, in which participants rated the sources following the comment section, and were then debriefed. These ratings are used as a tentative baseline comparison for trust and expertise ratings in Experiments 1 and 2.[2]

[1] See Supplementary Materials B for an example screenshot of this implementation.
[2] Full details of the pre-testing can be found in Supplementary Materials A.

2.1.2.3. Trials and initial evidence. Having been given the comment, participants moved on to the trials. On each trial, participants selected which medicine to prescribe to the patient (the side of the screen on which each medicine appeared was randomized). Participants could see their current total points earned during the trials. The “Cured” outcome was written in green if the medicine led to a cure, and “No Effect” in red if the medicine was unsuccessful. This gave the participant feedback for the selected medicine option.[3]

[3] See Supplementary Materials C for an example feedback screen.

Initial evidence experienced during the trials was a between-subject manipulation. Having seen the comment, participants experienced first trials that either supported it (initially supportive evidence condition; IE+) or undermined it (initially undermining evidence condition; IE−). As the belief (falsely) indicated the sub-optimal option as superior, in the supporting initial evidence (IE+) condition the sub-optimal options for both diseases first produced two positive trials (cures) followed by one negative (no effect), whilst the optimal options for both diseases showed the opposite pattern: two negative trials (no effect) followed by one positive (cure). Conversely, the undermining initial evidence (IE−) condition followed the opposite pattern, with the optimal options now receiving two positive trials followed by one negative, and the sub-optimal options receiving two negative trials followed by one positive. In this way, in the IE− condition, the belief for the belief disease receives initially undermining evidence.

All trials following this three-trial manipulation followed the 60/40 probability distribution outlined above.
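To make the schedule concrete, here is a minimal sketch of the outcome structure just described (our own illustration, not the authors' code; names such as make_medicine are invented). Each medicine has a fixed three-outcome opening sequence, after which outcomes follow the 60/40 cure rates, with the +3/−1 points scheme applied to feedback.

```python
import random

CURE, NO_EFFECT = "Cured", "No Effect"

def make_medicine(cure_rate, first_three):
    """first_three: three booleans (True = cure) returned on the first three
    selections of this medicine; later selections are Bernoulli(cure_rate)."""
    count = 0
    def outcome():
        nonlocal count
        cured = first_three[count] if count < 3 else (random.random() < cure_rate)
        count += 1
        return CURE if cured else NO_EFFECT
    return outcome

def belief_disease(ie_supportive):
    # In Experiment 1 the (false) belief endorses the sub-optimal medicine.
    # IE+: sub-optimal opens cure, cure, no-effect; optimal opens the inverse.
    sub_open = [True, True, False] if ie_supportive else [False, False, True]
    opt_open = [not x for x in sub_open]
    return {"sub-optimal": make_medicine(0.40, sub_open),
            "optimal": make_medicine(0.60, opt_open)}

meds = belief_disease(ie_supportive=True)
points = 0
for _ in range(5):                            # sampling the endorsed option
    result = meds["sub-optimal"]()
    points += 3 if result == CURE else -1     # +3 for a cure, -1 otherwise
    print(result, points)
```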

2.1.2.4. Posteriors and demographics. Once all 100 trials (50 per disease, alternating each trial) were completed, participants provided posterior measures for each disease: a binary preference for the medicine for each disease, the confidence in that preference, and a probability estimate for the distribution of cures between those medicines (see Supplementary Materials D for a complete description of wording).

Following this, participants were asked if they could recall the previous participant's comment (manipulation check), after which they could post an open text response “comment” in the comment section they had seen before the task. Participants were instructed that when posting their comment, they would receive an additional bonus if they either successfully deceived or aided the participant that followed them. The instruction to deceive or help was randomized between-subjects, and further supported the validity of the context for the trust statement they had seen regarding the previous participant.

Having posted their comment, participants provided trust and expertise ratings for the previous participant who provided them with the initial comment (in the same format used in the pre-test methodology, see Supplementary Materials A). This allows for comparisons of pre-trial and post-trial estimations of expertise and trustworthiness, tracking the credibility impact of providing erroneous advice. Finally, participants provided demographics and completed a Need for Closure measure (Roets & Van Hiel, 2011; Webster & Kruglanski, 1994), following the protocol of Pilditch and Custers (2018). Following completion of the task, participants were debriefed and given an email address to contact if they had any further questions.

2.1.3. Method summary

There were, along with the within-subject belief-control disease difference, three between-subject factors under investigation: initial evidence (supportive or undermining), source trustworthiness (high/trustworthy, or low/untrustworthy), and source expertise (high/expert, or low/novice). The main dependent variables under investigation were the proportion of choices made in favour of the sub-optimal medicine (central to hypotheses 1 and 2), posterior measures of binary preferences, confidence in that preference, and probability estimates. Lastly, ratings of the trustworthiness and expertise of the source were also taken following evidence exposure (relevant to hypothesis 3).

Supplementary Materials D summarises the key information from the above methodological description. This includes the phrasing of the task instructions, the incentive scheme, and the belief manipulation, as well as the measures taken (including posteriors and manipulation check question phrasing).

2.2. Results

2.2.1. Descriptives and processing

The 2 (source trustworthiness; Trust) × 2 (source expertise; Expertise) × 2 (Initial evidence) between-subject factors resulted in 8 groups for analysis (see Table 1 below), with a calculated sample size of 520 (50 per group).[4]

Participants were randomly assigned to one of the eight possible conditions (see the first column of Table 1). The mean age was 36.85 years (SD = 12.42) and the sample was 55.2% female. After completing the task, all participants were asked a series of filter questions to determine whether the comment manipulation had been remembered. If participants had no recollection of the manipulation comment they were removed from subsequent analysis (see Table 1). The decision to remove those who failed was taken given the explicit nature of the belief manipulation, in conjunction with the trust and expertise statements.[5] The analyses were conducted on the remaining 401 participants, with a mean age of 36.54 years (SD = 12.25) and 56.6% female, leaving ~50 participants for each between-subjects condition.

Table 1
Experiment 1: participant breakdown by group, along with the number of participants passing the belief manipulation check.

Trust   Expertise   Initial Evidence   N    Passed Manipulation Check
High    High        Supporting         63   52
High    High        Undermining        60   50
High    Low         Supporting         66   49
High    Low         Undermining        67   50
Low     High        Supporting         64   52
Low     High        Undermining        68   49
Low     Low         Supporting         65   48
Low     Low         Undermining        72   51

[4] This calculation was based on the Belief × Initial Evidence interaction effect size in choice data found in previous testing, in conjunction with effects found in pilot-testing source credibility effects and previous estimations of likely manipulation check failures from previous, similar studies (Pilditch & Custers, 2018). The most conservative was selected for a power analysis, run using G*Power (Faul, Erdfelder, Buchner, & Lang, 2009; Faul, Erdfelder, Lang, & Buchner, 2007) with 80% power and a significant effect of interaction at the 0.05 level, to estimate sample sizes required for Experiment 1. In fact, the computed achieved power at the post-exclusion sample size (400) yielded an expected power (1 − β error probability) of 93.76%, given the same criteria.

[5] When all participants are included in the analysis, results broadly conform to those reported when the exclusion criteria are applied. The only exception is a reduction of effects in posterior probability estimates; however, these do not affect conclusions.
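For readers without G*Power, the flavour of the calculation in footnote [4] can be reproduced with statsmodels' ANOVA power routines. The paper does not report the Cohen's f it used, so the value below is an assumption chosen purely for illustration; the resulting numbers will not match the reported ones exactly.

```python
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
f = 0.124  # assumed effect size (Cohen's f); NOT taken from the paper

# Total N needed for 80% power at alpha = .05 for a single-df contrast
# (k_groups=2 approximates a 1-df interaction test):
n_needed = analysis.solve_power(effect_size=f, alpha=0.05, power=0.80,
                                k_groups=2)
print(f"required total N under these assumptions: {n_needed:.0f}")

# Achieved power at the post-exclusion sample size of ~400:
achieved = analysis.power(effect_size=f, nobs=400, alpha=0.05, k_groups=2)
print(f"achieved power at N = 400: {achieved:.2%}")
```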

2.2.2. Choice data

To assess the impact of the belief, initial evidence, and source credibility manipulations on choices, a mixed ANOVA was run using the total number of optimal choices for the belief disease and the total number of optimal choices for the control disease as the two-level within-subjects factor (belief). The between-subject factors included in the analysis were initial evidence, trust, and expertise. As can be seen in Fig. 1, there were significantly fewer optimal choices for the belief disease (M = 26.99, SD = 12.42) than for the control disease (M = 28.91, SD = 11.68), F(1,393) = 6.272, p = .013, η² = 0.016, CIdiff = [0.407, 3.376]. Similarly, there were fewer optimal choices when initial evidence was supporting (M = 25.74, SD = 11.05) rather than undermining (M = 30.17, SD = 12.68), F(1,393) = 24.803, p < .001, η² = 0.059, CIdiff = [2.662, 6.134], and when trust was high (M = 26.82, SD = 11.90) rather than low (M = 29.09, SD = 12.18), F(1,393) = 6.462, p = .011, η² = 0.016, CIdiff = [0.509, 3.981], whilst expertise showed no main effect (p = .096).



This shows that significantly more choices were made for the sub-optimal medicine when it was favoured by initial evidence, when participants had received a belief favouring that option, or when the belief came from a high trust source. This is in line with expectations. Furthermore, there was a significant interaction between belief and trust, F(1,393) = 29.926, p < .001, η² = 0.071, indicating that beliefs from high trust sources lead to significantly more suboptimal (belief-congruent) choices, whilst beliefs from low trust sources can, depending on initial evidence, lead to more choices in the opposite direction (which, as the advice was erroneous, happens to be the optimal choice in this experiment; see right-hand facets of Fig. 1). Together, this suggests beliefs are processed in the context of source cues and that perceived trustworthiness predicts the direction of first choices.[6]

Fig. 1 suggests high trust groups (left-hand facets) are unaffected by supporting or undermining initial evidence (left versus right pairs of bars, within-facet), whilst low trust groups (right-hand facets) seem influenced by undermining versus supporting initial evidence (higher belief (white) bars in right-hand pairs, relative to control (grey) bars, as compared to left-hand pairs, within-facet), suggestive of the belief by initial evidence interactions found when trust cues are absent (see e.g., Pilditch & Custers, 2018). However, no significant three-way interaction of belief, trust, and initial evidence was found (p = .127), although the pattern does suggest that the gating effect of initial evidence may be larger under low than high trust.[7]

Fig. 1. Experiment 1: Proportion of Optimal Choices. White bars present the disease that received a belief indicating the sub-optimal option. Error bars reflect 95% Confidence Intervals.

[6] This claim is supported by a Chi squared analysis on first choice data, finding a highly significant effect of trust, χ²(1, N = 401) = 95.373, p < .001, with those in low trust groups more often choosing the non-specified option, rather than the option specified by the belief – corroborating hypothesis 2 – whilst the reverse is true of high trust groups.

[7] To check for possible learning effects, choices were also split into two 25-trial blocks. Block was then tested in a repeated measures ANOVA (along with factors from the main choice analysis), finding a significant overall learning effect, F(1,393) = 75.43, p < .001, η² = 0.161. This effect was exacerbated in supporting initial evidence conditions (where participants had a further-from-optimal starting point), F(1,393) = 8.3, p = .004, η² = 0.021. Importantly, the Trust × Belief interaction was not affected by block, suggesting the influence of trust on beliefs was not mitigated by learning effects.
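A test of the kind reported in footnote [6] can be reproduced as follows. The contingency counts are invented for illustration (the paper reports only the test statistic), so only the procedure, not the numbers, mirrors the analysis.

```python
from scipy.stats import chi2_contingency

# Hypothetical first-choice counts (rows: high/low trust; columns: chose the
# belief-specified vs the non-specified option). Counts are invented; they
# sum to the reported N = 401 but will not reproduce the reported statistic.
table = [[160, 40],    # high trust: mostly follow the belief
         [55, 146]]    # low trust: mostly choose the opposite option
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.3g}")
```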

2.2.3. Posteriors

The same mixed ANOVA format as in the choice analysis was used to assess the impact of the independent variables on posterior judgements.

2.2.3.1. Probability estimates. Using this protocol, the effect of the independent variables on posterior probability estimates (estimates of the percentage of optimal outcomes in favour of the initially dominant medicine) in the belief and control diseases (as the within-subject belief factor) was assessed. Although belief disease estimates (M = 54.27, SD = 19.94) were significantly lower than control disease estimates (M = 56.57, SD = 18.64), F(1,393) = 4.432, p = .036, η² = 0.011, CIdiff = [0.153, 4.465], and initially supported condition estimates (M = 52.25, SD = 18.10) were significantly lower than initially unsupported condition estimates (M = 58.61, SD = 20.01), F(1,393) = 16.656, p < .001, η² = 0.041, CIdiff = [3.292, 9.411], there were no main effects for either trust or expertise. Interestingly, the interaction between belief and trust was significant, F(1,393) = 3.888, p = .049, η² = 0.01, indicating that those who received a belief coupled with a high trust source showed a greater degree of bias in posterior probability estimates (left-hand column of Fig. 2).

In general, trends indicate the effects found in probability estimates are subsumed by the impact of initial evidence. Notably, moving from left to right across the columns of Fig. 2, the number of optimal choices increases as factors that should influence poor choices decrease (note: both belief and control diseases have their initial evidence manipulated in the same direction, whilst trust and expertise manipulations are localized to the belief disease).[8] Specifically, when the communicated belief is sub-optimal (i.e. erroneous), probability estimates reflect less influence of belief (smaller differences between belief disease – white bars – and control disease – dark grey – estimates) when factors that should increase belief uptake (e.g. high trust sources, and supporting initial evidence) are no longer present.

Fig. 2. Experiment 1: Posterior Probability Estimates (estimate of percentage of optimal outcomes in favour of initially dominant medicine), split by group. Error bars reflect 95% Confidence Intervals.

[8] These effects were replicated in the analysis of binary preferences. Using a mixed-effects logistic regression, only the inclusion of initial evidence as a factor was found to significantly improve the base model (subject only), χ²(1) = 35.174, p < .001. However, as no main effects were found for belief, χ²(1) = 2.0943, p = .148, trust, χ²(1) = 0.5406, p = .4622, or expertise, χ²(1) = 0.0102, p = .9195, no further models including interaction terms were assessed. Confidence in these preferences (returning to the mixed ANOVA) showed no main effect of initial evidence, belief, trust, or expertise, but there was a Belief × Trust × Initial Evidence interaction, F(1,393) = 15.994, p < .001, η² = 0.039. Investigating this interaction further by splitting participants by initial evidence condition found that those in supporting initial evidence conditions were more confident in belief (vs control) preferences, F(1,197) = 4.945, p = .027, η² = 0.024. Further, although trust did not have a significant main effect on confidence, it did interact significantly with belief, F(1,197) = 4.284, p = .04, η² = 0.021, indicating that the coherence of a belief supported by both high trust and initial evidence results in greater confidence, whilst a belief from an untrustworthy source which still receives supporting evidence leads to no such difference. Conversely, those in undermining initial evidence conditions showed no main effect of belief or trust, but did show a strong interaction between belief and trust, F(1,196) = 12.093, p = .001, η² = 0.058, in this case showing that the combination of low trust (one reason to distrust the belief) and undermining initial evidence (another reason for distrust in the belief) resulted in greater confidence that the opposite option to that indicated by the belief is true.

2.2.4. Ratings of trust

An analysis of variance was conducted to assess the effect of the trust manipulation as a factor on posterior ratings of trust. This analysis demonstrated that, irrespective of condition and experienced evidence, those in high trust groups rated the source as significantly more trustworthy (M = 64.84, SD = 27.32) than those in low trust groups (M = 26.92, SD = 24.06), F(1,400) = 217.518, p < .001, η² = 0.353, CIdiff = [32.87, 42.98].

Secondly, trust ratings (elicited after experiencing evidence) were tentatively compared to the pre-test baseline.[9] We find a polarizing effect, supporting hypothesis 3: participants receiving information from high trust sources rate the trustworthiness of the source as even higher after experiencing the evidence (pre-evidence, M = 54.35, SD = 33.169; post-evidence, M = 64.84, SD = 27.321), F(1,288) = 7.887, p = .005, η² = 0.027, CIdiff = [−17.84, −3.138], despite the fact that the comment indicated the sub-optimal choice. Conversely, those receiving comments from low trust sources rate the trustworthiness of the source as even lower (pre-evidence, M = 35.94, SD = 28.113; post-evidence, M = 26.92, SD = 24.062), F(1,299) = 8.364, p = .004, η² = 0.027, CIdiff = [2.88, 15.17], having confirmed their suspicions.

[9] See Supplementary Materials A for full details of pre-testing.
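The pre/post comparison above is a one-way ANOVA across two independent samples (pre-test raters versus post-evidence raters). A minimal sketch, with invented normally distributed ratings and invented sample sizes standing in for the real data, looks like this:

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical pre-test vs post-evidence trust ratings for high-trust
# sources. Means/SDs echo those reported above, but the samples (and their
# sizes) are invented, so the resulting F will not match the paper's.
rng = np.random.default_rng(1)
pre = rng.normal(54.35, 33.17, 140)    # pre-test baseline sample
post = rng.normal(64.84, 27.32, 150)   # post-evidence sample
F, p = f_oneway(pre, post)
print(f"F(1,{len(pre) + len(post) - 2}) = {F:.3f}, p = {p:.4f}")
```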

Taken together, the trust ratings analyses indicate (albeit tentatively) that highly trusted sources might not only retain, but also increase, their perceived trustworthiness, via higher levels of belief uptake and maintenance, despite communicating erroneous advice. Conversely, those initially perceived to be untrustworthy, via correspondingly low levels of belief uptake and maintenance (and increased scepticism), may have their ratings of trust lowered further by communicating erroneous advice.

2.2.5. Ratings of expertise

An analysis of variance was conducted to assess the effect of the expertise manipulation as a factor on posterior ratings of expertise. Irrespective of group and experienced evidence, those in high expertise groups rated the source as significantly more expert (M = 57.37, SD = 27.27) than those in low expertise groups (M = 20.75, SD = 26.867), F(1,400) = 183.432, p < .001, η² = 0.315, CIdiff = [31.3, 41.94], suggesting that the expertise manipulation was successful.

Secondly, expertise ratings (elicited after experiencing evidence) were tentatively compared to the pre-test baseline ratings of expertise. Expertise ratings, as a consequence of communicating an erroneous belief, decreased when participants evaluated the belief against first-hand evidence in expert conditions only (pre-evidence, M = 68.44, SD = 27.785; post-evidence, M = 57.37, SD = 27.27), F(1,290) = 10.008, p = .002, η² = 0.033, CIdiff = [4.18, 17.96]. Trust ratings, however, did not decrease in high-trust conditions. Sources already considered low in expertise did not see a significant decrease (pre-evidence, M = 23.51, SD = 29.562; post-evidence, M = 20.75, SD = 26.867, p = .419).

2.3. Discussion

Experiment 1 explored the role of source credibility factors in the uptake and maintenance of an erroneous belief when recipients are exposed to prolonged first-hand evidence. The results strongly suggest perceived credibility influences initial choice directly and may even supersede the impact of initial evidence when the source is perceived as very trustworthy.

Supporting hypothesis 1, the effects of communicated beliefs were moderated and sometimes even flipped by trust, which seems to almost completely overrule the ‘gatekeeping’ effect of initial evidence found with anonymous sources (Pilditch & Custers, 2018; Staudinger & Büchel, 2013). When a source is seen as credible, confidence in the belief being true is already in place prior to initial evidence exposure, supplanting its consolidating (or refuting) role, although the pattern of results suggests that initial evidence may still have some effect for beliefs communicated by low-trust sources. Such a mechanism makes sense in an uncertain world, wherein one is more likely to stick with the advice of a source perceived as credible, despite a few initially unsuccessful first-hand experiences.

Supporting hypothesis 2, we find participants act as if the opposite of the belief is true when the belief comes from an untrustworthy source (i.e. if told that “A > B”, they act as if “B > A”). This demonstrates that beliefs are processed in light of credibility cues, and participants act accordingly. Further, when such suspicions (“the source is lying to me”) are confirmed by initial evidence (i.e. the belief, which is suspected of being a lie, is undermined by initial evidence, thus confirming the suspicion), we find the inverse of the gatekeeping effect (Pilditch & Custers, 2018).

Finally, comparing the impact of experienced evidence on trust and expertise ratings, tentative support was found for hypothesis 3. If a belief comes from a source believed to be highly trustworthy, it not only yields greater belief preservation (despite its incorrectness) but, having believed the source (i.e. believing they have been told the truth), participants rate the source as even more trustworthy; a belief from a low trust source, by contrast, is distrusted, leading to bias against the belief and a perception of the source as being less trustworthy.

This is in line with the advice-taking literature, whereby an advisor believed to be trustworthy is favoured more (Schöbel, Rieskamp, & Huber, 2016; Twyman et al., 2008). Going beyond the immediate effect of advice-taking, the results suggest that trustworthy sources not only get away with, but may even profit from, communicating falsehoods. This is a novel (albeit tentative) finding, which may be termed a false-prophet cycle. Following this, a highly trusted politician may not only get away with making erroneous pronouncements (especially when evidence is ambiguous or hard to find), but may even profit from them, even when faced with contrary evidence.

3. Experiment 2: Cassandra's curse

Experiment 1 investigated the roles of source credibility and initial evidence in the adoption and maintenance of erroneous beliefs. We found beliefs from high trust sources were taken up, even when contradicted by initial evidence, whilst beliefs from low trust sources were suspected of falsity, with initial evidence then used to confirm that suspicion. Consequently, it is interesting to explore how credibility and initial evidence impact the adoption of valid (i.e. directionally truthful) beliefs. The design can test whether a low trust source benefits from telling the truth in an uncertain environment.

The character of Cassandra in Greek mythology is the daughter of the King of Troy. The god Apollo falls in love with her unmatched beauty and, in attempting to woo her, gives her the gift of prophecy. However, when Cassandra refuses Apollo's advances, he curses her so that nobody will ever believe her. In this way, Cassandra is doomed to always foresee events, but have her warnings ignored. We draw a parallel here to the methodological set-up of a low trust source attempting to convey a truthful (and thus beneficial) belief to others.

In line with the predictions from Section 1.3, we predict the same general pattern of results as Experiment 1: high trust sources result in higher proportions of belief-congruent (and in this case, optimal) choices and judgements, with initial evidence being overruled if it contradicts the high trust source (hypothesis 1). Conversely, low trust sources result in initial choices that reflect the inverse of the belief (hypothesis 2; which may well, when consolidated by initial evidence, result in the aforementioned “Cassandra's curse” outcome), with initial evidence that contradicts the suspicion (“the source is lying and the opposite is true”) resulting in no effect of belief in either direction.

Finally, tentative predictions are made regarding reliability updating: in line with findings from Experiment 1, we expect source expertise to have a role limited to affecting confidence levels, and high trust sources to benefit most from the belief integration process (i.e. receive the greatest increase in trust ratings; hypothesis 3). That is, we expect the same mechanism as in Experiment 1, but inverted given the reversed veracity of the communicated belief.

3.1. Method

In Experiment 1, the source always suggested that the sub-optimal medicine was optimal. We reverse this in Experiment 2 so that the optimal medicine is correctly indicated as such by the source (i.e. the advice is truthful). Aside from this, all remaining features of design and procedure are identical to Experiment 1.

3.1.1. Participants

Participants were recruited and participated online through MTurk, following the criteria from Experiment 1 (additionally, they could not have participated in Experiment 1 or the pre-tests). Informed consent was obtained from all participants, and payment incentives followed those laid out in Supplementary Materials D.

3.2. Results

3.2.1. Descriptives and processing

Given the similarity between Experiments 1 and 2, the projected sample size was the same, resulting in a total sample size of 500. The coding of initial evidence in all analyses reflects the change from a belief indicating the sub-optimal option (Experiment 1) to a belief indicating the optimal option (Experiment 2). As such, supporting groups still support the belief, and undermining groups still undermine the belief, but do so by providing evidence for the optimal (in the former case) and sub-optimal (in the latter) option.

Participants were randomized into one of eight possible conditions (see Table 2). The average age was 36.09 years (SD = 11.40) and the sample was 59% female. Participants who had no recollection of the source comment in the post-experiment filter question were removed from subsequent analyses (see Table 2), following the protocol used for Experiment 1.[10] The following analyses were conducted using the remaining 401 participants, with an average age of 35.65 years (SD = 11.42) and 58.5% female, leaving ~50 participants for each between-subject condition.

Table 2
Experiment 2: Participant breakdown by group, along with the number of participants passing the belief manipulation check.

Trust   Expertise   Initial Evidence   N    Passed Manipulation Check
High    High        Supporting         67   51
High    High        Undermining        63   52
High    Low         Supporting         60   51
High    Low         Undermining        63   49
Low     High        Supporting         61   49
Low     High        Undermining        65   51
Low     Low         Supporting         62   50
Low     Low         Undermining        61   49

[10] As in Experiment 1, when all participants are included, results remain in line with those reported with the exclusion criteria applied. Again, the only exception to this is reduced effects in some posterior measures. However, these do not affect the primary conclusions.

3.2.2. Choice data

To assess the impact of the belief, initial evidence, and source credibility manipulations on choices, a mixed ANOVA was run using the total number of optimal choices for the belief disease and the total number of optimal choices for the control disease, with the difference between the two as the within-subjects factor (belief). The between-subject factors included in the analysis were initial evidence, trust, and expertise. As can be seen in Fig. 3, there were significantly more optimal choices in the belief disease (M = 31.91, SD = 12.65) than in the control disease (M = 29.33, SD = 11.40), F(1,394) = 11.593, p = .001, η² = 0.029, CIdiff = [1.08, 4.02]. There were significantly fewer optimal choices in the initial evidence unsupported condition (M = 27.18, SD = 11.71) than in the supported condition (M = 34.06, SD = 11.52), F(1,394) = 67.208, p < .001, η² = 0.146, CIdiff = [5.22, 8.52], and significantly more optimal choices in the high trust condition (M = 32.12, SD = 12.09) than in the low trust condition (M = 29.10, SD = 11.95), F(1,394) = 12.808, p < .001, η² = 0.031, CIdiff = [1.35, 4.65], whilst expertise showed no main effect (p = .210). Such a pattern corroborates the general findings of Experiment 1.

In line with previous findings, participants chose the optimal medicine significantly more when it was favoured by initial evidence (difference between white and grey bars in left-hand columns of Fig. 3) and when they received advice favouring that option from a high trust source. Furthermore, replicating the pattern from Experiment 1, we observe a significant interaction between belief and trust, F(1,394) = 20.269, p < .001, η² = 0.049, indicating that the conjunction of both a belief and a high trust source results in significantly more optimal choices. Comparatively, beliefs from low trust sources can, as in Experiment 1 (depending on initial evidence; based on visual inspection of right-hand facets of Fig. 3), lead to more suboptimal choices (the opposite direction of the belief). This demonstration that beliefs are processed in the context of source cues is again supported by the finding that high or low trustworthiness predicts the direction of first choices.[11] In support of hypothesis 1, we additionally find a significant three-way interaction of belief, trust, and initial evidence, F(1,394) = 5.825, p = .016, η² = 0.015. More precisely, when trust is high, there is no interaction of belief and initial evidence (i.e. no gatekeeping effect), whilst when trust is low, belief and initial evidence interact, with initial support for the suspected opposite of the belief (given the low trustworthiness of the source) leading to a stronger belief-control difference (i.e. gatekeeping).[12]

Fig. 3. Experiment 2: Proportion of Optimal Choices. White bars present the disease that received a belief indicating the optimal option. Error bars reflect 95% Confidence Intervals.

[11] This claim is supported by a Chi squared analysis on first choice data, finding a highly significant effect of trust, χ²(1, N = 402) = 66.586, p < .001, with those in low trust groups more often choosing the non-specified option, rather than the option specified by the belief – again corroborating hypothesis 2 – whilst the reverse is true of high trust groups.

[12] To check for possible learning effects, following the protocol of Experiment 1, choices were also split into two 25-trial blocks. A repeated measures ANOVA was used to test the effect of block (along with factors from the main choice analysis). This found a significant overall learning effect, F(1,394) = 19.952, p < .001, η² = 0.048. This effect was exacerbated in undermining initial evidence conditions (where participants had a further-from-optimal starting point), F(1,394) = 8.066, p = .005, η² = 0.02. Distinct from Experiment 1, there was a significant interaction between block and trust, F(1,394) = 6.001, p = .015, η² = 0.015, indicative of the starting point differences dictated by trust directionality (high trust groups were already at optimal, whilst low trust groups had to learn the optimal option).

3.2.3. Posteriors

The same analysis protocol was used to assess posterior judgements as in Experiment 1.

3.2.3.1. Probability estimates. The mixed ANOVA protocol was used to assess the effect of the independent variables on posterior probability estimates (estimates of the percentage of optimal outcomes in favour of the initially dominant medicine) in the belief and control diseases (as the within-subject, belief, factor; white versus grey bars in Fig. 4). Estimates were significantly higher in the belief disease (M = 56.95, SD = 19.53) than in the control disease (M = 54.42, SD = 17.88), F(1,394) = 5.184, p = .023, η² = 0.013, CIdiff = [0.341, 4.66]. Estimates were also significantly higher when initial evidence was supportive (M = 59.17, SD = 18.61) than unsupportive (M = 52.20, SD = 18.27), F(1,394) = 22.893, p < .001, η² = 0.055, CIdiff = [4.12, 9.86], and higher in high trust (M = 57.32, SD = 18.75) than in low trust (M = 54.02, SD = 18.64) conditions, F(1,394) = 4.924, p = .027, η² = 0.012, CIdiff = [0.37, 6.11]; however, there was no main effect of expertise. Further, the interaction between belief and trust, unlike Experiment 1, did not reach significance (p = .163).[13]

Fig. 4. Experiment 2: Posterior Probability Estimates (estimate of percentage of optimal outcomes in favour of dominant medicine), split by group. Error bars reflect 95% Confidence Intervals.

[13] As in Experiment 1, binary preferences were analysed using a mixed-effects logistic regression (in R). The inclusion of initial evidence as a factor was found to significantly improve the base model (subject only), χ²(1) = 24.182, p < .001, as did belief, χ²(1) = 4.6976, p = .03, but there was no effect of trust, χ²(1) = 1.5671, p = .2106, or expertise, χ²(1) = .3328, p = .564. No significant interactions were found.
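The model-comparison logic of footnote [13] can be sketched as follows. The paper fit a mixed-effects logistic regression in R; for brevity this Python sketch drops the random subject intercept and runs on synthetic data, so it illustrates only the likelihood-ratio comparison, not the reported values. All variable names are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

# Synthetic data: preference is more likely optimal when initial evidence
# supported the optimal option (an assumption mirroring the reported effect).
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({"initial_evidence": rng.integers(0, 2, n)})  # 1 = supportive
logit_p = -0.3 + 1.2 * df["initial_evidence"]
df["pref_optimal"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Likelihood-ratio test: does adding initial evidence improve the base model?
base = smf.logit("pref_optimal ~ 1", data=df).fit(disp=0)
full = smf.logit("pref_optimal ~ initial_evidence", data=df).fit(disp=0)
lr = 2 * (full.llf - base.llf)                    # likelihood-ratio statistic
p = chi2.sf(lr, full.df_model - base.df_model)    # chi-squared on 1 df
print(f"chi2(1) = {lr:.3f}, p = {p:.4f}")
```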

3.2.4. Ratings of trust

An ANOVA assessed the effect of the trust manipulation on posterior ratings of trust. Irrespective of group and experienced evidence, those in high trust groups rated the source as significantly more trustworthy (M = 68.15, SD = 27.356) than those in low trust groups (M = 40.35, SD = 27.336), F(1,401) = 103.896, p < .001, η² = 0.206, CIdiff = [22.44, 33.17].

The role of experienced evidence on trust ratings was assessed by tentatively comparing trust ratings (elicited after experiencing evidence) to the pre-test baseline.[14] As all participants received valid (truthful) beliefs, trust ratings should increase given the opportunity to evaluate the belief first-hand. Although there is an overall significant increase in trust ratings across groups (pre-evidence, M = 44.56, SD = 31.86; post-evidence, M = 54.39, SD = 30.66), F(1,589) = 12.843, p < .001, η² = 0.021, CIdiff = [−15.22, −4.44], such an effect is driven by increases for high trust sources (pre-evidence, M = 54.35, SD = 33.169; post-evidence, M = 68.15, SD = 27.356), F(1,290) = 13.686, p < .001, η² = 0.045, CIdiff = [−21.14, −6.46], whilst low trust sources see no significant improvement in trust ratings (pre-evidence, M = 35.94, SD = 28.113; post-evidence, M = 40.35, SD = 27.336, p = .194). Although tentative, the findings suggest that repairing trustworthiness is difficult, in line with hypothesis 3 regarding the polarizing effect of initial trust and biased learning. This finding is also in line with loss-of-reputation findings in advice taking (Yaniv & Kleinberger, 2000).

[14] See Supplementary Materials A for full details of pre-testing.

3.2.5. Ratings of expertise

An ANOVA assessed the effect of the expertise manipulation on posterior ratings of expertise. This analysis demonstrated that, irrespective of group and experienced evidence, those in high expertise groups rated the source as significantly higher in expertise (M = 63.6, SD = 25.523) than those in low expertise groups (M = 24.43, SD = 27.357), F(1,401) = 220.422, p < .001, η² = 0.355, CIdiff = [33.98, 44.36].

In line with the analysis of trust ratings, and following the protocol of Experiment 1 (see Section 2.2), analyses were conducted on the impact of evidence on ratings of expertise. Expertise ratings (elicited after experiencing evidence) were tentatively compared to the pre-test baseline. This found no significant changes to expertise ratings as a consequence of first-hand experience, either overall (p = .912), or when breaking down into high (p = .149) and low (p = .789) expertise sub-groups.

3.3. Discussion

Experiment 2 used the same design as Experiment 1 – however, sources provided a valid instead of an erroneous belief. As in Experiment 1, choice data revealed that not only did high trust lead to significantly more belief-medicine choices (which were in this case optimal, rather than suboptimal), but it overruled the “gatekeeping” effect of undermining initial evidence that was obtained under low trust (Pilditch & Custers, 2018). This further supports hypothesis 1 and replicates the finding from Experiment 1.

Also replicating Experiment 1, beliefs from low trust sources resulted in choices for the opposite medicine to that indicated by the belief, supporting an account of belief processing in light of the source credibility cues with which it is communicated. Additionally, despite low trust sources communicating a valid belief, the participants' suspicion (“The source is likely lying, so I shall choose the opposite”) – if confirmed by initial evidence – resulted in significantly more choices made in confirmation of the suspicion, corroborating the predicted influence of initial evidence when source reliability is low. This replicates the pattern found in Experiment 1, only in this case – given that the belief is in fact valid – the confirmation of participants' suspicion resulted in significantly more sub-optimal choices.¹⁵ This illustrates the potentially deleterious effects of erroneously attributed low trust cues.

Further, as in Experiment 1, if the suspicion is undermined by initial evidence, there is no significant impact of belief. This supports hypothesis 2: in cases of uncertainty regarding the belief, whether because the source is unknown, as in previous research (Pilditch & Custers, 2018; Staudinger & Büchel, 2013), or because the source is suspected of lying (as in the present work), initial evidence plays a gatekeeping role.

Lastly, the findings regarding ratings of trust and expertise again bear a close resemblance to Experiment 1. When comparing trust ratings to pre-test values (i.e. ratings of the source and belief prior to any evidence-exposure), high trust sources receive significantly higher ratings of trust after evidence exposure (supporting hypothesis 3). Conversely, low trust sources (although trending positively given their communication of a valid belief) see no significant increase in trust ratings, despite having the most to gain from telling the truth. This difficulty in “repairing” estimations of trust has parallels in the impression formation literature, in which early, negative impressions of an individual have been shown to be hard to overcome (Anderson, 1965; Mann & Ferguson, 2015), and in advice taking, where advisors lose reputation far more easily than they regain it (Yaniv & Kleinberger, 2000).

4. General discussion

The capacity to communicate beliefs about our lived environment yields substantial informational advantages for dealing with novel situations (e.g. a doctor providing a patient with a medical diagnosis). However, whether through ill-intent or unintentional error on the part of the communicator, these beliefs may not always be accurate (e.g. a misdiagnosis).

Using a probabilistic learning paradigm that integrates the perceived credibility of the source and direct evaluation via first-hand evidence, we demonstrate when falsehoods survive (Experiment 1) and when truths are ignored (Experiment 2). The experiments additionally illustrate how a credibility-induced bias in belief evaluation can lead to potentially harmful reinforcement of the perceived credibility of the source. Specifically, sources believed to be very trustworthy are seen as more credible even when they provide erroneous advice, whilst sources perceived to be of low trustworthiness are distrusted further – even in cases where they actually provide accurate advice.

We find that high trust sources not only yield the greatest degree of belief uptake and adherence, but that high trust supplants the gatekeeping role of initial evidence, regardless of whether that evidence supports or undermines the belief. Given a specified and described source, the paper extends previous research into belief uptake and maintenance showing how initial evidence can play a gatekeeping role in validating beliefs from unknown sources (Pilditch & Custers, 2018; Staudinger & Büchel, 2013). The effect is also in line with findings on the impact of confidence in the source on the efficacy of advice (Harvey & Fischer, 1997; Twyman et al., 2008), in cooperation (Earle, Siegrist, & Gutscher, 2010), risk communication (Siegrist, Gutscher, & Earle, 2005), and argumentation (Harris et al., 2015).

Importantly, results suggest that perceived high trustworthiness impacts belief adherence whether or not the belief is actually valid (which we term a “false prophet” effect). This carries implications for the potency of perceived trustworthiness over “truth” in applied settings, including politics and marketing.

Whilst high trust supplants the gatekeeping effect of initial evidence, results (primarily of Experiment 2) show initial evidence still acts as a gatekeeper when sources are perceived to be low in trust. First, those receiving a belief from a low trust source choose as if the opposite is true (i.e. recipients seem to assume the opposite of the belief is likely to be true when the belief comes from an untrustworthy source, and choose accordingly). Second, this suspicion is only consolidated if it has been confirmed by initial evidence. This fits with prior work on unknown sources (Pilditch & Custers, 2018; Staudinger & Büchel, 2013), where uncertainty surrounding the source (and, by proxy, the belief) results in reliance on initial evidence to act as a validator. Critically, in either situation (whether or not initial evidence confirms the suspicion), recipients fail to take advantage of truthful communications. This latter finding is termed Cassandra's Curse, as the low trust status of the source prevents the recipient from believing the truth.

The results strongly indicate that beliefs are processed in light of the source credibility context from which they are communicated – a finding that fits with the Bayesian source credibility model (Bovens & Hartmann, 2003; Hahn et al., 2009, 2012; Harris et al., 2015), in which belief content and source (reliability) should inform one another. Further, the interplay between credibility cues, belief content interpretation, and evidence evaluation of said interpretation raises challenges for the Elaboration Likelihood Model and the Heuristic-Systematic Model (Briñol & Petty, 2009; Chaiken & Maheswaran, 1994; Petty & Cacioppo, 1984), in which source (credibility) cues are deemed shallow cues to be overridden by engagement with argument content. For example, the impact of source cues on belief content interpretation, and the high levels of erroneous belief adherence (despite an accuracy incentive), do not readily fit a dual-process account.

¹⁵ An analysis of the low trust, undermining initial evidence group of participants (N = 100) reveals a significant effect of belief, F(1,99) = 5.964, p = .016, with belief-disease choices significantly more sub-optimal than control-disease choices.

Besides the effect of credibility on the updating of beliefs, we also investigated how people update the perceived credibility of a source when seeing first-hand evidence in relation to the communicated belief. Unsurprisingly, and in accordance with research in advice taking (Schöbel et al., 2016; Twyman et al., 2008), when a trustworthy source gave valid information (Experiment 2), trust ratings of those sources increased. However, strikingly, when high trust sources communicated an erroneous belief (Experiment 1), sources were found to benefit, as perceived trustworthiness was higher post-evaluation (part of the “false prophet” effect). This is a novel demonstration of a source benefiting from the dissemination of a falsehood despite the recipient's ability to verify the belief against first-hand evidence. Whilst this research is a first step in investigating the relationship between evidence, reports, and perceived credibility, it is of great interest for further research to determine how clear and contradictory evidence must be for a drop in trust to occur.

Conversely, low trust sources were appropriately penalized for communicating an erroneous belief (Experiment 1), but inappropriately remained low in perceived trustworthiness despite communicating a valid belief (Experiment 2; part of Cassandra's Curse). Taken together with the findings regarding high trust sources, such polarizing effects demonstrate the dominance of pre-decisional cues (Nurek, Kostopoulou, & Hagmayer, 2014), and in particular of source trustworthiness over the validity of the belief being communicated. To extend the politics example, a candidate without prior ‘baggage’ that reduces credibility will have a distinct advantage over a (rightly or wrongly) perceived low credibility candidate. This advantage may even grow over the course of a campaign, as the outsider benefits from pronouncements (irrespective of their truth), whilst the insider is penalized for his/her pronouncements (irrespective of their truth). Indeed, it shows how difficult it may be to ‘repair’ perceived credibility once it is initially lost.

Interestingly, our manipulation of expertise had only a limited effect on belief uptake and the maintenance process in the present paradigm. In line with previous research in advice taking, a relationship was obtained between the perceived expertise of the source and confidence ratings (Sniezek & Van Swol, 2001), with expert sources provoking higher confidence in posterior binary preferences. However, in the absence of other effects we refrain from further theoretical interpretation, as this could be the result of a suboptimal manipulation.

Several limitations should be taken into account with the results discussed here. First, some of the subtler findings did not extend into posterior measures. In particular, the mirrored bias occurring when suspicions are confirmed by initial evidence, although trending in the expected direction, did not quite reach significance in posterior measures. One possible reason for this is the learning that takes place before posterior measures are elicited; participants have experienced a large amount of evidence by that point (50 trials per disease). As such, when extrapolating to real-world parallels, the central findings still bear some validity, given the unlikely capacity of real-world settings to wash out the interaction of source trustworthiness and initial evidence with large quantities of evidence. Further, effects including the large degree of bias resulting from initially supported beliefs from high trust sources did carry through into posterior measures (irrespective of belief validity).

Finally, the present work carries important implications for the many domains where communications are evaluated in light of source credibility cues, including persuasion in politics (Lewandowsky, Ecker, Seifert, Schwarz, & Cook, 2012; Madsen, 2016) and consumer research (Ha & Hoch, 1989; Mandel, Petrova, & Cialdini, 2006; Metzger & Flanagin, 2013). For example, the finding that a source perceived to be high in trust can communicate falsehoods with relative impunity, whilst a source believed to be low in trust is doomed to be confirmed as suspicious despite conveying the truth, carries particularly deleterious consequences.

Declaration of competing interest

None.

Acknowledgements

This work was supported by the Economic and Social Research Council [grant number ES/J500185/1].

Appendix A. Supplementary Material

Supplementary data to this article can be found online at https://doi.org/10.1016/j.actpsy.2019.102956.

References

Anderson, N. H. (1965). Primacy effects in personality impression formation using a generalized order effect paradigm. Journal of Personality and Social Psychology, 34(1), 1–9.

Bovens, L., & Hartmann, S. (2003). Bayesian epistemology. Oxford: Oxford University Press.

Briñol, P., & Petty, R. E. (2009). Source factors in persuasion: A self-validation approach. European Review of Social Psychology, 20(1), 49–96.

Chaiken, S., & Maheswaran, D. (1994). Heuristic processing can bias systematic processing: Effects of source credibility, argument ambiguity, and task importance on attitude judgement. Journal of Personality and Social Psychology, 66(3), 460–473.

Decker, J. H., Lourenco, F. S., Doll, B. B., & Hartley, C. A. (2015). Experiential reward learning outweighs instruction prior to adulthood. Cognitive, Affective, & Behavioral Neuroscience, 15(2), 310–320.

Doll, B. B., Hutchison, K. E., & Frank, M. J. (2011). Dopaminergic genes predict individual differences in susceptibility to confirmation bias. The Journal of Neuroscience, 31(16), 6188–6198.

Doll, B. B., Jacobs, W. J., Sanfey, A. G., & Frank, M. J. (2009). Instructional control of reinforcement learning: A behavioral and neurocomputational investigation. Brain Research, 1299, 74–94.

Earle, T. C., Siegrist, M., & Gutscher, H. (2010). Trust, risk perception and the TCC model of cooperation. In Trust in risk management: Uncertainty and scepticism in the public mind (pp. 1–50).

Faul, F., Erdfelder, E., Buchner, A., & Lang, A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41(4), 1149–1160.

Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191.

Ha, Y., & Hoch, S. (1989). Ambiguity, processing strategy, and advertising-evidence interactions. Journal of Consumer Research, 16(3), 354–360.

Hahn, U., Harris, A. J. L., & Corner, A. (2009). Argument content and argument source: An exploration. Informal Logic, 29(4), 337–367.

Hahn, U., Oaksford, M., & Harris, A. J. L. (2012). Testimony and argument: A Bayesian perspective. In F. Zenker (Ed.), Bayesian argumentation (pp. 15–38).

Hahn, U., & Harris, A. J. (2014). What does it mean to be biased: Motivated reasoning and rationality. Psychology of Learning and Motivation, 61 (pp. 41–102). Academic Press.

Harris, A. J. L., Hahn, U., Madsen, J. K., & Hsu, A. S. (2015). The appeal to expert opinion: Quantitative support for a Bayesian network approach. Cognitive Science, 39(7), 1–38.

Harvey, N., & Fischer, I. (1997). Taking advice: Accepting help, improving judgment, and sharing responsibility. Organizational Behavior and Human Decision Processes, 70(2), 117–133.

Klayman, J. (1995). Varieties of confirmation bias. Psychology of Learning and Motivation, 32, 385–418.

Klayman, J., & Ha, Y. (1987). Confirmation, disconfirmation, and information in hypothesis testing. Psychological Review, 94(2), 211–228. https://doi.org/10.1037/0033-295X.94.2.211

Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106–131.

Lord, C., Ross, L., & Lepper, M. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098–2109.

MacDougall, R. (1906). On secondary bias in objective judgments. Psychological Review, 13(2), 97. https://doi.org/10.1037/h0072010
