
Understanding Backlash Effects of Negative Campaigning in a Digital Era: Who is at Risk? A Comparative Study of the Causes and Effects of Negative Campaigning in the U.S. Senate Elections

Master Thesis
Author: Lieke Bos
Student Number: 10747087
University of Amsterdam, Graduate School of Communication
Research Master's Program Communication Science
Supervisor: Dr. Alessandro Nai


Abstract

To date, little comparative research exists on what causes politicians to use negative campaigning strategies, and on whether and when this backfires. In this study, I present the results of a semi-automated content analysis of (1) the causes and (2) the effects of negative campaigning in the 2018 U.S. Senate elections, which fills this gap in the literature. The analyses were based on three levels: the context-level (safe state versus swing state), the candidate-level (partisanship and gender) and the tweet-level (negative strategies and gender of the target). The results reveal several general trends: female candidates (especially Democrats) go negative more often than male candidates, and Democrats go negative more often in safe states than in swing states. Regarding backlash, female candidates are more at risk than males when they go negative, especially when they use uncivil language. When it comes to partisanship, Republicans face harsher punishment for negative campaigning than Democrats. The context matters only indirectly, by altering the effects of partisanship and of the target of the attack. Based on the results, political candidates can strategically choose which negative strategies to use, and which are better avoided due to a higher risk of backlash.


Introduction

“When they go low, we go high”, Michelle Obama said at the Democratic National Convention in 2016, referring to the negative (Twitter) campaign of the Republican party, and especially of President Trump (NRC, 2016). On a daily basis, Trump posts messages on Twitter like: “.@HillaryClinton- you have failed, failed, and failed. #BigLeagueTruth Time to #DrainTheSwamp!” (Trump, 2016a) and “#CrookedHillary's plan will add $1.15 TRILLION in new taxes. We cannot afford her! #DrainTheSwamp…” (Trump, 2016b). However, Trump’s strategy of going negative is certainly not new in politics, and he is neither the first nor the only one going negative on Twitter.

Negative campaigning has become such a common strategy in U.S. politics that it is unlikely that any campaign is entirely free of negativity (Lau & Pomper, 2002; Mattes & Redlawsk, 2015; Nai & Walter, 2015). As Lau and Pomper (2001) defined the concept, we speak of negative campaigning when politicians are “talking about the opponent criticizing his or her programs, accomplishments, qualifications, and so on” (pp. 805–806). Among other things, it is an effective instrument for emphasizing differences between candidates to the public: candidates try to build their own reputation by putting their opponent in a bad light (Lau et al., 2007; Martin, 2004; Nai & Seeberg, 2018; Walter, Van der Brug & Praag, 2014).

However, negative campaigning certainly does not come without risks: “Candidates that play with fire face a clear risk of getting burned in return” (Nai, 2018, p. 3). It is thus not surprising that many studies have already looked extensively into the use of negative campaigning and its results in terms of attitudes and voting intentions towards the sponsor (e.g. Carraro & Castelli, 2010; Ceron & d’Adda, 2016; Francia & Herrnson, 2007; Gross & Johnson, 2016; Nai, 2018). However, the literature is still inconclusive about whether, and when, negativity backfires, possibly because some crucial concepts underlying the evaluation of attacks remain rather unexamined.

For instance, much research has shown that candidates’ partisanship affects the usage and effects of negative campaigning (e.g. Ansolabehere & Iyengar, 1995; Auter & Fine, 2016; Lau & Pomper, 1988; Nai, 2018), and so do their gender (e.g. Cassese & Holman, 2018; Evans & Clark, 2016; Francia & Herrnson, 2007) and the competitiveness of the election they are running in (e.g. Auter & Fine, 2016; Gainous & Wagner, 2014). However, comparative research on these three concepts is relatively limited, and few studies have looked simultaneously at the consequences for the sponsor in terms of backlash effects due to these factors. In addition, not much attention has been paid to different forms of negative campaigning (incivility, attacking male or female opponents, and issue-based versus person-based attacks) that possibly influence whether an attack backfires or not for these different candidates (e.g. Budesheim, Houston & DePaola, 1996; Carraro & Castelli, 2010; Fridkin & Kenney, 2011). That is what will be done in this study: first, I will examine who goes negative and in what context; then, I will examine what affects backlash effects.

Negativity on Twitter

In the last couple of years, social media have become increasingly important during election campaigns, amongst others because of the opportunity to interact directly with the electorate (Kruikemeier, 2014; Nulty et al., 2016). Twitter in particular is heavily used by political consultants, since its influence goes well beyond the platform itself: Twitter links social media to traditional media, in the sense that tweets can be picked up by traditional media, which broadens the audience beyond a candidate’s followers (Gross & Johnson, 2016; Lee & Xu, 2018). Additionally, because Twitter imposes no budgetary constraints and gatekeepers (i.e. journalists) can easily be avoided, Twitter is used as an unfettered communication device (Gross & Johnson, 2016; Lee & Xu, 2018).


Since the primary season of the 2016 U.S. election, the platform has also emerged as an important ‘weapon’ for attacking opponents without limits (Gross & Johnson, 2016). Whereas creating attack advertisements is usually very time consuming, Twitter enables politicians to attack opponents instantly, regardless of budget (Auter & Fine, 2016; Gainous & Wagner, 2014). The massive usage of Twitter for negative campaigning is therefore not entirely surprising (Gross & Johnson, 2016).

This Study

Because of the extensive usage of negative campaigning in U.S. politics, and its growing presence on Twitter (Gross & Johnson, 2016), it is essential to have a general understanding of when and which candidates go negative, and of the effects of this (Mattes & Redlawsk, 2015). Since comparative research is still limited, I will examine (1) the causes of negative campaigning on Twitter, and (2) whether, for whom, and in which context, which negative campaigning strategies lead to backlash effects. This will be done by answering the following research questions:

To what extent is there a difference in the use of negative campaigning by Democratic and Republican candidates running for Senate, and how is this affected by their gender and the competitiveness of the race? (RQ1), and when and for whom do negative tactics lead to a backlash effect? (RQ2)

In order to answer these questions, tweets from competing candidates during the last midterm (Senate) elections (2018), and the comments of the public in response to these tweets, will be analysed with a semi-automated content analysis.1 The results of this study will not only be valuable from an academic perspective, by closing the gap in the literature on comparative research into the causes and effects of negative campaigning, but will also be of great value for campaign makers, spin doctors and political candidates. Based on the results, they can strategically choose which negative strategies to use, and which are better avoided due to a higher risk of backlash. This knowledge could in turn lead to a more effective and successful use of social media strategies, and thus to more popularity on Twitter. Since being popular on Twitter often translates into favourable electoral results (Kruikemeier, 2014), knowing how to act on Twitter is very important.

Studying the midterm (Senate) elections is most relevant for answering the research questions, since it adds the comparative element to the analyses. As argued by Lau and Pomper (2004), Senate elections “are methodologically superior” (p. 6), since there are far more cases to study. Whereas in presidential elections there are only two candidates to compare, 33 similar elections took place (with 66 competing candidates) during the Midterms in 2018. Because of this, it is possible to compare candidates’ gender and party differences, as well as the states the candidates were running in. Moreover, since all these elections happened simultaneously, the general context in which they took place (e.g. who the president is, whether the U.S. is at war, etc.) is controlled for, since it is the same for all candidates running. In addition, candidates in the midterms are generally less well-known figures than candidates running in presidential elections. Therefore, we can see more of an effect caused by the campaign than by the reputation a candidate already has (Lau & Pomper, 2004).

Theoretical Framework

This study focuses on two important aspects of negative campaigning. In the first part, I will look at the causes of negative campaigning. Here, I will take into account whether tweets were posted by candidates in swing states or safe states, to test for the competitiveness of the race (context-level) (Ansolabehere & Iyengar, 1995). Then, at the candidate-level, I will look at candidates’ partisanship and gender (Evans & Clark, 2016). In the second part of the study, I will try to gain more insight into whether, when and which negative strategies backfire, and for whom. Again, the context and candidate characteristics will be taken into account, but now I will also look at the tweet-level. Here, I will compare the risk of backlash for the different negative strategies (issue-based versus person-based attacks, and incivility) and for the target of the attack, in combination with the context- and candidate-level.

Causes of Negative Campaigning

Partisanship. When studying campaigning techniques, it is important to take candidates’ partisanship into account. It influences not only how candidates behave during the campaign, but also how the public judges their behaviour based on party expectations (Cassese & Holman, 2018). The political preferences of Americans can generally be divided into two broad categories: Republican and Democrat2. Supporters of each party have expectations about how their representatives should act in government, and all candidates try to live up to these expectations (Ansolabehere & Iyengar, 1995). These behavioural expectations might thus also affect candidates’ decisions to go negative, since violating expectations might result in a backlash (Ansolabehere & Iyengar, 1995; Cassese & Holman, 2018).

2 And the Independents, who are not affiliated with either of the two parties. In this study, however, the focus will be only on the differences between Democratic and Republican candidates, not on Independent candidates, because in most cases the Midterms are essentially a battle between Republicans and Democrats.

Although studies show mixed results regarding the effects of partisanship on going negative (Auter & Fine, 2016), a recent study of negativity on Twitter during midterm elections showed that Republicans were more likely to post negative tweets than Democrats (Gainous & Wagner, 2014). This is not entirely surprising, considering that especially Republican campaign consultants believe in the effectiveness of negativity, and thus advise Republican candidates to go negative (Ansolabehere & Iyengar, 1995). This can be explained by the fact that Republicans want to live up to the image that they are forceful, and believe that attacking others strengthens this image (Winter, 2010). Therefore, it is expected that:


H1: Republican candidates post more negative tweets than Democratic candidates.

Gender. Aside from partisanship, previous studies have also examined differences based on candidates’ gender when it comes to attacking opponents. Whether male or female candidates use more negative tactics is, however, still debated in the literature. Whereas some studies showed that men were more likely than women to send negative tweets (e.g. Parmelee & Bichard, 2012), more recent studies showed that female candidates are actually more likely to go negative than males (e.g. Evans & Clark, 2016; Evans, Cordova & Sipole, 2014). This might be caused by the fact that female candidates want to get rid of conventional stereotypes that depict women as sympathetic, kind, helpful and passive, but not as independent and forceful (like men) (Kahn, 1993). It is therefore expected that:

H2a: The use of negativity can be predicted by gender, in a way that female candidates are more likely to go negative on Twitter than male candidates.

Moreover, as described in the previous section, it is expected that Republicans are more likely to go negative than Democrats. Therefore, I expect that the effect of gender on negativity is moderated by partisanship, such that this effect is stronger for Republicans than for Democrats. Thus:

H2b: Republican female candidates go negative more often than Democratic female candidates.

Context. Lastly, the context of the election might influence the decision to go negative. As in all sorts of competitions, the higher the perceived closeness of a race, the more tension rises and the more competitive people get (Auter & Fine, 2016). The literature accordingly points out that campaigns get meaner when the contest is tight (Ansolabehere & Iyengar, 1995). Hence, it will be taken into account whether candidates run in safe states or swing states. While in ‘red’ or ‘blue’ safe states the public is largely in favour of one party’s candidate, in swing states it can go either way. Thus, in swing states the race is tighter, the perceived closeness of the race is higher, and campaigns can be expected to get more negative than in safe states.

Gainous and Wagner (2014) indeed showed that candidates in more competitive races post more negative tweets than candidates in less competitive races, and that this was done more often by Republicans than by Democrats. To see whether these results are still generalizable in the current political landscape, and whether campaigns in swing states indeed contain more negativity, I test the following hypothesis:

H3: There are more negative tweets posted by candidates running in swing states than in safe states (a), and this effect is stronger for Republicans (b).

The Effect of Negative Campaigning

Backlash. Despite the potential ‘ideal’ outcome of negative campaigning, in which the reputation of the target is harmed, this effect is far from predictable. By attacking opponents, candidates run the risk that these attacks backfire and generate negative feelings towards themselves instead of towards their target (Lau et al., 2007; Nai & Seeberg, 2018; Walter et al., 2014). In this section, I will discuss when backlash effects are likely, and for whom. To do so, I will distinguish the effects of different negative strategies on backlash, looking at the type of attack (person-based vs. issue-based), the use of incivility, and attacks on different genders. For each individual strategy, I will then compare differences in backlash based on characteristics of the candidate (partisanship and gender) and the competitiveness of the race.

Strategies. First, it is important to understand that a negative campaign always has a directional approach and can be placed in one of two categories: person-based attacks or issue-based attacks (Carraro & Castelli, 2010). In a person-based attack, the target is attacked on personal features (e.g. age). An issue-based attack, on the other hand, is based on the target’s political stances, program or ideology (Budesheim et al., 1996). Attacks on a candidate’s personal characteristics are generally expected to be less accepted by the public, since “personal attacks are strikingly in contrast with both a general positivity bias in person perception…and with social norms that prescribe fairness in interpersonal relations” (Carraro & Castelli, 2010, p. 636). In other words, attacking an opponent based on personal characteristics is more likely to backfire against the sponsor, since people perceive such attacks as less fair. Hence, I expect that:

H4: Tweets with person-based attacks receive a stronger backlash than tweets with issue-based attacks.

In addition, aside from the type of attack, the tone of the tweet can differ in terms of (in)civility. We speak of incivility when harsh, shrill, offensive or vulgar language is used in the attack (Brooks & Geer, 2007). Attacks differing in civility vary in their impact on candidate evaluations, since people hold on to norms that guide interactions with other people, and they expect a certain level of civility from political candidates (Mutz & Reeves, 2005). Uncivil messages are therefore more likely to create negative views of politicians (Fridkin & Kenney, 2011). Thus, it is expected that:

H5: Backlash effects are likely to be stronger when the tonality of the attack is uncivil, than when it is civil.

Partisanship. But how are these negative strategies evaluated by different voters? Research shows that voters identifying with the Democratic party are generally less sympathetic towards the use of negative strategies than Republicans (Ansolabehere & Iyengar, 1995; Mattes & Redlawsk, 2015). This difference emanates from ideology. Republicans tend to oppose new and expanded government programs; messages containing promises of new government action therefore resonate less well with their supporters than messages emphasizing that an opposing candidate failed on existing policies. Democrats, on the contrary, do not want to hear about failures, but about what will be done to solve problems (Ansolabehere & Iyengar, 1995; Mattes & Redlawsk, 2015). Thus, while Republicans tend to appreciate negativity from their candidates, Democrats are more receptive to positive messages and are not fond of the use of negativity by candidates. From this perspective, it is expected that:

H6: A backlash effect of negative campaigning is likely to be stronger for Democrats than for Republicans.

Aside from these general perceptions of negativity, the literature points out that for Democratic candidates there is also a difference in attitudes towards the type of attack used, while this is not the case for Republican candidates. Francia and Herrnson (2007), for instance, point out that Democratic candidates in general only justify issue-based attacks, not person-based attacks. Therefore, I will examine whether their voters share the same view on the use of negative strategies, by testing the following hypothesis:

H7: Backlash effects are likely to be stronger when Democratic candidates use person-based attacks than when they use issue-based attacks.

Moreover, since there is no literature indicating that either Democrats or Republicans are more open to the use of incivility, or to attacks on male or female opponents, I will conduct explorative analyses to see whether backlash differs between Democrats and Republicans in these respects.

Gender. Not only partisanship but also a candidate’s gender raises behavioural expectations amongst voters (Cassese & Holman, 2018), and thus shapes the extent to which negativity is considered acceptable. Studies have found evidence that, with respect to gender, female candidates face strategic disadvantages compared to male candidates when it comes to using negative strategies (Nai, 2018). This is a result of social stereotypes that depict men as forceful, independent and aggressive (Evans & Clark, 2016), and women as sympathetic, kind, helpful and passive. These stereotypes translate into voter expectations about how male and female candidates should behave. Going negative thus contradicts the shared expectations of women (Fridkin, Kenney & Woodall, 2008; Krupnikov & Bauer, 2014).

Although many female politicians live up to these stereotypes and refrain from negative campaigning, others want to dispel them and opt to engage in negative tactics (Evans & Clark, 2016). The literature shows mixed results about how this overcoming of stereotypes is perceived by the public. Some studies show that overcoming these stereotypes by going negative bolsters people’s evaluation of female politicians’ competency (Kelley & McAllister, 1983; Lau & Pomper, 2004), but other, somewhat more recent, studies claim the opposite: that disrupting this gender stereotype means that female candidates are penalized more harshly for using negative campaigning tactics than male candidates (e.g. Evans & Clark, 2016; Trent & Freidenberg, 2011). Cassese and Holman (2018) showed that this is particularly true for Democrats. Therefore, the following hypothesis will be tested:

H8: A backlash effect of negativity is stronger for female candidates than for male candidates (a), and this effect is even stronger for Democrats (b).

Nonetheless, not only attacks from female candidates might result in backlash; attacking a female opponent might also have a higher chance of backfiring than attacking a man. Attacks on women are generally perceived as unfair, due to the expectation that women are less likely to have provoked attacks than men (Fridkin et al., 2008). For this reason, it is expected that:

H9: A backlash effect is likely to be stronger when a female candidate is attacked than when a male candidate is attacked.

Moreover, since there is not much knowledge yet about which negative strategies backfire for male and female candidates, some explorative analyses will be done to examine this.

Context. Lastly, with respect to the context of the election as elaborated in the earlier context section, there is no literature yet that has examined whether the competitiveness of the race, in terms of safe states and swing states, contributes to a possible backlash effect. Therefore, some additional explorative analyses will be done.

Method

Unlike much previous research on the effects of negative campaigning, the research questions in this study will not be answered with survey-based experimental research. As Martin (2004) noted, negative campaigns might have different effects in an experimental setting than in the outside world. Moreover, many survey-based studies were not “able to adequately measure … the level of reception of advertising messages within the mass public” (Martin, 2004, p. 547). Therefore, to examine the use of negativity, I analysed the tweets of the 63 competing candidates3 in the U.S. Midterm Elections (2018), who competed in 33 states (see Appendix A). Additionally, I analysed the comments of the public in response to these tweets, measuring their sentiment in order to detect backlash effects. The coding was done in two phases: first a semi-automated content analysis4, followed by a human-coded content analysis. This way, it was possible to gain insight into which strategies different candidates use in specific contexts, and how people react to these attacks in a natural setting. In total, 12,892 tweets from 63 political candidates were collected, as well as the first five comments on a sub-sample of 300 negative tweets (1,500 comments in total).

3 Only the candidates that were active on Twitter during the Midterms were included in the study; thus not all 66 competing candidates were included in the sample.

4 This study is part of a larger ASCoR study, Negative Campaigning, Populism and Emotions in the 2018 Midterm Elections. The automated content analysis was executed by a team of four student researchers and Principal Investigator Alessandro Nai.

Sample

All 63 candidates competing in the Senate elections of 2018 who were active on Twitter were selected. With the Vicinitas tool, all of these candidates’ tweets (excluding retweets) posted from two months before the elections (September 1st) until one day after the elections (November 7th) were collected (N = 12,892). These tweets were used for the first part of this study, to measure negativity. Subsequently, to examine backlash effects in the second part of the study, the first five comments on a random sample of 300 negative tweets were collected. Here, it was important to be aware of biases in web content: algorithms determine that certain information is more relevant for the user and push that information forward while suppressing other information (Baeza-Yates, 2016). By creating a new Twitter account and using a Windscribe VPN connection, this algorithmic bias was avoided.
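To make the sampling step concrete, the sketch below shows one way such a sub-sample could be drawn with pandas; the file names and column names (tweet_id, negativity, comment_position) are assumptions for illustration, not the actual data structure used in the thesis.

```python
# Minimal sketch of the sampling step described above (assumed file and column
# names): keep tweets flagged as negative, draw a random sample of 300, and
# retain the first five comments for each sampled tweet.
import pandas as pd

tweets = pd.read_csv("candidate_tweets.csv")      # hypothetical export of the 12,892 tweets
comments = pd.read_csv("tweet_comments.csv")      # hypothetical comment collection

negative = tweets[tweets["negativity"] == 1]
sampled = negative.sample(n=300, random_state=1)  # random sub-sample of negative tweets

first_five = (
    comments[comments["tweet_id"].isin(sampled["tweet_id"])]
    .sort_values(["tweet_id", "comment_position"])
    .groupby("tweet_id")
    .head(5)                                      # first five comments per tweet
)
```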

Coding Procedure

First, to detect the presence of negative campaigning on Twitter, the complete set of tweets posted by the candidates was analysed. Then, to see whether there was a backlash, I analysed the comments on the sub-sample of negative tweets. The unit of analysis for both parts of the study was the tweet, and only textual content was coded; visuals and links were not included in the coding process5.

5 I did not have the resources to train the algorithm to code videos and images. To be consistent, and because videos and images are multifaceted and might carry different meanings that are harder to interpret and code in a reliable, valid way, it was decided to focus purely on text when coding the comments as well. If a comment only contained a visual, it was coded as ‘0’ (neutral).

Classification schemes. For the coding of the first dependent variable, negativity, four dimensions were coded independently. The first was the presence of negative campaigning in general. The measurement of negativity was based on the definition of Lau and Pomper (2001), which refers to an explicit attack on an opponent. Negativity was coded as either present (1) or absent (0). The tweets containing negativity were then coded for the presence of person-based attacks, issue-based attacks, and incivility, as defined in the theoretical framework.

A random sample of 200 tweets was coded independently by four human coders. First, each coder individually coded the 200 tweets for the presence of negativity and the negative strategies. After completing this procedure, discrepancies between coders were checked by a meta-coder, who then decided how an item should be coded. Next, each human coder was asked to identify 30+ good examples for each dimension, which were used to improve the precision of the trained algorithm.

Algorithms. Based on this dataset, a model was developed to find patterns of negative campaigning in the total dataset, using supervised machine learning techniques (Larose, 2005). The manually coded examples served as the ‘learner’: a training set used to detect similar patterns in the dataset and to assign corresponding output to the correct classifiers (Domingos, 2012): ‘negativity’, ‘person-based attacks’, ‘issue-based attacks’ and ‘incivility’. For instance, an example that was fed to the algorithm as a person-based attack was: “Bob Menendez holding himself up as a paragon of virtue regarding women is the very definition of hypocrisy. Absolutely shameless.” (Hugin, 2018).

Initially, the algorithm was not entirely accurate. After feeding it an extra set of 200 examples, the accuracy of the coding improved to a satisfactory level: compared to a manually coded sample of 300 tweets, there was 77.4% overlap6. Overall, the results reveal that 3,511 out of 12,892 tweets (27.2%) contained negative strategies, which is less than could be expected based on the share of negative messages in earlier studies of Senate elections (e.g. Fridkin & Kenney, 2004; Lau & Pomper, 2002).

6 When writing this thesis, we were still working on optimizing the algorithm to make it even more accurate. Due to time restrictions, a quasi-final version of the algorithm is used here. The results should therefore be treated carefully.
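The thesis does not specify which classifier or software was used for the supervised learning step; the sketch below is a generic illustration of the procedure with scikit-learn (TF-IDF features and a logistic regression classifier), using hypothetical file and column names.

```python
# Generic supervised-learning sketch of the procedure described above:
# train on the human-coded examples, label the full corpus, and compare the
# predictions against a manually coded validation sample (77.4% overlap in
# this study). File and column names are assumptions.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

coded = pd.read_csv("human_coded_tweets.csv")       # human-coded training examples
tweets = pd.read_csv("candidate_tweets.csv")        # full corpus of 12,892 tweets
validation = pd.read_csv("validation_sample.csv")   # manually coded sample of 300 tweets

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2),
                    LogisticRegression(max_iter=1000))
clf.fit(coded["text"], coded["negativity"])         # 1 = negative, 0 = not negative

tweets["negativity_pred"] = clf.predict(tweets["text"])

# agreement with the manually coded validation sample
print(accuracy_score(validation["negativity"], clf.predict(validation["text"])))
```

The same setup would then be repeated for the other three dimensions (person-based attacks, issue-based attacks and incivility).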

Coding backlash. To examine the second dependent variable, backlash, I manually coded the first five comments (n = 1,500) on a sample of negative tweets (n = 300). The consistency of the coder was tested with an intra-coder reliability test, in which 150 comments (10.0% of the sample) were coded a second time. Krippendorff’s alpha was .95, indicating an acceptable level of reliability.
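As an illustration of this reliability check, the snippet below computes Krippendorff's alpha for two coding rounds of the same comments; the `krippendorff` package and the toy codes are assumptions, since the thesis does not state which tool was used.

```python
# Sketch of the intra-coder reliability test: the same comments coded twice,
# compared with Krippendorff's alpha. Package choice and toy data are
# assumptions for illustration only.
import krippendorff

first_pass  = [-1, 0, 1, 0, -1, 1, 0, -1, 1, 0]   # codes from the first round
second_pass = [-1, 0, 1, 0, -1, 1, 0, -1, 1, 1]   # the same comments coded again

alpha = krippendorff.alpha(reliability_data=[first_pass, second_pass],
                           level_of_measurement="ordinal")
print(round(alpha, 2))   # the thesis reports alpha = .95 for 150 re-coded comments
```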

A backlash effect was defined as occurring when the attack evoked negative feelings towards the candidate instead of towards the target (Walter et al., 2014). To measure this, it was coded whether people expressed negative feelings or attitudes towards the candidate, or whether they were neutral or positive towards the candidate. A negative comment (e.g. “Just go away”) received the code ‘-1’. Neutral comments and visuals were coded as ‘0’, and positive comments, in which positive feelings towards the candidate were expressed, as ‘1’ (e.g. “Gee! Hope you win Governor. I am with you.”). To assign a ‘backlash score’ to each tweet, the scores of its five comments were summed, forming a scale that indicates how positively or negatively the public responded to a negative tweet (-5 = very negative, +5 = very positive). A negative score indicates a backlash effect, while a positive score indicates support for the sponsor. On average, the sentiment was relatively neutral (M = 0.09, SD = 2.67) (see Figure 1).
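The backlash score itself is a simple sum of the five comment codes per tweet; a minimal sketch (with invented toy data and assumed column names) is shown below.

```python
# Minimal sketch of the backlash score described above: sum the five comment
# codes (-1, 0, +1) per tweet, giving a scale from -5 (strong backlash) to
# +5 (strong support). The toy data and column names are for illustration.
import pandas as pd

coded_comments = pd.DataFrame({
    "tweet_id":  [1, 1, 1, 1, 1, 2, 2, 2, 2, 2],
    "sentiment": [-1, -1, 0, -1, 1, 1, 0, 1, 0, 1],
})

backlash = coded_comments.groupby("tweet_id")["sentiment"].sum()
print(backlash)   # tweet 1: -2 (backlash), tweet 2: +3 (support for the sponsor)
```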


Independent Variables

The independent variables in this study can be divided into three levels: the context-level, the candidate-level, and the tweet-level (see Figure 2). On the context-level, the competitiveness of the race was determined by whether a state was a swing state or a safe state. This was decided on the basis of the 2018 Senate Battleground Map7, which showed in which states the race was extremely tight (see Appendix B). On the candidate-level, it was coded whether a candidate was (0) Republican (55.0%) or (1) Democrat (45.0%), to capture the candidate’s partisanship, and whether the candidate was (0) male (74.7%) or (1) female (25.3%).

7 https://www.270towin.com/2018-senate-election/2018-senate-battlegrounds

Figure 2. Levels of Analyses (context-level: competitiveness of the race; candidate-level: partisanship and gender; tweet-level: negativity, strategy, incivility and gender of the target; outcome: backlash).

On the tweet-level, I measured whether tweets contained (0) issue-based attacks, (1) person-based attacks or (2) both. In total, 1,912 of the negative tweets contained an issue-based attack (54.5%), 1,395 a person-based attack (39.7%) and 204 contained both (5.8%). The ‘both’ category was not included in the analyses, since the aim was to compare person-based versus issue-based attacks only; type of attack was therefore treated as a dummy variable (0 = issue-based, 1 = person-based). Next, the presence of incivility in tweets was coded as (0) absent or (1) present; in 463 of the negative tweets (13.2%), politicians used uncivil language. Lastly, where possible, it was coded whether the target of the attack was male (0) or female (1). In 19.2% of these cases the attack was directed at a female, while 80.2% of the attacks were directed at males, which matches the distribution of candidates’ genders.
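Translated into an analysis dataset, the coding above amounts to a set of 0/1 dummy variables; the sketch below illustrates this with assumed file and column names, following the reference categories used in the thesis (Republican, male, swing state, issue-based, civil, male target).

```python
# Sketch of the dummy coding of the independent variables (file and column
# names are assumptions; reference categories are coded 0 as in the text).
import pandas as pd

df = pd.read_csv("negative_tweets_coded.csv")   # hypothetical coded dataset

df["democrat"]      = (df["party"] == "Democrat").astype(int)
df["female"]        = (df["candidate_gender"] == "female").astype(int)
df["safe_state"]    = (df["state_type"] == "safe").astype(int)
df["person_based"]  = (df["attack_type"] == "person").astype(int)
df["incivility"]    = (df["tone"] == "uncivil").astype(int)
df["female_target"] = (df["target_gender"] == "female").astype(int)
```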

Data Analysis

To analyse the causes of negative campaigning, a three-level multilevel logistic regression was run, and for the effects of negativity I ran a three-level multilevel linear regression. Multilevel models were chosen because the individual tweets are nested within candidates, who are in turn nested within the context of the elections; observations within the same cluster are therefore likely to be more similar to each other than randomly selected observations. Context was included as the highest-level unit, followed by candidate characteristics, with the tweet-level variables as the lowest-level units (see Figure 2). Intercepts were allowed to vary randomly across candidates, in order to account for possible differences between them (Allison, 1999).
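The thesis does not name the software used for the multilevel models; as one possible implementation, the sketch below fits the backlash model with statsmodels, treating states as the grouping factor and candidates as a variance component nested within states. DataFrame and column names are assumptions (for instance, `df` from the dummy-coding sketch above, with the backlash score, candidate and state columns merged in); a mixed-effects logistic model for the negativity outcome would require a generalized linear mixed model instead (e.g. statsmodels' BinomialBayesMixedGLM).

```python
# One possible implementation of the three-level linear model for backlash:
# tweets (level 1) nested in candidates (level 2) nested in states (level 3).
# DataFrame and column names are assumptions, not the thesis' actual code.
import statsmodels.formula.api as smf

model = smf.mixedlm(
    "backlash ~ person_based + incivility + female_target"
    " + female + democrat + safe_state",
    data=df,                                       # coded negative tweets with backlash scores
    groups=df["state"],                            # level 3: state (random intercept)
    vc_formula={"candidate": "0 + C(candidate)"},  # level 2: candidate within state
)
result = model.fit()
print(result.summary())
```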

Results

The first research question asked to what extent there is a difference in the use of negative tactics by Democratic and Republican candidates running for Senate, and whether this is affected by their gender and the competitiveness of the race. This question was answered by studying the tweets of those candidates. To do so, I ran a three-level multilevel logistic regression analysis (see Table 1) on the complete dataset (N = 12,892). State-level was included as the highest-level variable, followed by gender and partisanship of candidates, with negativity (the dependent variable) at the lowest level. Since the model is significant, F(3, 12888) = 3.44, p = .016, and the strength of the prediction is strong (R2 = .72), it can be used to predict the use of negativity.

Table 1
The Causes of Negativity

                                    Model 1                         Model 2
                                    b (SE)          95% CI          b (SE)          95% CI
Democrat (vs. Republican)           -.09 (.05)      (-0.20, 0.01)   -.41*** (.09)   (-0.59, -0.23)
Female (vs. male)                   .30* (.10)      (0.11, 0.50)    -.01 (.17)      (-0.33, 0.35)
Safe state (vs. swing state)        .22 (.26)       (-0.30, 0.73)   -.01 (.27)      (-0.54, 0.53)
Female*Democrat                                                     .40* (.17)      (0.07, 0.73)
Safe state*Democrat                                                 .41*** (.11)    (0.19, 0.62)
Constant                            -1.18*** (.22)  (-1.62, -0.75)  -1.04*** (.23)  (-1.49, -0.60)
Random effect variance (intercept)  .46                             .47
N (level 3, states)                 33                              33
N (level 2, candidates)             63                              63
N (level 1, tweets)                 12,892                          12,892
Log likelihood                      59,043.76                       59,212.05

Note: * p < 0.05; ** p < 0.01; *** p < 0.001; a this coefficient is set to zero because it is redundant. Dependent variable: Negativity (0 = not present, 1 = present).

First, I will turn to the results regarding the influence of partisanship on going negative. The results indicate that partisanship on its own did not predict the use of negativity (b = -.09, SE = .05, p = .096). Therefore, H1 had to be rejected. Second, and in line with H2a, the results indicated that a candidate’s gender predicted their use of negative strategies: as expected, female candidates were more likely to go negative than male candidates (b = .30, SE = .10, p = .003). H2a could thus be confirmed. Then, to see whether partisanship moderated this effect of gender on negativity, I looked at the interaction between the two variables. The results indeed showed a moderating influence of partisanship on the effect of gender on negativity. However, in contrast to the expectations, the effect was stronger for Democrats (b = .40, SE = .17, p = .017) rather than for Republicans (see Figure 3). Thus, H2b had to be rejected.

Figure 3. Interaction-Effect Gender and Partisanship on Negativity

Moreover, regarding the context in which the tweets were posted, I found no difference in the use of negativity between safe states and swing states (b = .22, SE = .26, p = .406). However, I did find a moderating role of partisanship (b = .41, SE = .11, p < .001) on this effect, in the opposite direction than expected: Democrats went negative more often in safe states than in swing states, while for Republicans there was no difference (see Figure 4). Thus, both H3a and H3b had to be rejected.

Next, I aimed to gain more insight into the effects of negative campaigning by answering RQ2: when do negative tactics lead to a backlash effect towards Democratic or Republican candidates, and is this affected by the gender of the attacker or of the target? For this, I used the dataset that included only the negative tweets, together with the coded comments from followers on them (n = 300). I ran a three-level multilevel linear regression, with the context-level (swing state vs. safe state) as the highest level, followed by the candidate-level (partisanship and gender of the candidate), and with backlash (the dependent variable) at the lowest level. Type of attack (issue-based vs. person-based), gender of the target (male vs. female) and incivility (present vs. absent) were included as covariates at the lowest level. The model was significant, F(1, 190) = 5.70, p = .001, so it could be used to predict a backlash effect. The results (see Table 2) show that, in contrast to the expectations, type of attack (b = 0.21, SE = 0.41, p = .616), the gender of the target (b = -0.01, SE = 0.51, p = .977) and the presence of incivility (b = -0.40, SE = 0.53, p = .456) did not have direct effects on backlash. Therefore, H4, H5 and H9 had to be rejected.

At the candidate-level, however, the results do indicate that partisanship was a predictor of backlash, but not in the expected direction. Instead, the results indicate that comments were more negative for Republican candidates than for Democratic candidates (b = 2.04, SE = 0.51, p < .001). H6 thus also had to be rejected, since the effect was reversed. To zoom in on the differences between Democratic and Republican candidates that affect backlash effects, I will look at the effects of their use of different negative strategies.

With regard to H7, it was expected that a backlash effect would be stronger when a Democratic candidate used person-based attacks than when they used issue-based attacks. The interaction of partisanship and type of attack (see Table 2), however, shows that partisanship did not moderate the effect of type of attack on backlash (b = 0.62, SE = 0.98, p = .531). H7 thus also had to be rejected. Furthermore, there was also no moderation by partisanship of the effect of incivility on backlash (b = 2.45, SE = 1.52, p = .108).

Table 2
The Direct and Indirect Effects of Negativity on Backlash

                                      Model 1 (main effects)           Model 2 (interaction effects)
                                      b (SE)           95% CI          b (SE)           95% CI
Person-based attack (vs. issue-based) 0.21 (0.41)      (-0.54, 1.02)   2.70 (1.83)      (-0.91, 6.31)
Female target (vs. male)              -0.01 (0.51)     (-1.02, 0.99)   -1.31 (2.04)     (-5.34, 2.73)
Incivility present (vs. absent)       -0.40 (0.53)     (-1.45, 0.65)   1.50 (2.93)      (-4.28, 7.28)
Female candidate (vs. male)           -1.29** (0.57)   (-2.42, 0.17)   18.11* (7.79)    (2.75, 33.48)
Democrat (vs. Republican)             2.04*** (0.51)   (1.03, 3.04)    12.38** (3.50)   (5.47, 19.28)
Safe state (vs. swing state)          0.65 (0.42)      (-0.19, 1.48)   3.18 (2.63)      (-2.02, 8.37)
Person-based attack*Democrat                                           0.62 (0.98)      (-1.32, 2.56)
Female candidate*Democrat                                              -6.28* (2.99)    (-12.18, -0.38)
Incivility*Democrat                                                    2.45 (1.52)      (-0.55, 5.44)
Female target*Democrat                                                 0.10 (1.15)      (-2.17, 2.36)
Person-based attack*female candidate                                   -1.50 (1.16)     (-3.79, 0.78)
Incivility*female candidate                                            -4.28* (1.77)    (-7.78, -0.78)
Female target*female candidate                                         -3.20 (1.64)     (-6.43, 0.04)
Democrat*safe state                                                    -3.08** (1.11)   (-5.27, -0.89)
Female candidate*safe state                                            -0.48 (1.34)     (-2.45, 0.97)
Person-based attack*safe state                                         -0.74 (0.87)     (-3.12, 2.16)
Female target*safe state                                               2.96** (1.08)    (0.84, 5.08)
Incivility*safe state                                                  -0.26 (1.17)     (-2.57, 2.06)
Constant                              -2.69* (1.36)    (-5.38, -0.06)  -25.63** (7.73)  (-40.87, -10.39)
Variance (intercept)                  7.12*** (0.73)   (5.83, 8.71)    5.70*** (0.59)   (4.66, 6.97)
N (level 3, states)                   33                               33
N (level 2, candidates)               63                               63
N (level 1, tweets)                   12,892                           12,892

Note: * p < 0.05; ** p < 0.01; *** p < 0.001. The dependent variable is backlash, measured by the sentiment of the comments (-5 = very negative, +5 = very positive).

Next, aside from candidates’ partisanship, the results indicate that their gender also affected the strength of a backlash effect. As expected in H8a, there was a stronger backlash when a female candidate went negative on Twitter than when a male candidate did (b = -1.29, SE = 0.57, p = .027). Also as expected, this effect was even stronger for Democrats (b = -6.28, SE = 2.99, p = .037), so H8 could be confirmed. For explorative reasons, I also looked at the backlash from the different negative strategies used by male and female candidates. Although there was no difference in backlash between males and females regarding the use of issue- versus person-based attacks (b = -1.50, SE = 1.16, p = .197), nor regarding the target of their attack (b = -3.20, SE = 1.64, p = .053), the use of incivility did have an effect: female candidates received more negative comments for using incivility than male candidates (b = -4.28, SE = 1.77, p = .017).

Lastly, also for explorative reasons, I checked whether the third-level variable, state-level (swing state vs. safe state), affected backlash. No differences were found in backlash effects between the two types of states (b = 0.65, SE = 0.42, p = .129). Looking at the interaction effects, however, I found a moderating influence of partisanship, such that for Democrats the sentiment of the comments became more negative when they went negative in safe states (b = -3.08, SE = 1.11, p = .006). In addition, the target of the attack also affected the effect of state-level on backlash: when the target of the attack was female in a safe state, this surprisingly led to a more positive sentiment in the comments (b = 2.96, SE = 1.08, p = .007). Lastly, there were no moderations by gender of the candidate (b = -0.48, SE = 1.34, p = .722), type of attack (b = -0.74, SE = 0.87, p = .395) or the use of incivility (b = -0.26, SE = 1.17, p = .827) on the effect of state-level on backlash.

Conclusion and Discussion

This study, based on a semi-automated content analysis of tweets from candidates running in the 2018 U.S. midterm elections, aimed to fill the gap in comparative research regarding (1) the causes and (2) the effects of negative campaigning in U.S. politics. To better understand which candidates go negative, and when this backfires, I examined three factors. For the causes, I studied the state-level (swing state versus safe state) and the candidate-level (partisanship and gender). For the backlash effect, I also took a third factor into account: the tweet-level (negative strategies and target).

First, the results regarding the causes of negative campaigning showed that especially gender affected the use of negativity, while partisanship only moderated the effects of state-level and gender on the use of negativity. Second, with respect to backlash effects, it appeared that backlash is mainly affected by a candidate’s gender and partisanship. And although there were no direct effects of negative strategies on backlash, I did find indirect effects of these strategies when they were moderated by a candidate’s gender, partisanship or the state-level.

To be more precise, in the first part of this study I focused on the causes of negative campaigning. In line with the results of Evans and Clark (2016), this study showed that female candidates were more likely to go negative than male candidates. As expected, they thus try to break with their conventional stereotypes (e.g. sympathetic, kind, passive) (Kahn, 1993). Interestingly, however, the results did not indicate that Republicans went negative more often than Democrats. Moreover, the moderation of partisanship and gender of the candidate revealed that, against expectations, Democratic female candidates went negative more often than Republican female candidates. This could possibly be explained by the fact that during this period a massive debate was going on between Republicans and Democrats about appointing Judge Kavanaugh as Associate Justice of the Supreme Court, after he had been accused of sexually harassing women. Kavanaugh was backed by the Republicans, and especially Democratic women fiercely opposed this on Twitter. Senator Dianne Feinstein tweeted, for example: “The recalcitrance, stubbornness and lack of cooperation we’ve seen from Republicans is unprecedented. And candidly, the dismissive treatment of Dr. Ford is insulting to all sexual assault survivors.” (Feinstein, 2018). It might thus be the case that this reversed moderation effect was found due to the special circumstances of these elections. Future research should therefore look at a longitudinal trend (i.e. by comparing elections across multiple years) to see whether this is true.

Regarding the state-level, it was assumed that, due to the competitiveness of races in swing states, tensions there run higher and campaigns get meaner than in safe states (Ansolabehere & Iyengar, 1995). However, in contrast to the results of Gainous and Wagner (2014), who argued that candidates in more competitive races post more negative tweets than candidates in less competitive races, I did not find any main-effect difference in the use of negative campaigning on Twitter between swing states and safe states. A reason for this could be that in safe states many challengers frequently post negative tweets: challengers in a safe state are lagging far behind in the polls, and thus have little to lose and much to gain by negative campaigning (Nai, 2018). Negativity is then basically used as a desperation strategy by underdog candidates (Auter & Fine, 2016) in safe states. The gap between the use of negativity in swing states and safe states is therefore possibly not as big as initially expected.

Surprisingly, however, I did find that partisanship moderated the effect of state-level on the use of negativity: Democrats went negative more often in safe states than in swing states. Possibly, this could be explained by the fact that these Democrats were often the challengers, in which case the theory described above would explain this result. However, a follow-up study is necessary, in which candidates’ positions (challenger or incumbent) are taken into account.

In the second part of this study, I focused on the effects of negative campaigning. Considering the strategies, it can be concluded that, despite the expectations based on previous research, type of attack, gender of the target and the use of incivility did not have direct effects on backlash. However, there were interaction effects between the use of incivility and the gender of the candidate, and between the state-level and the gender of the target (discussed below).


Since I did not find any effect of the type of strategy (issue- or person-based attacks) on backlash, it could be helpful for future research to examine whether people nowadays still perceive person-based attacks as more negative and less acceptable than issue-based attacks, as was argued by Carraro and Castelli (2010). If not, that would explain why I did not find a difference in backlash in this study. Additionally, the fact that there was no main effect, but an interaction effect indicating significantly less backlash when the target of an attack is female in a safe state, is reason to re-examine the claim that attacks against women are perceived as less fair than attacks against men (Fridkin et al., 2008). Not only did this study fail to confirm this claim, it also showed the reverse effect in safe states.

Furthermore, with respect to the use of incivility, I found that only for female candidates does it result in a serious backlash effect. This might be explained by the fact that uncivil language does not fit the ‘kind’ stereotype of women (Kahn, 1993), and is therefore penalized more harshly. Against expectations, however, there was no main effect of incivility on backlash, possibly because people are nowadays exposed to incivility by politicians online more often than they were a decade ago at the time of the study by Brooks and Geer (2007). Hmielowski, Hutchens and Cicchirillo (2014), for example, argued that online political discussions have made attacking opponents seem acceptable. This might explain why incivility does not affect backlash as much as initially expected. However, incivility by politicians can result in political distrust (Forgette & Morris, 2006). Therefore, future research on the downsides of negative campaigning could focus more on political distrust caused by uncivil language, instead of on backlash in terms of negative comments, as was done in this study.

Aside from negative strategies, the results on the candidate-level indicated that both partisanship and gender of the candidate affected backlash. As expected, the backlash effect of negative campaigning was stronger for female than for male candidates, and even stronger for female Democrats. Interestingly, though, although it was generally expected that the backlash effect would be stronger for Democrats than for Republicans, the results point in the opposite direction. In this study, it was not possible to control for the party affiliation of the people who commented on the posts, but it is very likely that the reason for the reversed effect can be found here. Statistics show, for instance, that Twitter users are generally more likely to identify as Democrats than as Republicans (Wojcik & Hughes, 2019). If more Democrats than Republicans are represented on Twitter, it could be that these Democrats mainly cause the backlash effect for Republican candidates, rather than those candidates’ ‘own’ supporters. In future research, commenters’ party affiliation should therefore be controlled for as well. Lastly, despite its moderating role in the effects of partisanship and of the target of the attack on backlash, there was no general difference in backlash effects between swing states and safe states. Overall, the state-level thus has a limited impact on both the causes and the effects of negative campaigning.

In short, the results of this study reveal several general trends: female candidates (especially Democrats) go negative more often than male candidates, and Democratic candidates go negative more often in safe states than in swing states. Regarding backlash, female candidates are more at risk than males when they go negative, especially when they use uncivil language. When it comes to partisanship, Republicans face harsher punishment for negative campaigning than Democrats. The competitiveness of the race matters only indirectly, by altering the effects of partisanship and of the target of the attack.

Limitations

Nonetheless, the findings of this study are not without limitations. First, the algorithm for the automated part of the content analysis could still be optimized and trained to code more reliably than it has so far. This study should therefore be replicated with a finalized algorithm to check whether its results are indeed valid. For now, the results of the first part of this study (examining the causes of negative campaigning) should be treated with care.

In addition, due to a lack of resources, the second part of this study (examining the effects of negative campaigning) had to be coded manually. Although I had a reliable (manually coded) sub-sample of the complete dataset (n = 300), having the comments on all collected tweets (N = 12,892) would have improved the generalizability and precision of the results regarding the backlash effect.

Third, the analyses in this study focused on text written by the candidates and commenters, but neglected visuals. On social media, however, part of the message in a post can be communicated through visuals (Bouvier & Machin, 2018). It would therefore be interesting for future research to also code the visuals used in tweets, to see whether this alters the outcomes. Additionally, in this study I focused on only one specific social media channel: Twitter. Upcoming studies could continue the approach of this study but include more channels, such as Facebook, Instagram and YouTube. People have different motivations for using each platform, and adjust their behaviour to the norms of that platform (Alhabash & Ma, 2017); different platforms may thus lead to different outcomes.

Lastly, this study only looked at the U.S. context, with a two-party political system. Nevertheless, it would be interesting for future studies to look into the causes and effects of negative campaigning in the context of a multiparty system, to examine whether there are similar effects.

Those limitations notwithstanding, this study contributes to the literature on negative campaigning by shedding light on differences in the causes and effects of negative campaigning for candidates from swing or safe states, with different party affiliations, and with differences in gender and negative tactics. Additionally, the findings add to the ongoing debate about the consequences for candidates of going negative, in terms of backlash effects. The findings therefore also offer some practical implications for candidates and campaign consultants regarding whether or not negativity is the way to go for their (candidate’s) campaign.

References

Alhabash, S., & Ma, M. (2017). A tale of four platforms: Motivations and uses of Facebook, Twitter, Instagram, and Snapchat among college students? Social Media + Society, 3(1), 1–13. https://doi.org/10.1177/2056305117691544

Allison, P. (1999). Multiple regression: A primer. Thousand Oaks, CA: Pine Forge Press.

Ansolabehere, S., & Iyengar, S. (1995). Going negative: How political advertisements shrink and polarize the electorate. New York, NY: The Free Press.

Auter, Z. J., & Fine, J. A. (2016). Negative campaigning in the social media age: Attack advertising on Facebook. Political Behavior, 38(4), 999–1020. https://doi.org/10.1007/s11109-016-9346-8

Baeza-Yates, R. (2016). Data and algorithmic bias in the web. Presented at WebSci ’16, Hannover, Germany. https://doi.org/10.1145/2908131.2908135

BobHugin. (2018, October 5). Bob Menendez holding himself up as a paragon of virtue regarding women is the very definition of hypocrisy. Absolutely shameless. [Tweet]. https://twitter.com/bobhugin/status/1048327220966699009

Bouvier, G., & Machin, D. (2018). Critical Discourse Analysis and the challenges and opportunities of social media. Review of Communication, 18(3), 178–192. https://doi.org/10.1080/15358593.2018.1479881

Brooks, D. J., & Geer, J. G. (2007). Beyond negativity: The effects of incivility on the electorate. American Journal of Political Science, 51(1).


Budesheim, T. L., Houston, D. A., & DePaola, S. J. (1996). Persuasiveness of in-group and out-group political messages: The case of negative political campaigning. Journal of Personality and Social Psychology, 70(3), 523–534. https://doi.org/10.1037/0022-3514.70.3.523

Carraro, L., & Castelli, L. (2010). The implicit and explicit effects of negative political campaigns: Is the source really blamed? Political Psychology, 31(4), 617–645. https://doi.org/10.1111/j.1467-9221.2010.00771.x

Cassese, E. C., & Holman, M. R. (2018). Party and gender stereotypes in campaign attacks. Political Behavior, 40(3), 785–807. https://doi.org/10.1007/s11109-017-9423-7

Ceron, A., & D’Adda, G. (2016). E-campaigning on Twitter: The effectiveness of distributive promises and negative campaign in the 2013 Italian election. New Media & Society, 18(9), 1935–1955. https://doi.org/10.1177/1461444815571915

Domingos, P. (2012). A few useful things to know about machine learning. Communications of the ACM, 55(10), 78–87. https://doi.org/10.1145/2347736.2347755

Evans, H. K., & Clark, J. H. (2016). “You tweet like a girl!”. American Politics Research, 44(2), 326–352. https://doi.org/10.1177/1532673x15597747

Evans, H. K., Cordova, V., & Sipole, S. (2014). Twitter style: An analysis of how house candidates used twitter in their 2012 campaigns. PS: Political Science and Politics, 47, 454-462. https://doi.org/10.1017/S1049096514000389

Francia, P. L., & Herrnson, P. S. (2007). Keeping it Professional: The Influence of Political Consultants on Candidate Attitudes toward Negative Campaigning. Politics & Policy, 35(2), 246–272. https://doi.org/10.1111/j.1747-1346.2007.00059.x

Fridkin, K. L., & Kenney, P. J. (2004). Do negative messages work? American Politics Research, 32(5), 570–605. https://doi.org/10.1177/1532673x03260834

Fridkin, K. L., & Kenney, P. J. (2011). Variability in citizens' reactions to different types of negative campaigns. American Journal of Political Science, 55(2), 307–325. https://doi.org/10.1111/j.1540-5907.2010.00494.x

Fridkin, K. L., Kenney, P. J., & Woodall, G. S. (2009). Bad for men, better for women: The impact of stereotypes during negative campaigns. Political Behavior, 31(1), 53–77. https://doi.org/10.1007/s11109-008-9065-x

Gainous, J., & Wagner, K. M. (2014). Tweeting to power: The social media revolution in American politics. New York, NY: Oxford University Press.

Gross, J. H., & Johnson, K. T. (2016). Twitter taunts and tirades: Negative campaigning in the age of Trump. PS: Political Science & Politics, 49(4), 748–754. https://doi.org/10.1017/s1049096516001700

Hmielowski, J. D., Hutchens, M. J., & Cicchirillo, V. J. (2014). Living in an age of online incivility: Examining the conditional indirect effects of online discussion on political flaming. Information, Communication & Society, 17(10), 1196–1211. https://doi.org/10.1080/1369118X.2014.899609

Kahn, K. F. (1993). Gender differences in campaign messages: The political advertisements of men and women candidates for U.S. Senate. Political Research Quarterly, 46(3), 481-502. https://doi.org/10.1177/106591299304600303

Kahn, K. F., & Geer, J. G. (1994). Creating impressions: An experimental investigation of political advertising on television. Political Behavior, 16(1), 93–116.

Kelley, J., & McAllister, I. (1983). The electoral consequences of gender in Australia. British Journal of Political Science, 13(3), 365. https://doi.org/10.1017/s0007123400003306

Kruikemeier, S. (2014). How political candidates use Twitter and the impact on votes. Computers in Human Behavior, 34, 131–139. https://doi.org/10.1016/j.chb.2014.01.025

Krupnikov, Y., & Bauer, N. M. (2013). The relationship between campaign negativity, gender and campaign context. Political Behavior, 36(1), 167–188. https://doi.org/10.1007/s11109-013-9221-9

Larose, D. T. (2005). Discovering knowledge in data. New York, NY: John Wiley & Sons.

Lau, R. R. (1985). Two explanations for negativity effects in political behavior. American Journal of Political Science, 29(1), 119. https://doi.org/10.2307/2111215

Lau, R. R., & Pomper, G. M. (2001). Effects of negative campaigning on turnout in U.S. Senate elections, 1988–1998. The Journal of Politics, 63(3), 804–819. https://doi.org/10.1111/0022-3816.00088

Lau, R. R., & Pomper, G. M. (2002). Effectiveness of negative campaigning in U.S. Senate elections. American Journal of Political Science, 46(1), 47. https://doi.org/10.2307/3088414

Lau, R. R., & Pomper, G. M. (2004). Negative campaigning: An analysis of U.S. Senate elections. Lanham, MD: Rowman & Littlefield.

Lau, R. R., Sigelman, L., & Rovner, I. B. (2007). The effects of negative political campaigns: A meta-analytic reassessment. The Journal of Politics, 69(4), 1176–1209. https://doi.org/10.1111/j.1468-2508.2007.00618.x

Lee, J., & Xu, W. (2018). The more attacks, the more retweets: Trump’s and Clinton’s agenda setting on Twitter. Public Relations Review, 44(2), 201–213.

Martin, P. S. (2004). Inside the black box of negative campaign effects: Three reasons why negative campaigns mobilize. Political Psychology, 25(4), 545–562. https://doi.org/10.1111/j.1467-9221.2004.00386.x

Mattes, K., & Redlawsk, D. P. (2015). The positive case for negative campaigning. Chicago, IL: University of Chicago Press.

Nai, A. (2018). Going negative, worldwide: Towards a general understanding of determinants and targets of negative campaigning. Government and Opposition, 0(0), 1–26. https://doi.org/10.1017/gov.2018.32

Nai, A., & Seeberg, H. B. (2018). A series of persuasive events. Sequencing effects of negative and positive messages on party evaluations and perceptions of negativity. Journal of Marketing Communications, 24(4), 412–432. https://doi.org/10.1080/13527266.2018.1428672

Nai, A., & Walter, A. (Eds.). (2015). New perspectives on negative campaigning. Why attack politics matters. https://doi.org/10.1080/23248823.2017.1287751

NRC. (2016, July 27). When they go low, we go high... NRC. Retrieved from https://www.nrc.nl/nieuws/2016/07/27/when-they-go-low-we-go-high-3407203-a1513443

Nulty, P., Theocharis, Y., Popa, S. A., Parnet, O., & Benoit, K. (2016). Social media and political communication in the 2014 elections to the European Parliament. Electoral Studies, 44, 429–444. https://doi.org/10.1016/j.electstud.2016.04.014

Parmelee, J. H. (2013). The agenda-building function of political tweets. New Media & Society, 16(3), 434–450. https://doi.org/10.1177/1461444813487955

realDonaldTrump. (2016a, October 19). .@HillaryClinton- you have failed, failed, and failed. #BigLeagueTruth Time to #DrainTheSwamp! [Tweet].

realDonaldTrump. (2016b, October 19). #CrookedHillary's plan will add $1.15 TRILLION in new taxes. We cannot afford her! #DrainTheSwamp #Debate [Tweet]. https://twitter.com/TeamTrump/status/788918981818134529

SenFeinstein. (2018, September 21). The recalcitrance, stubbornness and lack of cooperation we've seen from Republicans is unprecedented. And candidly, the dismissive treatment of Dr. Ford is insulting to all sexual assault survivors [Tweet]. https://twitter.com/senfeinstein/status/1043324014008127488

Trent, J. S., & Friedenberg, R. V. (2008). Political campaign communication: Principles and practices. Lanham, MD: Rowman & Littlefield.

Walter, A. S., Van der Brug, W., & Van Praag, P. (2013). When the stakes are high: Party competition and negative campaigning. Comparative Political Studies, 47(4), 550– 573. https://doi.org/10.1177/0010414013488543

Winter, N. (2010). Masculine republicans and feminine democrats: Gender and Americans’ explicit and implicit images of the political parties. Political Behavior, 32(4), 587– 618. https://doi.org/10.1007/s11109-010-9131-z

Wojcik, S., & Hughes, A. (2019, April 24). Sizing up Twitter users. Retrieved June 6, 2019, from https://www.pewinternet.org/2019/04/24/sizing-up-twitter-users/

Appendix A

List of candidates (gender: M = male, F = female; partisanship: D = Democrat, R = Republican).

Candidate Gender Partisanship State Tweets
Rob Arlett M R Delaware 140
Tammy Baldwin F D Wisconsin 153
David Baria M D Mississippi 475
Lou Barletta M R Pennsylvania 72
John Barrasso M R Wyoming 48
Marsha Blackburn F R Tennessee 39
Eric Brakey M R Maine 292
Mike Braun M R Indiana 214
Phil Bredesen M D Tennessee 571
Sherrod Brown M D Ohio 411
Tony Campbell M R Maryland 28
Maria Cantwell F D Washington 286
Ben Cardin M D Maryland 142
Tom Carper M D Delaware 140
Bob Casey Jr. M D Pennsylvania 72
Matthew Corey M R Connecticut 157
Kevin Cramer M R North Dakota 54
Ted Cruz M R Texas 278
Ron Curtis M R Hawaii 98
Kevin de León M D California 180
Geoff Diehl M R Massachusetts 8
Joe Donnelly M D Indiana 209
Dianne Feinstein F D California 317
Deb Fischer F R Nebraska 86
Robert Flanders M R Rhode Island 82
Josh Hawley M R Missouri 288
Martin Heinrich M D New Mexico 75
Heidi Heitkamp F D North Dakota 401
Dean Heller M R Nevada 139
Mazie Hirono F D Hawaii 31
Bob Hugin M R New Jersey 350
Susan Hutchison F R Washington 169
John James M R Michigan 377
Tim Kaine M D Virginia 72
Angus King M D Maine 54
Amy Klobuchar F D Minnesota 306
Joe Manchin M D West Virginia 63
Claire McCaskill F D Missouri 136
Martha McSally F R Arizona 71
Bob Menendez M D New Jersey 171
Patrick Morrisey M R West Virginia 188
Chris Murphy M D Connecticut 480
Bill Nelson M D Florida 107
Jim Newberger M R Minnesota 66
Beto O'Rourke M D Texas 570
Jane Raybould F D Nebraska 142
Jim Renacci M R Ohio 227
Mick Rich M R New Mexico 103
Mitt Romney M R Utah 21
Jacky Rosen F D Nevada 117
Matt Rosendale M R Montana 381
Bernie Sanders M D Vermont 227
Rick Scott M R Florida 558
Debbie Stabenow F D Michigan 67
Corey Stewart M R Virginia 550
Jon Tester M D Montana 87
Gary Trauner M D Wyoming 29
Elizabeth Warren F D Massachusetts 380
Sheldon Whitehouse M D Rhode Island 324
Roger Wicker M R Mississippi 120
Jenny Wilson F D Utah 390
Total 12,892

Appendix B

Classification of states as safe or swing.

Safe states: California, Connecticut, Delaware, Hawaii, Maine, Maryland, Massachusetts, Michigan, Minnesota, Mississippi, Nebraska, New Jersey, New Mexico, New York, Ohio, Pennsylvania, Rhode Island, Utah, Vermont, Virginia, Washington, Wisconsin, Wyoming.

Swing states: Arizona, Florida, Indiana, Missouri, Montana, Nevada, North Dakota, Tennessee, Texas, West Virginia.
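To illustrate how the candidate-level variables from Appendix A and the context-level classification above could be combined into a single analysis dataset, the following is a minimal sketch; it is not the script used in this study, and the column names and the few example rows are chosen purely for illustration.

```python
import pandas as pd

# A few example rows in the format of Appendix A (candidate, gender,
# partisanship, state, number of tweets); the full table has 60+ candidates.
candidates = pd.DataFrame(
    [
        ("Tammy Baldwin", "F", "D", "Wisconsin", 153),
        ("Ted Cruz", "M", "R", "Texas", 278),
        ("Beto O'Rourke", "M", "D", "Texas", 570),
        ("Heidi Heitkamp", "F", "D", "North Dakota", 401),
    ],
    columns=["candidate", "gender", "partisanship", "state", "tweets"],
)

# Context-level classification from Appendix B: any state not listed as a
# swing state is treated as safe.
swing_states = {
    "Arizona", "Florida", "Indiana", "Missouri", "Montana",
    "Nevada", "North Dakota", "Tennessee", "Texas", "West Virginia",
}
candidates["context"] = candidates["state"].apply(
    lambda state: "swing" if state in swing_states else "safe"
)

print(candidates[["candidate", "partisanship", "state", "context"]])
```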

Appendix C

Codebook for manual and machine-learning coding.

* Note: throughout, code 1 for "presence" and 0 for "absence" (see instructions). In case of doubt, code 0.

Concept: Negativity

Dimension: Positive tone (indicator: promotion of self)
1 = Presence of explicit promotion of self.
0 = No explicit promotion of self.

Dimension: Negative tone (indicator: attack or critique of the opponent)
1 = Presence of an explicit attack or critique toward the opponent.
0 = No explicit attack or critique toward the opponent.

Dimension: Policy attack (indicator: attack or critique of a policy position of the opponent; code only if 'Negative tone' = 1, leave empty otherwise)
1 = Presence of an explicit attack or critique toward a policy proposition, political position, record once in office, or ideas of the opponent.
0 = No explicit attack or critique toward a policy proposition, political position, record once in office, or ideas of the opponent.

Dimension: Personal attack (indicator: attack or critique of the character or persona of the opponent; code only if 'Negative tone' = 1, leave empty otherwise)
1 = Presence of an explicit attack or critique of the character, profile, personality, persona, figure, image, aspect, or physical attributes of the opponent.
0 = No explicit attack or critique of the character, profile, personality, persona, figure, image, aspect, or physical attributes of the opponent.

Dimension: Incivility (indicator: use of uncivil language in the attack; code only if 'Negative tone' = 1, leave empty otherwise)
1 = Use of harsh, shrill, uncivil, offensive, or vulgar language in the attack (code only within the attack itself; if vulgarity is present but not as part of an attack, code 1 only in 'Demagogy').
0 = Normal, civil language in the attack, without harsh, shrill, uncivil, offensive, or vulgar language.
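Because the sub-dimensions (policy attack, personal attack, incivility) are only coded when 'Negative tone' = 1, the codebook implies a small set of consistency rules that can be checked automatically before analysis. The sketch below is a minimal, hypothetical illustration of such a check; the class and field names are assumptions and do not correspond to the actual coding files used in this study.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class CodedTweet:
    """One coded tweet; field names are hypothetical (see Appendix C)."""
    promotion_of_self: int                 # 1 = explicit self-promotion, 0 = absent
    negative_tone: int                     # 1 = explicit attack on the opponent, 0 = absent
    policy_attack: Optional[int] = None    # coded only when negative_tone == 1
    personal_attack: Optional[int] = None  # coded only when negative_tone == 1
    incivility: Optional[int] = None       # coded only when negative_tone == 1


def codebook_violations(tweet: CodedTweet) -> List[str]:
    """Return descriptions of any violations of the conditional coding rules."""
    problems = []
    conditional = {
        "policy_attack": tweet.policy_attack,
        "personal_attack": tweet.personal_attack,
        "incivility": tweet.incivility,
    }
    for name, value in conditional.items():
        if tweet.negative_tone == 0 and value is not None:
            problems.append(f"'{name}' must be left empty when negative_tone = 0")
        if value not in (None, 0, 1):
            problems.append(f"'{name}' must be coded 0 or 1, or left empty")
    return problems


# Example: an attack tweet coded as an uncivil personal attack passes the check.
attack = CodedTweet(promotion_of_self=0, negative_tone=1,
                    policy_attack=0, personal_attack=1, incivility=1)
assert codebook_violations(attack) == []
```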
