
Emotions as the Impetus of Negative Campaigning Effects. Investigating Voters' Perception of Campaign Negativity, Voter Turnout, and Vote Intention



Vladislav Petkevič (10827668)
Master’s Thesis
University of Amsterdam, Graduate School of Communication
Master’s Programme Communication Science

Alessandro Nai
2020-06-26

Existing research on negative campaigning has so far produced inconsistent findings on the effects of negativity on voter turnout and voting intention. On the other hand, a relative scientific consensus exists about the ability of negative campaigning to induce emotions and of emotions to affect voter behavior. Consequently, to better understand how negative campaigning affects the voter, the present study examines the possibility that the effects of negative campaigning are mediated by the emotions it elicits in the electorate. Specifically, I examine the relationship between negativity, anger, fear, and voter turnout and voting intention. Employing natural language processing and supervised machine learning techniques, I estimate the levels of negativity in the 2019 UK general election candidates’ Twitter campaigns and the levels of anger and fear in the public discourse surrounding these campaigns expressed on Twitter and in the Daily Mail comments. The estimates are utilized in three ways. First, I test whether negativity elicits anger and fear on the national, party, and constituency levels. Then I examine whether negativity, anger, and fear affect voting intention by estimating their effect on a given party’s position in the polls. Finally, I test whether negativity, anger, and fear are predictive of voter turnout on the constituency level. The analyses provide partial support for the mediating role of fear in the effect of campaign negativity on voting intention. No significant results are found on the constituency level of analysis. On the party level, negativity appears able both to inhibit and to elicit fear, with fear having either a negative or a positive effect on voting intention depending on the party.


Table of Contents

Introduction
Theoretical Background
Current State of Affairs
Negativity Dimensions
Campaigning and Emotions
Affective Intelligence
Present Study
Methodology
Twitter Data Collection
Daily Mail Data Collection
Calculating Negativity
Estimating Anger and Fear
Results
Negativity and Emotions
Negativity, Emotions, and Voting intention
Negativity, Emotions, and Turnout
Discussion and Conclusion
Results at a Glance
Limitations
Discussion
References
Appendix A List of the sampled UK electoral constituencies
Appendix B Model parameter spaces
1. Negativity Estimation (MLP)
2. Anger Estimation (SVM)
3. Fear Estimation (LSTM)


Introduction

So far, research on negative campaigning has produced inconsistent findings as to its effects on voter turnout and voting intention (Lau et al., 2007; Haselmayer, 2019). It has been suggested that these inconsistencies stem in part from the fact that researchers often fail to account for dimensions of negativity crucial to voters’ evaluation of political campaigns (e.g. Lau & Rovner, 2009; Krupnikov, 2011). It has been shown that citizens evaluate campaign ads on multiple criteria, such as the relevance of the ad, its truthfulness, informativeness, etc. (e.g. Brooks & Geer, 2007; Mattes & Redlawsk, 2014). Given the multitude of criteria that voters take into account, evaluating the impact negative campaigning produces on the voter based purely on the intrinsic qualities of the campaign messages might become troublesome. Addressing this issue, the present study examines whether the effects of campaign negativity on voter turnout and voting intention are in fact mediated by, and better understood through, the emotions experienced by the voters as a result of said campaigns. Negative information has been consistently shown to elicit strong psychophysiological responses, resulting in higher cognitive processing and stronger emotional reactions than neutral or positively-valenced information (Ito et al., 1998; Rozin & Royzman, 2001; Vaish, Grossmann & Woodward, 2008). This negativity bias has also been shown to occur in the realm of political campaigns (Namkoong, Fung, & Scheufele, 2012), with candidates often resorting to negative messages as a way of eliciting strong emotional responses (Mark, 2006). Such increased emotional reaction is notable as it has been demonstrated that the emotional states of voters influence their political decisions (Marcus et al., 2000; Valentino et al., 2011). Emotions such as anger, fear, and enthusiasm impact voters’ interest in politics (Marcus et al., 2000; Brader, 2005), stimulate voters’ participation (Smith, Seger, & Mackie, 2007; Groenendyk & Banks, 2010), and influence policy preference (Nabi, 2003; Brader, Valentino, & Suhay, 2008). Overall, there is an argument to be made that it is the emotions citizens experience when confronted with campaign ads that determine what effects the ads have on them (Marcus et al., 2000). Then, given that negative campaigning messages elicit emotional reactions in the voter and that these reactions impact the voter’s political behavior, it could be hypothesized that the effects of negative campaigning are in fact mediated by the emotional reactions it elicits.

To put this hypothesis to a test, I use data from the 2019 UK general election, analyzing the negativity in the candidates’ campaigns and the emotions expressed in the public discourse surrounding these campaigns. By employing supervised machine learning, I estimate the levels of negativity in the candidates’ rhetoric on Twitter and the levels of anger and fear in the public’s reactions to the candidates’ campaigns on Twitter and in the Daily Mail comments. I use these estimates to test whether the levels of negativity are predictive of levels of anger and fear and whether both negativity and emotions are predictive of two commonly studied voter behaviors – turnout and voting intention. First, time series and regression analyses are conducted to test whether the emotions expressed in reaction to the candidates’ campaigns correspond to the levels of negativity expressed by the candidates. Then, bivariate time series of the levels of negativity, anger, and fear on the one hand and the electorate’s voting intention (captured in electoral polls) on the other are modelled. Finally, regression models are estimated for the relationship between the levels of negativity, anger, and fear on the one hand and voter turnout per constituency on the other. Together, these tests allow for answering the research question of whether negativity exerts an effect on voter turnout and voting intention via a mediating effect of anger and fear.

The UK general election held on December 12, 2019 was an abundant source of data for these analyses. The UK is divided into 650 electoral constituencies. In each constituency, candidates compete against each other (first-past-the-post) to represent their constituency in Parliament. The number of candidates competing in each constituency depends on the number of political parties (or independent candidates) that choose to run in that constituency, as each candidate represents a certain party. That number ranged between 3 and 8 candidates in the 2019 election (M = 5.1). From a methodological standpoint, studying this particular event creates a unique opportunity for analyzing what are essentially many similar elections taking place concurrently. Despite potential constituency-level peculiarities, all candidates were competing within the same broader societal, cultural, and political context (Graham et al., 2013; Anstead & O’Loughlin, 2015). All large parties were represented in the overwhelming majority of constituencies; in all of them, the same ‘big’ issues (Brexit, the NHS reform, taxation policies) were pivotal to the political discussion; and all candidates were ultimately competing for a seat in a legislative body that oversees all constituencies to the same degree.

Theoretical Background

Current State of Affairs

The body of research on negative campaigning has been expanding rapidly, with the yearly average number of articles on the topic exceeding 100 by 2019 (Haselmayer, 2019).


Despite the abundance of studies, there is little agreement in the field as to what effects negative campaigning produces on the voters (Haselmayer, 2019; Lau et al., 2007). Some scholars (e.g. Ansolabehere & Iyengar, 1995; West, 2014; Ansolabehere et al., 1994) suggest negative political ads have a demobilizing effect, while others contest these findings (e.g. Finkel & Geer, 1998; Valentino & Neuner, 2017). A meta-analysis of studies on negative campaigning and voter turnout (Lau et al., 2007) found that the relationship is not statistically significant. The state of affairs of research on negative campaigning and voting intentions is analogously inconclusive: the same meta-analysis (Lau et al., 2007) found no overall statistically significant relationship, with six analyzed studies reporting negative statistically significant effects and four studies reporting positive significant effects. Haselmayer (2019), in a review of negative campaigning research, concludes that “[o]n balance, there is no evidence supporting common wisdom about negative campaigning representing an effective strategy for maximizing votes” (2019, p. 364).

Negativity Dimensions

A possible explanation for such inconsistent findings lies in the variability of research strategies and variable operationalizations employed by the researchers (Lau & Rovner, 2009). Adding to that, Krupnikov (2011) and Nai (2013) suggest that research in the field often fails to consider (or agree upon) dimensions of negative campaigning that are crucial to determining both the direction and strength of the effect. The definition of negative campaigning that has been favored by researchers concerns solely the use of attacks directed at one’s opponent: a campaign ad is negative if its focus is on criticizing the opponent, whether in terms of policies or character, rather than on promoting one’s own agenda (Geer, 2006; Nai & Walter, 2015; Lipsitz & Geer, 2017). This definition entails a dichotomous classification of campaign ads: a candidate can either focus on attacking the opponent or on promoting their own policies and/or character, thus employing either negative or positive campaigning strategies.

Such a simple yet strict dichotomy has received some criticism from scholars in the field (Allen & Stevens, 2010; Mattes & Redlawsk, 2015; Lipsitz & Geer, 2017). Under this definition, incivility, mudslinging, believability, just like any other possible characteristics of a campaign ad, are all subsumed, and thus neglected, under the umbrella of negativity. Still, an argument has been made that these characteristics do play a role in what effects negative campaigning has on the voter. A number of studies have shown that citizens evaluate campaign ads on multiple criteria, rather than categorizing them as either negative or positive. For instance, voters take into account the relevance of the ad (Brooks & Geer, 2007; Fridkin & Kenney, 2011), its believability (Mattes & Redlawsk, 2014; Lipsitz & Geer, 2017), informativeness (Sides, Lipsitz, & Grossman, 2010), and the harshness of the language used (Sobieraj & Berry, 2011; Mattes & Redlawsk, 2014). Given this multitude of message dimensions that voters take into account, assessing what effects negative campaigning produces based purely on the intrinsic qualities of the campaign messages might be troublesome.

Campaigning and Emotions

On the other hand, while there is little agreement about the effects of negative campaigning on voting behavior, researchers are in consensus about the ability of campaign messages to evoke emotions in the electorate. For instance, Brader (2005) conducted a set of experiments where people were exposed to different emotionally manipulated political ads. The researcher found that exposure to fear-charged ads resulted in participants experiencing higher levels of anxiety, while ads with uplifting cues resulted in higher feelings of enthusiasm. Weber (2013) confirmed and expanded upon these findings by demonstrating in an experimental setting that campaign ads can cue feelings of fear, enthusiasm, anger, and sadness. These findings are mirrored in other studies (e.g. Soroka & McAdams, 2010; Huddy, Mason, & Aarøe, 2015).

When it comes to negative campaigning specifically, it has been consistently demonstrated that people react to negative information more strongly, in terms of both emotional and cognitive responses, than to positive or neutral information – a tendency that became known as negativity bias (Ito et al., 1998; Rozin & Royzman, 2001; Vaish, Grossmann & Woodward, 2008). Ito and colleagues (1998) showed that increased cognitive resources are dedicated to negative information as soon as its valence is established (i.e. when it is decided that the information is in fact negative). This increased attention results in negative information being remembered better (Kahn & Kenney, 1998) and producing a stronger psychological impression (Rozin & Royzman, 2001). In terms of political campaigning, this bias is manifested in candidates often resorting to negative messages as a way of eliciting strong emotional responses (Mark, 2006), believed to be an effective way of reaching the voter (Perloff & Kinsey, 1992). More specifically, it has been suggested that the blame attribution resulting from the attacks in negative campaign messages provokes feelings of anger (Bang‐Petersen, 2010; Brader, Groenendyk, & Valentino, 2010). Kiss and Hobolt (2012) also suggested that negative messages can activate voters’ cognitive surveillance systems and induce feelings of fear. Results of an experimental study by Russo (2016) reaffirmed these findings, showing that exposure to negative messages can induce both fear and anger.

Affective Intelligence

So, negative campaigning elicits emotions. But how is that relevant to its effects on voting behavior? Marcus et al. (2000) proposed that it is the emotional appraisals citizens experience when confronted with campaign ads that determine what effects the ads have on them. Drawing on research from psychology and neuroscience, the authors proposed the Affective Intelligence theory (AIT), which explains changes in citizens’ political behavior as a consequence of different pre-conscious information processing strategies (appraisals) they employ. The theory holds that when processing information, three types of emotion-inducing appraisals take place; which of them assumes a dominant role and determines the person’s reaction to the information depends on the information’s congruency and familiarity. The three types of appraisals are: (1) ‘enthusiastic’, where the current approach to the information at hand is deemed to be appropriate and effective, thus increasing reliance on habitual routines; (2) ‘angry’, where the information is deemed to be violating or threatening one’s convictions, thus increasing reliance on heuristics and prior convictions; and (3) ‘fearful’, where the information is unfamiliar or provoking, thus increasing one’s reliance on detailed information processing and decreasing reliance on convictions. The authors then employed 15 years of election campaign data to test the robustness of their constructed model. Their findings suggest that fear can discourage a voter from using habitual behaviors and instead make them rely on substantial information processing; additionally, when combined into one indicator, fear and anger exhibit a mobilizing tendency.

More recently, Valentino and colleagues (2011) also employed AIT in their research to explore the effects of discrete emotions on political behavior. They combined evidence from an experimental study, survey data gathered during the 2008 US general elections, and several decades’ worth of data on emotions from the American National Election Studies (ANES) to examine how, if at all, anger, enthusiasm, and fear differ in their effects on voter mobilization. Results from all three data sources converged to suggest that anger has a positive mobilizing effect on the voter and increases turnout, while the effects of enthusiasm never reached statistical significance. The suggested effects of fear were less straightforward. The experimental data showed no statistically significant effects, the 2008 election survey data found a weak negative effect on voter mobilization, and the ANES data showed a weak positive relationship, leaving much uncertainty about the effects of fear.

Present Study

To sum up, there is little agreement among scholars about the effects negative campaigning has on the voters. It has been argued that one reason for such lack of consensus stems from the wide variety of message dimensions that voters take into account when evaluating political campaigns. On the other hand, prior research consistently demonstrates that political messages can elicit emotional reactions in voters, with negative messages having a tendency to elicit stronger reactions. Furthermore, these emotional responses have been shown to influence people’s political behavior, such as voter turnout and voting intention. Then, perhaps it is reasonable to hypothesize that, to gain a better understanding of the effects of negative campaigning on voter behavior, one should also investigate the effects of the emotions negative campaigning elicits. Considering all of the above, a mediation model whereby negativity exerts an effect on voter behavior via the mediating effect of emotional response is hypothesized. In the scope of this paper, I limited my investigation to two voter behaviors, voting intention and voter turnout, and two emotions, anger and fear. The study focuses on these specific effects since the primary reasons for ‘going negative’ as an electoral strategy are to persuade undecided voters not to vote by lowering their trust in a candidate or a party and to mobilize one’s own supporters (Ansolabehere et al., 1994; Riker, 1996). Consequently, these effects have also received the most attention from the scientific community. The choice of anger and fear was based on the AIT framework, which suggests that these are the primary emotions (together with enthusiasm) experienced by voters exposed to political messages, and on the fact that negative political messages are likely to elicit negatively-valenced emotions (Soroka, Young, & Balmas, 2015; Russo, 2016).

Formalizing the above logic, the following research question and hypotheses are postulated:

RQ: Is the effect of negative campaigning on voter turnout and voting intention mediated by anger and fear experienced by the electorate?

H1a. The level of negativity in electoral campaigns is predictive of levels of anger in the public discourse surrounding these campaigns.

H1b. The level of negativity in electoral campaigns is predictive of levels of fear in the public discourse surrounding these campaigns.

H2a. The level of negativity in electoral campaigns is predictive of voting intention.

H2b. The level of anger in the public discourse surrounding electoral campaigns is predictive of voting intention.

H2c. The level of fear in the public discourse surrounding electoral campaigns is predictive of voting intention.

H3a. The level of negativity in electoral campaigns is predictive of voter turnout.

H3b. The level of anger in the public discourse surrounding electoral campaigns is predictive of voter turnout.

H3c. The level of fear in the public discourse surrounding electoral campaigns is predictive of voter turnout.

Methodology

In testing these hypotheses, I employed supervised machine learning (SML) to measure (1) the levels of negativity in the electoral campaigns of the 2019 UK election candidates and (2) the levels of anger and fear in members of the public’s reactions to these campaigns. I used these data to first test whether campaign negativity influences the emotions experienced by the electorate. I then examined whether negativity and the emotions in the public’s reaction correlate with voter turnout and voting intention, thus testing if the relationship between negativity and voter turnout and voting intention is mediated by the public’s emotional reactions. To do so, I selected a random sample of 120 UK electoral constituencies (out of 650). I then collected the tweets (N = 53,180) posted by the candidates from these constituencies (N = 449) between November 16, 2019 and December 12, 2019 (the day of the election)¹. A full list of sampled constituencies is available in Appendix A. I also collected a large sample of replies by members of the public (N = 305,837) posted in response to the candidates’ tweets (i.e. tweets posted as direct replies to the candidates’ tweets made in the aforementioned period).

¹ The constituency sample size and the period for which the tweets were collected were constrained by the limitations of the Twitter API used to collect the data. The API allows for a maximum of 100 requests per 15 minutes. Collecting data for each candidate often requires multiple requests (i.e. queries of the Twitter database). Consequently, the sample size was a compromise between representativeness of the sample and time considerations. Additionally, the API allows for querying only the latest seven days of historical data. Since I first began collecting the data on November 23, the oldest available tweets were dated November 16.

The choice of using candidates’ Twitter rhetoric as a proxy for their broader electoral campaigns stemmed from the fact that Twitter has become an important platform for political actors to conduct their electoral campaigns and convey their messages to the potential voter, at least in Western democracies (e.g. Conover et al., 2011; Yaqub et al., 2017; Graham, Jackson, & Broersma, 2016). Similarly, employing Twitter public discourse around electoral campaigns as a proxy for the public’s reactions to these campaigns has been demonstrated to be an effective approach in prior research (e.g. Wang et al., 2012; Murthy, 2015; Kwon et al., 2016). However, whether politically active Twitter users are representative of the whole population is not quite clear, as they have been shown to be younger and disproportionally male (Vaccari et al., 2013), more interested in politics (Bode & Dalrymple, 2014; McKinney, Houston, & Hawthorne, 2014), and to exhibit stronger partisan tendencies (Barberá & Rivero, 2014).

To try to mitigate such potential discrepancies, I also gathered comments made by readers of the Daily Mail (DM) news outlet discussing the election candidates or their parties in the same time period. The DM was chosen for two reasons. Firstly, it is one of the most popular news outlets in the UK, with a reach of over 25 million monthly readers, second only to the BBC (Schwartz, 2016), which, unlike the DM, does not allow user commentary on most of its articles. Secondly, the DM reader demographics are in a sense quite the opposite of those of politically active Twitter users, as DM readers tend to be older and more often female than the readers of other national news outlets (Taylor, 2017). Consequently, the addition of the DM user comments (N = 63,265) to the Twitter public discourse data was made to potentially improve the data’s validity.

These data were used to train three machine learning models to: estimate the level of negative campaigning in the candidates’ tweets, estimate the level of anger in the Twitter replies and the Daily Mail comments, and estimate the level of fear in the Twitter replies and the Daily Mail comments. The resulting estimates were used to (1) model a time series of the relationship between the candidates’ negativity and the public’s emotions (on national, party, and constituency levels), (2) model time series for the relationship between the public’s emotions and a party’s position in the polls (voting intention), and (3) construct a linear regression model between the public’s emotions and voter turnout (on the constituency level).


Twitter Data Collection

All of the tweets were gathered using a Python implementation of the Twitter Standard Search API². The process was as follows. First, the API was queried for tweets made by the candidates. Then, the API was queried for tweets that were posted in reply to the already collected candidates’ tweets. Here, a location filter was added to only return tweets made by users from the UK. Overall, 53,180 candidates’ tweets (M = 114.4, Md = 107.5) and 305,837 replies (M = 502.2, Md = 46) were collected. Importantly, an overwhelming majority of the replies concerned the three major parties: Conservatives, Labour, and Liberal Democrats (N = 276,782).
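To make this procedure concrete, the sketch below shows one way to page through the Standard Search API with plain HTTP requests while respecting the rate limit described in the footnote above. It is a minimal illustration rather than the original collection script: the bearer token, the candidate handle, and the pause length are placeholders, and error handling is omitted.

import time
import requests

SEARCH_URL = "https://api.twitter.com/1.1/search/tweets.json"
HEADERS = {"Authorization": "Bearer <YOUR_TOKEN>"}  # placeholder credentials

def collect_candidate_tweets(handle, max_requests=10):
    """Collect recent tweets from one candidate, paging backwards with max_id."""
    tweets, max_id = [], None
    for _ in range(max_requests):
        params = {"q": f"from:{handle}", "count": 100, "tweet_mode": "extended"}
        if max_id is not None:
            params["max_id"] = max_id
        response = requests.get(SEARCH_URL, headers=HEADERS, params=params)
        batch = response.json().get("statuses", [])
        if not batch:
            break
        tweets.extend(batch)
        max_id = batch[-1]["id"] - 1   # continue below the oldest tweet seen so far
        time.sleep(9)                  # stay under ~100 requests per 15 minutes
    return tweets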

The candidates’ tweets data consisted of the following fields: the text of the tweet, the date it was created, the Twitter handle of the election candidate, and the real name of the election candidate. The replies data consisted of: the text of the reply, the date it was created, the ID of the tweet in reply to which the given tweet was posted, and the Twitter handle of the election candidate in reply to whom the given tweet was posted. No personally identifiable information of the replies’ authors was collected.

Daily Mail Data Collection

User comments left under the Daily Mail articles concerning the UK economy, politics, and elections were collected for the same period (November 16, 2019 – December 12, 2019). To do so, I first downloaded all the articles published on the website in that period (N = 2,590). I then employed Latent Dirichlet Allocation (LDA; Blei, Ng, & Jordan, 2003) modeling to estimate what topics were discussed in these articles and retain only those articles that were relevant. To determine the likely number of topics in the LDA model, I first estimated multiple models ranging from 20 to 200 topics and relied on the perplexity and log-likelihood statistics to choose the best-fitting one. The best-fitting model had 81 topics. These topics (i.e. combinations of words) were examined to infer their meaning and relevance. In the table below (Table 1), the topics that were determined to be relevant to the study are presented using the five most-associated words per topic.

Topic Words most associated with topic

1 tax cost pension fund scheme

2 vote conservative johnson corbyn election

3 leader british community labour right

4 government uk people system nation

Table 1. Five top terms most associated with the chosen topics
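A minimal sketch of this topic-selection step is shown below, using scikit-learn’s LDA implementation (the thesis does not name the specific LDA library, so this is an assumption); the `articles` variable, the vectorizer settings, and the coarse topic grid are placeholders. The model with the best perplexity and log likelihood is kept and its top words inspected manually, mirroring the procedure described above.

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(stop_words="english", max_df=0.9, min_df=5)
X = vectorizer.fit_transform(articles)          # `articles`: list of article texts (assumed)

candidate_models = {}
for n_topics in range(20, 201, 20):             # illustrative grid; the thesis searched 20-200
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(X)
    candidate_models[n_topics] = (lda, lda.perplexity(X), lda.score(X))  # perplexity, log likelihood

best_lda = min(candidate_models.values(), key=lambda m: m[1])[0]   # lowest perplexity

# Inspect the five most-associated words per topic (as in Table 1)
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(best_lda.components_):
    top_words = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(k, top_words)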


Only the articles most associated with the relevant topics were retained in the corpus (N = 282). Then, the comments left under these articles were downloaded using the Daily Mail API³ (N = 139,755). Only the text of the comments, the dates on which they were posted, and the IDs of the corresponding articles were collected, omitting any personal information of the comments’ authors. To further reduce the amount of potentially irrelevant data, only those comments that mentioned the election candidates or their parties were kept (N = 63,265). The list of matching entities based on which the corpus was pruned was created using the candidates’ names, their parties’ names and common nicknames, and the party leaders’ common nicknames. Mirroring the Twitter reply data, the overwhelming majority of DM comments concerned the three biggest parties: Conservatives, Labour, and Liberal Democrats (N = 54,155). Furthermore, only 37 (out of 449) individual candidates were mentioned at least once.
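The pruning step amounts to a keyword filter over the comments; the sketch below illustrates one way to implement it. The `comments` list, its "text" field, and the entity lists are hypothetical names, not taken from the thesis.

import re

# Hypothetical entity lists: candidate names, party names, and common nicknames
entities = candidate_names + party_names + party_nicknames + leader_nicknames

# One case-insensitive pattern matching any of the entities
pattern = re.compile("|".join(re.escape(e) for e in entities), flags=re.IGNORECASE)

# Keep only comments that mention at least one candidate or party
relevant_comments = [c for c in comments if pattern.search(c["text"])]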

Calculating Negativity

A machine learning model was trained to measure the negativity in the candidates’ tweets. Negativity was operationalized as the use of attacks (policy or personal) towards one’s opponent. To train the model, a random sample (N = 4,000) of the candidates’ tweets was manually coded for the presence or absence of attacks. If a tweet contained either a policy or a personal attack, it was coded as negative; in all other cases it was coded as non-negative. The coding was done by three coders. Initially, a sample of 100 tweets was coded by all three coders to assess intercoder reliability (Krippendorff’s α = .888). Each coder then annotated 1,300 tweets independently. The resultant manually annotated dataset was used to train a machine learning model to automatically classify the remainder of the candidates’ tweets.
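For reference, intercoder reliability of this kind can be computed with the krippendorff Python package; the snippet below is a generic illustration (the thesis does not state which tool was used), with the three coders’ codes for the 100 overlap tweets assumed to be available as rows of 0/1 values.

import krippendorff

# Each row holds one coder's codes for the same 100 tweets (1 = attack present, 0 = absent).
reliability_data = [codes_coder_1, codes_coder_2, codes_coder_3]  # assumed to exist

alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="nominal")
print(round(alpha, 3))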

I conducted the model training in Python using the scikit-learn⁴ and SpaCy⁵ packages. The scikit-learn package was used for the machine learning algorithm itself (Multi-layer Perceptron classifier, MLP; see Suter, 1990), model training, and performance evaluation. SpaCy was used for data preprocessing: stop-word removal and conversion of text to feature vectors. The process was as follows. First, the tweets were converted to lowercase, stripped of punctuation and stop words, and lemmatized. The processed textual data were then converted to numeric feature vectors (word embeddings) using a pre-trained model⁶. The numeric data were split into training and testing datasets (75% and 25% of all data, respectively). The former was used to train an MLP classifier using five-fold cross-validation for parameter optimization (for a full list of model parameters, please consult Appendix B.1). With the best parameters determined, the model’s performance was evaluated by comparing the accuracy of its predictions against the testing dataset. The weighted accuracy was .89, with f1 scores of .92 for absence of negativity and .80 for its presence. All the remaining candidates’ tweets were automatically annotated using this model. Table 2 presents descriptive statistics of negativity by party. Figure 1 shows the distribution of average negativity by day by party.

³ http://www.dailymail.co.uk/reader-comments/p/asset/readcomments/xxxxxx – where ‘xxxxxx’ should be replaced with the ID of the desired article (a seven-digit string that can be found in the article URL).
⁴ https://github.com/scikit-learn/scikit-learn
⁵ https://github.com/explosion/spaCy

Party                Daily average        Daily total
                     Mean     SD          Mean      SD
Conservatives        .30      .08         192.5     143.34
Labour               .33      .07         297.41    144.77
Liberal Democrats    .36      .06         210.16    74.76
Greens               .30      .04         134.34    53.89
Plaid                .27      .09         15.16     7.58
SNP                  .22      .06         24.77     11.83
Brexit               .34      .06         99        54.174
Other                .24      .25         8.05      16.39

Table 2. Descriptive statistics of negativity in tweets by party.
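A minimal sketch of this training pipeline is shown below. The spaCy model name, the MLP parameter grid, and the `texts`/`labels` variables (the 4,000 manually coded tweets and their codes) are illustrative assumptions; the parameter space actually searched is documented in Appendix B.1.

import numpy as np
import spacy
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.neural_network import MLPClassifier

nlp = spacy.load("en_core_web_md")  # pre-trained word vectors (exact model is an assumption)

def embed(text):
    """Lowercase, drop punctuation and stop words, and average the remaining word vectors."""
    doc = nlp(text.lower())
    vectors = [tok.vector for tok in doc if not (tok.is_stop or tok.is_punct)]
    return np.mean(vectors, axis=0) if vectors else np.zeros(nlp.vocab.vectors_length)

X = np.vstack([embed(t) for t in texts])             # `texts`: manually coded tweets (assumed)
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=42)

# Five-fold cross-validated grid search over a small, purely illustrative parameter space
grid = GridSearchCV(
    MLPClassifier(max_iter=500),
    param_grid={"hidden_layer_sizes": [(50,), (100,)], "alpha": [1e-4, 1e-3]},
    cv=5,
)
grid.fit(X_train, y_train)
print(grid.best_params_, grid.score(X_test, y_test))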


Estimating Anger and Fear

As with campaign negativity, machine learning models were employed to estimate the levels of anger and fear in the reply tweets. In this case, however, the training data were collected via crowdsourcing on the Amazon Mechanical Turk website⁷. The workers were presented with a tweet and asked to indicate whether, in their opinion, the tweet conveyed the emotions of anger or fear. The answer options were: “fear”, “anger”, “both”, and “neither” (verbatim worker instructions are available in Appendix C). Overall, 7,500 tweets were coded, each by three workers. Only those tweets on which at least two out of three coders provided the same answer were used for the training dataset (N = 5,327).
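The agreement filter described above amounts to a simple majority vote over the three crowdsourced labels; the sketch below shows one way to implement it, with `annotations` (a mapping from tweet ID to the three worker labels) assumed rather than taken from the thesis.

from collections import Counter

def majority_label(labels):
    """Return the label at least two of the three workers agreed on, or None."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= 2 else None

# `annotations`: {tweet_id: ["anger", "neither", "anger"], ...} (assumed)
training_data = {
    tweet_id: majority_label(labels)
    for tweet_id, labels in annotations.items()
    if majority_label(labels) is not None
}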

Two separate models were built for predicting anger and fear. The process of training the models was essentially the same as with the negativity model, with a few exceptions. A support vector machine classifier (SVM; Suykens & Vandewalle, 1999; Appendix B.2) was used for the anger predictions, as it exhibited better performance than the MLP classifier. Otherwise, the same procedure of preprocessing the data, cross-validating five-fold on the training dataset, and then assessing performance on the testing dataset was employed. The model’s weighted accuracy was .82; the f1 score was .89 for absence of anger and .62 for its presence.

When predicting fear, the classifiers available in the scikit-learn package did not demonstrate satisfactory performance. Consequently, I opted for a package with more customization options, Tensorflow (Abadi et al., 2016), which allows for fine-grained model specification. With Tensorflow, a bidirectional LSTM model (Appendix B.3) was constructed and trained for three epochs on the training dataset. Evaluating the performance against the testing dataset suggested a weighted accuracy of .91, with an f1 score of .95 for absence of fear and .67 for its presence. The table below (Table 3) presents the descriptive statistics of public emotional reactions by party. Figure 2 shows the distribution of average anger and fear levels by day by party.
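As an illustration of what such a model can look like in modern Keras/TensorFlow, the sketch below builds a small bidirectional LSTM fear classifier. The exact architecture and hyperparameters used in the thesis are listed in Appendix B.3; the vocabulary size, sequence length, embedding dimension, and the `train_texts`/`train_labels` variables here are placeholders.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# `train_texts`: raw tweet strings; `train_labels`: 1 = fear expressed, 0 = not (assumed)
vectorizer = layers.TextVectorization(max_tokens=20_000, output_sequence_length=60)
vectorizer.adapt(train_texts)

model = tf.keras.Sequential([
    vectorizer,                                           # raw text in, integer token ids out
    layers.Embedding(input_dim=20_000, output_dim=128, mask_zero=True),
    layers.Bidirectional(layers.LSTM(64)),                # reads the sequence in both directions
    layers.Dense(1, activation="sigmoid"),                # probability that the tweet expresses fear
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(np.array(train_texts), np.array(train_labels), epochs=3, validation_split=0.1)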

⁷ https://www.mturk.com/. The workers were paid $0.05 per task (HIT).

Party               Daily average                   Daily total
                    Anger          Fear             Anger            Fear
                    Mean   SD      Mean   SD        Mean    SD       Mean    SD
Conservatives       .09    .05     .01    .04       11.61   9.25     23.51   11.36
Labour              .05    .03     .06    .03       28.97   23.52    38.82   24.69
Liberal Democrats   .05    .05     .07    .05       15.64   12.96    24.66   20.09
Plaid               .02    .05     .05    .12       0.34    1.01     1.66    3.16
SNP                 .02    .03     .07    .08       2.26    3.71     6.32    8.46
Brexit              .06    .04     .09    .07       11.61   9.26     15.51   10.99
Other               .03    .09     .05    .12       8.05    16.39    2.65    5.87

Table 3. Descriptive statistics of public emotional reactions by party.

Results

Negativity and Emotions

To test whether negativity in the candidates’ tweets is predictive of the emotions expressed in the replies to these tweets (H1a and H1b), time series (N = 26) were constructed for average⁸ daily values of negativity, anger, and fear at the total daily and by-party daily aggregation levels. Additionally, linear regressions were modelled for the relationship between values of negativity and of anger and fear by constituency. In this and all subsequent analyses, only the three major parties (Conservatives, Labour, and Liberal Democrats) are discussed, due to the extreme skewness of the Twitter reply and DM comments data in favor of these parties, as mentioned in the methods section. Daily Mail data was not used for the constituency-level analysis due to insufficient data in some of the constituencies (as discussed above).

⁸ For this and all subsequent analyses, sum total values were also checked; they exhibited comparable results and are not included here out of conciseness considerations.

The time series were checked for stationarity using the ADF and KPSS tests. In all cases, the results of the tests suggested that the time series were not stationary. Accordingly, differencing transformations were performed on the time series and the tests rerun. In all cases the tests then indicated that the time series were stationary (p < .05 for the ADF tests and p > .10 for the KPSS tests for level stationarity). Afterwards, vector autoregression (VAR) models were estimated for the bivariate time series (negativity vs anger/fear on the two aggregation levels). The appropriate lag values were chosen on the basis of the AIC, HQ, SC, and FPE criteria (Table 4). The models were tested for normality of residuals using the Jarque-Bera test (with p > .10 in all cases). Figure 4 presents the time series of differenced negativity, anger, and fear levels by party and in total. Finally, instantaneous and Granger causality tests were run to determine whether the level of negativity predicts the levels of anger and fear.
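The following sketch illustrates this pipeline with statsmodels; it is not the original analysis script (which the thesis does not include), and the column names and maximum lag are assumptions. It runs the ADF and KPSS checks, differences the series, selects the VAR lag by information criteria, and then performs the Granger and instantaneous causality tests.

import pandas as pd
from statsmodels.tsa.stattools import adfuller, kpss
from statsmodels.tsa.api import VAR

# `df`: 26 daily observations with columns "negativity" and "fear" (hypothetical names)
adf_p = adfuller(df["negativity"])[1]                # H0: unit root (non-stationary)
kpss_p = kpss(df["negativity"], regression="c")[1]   # H0: level stationarity

diffed = df.diff().dropna()                          # first-order differencing

model = VAR(diffed)
print(model.select_order(maxlags=4).summary())       # AIC, BIC (SC), FPE, HQIC per lag
results = model.fit(maxlags=4, ic="aic")

granger = results.test_causality("fear", ["negativity"], kind="f")   # Granger causality
instant = results.test_inst_causality(["negativity"])                # instantaneous causality
print(granger.summary())
print(instant.summary())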

                      Lag of negativity and:
                      Anger    Fear
Total                 3        1
Conservatives         2        2
Labour                2        4
Liberal Democrats     3        2

Table 4. Chosen lag values in the VAR models of the relationship between negativity and emotions.


On the aggregate level, the test for Granger causality between negativity and fear produced no significant results (p > .05). The test for instantaneous causality was significant (p < .05). The negativity values explained 38% of the variance in fear values⁹. The results of both the Granger and instantaneous causality tests were not statistically significant for the relationship between negativity and anger.

The results of these tests by party are summarized in the table below (Table 5). For the relationship between negativity and fear, the analyses of the Conservative and Labour parties produced significant results (p = .02, R² = .37 and p = .01, R² = .04, respectively). For the relationship between negativity and anger, no statistically significant results were found.

Party               Anger                                Fear
                    Granger    Instantaneous    η²       Granger    Instantaneous    η²
Conservatives       >.10       >.10             -        >.10       .02(-)           .37
Labour              >.10       >.10             -        .01(+)     >.10             .32
Liberal Democrats   .09        >.10             -        >.10       >.10             -
Total               .07        >.10             -        .02(-)     <.01(-)          .38

Table 5. Granger and instantaneous causality p values for the relationship between negativity and emotions by party and in total. Significant values are denoted in bold. For the significant effects, the direction of the effect is indicated in brackets.

To estimate the constituency-level regression models, the time series of the daily negativity and emotions levels were used to forecast their respective values on the day of the election. The values were predicted using ARIMA modeling. The ARIMA parameters were chosen using the algorithm proposed by Hyndman and Khandakar (2008), whereby a number of models with varying parameter combinations is estimated and the best-fitting model is returned. The effective number of candidates (Laakso & Taagepera, 1979) and constituency size were used as controls. Negativity values did not significantly predict anger values, R² = .04, F(116) = 1.45, p > .10. Negativity values also did not significantly predict fear values, R² = .03, F(116) = 1.30, p > .10.
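In Python, the Hyndman–Khandakar stepwise search is available through pmdarima’s auto_arima; the sketch below shows how an election-day value could be forecast for one constituency series. This is an illustrative substitute (the cited algorithm comes from the R forecast package), and the series name is a placeholder.

import pmdarima as pm

# `negativity_series`: daily average negativity for one constituency,
# ending the day before the election (assumed)
model = pm.auto_arima(negativity_series, seasonal=False, stepwise=True, suppress_warnings=True)
forecast = model.predict(n_periods=1)   # one-step-ahead forecast for election day
print(model.order, forecast)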

Negativity, Emotions, and Voting intention

To test whether levels of negativity are predictive of voting intention (H2a) and whether anger and fear are predictive of voting intention (H2b – H2c), I modeled bivariate time series (daily; N = 26) for a party’s position in the polls and the average levels of negativity, anger, and fear for that party. Again, the time series were made stationary using differencing, appropriate lags were chosen on the basis of the AIC, HQ, SC, and FPE criteria (Table 6), the VAR models were checked for normality, and Granger and instantaneous causality tests were carried out. Table 7 summarizes the results of these tests. Figure 5 presents the (differenced) time series of negativity, anger, fear, and position in polls by party.

⁹ Since neither of the two tests innately produces a statistic for explained variance, the η² was calculated using the formula: η² = (SS

Party                Lag for poll position and:
                     Negativity    Anger    Fear
Conservatives        2             2        3
Labour               2             2        1
Liberal Democrats    1             1        1

Table 6. Chosen lag values in the VAR models of the relationship with the position in polls.

Party           Negativity                     Anger                          Fear
                Granger    Instant.    η²      Granger    Instant.    η²      Granger    Instant.    η²
Conservatives   .01(+)     >.10        .32     >.10       >.10        -       .04(-)     >.10        .20
Labour          >.10       >.10        -       <.01(+)    >.10        .30     .05(+)     .10         .14
Lib-Dems        >.10       >.10        -       >.10       >.10        -       >.10       >.10        -

Table 7. Granger and instantaneous causality p values by party for the relationship with position in polls. Significant values are denoted in bold. For the significant effects, the direction of the effect is indicated in brackets.

Negativity, Emotions, and Turnout

To test whether negativity has an effect on voter turnout via the mediation of emotions, linear regression models were estimated with voter turnout per constituency as the DV and the levels of negativity, anger, and fear per constituency as the IVs. Incumbency, the effective number of candidates (Laakso & Taagepera, 1979), constituency size, and constituency urbanization


level were included as controls. As before, the time series of the daily negativity and emotions levels were used to forecast their respective values on the day of the election using automatic ARIMA modeling (Hyndman & Khandakar, 2008). For voter turnout, two measures were used: the proportion of voters who took part in the election out of all eligible voters per constituency (turnout; 0–100) and the fraction of change in this proportion (turnout change; 0–1) from the previous general election of 2017 (calculated by subtracting the 2017 turnout value from the 2019 turnout value). The results of these regressions are presented below (Table 8).

Turnout
Model / predictor     b        t        p        F       df     p        Adj. R²
Negativity model                                 4.01    113    >.10     .15
  Negativity          -9.56    -1.17    >.10
  Const. size         8.3      3.39     <.01
  ENC                 -1.22    -.57     >.10
  Incumbency          .68      .35      >.10
  Urbanization        -.04     1.40     >.10
Emotions model                                   .97     112    >.10     .16
  Anger               -33.3    1.63     >.10
  Fear                11.29    .57      >.10
  Const. size         7.82     3.12     <.01
  ENC                 -1.52    -.69     >.10
  Incumbency          .07      .04      >.10
  Urbanization        -.03     -1.29    >.10

Turnout change
Model / predictor     b        t        p        F       df     p        Adj. R²
Negativity model                                 .95     113    >.10     .12
  Negativity          .09      3.8      >.10
  Const. size         .05      1.69     .09
  ENC                 .03      .93      .07
  Incumbency          .10      .32      .75
  Urbanization        -.01     -1.2     .22
Emotions model                                   .97     112    >.10     -.01
  Anger               -.32     -1.08    >.10
  Fear                -.09     -.30     >.10
  Const. size         .05      1.42     >.10
  ENC                 .02      .70      >.10
  Incumbency          .01      .20      >.10
  Urbanization        .01      -1.22    >.10

Table 8. Regression results for the relationship between negativity and turnout and between emotions and turnout. ENC stands for the effective number of candidates.
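A sketch of how such a model can be estimated with statsmodels’ formula interface follows; the dataframe and column names are illustrative, not taken from the thesis. It also shows the Laakso–Taagepera effective number of candidates, computed as one over the sum of squared vote shares.

import statsmodels.formula.api as smf

def effective_number_of_candidates(vote_shares):
    """Laakso & Taagepera (1979): ENC = 1 / sum of squared vote shares."""
    return 1.0 / sum(share ** 2 for share in vote_shares)

# `constituencies`: one row per constituency with forecast anger and fear values,
# turnout, and the control variables (hypothetical column names)
model = smf.ols(
    "turnout ~ anger + fear + const_size + enc + incumbency + urbanization",
    data=constituencies,
).fit()
print(model.summary())   # coefficients (b), t values, p values, F statistic, adjusted R²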

Discussion and Conclusion

Results at a Glance

This study investigated whether the effects of negative campaigning on voter turnout and voting intention are mediated by the emotions of anger and fear experienced by the electorate. To answer this research question, I estimated the levels of negativity in the 2019 UK general election candidates’ campaigns and the levels of anger and fear in the public discourse surrounding these campaigns using supervised machine learning. I then used time series and regression analyses to test whether negativity in political campaigns predicts levels of anger and fear surrounding these campaigns on the national, party, and per-constituency levels of analysis. Afterwards, I tested whether negativity predicts voting intention by modelling bivariate time series between the campaign negativity of a party and the party’s position in election polls. In like manner, I also tested whether the anger and fear surrounding party campaigns are predictive of voting intention. Lastly, I used linear regression analyses to determine how the levels of negativity, anger, and fear in a given constituency relate to voter turnout in that constituency.

The results provided no definitive evidence for the relationship between negativity in the electoral campaigns and the emotions experienced by the electorate. On the national aggregate level, negativity significantly positively predicted fear, but no statistically significant relationship was observed between negativity and anger. On the party level, Conservative and Labour candidates’ negativity also significantly predicted fear in the public discourse around their campaigns. Curiously, negativity values were positively associated with fear values for Labour, whereas for the Conservatives the relationship was negative. No such relationship was observed for the Liberal Democrats. There were no significant relationships between negativity and anger for any of the parties. At the constituency level, no statistically significant relationship was observed between negativity and either anger or fear. Overall, partial support is found for hypothesis H1b (negativity to fear) and no support is found for hypothesis H1a (negativity to anger).

The investigation of the effects of negativity, anger, and fear on voting intention produced equally uncertain results. For the Conservatives, negativity significantly and positively predicted the party’s position in the polls; fear, on the other hand, had a significant negative effect. For Labour, negativity did not significantly predict the party’s standing in the polls; the effects of anger and fear, however, were both significant and positive. No statistically significant relationships were found for the Liberal Democrats. Overall, only partial support is found for hypotheses H2a – H2c.


No significant relationship between negativity and turnout was observed. Anger and fear also did not predict turnout. Thus, hypotheses H3a – H3c were rejected.

Interpreting such mixed results presents quite a challenge. Still, when focusing only on the two largest parties – the Conservatives and Labour – a clearer pattern emerges: campaign negativity impacts the levels of fear in the electorate, and fear exerts an effect on voting intention. So for these two parties, there is an argument to be made that fear does in fact mediate the relationship between negativity and voting intention. Interestingly, it seems this relationship can go in opposite directions: negative campaigning reduces the levels of fear and fear negatively affects voting intention for the Conservatives, while for Labour negative campaigning has a positive effect on fear and fear positively affects voting intention.

Overall, in answering the research question of whether the effect of negative campaigning on voter turnout and voting intention is mediated by anger and fear, what can be concluded is: yes, but only for certain parties, only when it comes to voting intention, and only via mediation of fear.

Limitations

Before delving into the discussion of these results, the limitations encountered in this study ought to be mentioned. Firstly, all the analyses in this study were conducted on an aggregate level, examining the trends by party and by constituency. Consequently, caution should be exercised when extrapolating these results to a lower level of analysis, such as the relationship between the negativity of an individual candidate and the emotions it evokes. Secondly, the anger and fear classification accuracies were suboptimal. While the classifiers were able to reliably predict which tweets did not contain expressions of anger or fear, the predictions of which tweets did express these emotions were accurate only in slightly over half of the cases. It is possible that such conservative classification overlooked some specific contexts in which these emotions were expressed. It is also possible that the emotions expressed in these omitted contexts relate to negative campaigning or voting intention differently from the emotions that the classifiers managed to detect.

Adding to that, it should be mentioned that the Amazon MTurk workers who provided the crowdsourced data annotation (used with the anger and fear classifiers) were not UK citizens. Although the workers were asked to indicate what emotion they think the author of the tweet experienced, it is possible that their interpretation of the emotionality of the tweet was


inaccurate due to their lack of the relevant societal and political knowledge that would be available to a UK citizen.

Lastly, enthusiasm was not examined as a possible emotional response to campaigning messages in this study. AIT postulates that voters can experience the feeling of enthusiasm when confronted with messages that conform to their pre-existing convictions (Marcus et al., 2000). While it is reasonable to expect that attacks in negative campaigning messages would expose the voter to incongruent information, thus resulting in anger or fear per AIT, it has been suggested that in some cases, when voters are exposed to novel albeit negatively-framed information, negative campaign messages can also induce enthusiasm (e.g. Nai, 2013). And since it has been demonstrated that enthusiasm impacts voter behavior, it might be reasonable to consider this variable when examining the mediating role of emotions in negative campaigning.

Discussion

These limitations notwithstanding, the study contributes several theoretical insights to the body of research on negative campaigning. The results help elucidate the relationship between negative campaigning and the emotional reactions it elicits in the voter. The few previous studies that examined the relationship between negative messages and the emotions they elicit found that such messages can evoke both fear and anger (Russo, 2016), with Soroka, Young, and Balmas (2015) finding that anger is more strongly linked to negativity than fear. In the present study, on the contrary, no relationship between negativity and anger is observed. As such, it seems that the evocation of anger by negative campaigning could be context-dependent and does not occur universally. This possibility should be addressed in future research to better understand the dynamics between negative campaigning and emotions.

Furthermore, the results of the present study suggest that negativity can sometimes suppress fear, contrary to theoretical expectations (Martin, 2004; Russo, 2016). To speculate, this discrepancy could be a result of the voters’ enthusiasm – an emotion postulated by AIT but not addressed in this study. Nai (2013) suggested that a certain style of negative campaigning (negative policy change campaigns specifically) can elicit feelings of enthusiasm in the electorate. Following AIT, fear and enthusiasm have contrasting effects and are unlikely to occur simultaneously to an equal degree. Perhaps, then, if the Conservatives’ negative campaigning elicited enthusiasm, it also, on the flip side, decreased the general levels of fear associated with the campaign. Of course, this is a purely speculative proposition, but it would be curious to see the interplay between different emotions produced by negative campaigning addressed in future research.

The opposite effects of fear on voting intention also deserve some attention. Prior research suggests that fear tends to dissuade undecided voters from political participation (e.g. Marcus et al., 2000; Weber, 2013). In every election there is a base of voters with no strong preference for any given party. The Conservatives were, on average, the most popular party during the pre-electoral period; then perhaps the undecided voter base had, on average, a slight preference for the Conservatives, too. In that case, increased fear would hinder the Conservatives’ standing in the polls by deactivating the undecided voter base; for the same reason, it would also benefit Labour. Again, this is merely speculation, but such a dynamic could explain the inconsistencies in the observed effect of fear on the voter (e.g. Valentino et al., 2011).

From a methodological standpoint, this paper introduces a novel approach to studying the relationship between political campaigning and voters’ emotional responses. Previous studies relied on either survey or experimental data when examining this relationship (e.g. Marcus et al., 2000; Brader, 2005; Valentino et al., 2011). Conversely, in this paper I made use of data derived directly from messages posted during the electoral campaign period. This approach, coupled with time series modeling, made it possible to analyze how the relationship between negativity and emotions developed over time in a real-life election setting. Additionally, it allowed for investigation of how the evolution of these variables over time affected citizens’ voting intention, using contemporaneous election polls. This framework should be implemented in future studies on the topic to determine whether the findings derived using this methodology corroborate the results arrived at through survey and experimental data.

References

Allen, B., & Stevens, D. P. (2010). Truth in advertising? Visuals, sound, and the factual accuracy of political advertising. In Annual Meeting of the American Political Science Association (pp. 1-4).

Ansolabehere, S., & Iyengar, S. (1995). Going Negative: How Attack Ads Shrink and Polarize the Electorate. New York: Free Press.

Ansolabehere, S., Iyengar, S., Simon, A., & Valentino, N. (1994). Does attack advertising demobilize the electorate? American Political Science Review, 88(4), 829–838.

Anstead, N., & O'Loughlin, B. (2015). Social media analysis and public opinion: The 2010 UK general election. Journal of Computer-Mediated Communication, 20(2), 204–220.

Barberá, P., & Rivero, G. (2014). Understanding the political representativeness of Twitter users. Social Science Computer Review. Advance online publication.

Baron, R. M., & Kenny, D. A. (1986). The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51(6), 1173.

Bode, L., & Dalrymple, K. E. (2014). Politics in 140 characters or less: Campaign communication, network interaction, and political participation on Twitter. Journal of Political Marketing, 1–22. Advance online publication.

Brader, T. (2005). Striking a responsive chord: How political ads motivate and persuade voters by appealing to emotions. American Journal of Political Science, 49, 388–405.

Brader, T., Valentino, N. A., & Suhay, E. (2008). What triggers public opposition to immigration? Fear, group cues, and immigration threat. American Journal of Political Science, 52, 959–978.

Brooks, D. J., & Geer, J. G. (2007). Beyond negativity: The effects of incivility on the electorate. American Journal of Political Science, 51(1), 1-16.

Civettini, A. J., & Redlawsk, D. P. (2009). Voters, emotions, and memory. Political Psychology.


Conover, M. D., Ratkiewicz, J., Francisco, M., Gonçalves, B., Menczer, F., & Flammini, A. (2011). Political polarization on twitter. In Fifth international AAAI conference on weblogs and social media.

Crigler, A., Just, M., & Belt, T. (2006). The three faces of negative campaigning: The democratic implications of attack ads, cynical news and fear‐arousing messages. In D. Redlawsk (Ed.), Feeling politics: Emotion in political information processing. New York: Palgrave Macmillan.

Druckman, J. N., Kifer, M. J., & Parkin, M. (2010). Timeless strategy meets new medium: Going negative on congressional campaign Web sites, 2002–2006. Political Communication, 27(1), 88-103.

Fridkin, K. L., & Kenney, P. (2011). Variability in citizens’ reactions to different types of negative campaigns. American Journal of Political Science, 55(2), 307-325.

Geer, J.G. (2006). In Defence of Negativity. Attack Ads in Presidential Campaigns. Chicago: University of Chicago Press.

Graham, T., Broersma, M., Hazelhoff, K., & Van't Haar, G. (2013). Between broadcasting political messages and interacting with voters: The use of Twitter during the 2010 UK general election campaign. Information, Communication & Society, 16(5), 692-716.

Graham, T., Jackson, D., & Broersma, M. (2016). New platform, old habits? Candidates’ use of Twitter during the 2010 British and Dutch general election campaigns. New Media & Society, 18(5), 765-783.

Groenendyk, E. W., Banks, A. J. (2010). Emotional rescue: How affect helps partisans overcome collective action problems. Paper presented at the International Society of Political Psychology Annual Meeting, San Francisco, CA.

Haselmayer, M. (2019). Negative campaigning and its consequences: a review and a look ahead. French Politics, 1-18.

Hyndman, R. J., & Khandakar, Y. (2008). Automatic time series forecasting: The forecast package for R. Journal of Statistical Software, 26(3).


Ito, T. A., Larsen, J. T., Smith, N. K., & Cacioppo, J. T. (1998). Negative information weighs more heavily on the brain: The negativity bias in evaluative categorizations. Journal of Personality and Social Psychology, 75(4), 887.

Jungherr, A. (2016). Twitter use in election campaigns: A systematic literature review. Journal of information technology & politics, 13(1), 72-91.

Kahn, K. F. & Kenney, P. J. (1998). Negative advertising and an informed electorate: How negative campaigning enhances knowledge of senate elections. Paper presented at the Conference on Political Advertising in Election Campaigns, Washington, DC.

Kiss, Z., & Hobolt, S. (2012). Negative campaigning, emotions and political participation. In EJECDEM Final Conference., Florence.

Krupnikov, Y. (2011). When does negativity demobilize? Tracing the conditional effect of negative campaigning on voter turnout. American Journal of Political Science, 55(4), 797-813.

Kwon, K. H., Bang, C. C., Egnoto, M., & Raghav Rao, H. (2016). Social media rumors as improvised public opinion: Semantic network analyses of Twitter discourses during Korean saber rattling 2013. Asian Journal of Communication, 26(3), 201-222.

Laakso, M., & Taagepera, R. (1979). “Effective” number of parties: A measure with application to West Europe. Comparative Political Studies, 12(1), 3-27.

Lau, R. R., Sigelman, L., & Rovner, I. B. (2007). The effects of negative political campaigns: A meta-analytic reassessment. The Journal of Politics, 69(4), 1176-1209.

Lau, R. R., & Pomper, G. M. (2002). Effectiveness of negative campaigning in US Senate elections. American Journal of Political Science, 46(1), 47-66.

Lau, R. R., & Rovner, I. B. (2009). Negative campaigning. Annual Review of Political Science, 12(1), 285-306.

Lipsitz, K., & Geer, J. G. (2017). Rethinking the concept of negativity: An empirical approach. Political Research Quarterly, 70(3), 577-589.

Marcus, G. E., Neuman, W. R., & MacKuen, M. (2000). Affective intelligence and political judgment. Chicago: University of Chicago Press.


Mark, D. (2006). Going dirty: The art of negative campaigning. Lanham, MD: Rowman & Littlefield.

Martin, P. S. (2004). Inside the black box of negative campaign effects: Three reasons why negative campaigns mobilize. Political psychology, 25(4), 545-562.

Mattes, K., & Redlawsk, D. P. (2014). The positive case for negative campaigning. University of Chicago Press.

McKinney, M. S., Houston, J. B., & Hawthorne, J. (2014). Social watching a 2012 Republican presidential primary debate. American Behavioral Scientist, 58(4), 556-573. doi:10.1177/0002764213506211

Murthy, D. (2015). Twitter and elections: are tweets, predictive, reactive, or a form of buzz?. Information, Communication & Society, 18(7), 816-831.

Nabi, R. L. (2003). Exploring the framing effects of emotion: Do discrete emotions differentially influence information accessibility, information seeking, and policy preference?. Communication Research, 30(2), 224-247.

Nai, A. (2013). What really matters is which camp goes dirty: Differential effects of negative campaigning on turnout during Swiss federal ballots. European Journal of Political Research, 52(1), 44-70.

Nai, A., & Walter, A.S. (2015). The War of Words: The Art of Negative Campaigning. Why Attack Politics Matter. In New Perspectives on Negative Campaigning, ed. A. Nai and A.S. Walter, 3–33. Colchester: ECPR Press.

Namkoong, K., Fung, T. K., & Scheufele, D. A. (2012). The politics of emotion: News media attention, emotional responses, and participation during the 2004 US presidential election. Mass Communication and Society, 15(1), 25-45.

Perloff, R. M., & Kinsey, D. (1992). Political advertising as seen by consultants and journalists. Journal of Advertising Research, 32(3), 53–60.

Riker, W.H. (1996). The Strategy of Rhetoric: Campaigning for the American Constitution. New Haven: Yale University Press.

Rozin, P., & Royzman, E. B. (2001). Negativity bias, negativity dominance, and contagion. Personality and social psychology review, 5(4), 296-320.



Russell, J. A., & Mehrabian, A. (1977). Evidence for a three-factor theory of emotions. Journal of Research in Personality, 11(3), 273-294.

Russo, S. (2016). Explaining the effects of exposure to negative campaigning. The mediating role of emotions. Psicologia sociale, 11(3), 307-318.

Schwartz, J. (2016). Top U.K. Media Publishers and Publications – Ranked for 2015. Retrieved from https://www.similarweb.com/blog/index-top-u-k-media-publishers-and-publications-of-2015

Sides, J., Lipsitz, K., & Grossmann, M. (2010). Do voters perceive negative campaigns as informative campaigns?. American Politics Research, 38(3), 502-530.

Smith, E. R., Seger, C. R., & Mackie, D. M. (2007). Can emotions be truly group level? Evidence regarding four conceptual criteria. Journal of Personality and Social Psychology, 93, 431-446.

Sobieraj, S., & Berry, J. M. (2011). From incivility to outrage: Political discourse in blogs, talk radio, and cable news. Political Communication, 28(1), 19-41.

Soroka, S., & McAdams, S. (2010). An experimental study of the differential effects of positive versus negative news content. Paper presented at the Elections, Public Opinion and Parties Annual Conference, University of Essex, 10–12 September.

Soroka, S., & McAdams, S. (2015). News, politics, and negativity. Political Communication, 32(1), 1-22.

Soroka, S., Young, L., & Balmas, M. (2015). Bad news or mad news? Sentiment scoring of negativity, fear, and anger in news content. The ANNALS of the American Academy of Political and Social Science, 659(1), 108-121.

Taylor, H. (2017). How Old Are You Again? UK Newspaper Age Demographics in 4 Charts. The Media Briefing.

Thorson, E., Ognianova, E., Coyle, J., & Denton, F. (2000). Negative political ads and negative citizen orientations toward politics. Journal of Current Issues & Research in Advertising.


Tumasjan, A., Sprenger, T. O., Sandner, P. G., & Welpe, I. M. (2010). Predicting elections with Twitter: What 140 characters reveal about political sentiment. In Fourth international AAAI conference on weblogs and social media.

Vaccari, C., Valeriani, A., Barberá, P., Bonneau, R., Jost, J. T., Nagler, J., & Tucker, J. (2013). Social media and political communication: A survey of Twitter users during the 2013 Italian general election. Rivista Italiana di Scienza Politica, 43(3), 325-355. doi:10.1426/75245

Vaish, A., Grossmann, T., & Woodward, A. (2008). Not all emotions are created equal: The negativity bias in social-emotional development. Psychological Bulletin, 134(3), 383.

Valentino, N. A., & Neuner, F. G. (2017). Why the sky didn't fall: Mobilizing anger in reaction to voter ID laws. Political Psychology, 38(2), 331-350.

Valentino, N. A., Brader, T., Groenendyk, E. W., Gregorowicz, K., & Hutchings, V. L. (2011). Election night’s alright for fighting: The role of emotions in political participation. The Journal of Politics, 73(1), 156-170.

Wang, H., Can, D., Kazemzadeh, A., Bar, F., & Narayanan, S. (2012). A system for real-time Twitter sentiment analysis of 2012 US presidential election cycle. In Proceedings of the ACL 2012 system demonstrations (pp. 115-120). Association for Computational Linguistics.

West, D.M. (2014). Air Wars. Television Advertising and Social Media in Election Campaigns 1952–2012. Thousand Oaks: Sage.

Yaqub, U., Chun, S. A., Atluri, V., & Vaidya, J. (2017). Analysis of political discourse on twitter in the context of the 2016 US presidential elections. Government Information Quarterly, 34(4), 613-626.


Appendix A

List of the sampled UK electoral constituencies


Constituency Constituency ID Constituency Constituency ID

Faversham and Mid Kent E14000700 Bishop Auckland E14000569

Cambridge E14000617 Wallasey E14001010

Salisbury E14000912 Derby South E14000663

Sleaford and North Hykeham E14000929 Clwyd West W07000059

Sedgefield E14000915 South West Surrey E14000953

Scarborough and Whitby E14000913 Loughborough E14000797

South West Norfolk E14000952 Cardiff Central W07000050

North West Leicestershire E14000858 Houghton and Sunderland South E14000754

Birmingham, Ladywood E14000564 Broxbourne E14000606

Llanelli W07000045 Stockton North E14000970

Derby North E14000662 Stretford and Urmston E14000979

Preseli Pembrokeshire W07000065 Scunthorpe E14000914

Hemsworth E14000740 Birmingham, Hodge Hill E14000563

Leeds West E14000781 Bath E14000547

Worcester E14001052 Walthamstow E14001013

Ribble Valley E14000894 St Helens South and Whiston E14000963

Croydon South E14000656 Denton and Reddish E14000661

Brighton, Pavilion E14000598 Camborne and Redruth E14000616

Chesterfield E14000632 Kensington E14000768

Bolton South East E14000579 Sunderland Central E14000982

Maldon E14000806 Eltham E14000690

South Staffordshire E14000945 Aylesbury E14000538

Romford E14000900 Redditch E14000892

Walsall South E14001012 Bury North E14000611

Edinburgh East S14000022 Corby E14000648

Central Suffolk and North Ipswich E14000624 North Warwickshire E14000854

Bristol East E14000599 Plymouth, Moor View E14000879

North Thanet E14000852 Dulwich and West Norwood E14000673

Winchester E14001041 Ogmore W07000074

Edinburgh South S14000024 Edinburgh West S14000026

Islington North E14000763 Uxbridge and South Ruislip E14001007

Aldershot E14000530 Workington E14001053

North Wiltshire E14000860 West Bromwich East E14001029

Leicester West E14000784 Southport E14000958

Weston-Super-Mare E14001038 Banbury E14000539

Easington E14000677 Warrington South E14001018

Newcastle upon Tyne East E14000832 Rother Valley E14000903

Brent Central E14000591 North West Hampshire E14000857

Halton E14000725 Braintree E14000590

Slough E14000930 Penrith and The Border E14000877


Portsmouth North E14000883 East Renfrewshire S14000021

Bridgwater and West Somerset E14000595 Warwick and Leamington E14001019

Oldham East and Saddleworth E14000870 Sheffield, Hallam E14000922

Huntingdon E14000757 Dumfries and Galloway S14000013

Sefton Central E14000916 Aberdeen North S14000001

Pudsey E14000886 Glasgow North East S14000032

Darlington E14000658 Lanark and Hamilton East S14000042

Maidstone and The Weald E14000804 Broadland E14000603

North East Derbyshire E14000843 Fareham E14000699

Bexhill and Battle E14000557 Penistone and Stocksbridge E14000876

Aberconwy W07000058 Cynon Valley W07000070

Warrington North E14001017 Vauxhall E14001008

Halifax E14000723 Orpington E14000872

East Worthing and Shoreham E14000682 Leigh E14000785

Woking E14001047 Stoke-on-Trent South E14000974

Kingston upon Hull East E14000771 East Lothian S14000020

High Peak E14000748 North Norfolk E14000848

Maidenhead E14000803 Ayr, Carrick and Cumnock S14000006


Appendix B

Model parameter spaces

1. Negativity Estimation (MLP)

Parameter Options

Hidden layer sizes (50,100,50) (300) (100) (150,30) (300, 150, 50)

Activation function tanh relu

Solver sgd adam lbfgs

Alpha 0.0001 0.001 0.05 0.1

Learning rate constant adaptive

Note. Values chosen for the final classifier are highlighted in bold.
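The parameter names in the grid above correspond to scikit-learn's MLPClassifier arguments, so the search can be reproduced with GridSearchCV. The sketch below is illustrative only: the placeholder data, fold count, and scoring metric are assumptions rather than the study's exact configuration.

```python
from sklearn.datasets import make_classification  # placeholder data only
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

# Placeholder features; in the study these would be document vectors of the campaign tweets.
X, y = make_classification(n_samples=500, n_features=50, random_state=42)

# Parameter space taken from the table above.
param_grid = {
    "hidden_layer_sizes": [(50, 100, 50), (300,), (100,), (150, 30), (300, 150, 50)],
    "activation": ["tanh", "relu"],
    "solver": ["sgd", "adam", "lbfgs"],
    "alpha": [0.0001, 0.001, 0.05, 0.1],
    "learning_rate": ["constant", "adaptive"],
}

# Assumed 5-fold cross-validated grid search with F1 scoring.
search = GridSearchCV(MLPClassifier(max_iter=500), param_grid, cv=5, scoring="f1", n_jobs=-1)
search.fit(X, y)
print(search.best_params_)
```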

2. Anger Estimation (SVM)

Parameter Options

C .001 .01 .1 1 10

Gamma .001 .01 .1 1 auto

Kernel linear rbf polynomial

Note. Values chosen for the final classifier are highlighted in bold.
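The anger classifier's grid maps onto scikit-learn's SVC in the same way (the "polynomial" kernel is spelled "poly" in that API). A minimal sketch, under the same placeholder-data and evaluation assumptions as above:

```python
from sklearn.datasets import make_classification  # placeholder data only
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=50, random_state=42)

# Parameter space taken from the table above.
param_grid = {
    "C": [0.001, 0.01, 0.1, 1, 10],
    "gamma": [0.001, 0.01, 0.1, 1, "auto"],  # gamma is ignored by the linear kernel
    "kernel": ["linear", "rbf", "poly"],
}

search = GridSearchCV(SVC(), param_grid, cv=5, scoring="f1", n_jobs=-1)
search.fit(X, y)
print(search.best_params_)
```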

3. Fear Estimation (LSTM)

Layer (type) Output Shape Param #

input_3 (InputLayer) (None, 200) 0

embedding_3 (Embedding) (None, 200, 300) 4638900

lstm_2 (LSTM) (None, 128) 219648
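The layer summary above follows the format of a Keras model summary, and its parameter counts pin down most of the architecture: 4,638,900 embedding parameters at 300 dimensions imply a vocabulary of 15,463 tokens, and 219,648 LSTM parameters match a 128-unit layer over 300-dimensional inputs. The sketch below reconstructs that architecture; the output layer is not shown in the summary, so the single sigmoid unit is an assumption for binary fear classification rather than the exact head used in the study.

```python
from tensorflow.keras.layers import Dense, Embedding, Input, LSTM
from tensorflow.keras.models import Model

MAX_LEN = 200       # sequence length (input shape in the summary)
EMBED_DIM = 300     # embedding dimension (output shape in the summary)
VOCAB_SIZE = 15463  # inferred: 4,638,900 embedding parameters / 300 dimensions

inputs = Input(shape=(MAX_LEN,))
x = Embedding(VOCAB_SIZE, EMBED_DIM)(inputs)  # 15,463 * 300 = 4,638,900 parameters
x = LSTM(128)(x)                              # 4 * (128 * (300 + 128) + 128) = 219,648 parameters
outputs = Dense(1, activation="sigmoid")(x)   # assumed binary output head; not shown in the summary

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```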


Appendix C

Amazon Mechanical Turk worker instructions

Please read the tweet and select the emotion you think the author experienced when writing this tweet.

It is entirely possible that the language is non-emotional and it is impossible to tell whether any particular emotion was conveyed; in that case, you can choose 'Neither'.

In this task, we are only interested in expressions of anger and fear/anxiety. If neither of the options apply, please choose 'Neither'. If you think both anger and fear are present, please choose 'Both'.

Most of the tweets you will see were posted in reply to another tweet. Reply tweets contain a Twitter handle (username) of the author of the original tweet to which the reply is posted. For the purposes of this research, instead of the original Twitter handle you will see @addressee.

Due to varying levels of support, all emojis have been replaced with their descriptions (e.g. ":face_with_tears_of_joy:").
