
What, who and how?

How German journalists used and normalized Twitter

during the 2017 German Federal Election

Student Name: Anna Dittrich
Student Number: 11896701

Master’s Thesis

Graduate School of Communication

Master’s Programme: Communication Science (Erasmus Mundus Master “Journalism, Media and Globalisation”)

Supervisor: Dr. Judith Möller
Date: 30.05.2018


Abstract

Past studies have found that journalists mostly normalize their online behavior, meaning they act in similar ways online as they would offline while upholding traditional journalistic norms such as objective reporting. This study examines how journalists’ behavior on social media changes during elections and whether they are more likely to normalize how they act. I observed twenty German journalists on Twitter over two one-month periods, one before and one during the 2017 German federal election, to see whether times of intense journalistic work such as elections contributed to normalization, with journalists falling back on established norms due to the importance of election coverage for democratic systems. By looking at tweets containing opinions and opinion polls and at how the journalists interacted with audience members, I investigated whether the journalists adhered to traditional journalistic norms such as objectivity and the gatekeeper role and whether they followed election coverage trends such as horse-race coverage. The results showed that in most areas German journalists were not more likely to normalize their behavior during the election than before it. Some behaviors, such as tweeting opinions, even departed further from established journalistic norms during the election. However, journalists also proved to be more reluctant to interact with civilians during the election while interacting even more with other journalists. These results point towards journalists using Twitter as a medium to discuss and renegotiate journalistic norms and guidelines among themselves. Journalists might use social media like Twitter to develop new journalistic norms for the digital era within the profession.


1. Introduction

How journalists change their behavior on social media has been the subject of many studies, from their use of humor on Twitter (Holton & Lewis, 2011; Mourão, Diehl, & Vasudevan, 2016) to the expression of opinions (Lasorsa, Lewis, & Holton, 2012; Lawrence, Molyneux, Coddington, & Holton, 2014) and interactions with audience members (Molyneux & Mourão, 2017). While journalists were more open to expressing opinions, displaying humor and interacting with audience members, those studies also found that journalists still normalized much of their online behavior, showing the same patterns and norms as they did offline. Journalists, for example, prefer talking to other journalists on social media while ignoring civilians (Molyneux & Mourão, 2017), upholding their role as gatekeepers. With the increasing importance of social media as a news source for audiences all over the world (Newman, Fletcher, Kalogeropoulos, Levy, & Nielsen, 2017), it is important to understand exactly how journalists act on social media. If social media drastically alters established journalistic norms, it could also change journalism itself, which would affect citizens who use media to stay informed or to take part in public discussions or democratic processes. Very few studies so far, however, have looked into how journalistic behavior on social media changes during elections. Elections are an important part of democratic political systems, and the media is needed to provide voters with information and to act as a watchdog to ensure transparent elections (Ravi, 2017). Does the importance of elections and the media’s role in them therefore influence journalistic behavior on social media? Do journalists behave differently during elections compared to non-election times? Are they more likely to normalize their behavior during elections in order to conform to established journalistic norms due to the importance of election coverage? I aim to answer these questions through a content analysis of German journalists’ behavior on Twitter before and during the 2017 German federal election. First I will introduce my main theories and hypotheses, then I will explain my methods and present my results. Lastly I will analyze and discuss my findings.


2. Theories And Hypotheses

The Normalization Theory And Its Evolution

The internet and social media provide completely new ways for journalists to conduct their work, from getting story ideas and finding sources (Lariscy, Avery, Sweetser, & Howes, 2009) to communicating with their audiences. However, studies from the early days of the internet (Singer, 2005) to recent times (Bentivegna & Marchetti, 2018) have found that journalists tended to replicate already established behavior and norms in their online behavior. This process is called “normalization”. Margolis and Resnick introduced the term in 2000 when they described the normalization of online political communication, in which the equilibria and dynamics of the offline world were reproduced on the internet, leading to a process of normalization instead of one of revolution (Margolis & Resnick, 2000). Jane Singer’s 2005 study of journalist bloggers then adapted this normalization theory for journalism studies, as she found that journalists would uphold their role as gatekeepers even in a medium such as blogs, which is specifically designed for communication with audiences. Through a content analysis of twenty journalistic blogs over one week, she found that journalists saw the online sphere predominantly as a mere extension of their offline work, with content being recreated and little thought given to the potential and differences of producing online content (Singer, 2005).

However, as the web and social media evolved and more studies were conducted, the view of journalists completely recreating established behaviors and norms has been challenged. Several studies, for example, found that journalists on Twitter were more likely to express opinions (Lasorsa et al., 2012; Lawrence et al., 2014) compared to their behavior offline, while other studies found that journalists were more likely to promote themselves and their work or use social media to share jokes or humorous content (Molyneux, 2015; Mourão et al., 2016). As the internet and social media evolved, so did journalistic online behavior, away from a complete normalization and towards a “hybrid-normalization” in which journalists do adapt some of the characteristics of social media (Bentivegna & Marchetti, 2018). Some parts of this behavior could already be seen as early as 2005, when Singer (2005) observed that some of the journalist bloggers were more likely to express opinions, indicating that even back then journalists moved partly away from established norms such as objectivity.

However, Singer also noted that those expressions of opinions depended heavily on whether the journalist in question was a columnist or a reporter, with reporters being less likely to express opinions. This finding points towards an important reason why journalists normalize their behavior online, namely their own self-perception as journalists and how important they consider journalistic norms such as objectivity or the gatekeeper role. Deuze (2003) divided online journalism into different categories with different ways of dealing with the challenges of the digital world and noted that elite media outlets were more likely to recreate traditional norms and guidelines compared to other sites. Similarly, Lasorsa et al. (2012) found that journalists from mainstream media outlets were less likely to relinquish their gatekeeper role, meaning they were less likely to give opinions, talk about how they created news stories or socialize with audiences.

The question now is whether there are other reasons for journalists to normalize their online behavior. One such potential field could be election times. Elections are a time of increased journalistic activity (Molyneux & Mourão, 2017). An increased workload, coupled with the particularities of election coverage such as televised debates and campaign speeches, might lead to journalists falling back on already established behaviors, unable or unwilling to spend too much time pondering the alternatives offered by social media. Though to my knowledge no study has yet compared election reporting on social media with non-election reporting, I believe journalistic norms might be more commonly observed during election times than non-election times. Since many studies (Singer, 2005; Lasorsa et al., 2012; Lawrence et al., 2014) have looked at journalists’ opinions and their role as gatekeepers and at how those behaviors are normalized on social media, I believe these would be good areas to focus on. Another interesting area would be horse-race coverage, a specific characteristic of election coverage. If horse-race coverage increases during elections, it would also point towards journalists normalizing their online behavior. First, however, I will explain why I have chosen Twitter for this analysis.

Twitter

Twitter is a micro-blogging platform that allows its users to share short text messages (tweets), images or videos with other users. While it is among the most used social networks in Germany (Newman et al., 2017), it plays a much smaller role in political campaigns than in countries such as the UK or the US (Davies, 2017). It is nevertheless an important part of German election campaigns; the biggest political party in Germany, for example, used Twitter to launch the slogan for its election campaign in July 2017. While there are already studies that have looked at the use of Twitter in German elections, one very recent example being Kratzke (2017), who investigated the 2017 election, most of these studies focused on politicians or the election in general (Thimm, Einspänner, & Dang-Anh, 2012; Jürgens & Jungherr, 2015). Nuernbergk (2016) looked at the behavior of German journalists, but not during elections. There is very little research on the behavior of German journalists on Twitter in general, especially when compared to the vast amount of studies on this topic focused mainly on US journalists (for example Holton & Lewis, 2011; Lawrence et al., 2014; Molyneux & Mourão, 2017). I believe that taking a closer look at this is relevant due to the increasing importance of Twitter during campaigns and is also of academic value due to the little research done so far. Considering that Germany has a different media system (Hallin & Mancini, 2004) and a different political system than the US, German journalists might act differently from US journalists, especially during elections. In the next part of this study I will explain which behaviors could be more normalized during elections, starting with horse-race coverage.


Horse-race coverage focuses on opinion polls and poll numbers, the current position of candidates in the polls and how their actions affect their standings in those polls. Journalists prioritize those numbers over other areas such as policy proposals (Oh, 2016), believing that this focus on who is winning or losing the election creates more exciting and compelling stories (McLeod & Sotirovic, 2009). Over the years this type of coverage has steadily increased, a phenomenon that can also be observed in European countries (Banducci & Hanretty, 2014). Since horse-race coverage occurs during elections, the first hypothesis assumes that German journalists will tweet more about polls during the election than in the non-election period, recreating established offline behavior on Twitter.

H1: There will be more tweets mentioning polls during the election than before the election.

Since horse-race coverage is about the standing of individual politicians or political parties, it should also affect the main topics of tweets. During the election there should be an increased number of tweets about politicians and political parties, as journalists normalize their reporting on Twitter to fit in with horse-race coverage, making politicians and parties the center of media attention.

H2: There will be more tweets about politicians and parties during the election than before.

Objectivity And Sentiments

Many studies have found that journalists on Twitter are less likely to adhere to the objectivity norm and are trying to find ways to combine opinion-focused social media with their own journalistic norms about objective reporting (Lasorsa et al., 2012; Molyneux, 2015; Mourão et al., 2016). Objective reporting is one of the most important norms in journalism and has, as a concept, been widely discussed (Muñoz-Torres, 2012). In journalism, objective reporting means journalists report only the facts, without emotions and without commenting on them (Schudson, 2001). Wien (2005) argues that most journalists follow a positive concept of objectivity, in which objectivity is seen as a binary choice between facts and opinions (Wien, 2005): as soon as a statement is affected by an assessment, it ceases to be objective. While this definition of objectivity might be overly simplistic (and indeed, Wien goes on to explore several other definitions in her article), it fits the aim of this paper. It has also been widely debated whether objective journalism is even possible, considering the myriad of factors that influence a journalist’s work, from something as mundane as the printing space allocated to a story to the complex hierarchy of influences developed by Shoemaker and Reese (1996). For this paper, the important point is not whether objectivity is attainable but rather that objectivity is considered by many journalists to be an important norm that needs to be upheld (Schudson & Anderson, 2008). Twitter, however, has led to journalists expressing more opinions (Lasorsa et al., 2012; Lawrence et al., 2014), with less focus on objective and purely fact-based reporting. I assume that during elections the opposite will occur, with journalists being more objective due to an increased normalization effect. Elections and their importance for the democratic system might make journalists more conscious of the effects their opinions and non-objective reporting could have on election results, leading them to try to be more objective and to tweet fewer personal opinions.

H3: Journalists will produce fewer opinionated tweets during the election than before the election.

This does not mean, though, that journalists will not produce any opinions during the election. Following the assumptions made for H1 and H2 about horse-race coverage (Oh, 2016), I believe that the main topics of opinions will also change during elections. Due to an increasing focus on politicians and parties during elections, there might also be an increase in opinionated tweets about politicians and parties. Politicians and political parties are among the most visible actors of elections and actively try to get media attention to present themselves, giving journalists more opportunities to form opinions about them.

H4: There will be more opinionated tweets about politicians and political parties during the election than before the election.

Another subcategory of objectivity is the use of sentiments in opinions. Sentiments are the attitude, judgement or evaluation journalists display in their opinions, and they can be analyzed along different dimensions such as the strength or the orientation of the sentiment (Luo, Chen, Xu, & Zhou, 2013). Sentiment analysis can be used to differentiate between facts and opinions; for this paper, however, sentiment orientation is the important dimension. Sentiment orientation refers to whether an opinion is positive, negative or neutral in tone. Neutral opinions would be considered more objective than negative or positive opinions. A neutral opinion would be, for example, “I doubt Schulz will get much attention talking about labor laws today”, while a negative opinion would be “Schulz talking about labor laws today will be incredibly boring.” Since I argue that during the election journalists will orient themselves more towards objective reporting due to a normalization of their online behavior, this might also be expressed in the sentiment orientation of their opinions, with fewer positive and negative opinions and more neutral ones.

H5: There will be more neutral opinions during the election than before the election.

Gatekeeper Function And Audience Engagement

Social media and Twitter have also changed the way journalists interact and engage with their audiences. Traditionally, journalists could be seen as gatekeepers (White, 1964). They were the ones who had full access to almost all information and decided which information to relay to a bigger audience (White, 1964). The internet has changed the accessibility of information. The knowledge of the audience is no longer reliant on just the information provided by journalists or media outlets, with social media allowing for more feedback and conversations between journalists and their audiences. Some journalists have difficulties navigating between their roles as mere providers of information and their roles as active audience engagers (Hanusch & Bruns, 2017), while other studies found that some journalists are ready to embrace this kind of journalism (Brems, Temmerman, Graham, & Broersma, 2017). However, journalists tend to show a bias towards interactions with other journalists or people inside the media business. A study by Molyneux (2015) found that the sources journalists retweeted most often on Twitter were other journalists. Mourão et al. (2016) write that this is an example of normalization, with journalists trying to maintain their gatekeeping authority by mostly interacting with other journalists and elite sources to reduce the risk of being seen as biased. This paper therefore argues that potential normalization effects during election times can be observed in two areas. Firstly, journalists will interact less in general during the election than during non-election times. This is an orientation towards the standards of the gatekeeping role, with journalists not interacting with their audiences.

H6: There will be fewer interactions during the election than before the election.

Secondly, journalists will interact less with non-elite sources. Non-elite sources in this context are sources who are not politicians or other journalists, meaning civilians. Since journalists show a tendency to disfavor civilian sources even during non-election times due to gatekeeping processes, those effects might intensify during the election.

H7: There will be fewer interactions with civilians during the election than before the election.


3. Methods

Data collection and study design

To answer the research question of whether German journalists normalize their behavior during election times, I conducted a quantitative content analysis of tweets produced by German journalists during two months of the election cycle in Germany. The population is therefore a) German-based journalists reporting about German news for a German media outlet who b) are active on Twitter, meaning they had produced at least 1,000 tweets since joining Twitter. The journalists were selected through several Twitter lists of German media outlets, including public broadcasting companies such as the “Bayerischer Rundfunk” and the “ARD” and German print media like the “Süddeutsche Zeitung” and “BILD”. Other lists used included the Twitter lists of the German journalist union “Deutscher Journalisten-Verbund” as well as the German freelance journalist union “Freischreiber”. All Twitter lists were created by verified users with the exception of “Freischreiber”, where other sources were checked to ensure the authenticity of the list.

The identity of every selected journalist was cross-checked with the media outlet that employed them. Through this method a sample of 20 journalists was selected, ten of whom worked for television and ten for print. This ensured that the results would not apply only to print or only to television journalists. The tweets were collected through the website “TweetTunnel”, which displays not only tweets and retweets but also the journalists’ replies to other tweets. The coding was then carried out manually.

4,267 tweets produced by those 20 journalists during two time periods, each lasting four weeks, were coded. The first time period, serving as the pre-election period, ran from January 2nd to January 29th, seven months before the election in September. The second time period covered the month leading up to the election, from August 28th to September 24th. It was chosen since the eight weeks leading up to the election are considered to be the hot phase of German elections (Brettschneider, 2009). Since it was not possible to code eight weeks due to time limitations, it was decided to code the four weeks leading up to the election and compare them to the pre-election period in January. Several events that are specific to German elections also fall into the second time period, for example the televised debate between the leading candidates, the “TV-Duell”. Only tweets produced before September 24th, 17:59 were coded, since the first exit polls came in at 18:00.
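To make the sampling windows concrete, the sketch below shows how the two coding periods could be isolated programmatically. It is only an illustration under stated assumptions: the pandas DataFrame `tweets` and its `created_at` column are hypothetical placeholders, not part of the manual workflow described above.

```python
# Minimal sketch of the two coding windows, assuming a hypothetical pandas
# DataFrame `tweets` with a datetime column `created_at` (local German time).
import pandas as pd

tweets = pd.DataFrame({
    "created_at": pd.to_datetime([
        "2017-01-15 10:30",   # falls into the pre-election window
        "2017-09-03 20:15",   # falls into the election window
        "2017-09-24 18:30",   # after the exit-poll cutoff, excluded
    ]),
})

# Pre-election window: January 2nd to January 29th, 2017 (inclusive).
pre = (tweets["created_at"] >= "2017-01-02") & (tweets["created_at"] < "2017-01-30")

# Election window: August 28th to September 24th, 17:59 (exit polls at 18:00).
elec = (tweets["created_at"] >= "2017-08-28") & (tweets["created_at"] <= "2017-09-24 17:59")

tweets["period"] = None
tweets.loc[pre, "period"] = "pre-election"
tweets.loc[elec, "period"] = "election"
print(tweets)
```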

During these time periods the journalists produced 4,267 tweets, all of which were coded to determine whether they were political or not. Some of the journalists also used their Twitter accounts to post about their private lives, such as holidays or the weather. Since this paper is focused on the professional behavior of journalists, only tweets about political matters were coded. This ensured that tweets containing variables like opinions were made in the context of the journalists’ professional and not their private lives. Political tweets were defined using the three categories of polity, politics and policy (Henn, Dohle, & Vowe, 2013). Several definitions and lists for all three of these categories were developed; they are explained in more detail in Appendix 2. After this step, 2,597 tweets remained out of the sample of 4,267 tweets.
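The political filter itself was applied manually following the codebook in Appendix 2. Purely for illustration, a simplified automated version of that decision rule might look as follows; the keyword lists are hypothetical stand-ins for the full polity/politics/policy definitions, not the lists actually used in the study.

```python
# Illustrative (hypothetical) keyword filter mirroring codebook question A1:
# does the tweet mention a political actor, party, institution or act?
POLITY = {"bundestag", "grundgesetz", "wahl"}          # institutions, rules
POLITICS = {"merkel", "schulz", "cdu", "spd", "afd"}   # actors and parties
POLICY = {"rente", "steuern", "arbeitsmarkt"}          # policy fields

def is_political(text: str) -> bool:
    """Return True if any polity/politics/policy keyword appears in the tweet."""
    words = {w.strip(".,!?#@()\"").lower() for w in text.split()}
    return bool(words & (POLITY | POLITICS | POLICY))

print(is_political("Schulz spricht heute über Rente und Steuern."))  # True
print(is_political("Endlich Urlaub am Meer!"))                       # False
```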

Since only text was coded, tweets that contained only images, videos or emoticons were disregarded and not coded. In tweets that contained both text and an image or video, only the text was coded. If a tweet was a reply or a comment to another tweet that had since been deleted, making it impossible to understand the context of the tweet, it was also not coded.

Language-wise, most of the tweets were in German, with some English tweets, mostly concerning the US president Donald Trump, whose inauguration took place during the first coding period. A few journalists also had some tweets in French, Turkish and Arabic, either because they had formerly worked in countries where those languages are spoken or because of an affinity for those countries, such as one journalist being of German-Turkish background. Due to the language barriers of the coders, only the German, English and French tweets were coded. Out of all the coded tweets, 171 tweets could not be coded due to missing context, language barriers or because they did not contain any text. In the end 2,426 tweets were fully coded. To determine intercoder reliability, a second coder was trained for five hours on April 26th through explanation of the codebook and the coding of 20 test tweets. Krippendorff's Alpha for all variables falls between 0.72 and 0.93, except for opinion sentiments, where Krippendorff's Alpha equaled only 0.67, showing the difficulty of coding opinion sentiments. The entire codebook can be seen in Appendix 1.
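For readers who want to reproduce the reliability check, a minimal sketch is given below. It assumes the third-party Python package `krippendorff`; the ratings matrix shown is a hypothetical placeholder, not the actual test-tweet data.

```python
# Intercoder reliability for a nominal variable with Krippendorff's alpha,
# using the third-party `krippendorff` package (pip install krippendorff).
import numpy as np
import krippendorff

# Rows = coders, columns = coded units (e.g. the 20 test tweets);
# np.nan marks units a coder could not code. Values here are hypothetical.
ratings = np.array([
    [1, 2, 1, 1, 3, 2, 1, 1, 2, 1],
    [1, 2, 1, 2, 3, 2, 1, 1, 2, np.nan],
], dtype=float)

alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha: {alpha:.2f}")
```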

Because of some interesting findings in the quantitative analysis, a small qualitative analysis was conducted afterwards. Its focus was primarily on the journalists' opinions about other journalists: 104 tweets were analyzed to find out more about the discourse surrounding opinions about other journalists during the election.

3.1 Measures

Main actors

The tweets were first coded to determine the main topic or actor of each tweet: politicians or political parties, political events such as the election or laws being passed, journalists or media outlets, civilians, other actors, or several main actors. Krippendorff’s Alpha equals 0.722 for this variable.

Polls

Every tweet was coded to see whether or not it contained a mention or reference to a poll or poll numbers. Only 1% of all coded tweets mentioned either. Krippendorff’s Alpha equals


Opinions

Using theories by Serrano-Guerrero, Olivas, Romero and Herrera-Viedma (2015) and Lawrence et al. (2014), the tweets were coded to see whether they included opinions. Opinions were defined as an attitude, emotion or evaluation of a person, event or organization which offered commentary not attributed to a source and which went beyond mere facts. 42.4% of all tweets contained an opinion, and intercoder reliability (Krippendorff’s Alpha equaling 0.808) was good, showing that this was a reliable operationalization. Opinions were then further divided into strong or weak opinions, adapting a definition by Lasorsa et al. (2012). Strong opinions were tweets that consisted only or mostly of opinions, such as “Speech by Schulz today was fierce and aggressive”, while weak opinions were mostly about facts with an opinion added, such as “Schulz spoke today in Magdeburg. He talked about plans regarding labor law reforms. Was very fierce and aggressive.” Krippendorff’s Alpha equals 0.73 for strong and weak opinions. 22% of opinions were weak and 78% were strong opinions.

Sentiments

Tweets with opinions were further coded to see whether they contained negative, neutral or positive sentiments, using the NRC emotion lexicon by Mohammad and Turney (Mohammad & Turney, 2010; Mohammad & Turney, 2013). 37% of all opinionated tweets contained neutral sentiments; the rest contained either negative or positive sentiments. Krippendorff’s Alpha being only 0.67 shows that sentiments were much harder to code than the other variables.
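Sentiment orientation was coded manually, with the NRC lexicon serving as a reference. Purely as an illustration of the underlying logic, an automated lexicon lookup along the same lines might look like the sketch below; the two mini word lists are hypothetical stand-ins for the full NRC positive and negative lists.

```python
# Illustrative lexicon-based orientation check (hypothetical mini-lexicon,
# not the full NRC emotion lexicon used as a coding guide in this study).
POSITIVE_WORDS = {"great", "interesting", "smart", "worth"}
NEGATIVE_WORDS = {"boring", "aggressive", "misuse"}

def sentiment_orientation(text: str) -> str:
    """Classify an opinionated tweet as positive, negative or neutral."""
    words = {w.strip(".,!?\"").lower() for w in text.split()}
    pos = len(words & POSITIVE_WORDS)
    neg = len(words & NEGATIVE_WORDS)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(sentiment_orientation("Schulz talking about labor laws today will be incredibly boring."))
# -> negative
```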

Interactions

Tweets were then coded to see whether they were a retweet, a retweet with added commentary or a reply to another Twitter user. It was noted whether the original Twitter user was a politician or political party, a journalist or media outlet, a civilian or “something else”. Users were categorized on the basis of their profiles and job descriptions such as politician or journalist. If a user was not verified, their identity was checked through further research. If a Twitter user was identified as neither a politician nor a journalist, they were coded as a civilian. Krippendorff’s Alpha for each of these categories was higher than 0.8.

Since these categories are categorical variables, dummy variables will be created. Individual 2 × 2 chi-square tests will then be used to analyze the differences in the frequencies of each variable between the two coding periods. The tests show how much the frequencies of those variables changed between the two coding periods and whether those changes are statistically significant. Logistic regressions will also be used to test some of the other hypotheses.
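As a minimal sketch of one such 2 × 2 test, the snippet below uses scipy's chi-square test on a contingency table of coding period against a dummy variable; all cell counts are hypothetical placeholders, not the study's actual frequencies.

```python
# One 2x2 chi-square test comparing a dummy variable (e.g. "tweet mentions a
# poll") across the two coding periods; counts are hypothetical placeholders.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([
    [10, 1190],   # pre-election: with feature / without feature
    [17, 1209],   # election:     with feature / without feature
])

chi2, p, dof, expected = chi2_contingency(table, correction=False)

# Cramer's V as the effect size for a 2x2 table: sqrt(chi2 / n).
cramers_v = np.sqrt(chi2 / table.sum())

print(f"chi2({dof}) = {chi2:.3f}, p = {p:.3f}, Cramer's V = {cramers_v:.3f}")
```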

4. Results

Polls

The first hypothesis assumed that journalists would post more tweets about opinion polls during the election. While such an increase could be observed, with 10 tweets mentioning polls before the election compared to 17 during it, only 27 of the 2,426 coded tweets mentioned polls at all. An individual 2 × 2 chi-square test revealed that this increase in tweets with polls between the two coding periods is not statistically significant (p = .074).

Main actors

The second hypothesis assumed an increase in tweets about politicians and political parties in the election month. To investigate these potential differences, dummy variables were created to obtain chi-square test results for each coded value of the variable. The results showed that a majority of all tweets had a politician or political party as their main actor (59.6%). However, fewer tweets focused on politicians and parties in the election month than in the non-election month. While 63.6% of all tweets produced before the election focused on politicians or parties, only 54.3% did so during the election (χ2(1) = 21.399, p < .001). Meanwhile, all other potential actors were mentioned more often during the election month, though only journalists and media outlets and political events showed statistically significant differences (see Table 1 for an overview). Before the election, journalists were the main actors in 3.6% of all tweets; during the election they were the main actors in 6.8% of all tweets (χ2(1) = 13.140, p < .001). Political events or acts were the main topic of 13.6% of all tweets before the election and of 19.6% of tweets during the election (χ2(1) = 15.457, p < .001). As Cramer’s V, reported in Table 1, shows, the actual effects of these increases were rather weak.

Table 1

Cross-tabulation between time period and main topic/actor of a tweet (not showing tweets with no main actor/topic)

                               Coding Period
Main Actor/Topic Of Tweet      Pre-election   Election   χ2 (df = 1)   p-value   Cramer's V
Politician/Political Party     63.6%          54.3%      21.399***     < .001    .094
Political Act/Event            13.6%          19.6%      15.457***     < .001    .089
Journalists/Media Outlets       3.6%           6.8%      13.140***     < .001    .074
Civilians                       4.3%           5.8%       2.802        = .094    .034
Other Actors                    3.0%           3.1%        .00         = .996    .000
More Actors                     1.1%           2.0%       2.774        = .096    .033

Note. n = 2,426, *** = p ≤ .001.

Opinions

H3 stipulated that journalists would produce fewer opinionated tweets in the election month than in the month before the election. The opposite can be observed: 43.9% of all tweets produced during the election were opinionated, compared to 41.1% of the tweets produced before the election. However, these differences are not statistically significant (p = .162). Similar effects can be seen for strong and weak opinions. Strong opinions were tweets that consisted only, or mostly, of an opinion, while weak opinions were tweets which consisted of both a neutral statement and a shorter opinion about it. The share of strong opinions increased in the election period from 31.1% to 35.2%. Weak opinions, however, decreased from 10% in the pre-election period to 8.7% during the election. Once more, these differences are not statistically significant (p = .085). H4 then assumed that there would be more opinions about politicians or political parties in the month leading up to the election. Opinions about politicians or parties instead decreased during the election, from 18.5% to 16.8% (χ2(1) = 6.23, p = .013). This is similar to the results for H2, where politicians and parties were also mentioned less in general during the election. Opinions about political events, journalists and civilians meanwhile increased. While the increase in opinionated tweets about political events was not statistically significant, the increases in opinionated tweets about journalists and civilians were. Opinionated tweets about journalists increased from 5.3% to 9.3% (χ2(1) = 15.118, p < .001) and about civilians from 2.2% to 4.0% (χ2(1) = 6.816, p = .009), as can be seen in Table 2.

Table 2

Cross-tabulation between time period and main topic of an opinion (not showing tweets with no opinion topic)

                               Coding Period
Topic Of Opinion               Pre-election   Election   χ2 (df = 1)   p-value   Cramer's V
Politician/Political Party     18.5%          16.8%       6.230*       = .013    .051
Political Act/Event             7.3%           7.5%        .044        = .834    .004
Journalist/Media Outlet         5.3%           9.3%      15.118***     < .001    .079
Civilian                        2.2%           4.0%       6.816**      = .009    .053

Note. n = 2,426, * = p ≤ .05, ** = p ≤ .01, *** = p ≤ .001.


These findings were rather interesting, and I therefore decided to look further into them by conducting a small qualitative content analysis of the opinionated tweets after the quantitative analysis had been completed. Since opinions about journalists increased significantly in the second coding period, I looked specifically at tweets containing opinions about journalists or media outlets produced during the election.

The analysis illustrates that the journalists tended to compliment other journalists or media outlets for specific journalistic products such as comments, analyses or projects. The words “worth reading” were used most often in those tweets, at times with added adjectives to further convey specific attitudes. Many of the positive compliments were single adjectives describing stories as “great”, “interesting” or “smart” (words translated from German by the author).

While most of the opinions about journalists or media outlets concerned journalistic products, the tweets also contained opinions about journalistic behavior, such as moderating political talk shows or decisions that were made. Sometimes the same compliments could be observed in tweets by different journalists, with several of the journalists, for example, complimenting the same journalist on her moderating skills during a televised debate. Furthermore, the journalists did not shy away from retweeting tweets complimenting their own work. One notable example was a retweet in which another journalist complimented the work of the analyzed journalist while also mentioning that the media outlet of the complimenting journalist had done similar work.

When it came to negative opinions, journalists tended to use more words, in contrast to the single-word compliments used for positive opinions. Journalists mostly explained their criticism and mostly criticized behavior instead of products. For example, during a televised political debate journalists criticized the misuse of quotes and a lack of questions about specific topics. While those criticisms were predominantly limited to one tweet, there were two instances in which criticism of journalistic behavior led to debates. In one case a journalist was criticized for moderating the election event of a party. This led to a discussion about journalistic objectivity, with several journalists taking part in it, retweeting and replying to each other and writing out their opinions in longer pieces outside of Twitter.

Lastly, H5 assumed that there would be more neutral opinions during the election. A binomial logistic regression was performed to see the effect that the coding period had on neutral opinions. The model was not statistically significant (p = .867). To see whether the other two opinion sentiments were influenced by the coding period, two more binomial logistic regressions were performed. The model for negative opinions was equally not statistically significant (p = .643), though the model for positive opinions was statistically significant, with χ2(1) = 10.895 and p < .05. The actual differences in positive opinions between the coding periods were not very big, though, with the model revealing that positive opinions were 1.65 times more likely to occur during the election than before the election.
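A minimal sketch of such a binomial logistic regression is shown below, assuming a hypothetical DataFrame `opinions` with a 0/1 outcome `positive` and a 0/1 predictor `election_period`. The data are simulated only to illustrate the mechanics; exponentiating the coefficient yields the odds-ratio style figure ("x times more likely") reported above.

```python
# Binomial logistic regression of opinion sentiment on coding period,
# using statsmodels; the data below are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
opinions = pd.DataFrame({"election_period": rng.integers(0, 2, size=1000)})

# Simulate an outcome where positive opinions are somewhat more likely
# during the election (true odds ratio exp(0.5) ~ 1.65).
logits = -1.0 + 0.5 * opinions["election_period"]
opinions["positive"] = rng.binomial(1, 1 / (1 + np.exp(-logits)))

model = smf.logit("positive ~ election_period", data=opinions).fit(disp=0)
print(model.summary())
print("Odds ratio:", np.exp(model.params["election_period"]))
```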

Interactions

Hypothesis 6 dealt with interactions and assumed that journalists would interact less during the election. Interactions were divided into three categories: retweets, mentions of other Twitter users and replies to comments or tweets. All three of these categories showed different results.

There were no significant differences between the number of retweets in the two coding periods (p = .103). However, there was a statistically significant increase in Twitter users being mentioned in tweets, from 25.5% to 32.9% (χ2(1) = 15.716, p < .001). Replies, by contrast, decreased in the election period, from 18.1% to 16.4% (χ2(1) = 6.031, p = .014). A closer look reveals that those effects were not equal for all actors. While there was an increase in the number of Twitter users being mentioned in tweets, this increase applied only to journalists, who went from being mentioned in 12.7% of tweets to 18.2% (χ2(1) = 14.494, p < .001). Civilians, on the other hand, were mentioned much less in the election period, going from 1.8% to 0.7% (χ2(1) = 5.115, p = .024), as was assumed in H7.


This is a trend that can also be observed when it comes to replies. The only statistically significant difference is in replies to civilians, which went from 11.5% of all tweets to 8.4% (χ2(1) = 7.063, p = .008).

To further see which variables influenced the type of interaction, three binomial logistic regressions were conducted. The first regression model, as seen in Table 3, showed that neither the main actor or topic of the tweet nor the coding period had an impact on a tweet being a retweet, with the model explaining only 5.5% of the observed variance. The other two models, also shown in Table 3, revealed that the main actor or topic and the coding period did, however, partly influence the occurrence of tweets mentioning other Twitter users and of replies. Tweets about political events were less likely to mention other Twitter users (b* = -0.582, p = .003) or to be replies (b* = -0.645, p = .01). The coding period also influenced the likelihood of a tweet mentioning other Twitter users (b* = -0.366, p < .001) and of a tweet being a reply (b* = 0.265, p = .045). A tweet about politicians or a party was also more likely to mention other Twitter users (b* = 0.289, p = .027). Tweets about other actors, such as companies, decreased the likelihood of another Twitter user being mentioned (b* = -0.693, p = .042).

The regression models explained 7.8% of the observed variance for tweets mentioning other users and 31% of the observed variance for replies. The only noticeable exceptions in all three models were tweets that had several actors or topics, which influenced the likelihood of a tweet being a retweet (b* = 1.367, p < .001), mentioning other users (b* = 1.33, p < .001) and being a reply (b* = 3.195, p < .001).


Table 3

Regression Model predicting if tweet is a retweet, mentions another Twitter user or is a reply

                              Type Of Interaction
Main Actor/Topic              Retweet      Mentioning Users   Replies
Politician/Party               0.014        0.289*             0.22
Political Event                0.024       -0.582**           -0.645*
Journalist/Media              -0.103        0.066              0.108
Civilian                      -0.341       -0.023              0.139
Other Actor                    0.010       -0.693*             0.190
Several Actors                 1.367***     1.33***           -3.195***
Coding Period                 -0.119       -0.366***           0.265*
R2 (%)                         5.5%         7.8%               31%

Note. n = 2,426, * = p ≤ .05, ** = p ≤ .01, *** = p ≤ .001.

5. Discussion

This study looked at tweets published by German journalists before and during the 2017 German Federal Election to see whether journalists normalized their behavior during the election period. The results reveal that in almost all areas journalists did not normalize their behavior more strongly during the election than before it. Norms and behaviors such as horse-race coverage or objective reporting were not normalized, and journalists only partly normalized their gatekeeper role. The results of several extant studies which found journalists to focus on polls during election times (Benoit, Stein, & Hansen, 2005; Strömbäck & Shehata, 2007) could not be replicated here. While there was an increase in poll tweets between the two coding periods, the difference was not statistically significant. Hypothesis 1, which assumed a normalization of horse-race coverage, is therefore not supported.


There could be several empirical reasons for this lack of a normalization effect. Firstly the number of tweets with polls was very low to begin with. This might be because polls are more easily displayed visually and might therefore merely be underrepresented in the purely text-based analysis. Another reason could be that poll numbers were mentioned in linked articles while journalists focused on different aspects in their own tweets. This also ties in with most journalists retweeting poll numbers instead of tweeting about them themselves, with 77.8% of all tweets with poll numbers being retweets.

The absence of increased normalization for the second hypothesis is harder to explain. H2 stated that journalists would tweet more about politicians and parties. The opposite happened: the number of tweets about politicians and parties decreased, while tweets about journalists, political events, civilians and other actors increased. H2 is therefore also not supported. Only the increases in tweets about journalists and media outlets and about political events are, however, statistically significant.

The increase in tweets about political events can be explained by the fact that more tweets mentioned the upcoming election. The increase in tweets about journalists and media outlets might be due to a stronger focus on individual journalists during the election. For example, during the “TV-Duell”, the televised debate between the leading candidates, the journalists presenting the debate were often mentioned and opinions were given regarding their performances. Another potential explanation might be Shoemaker and Reese’s model of influences at the routine level (Reese & Shoemaker, 2016). Shoemaker and Reese talk about “unstated rules and ritualized enactments” (Reese & Shoemaker, 2016, p. 399) with regard to routine journalistic behavior. The increased number of tweets about journalists and media outlets, especially in connection with opinionated tweets, could point towards an explicit judgement as to whether or not those rules and rituals were fulfilled satisfactorily by the journalists in question. Since elections are a time of heightened journalistic activity (Molyneux & Mourão, 2017), individual journalists and media outlets come more into focus, which might increase such judgements.

H3 then assumed that journalists would produce fewer opinionated tweets during the election due to objectivity norms. The opposite once more occurred, with journalists posting more opinions during the election than before. H4 assumed that there would also be an increase in opinionated tweets about politicians and political parties. Like H3, it cannot be supported, with fewer opinions about politicians and parties being tweeted during the election than before it. Once more, no normalization during the election can be observed. The results for those two hypotheses point towards the assumption made above, namely that journalists use the election to discuss established journalistic norms by talking about journalistic performances. This would explain why there was an increase in opinions in general but a decrease in opinions about politicians and parties, who should be among the most important actors of an election.

The results of the qualitative analysis also support this conclusion of journalists discussing established journalistic norms during the election. While compliments about other journalists or media outlets were usually short and focused on products, criticism was more focused on behavior and norms and was discussed in more depth. As noted in the analysis, in two cases such criticism of journalistic behavior did in fact lead to longer discussions between journalists. Marwick and Boyd (2011) found that journalists tend to imagine their audiences to be similar to themselves. Even if the opinionated tweets about journalistic behavior are not made directly for other journalists, the journalists produce them for an audience they imagine to be interested in such discussions. And German journalists on Twitter exist in journalism-centered bubbles in which they mostly interact with other journalists (Nuernbergk, 2016), making the audience taking part in such discussions mostly other journalists. While technically open to their entire audience, such discussions about journalism are therefore still mainly held between journalists, without outside voices.


Even if the discussions about journalistic routines are not a conscious decision by the journalists, the nature of Twitter and the large amount of interaction between German journalists on Twitter (Nuernbergk, 2016) almost force these debates to take place between journalists. And while the qualitative analysis found that critical opinions led to more in-depth discussions about journalistic behavior, those discussions are not predominantly negative, as the results for H5 illustrate. H5 assumed that journalists would produce more neutral tweets during the election; once more the opposite can be observed, with journalists producing more positive tweets, showing that the discussions are not just negative.

Going back to the normalization process, H6 assumed that journalists would uphold their gatekeeper role during the election and interact less with their audiences. Mixed results can be observed when looking at H6. The three different categories of interactions this paper looked at - retweets, mentioning other Twitter users and replies - all had different results. Retweets showed no statistically significant change between the two coding periods and while the number of Twitter users mentioned did increase, it did so in a statistically significant way only for journalists, while the number of civilians being mentioned in tweets decreased. Similar trends can be observed when looking at replies. Only the number of replies towards civilians decreased in a statistically significant way. These findings clearly show that when it comes to interactions there were two big groups with different results, namely journalists and civilians. Interactions with other journalists and media outlets increased during the election while interactions with civilians decreased.

While H6 and its assumption that interactions would decrease in general during the election cannot be confirmed, H7 can. H7 assumed that there would be fewer interactions with civilians during the election than before the election. This can be observed in two of the three areas of interaction this paper looked into, with statistically significant decreases for civilians compared to the changes in interactions with other actors. These results could once more tie in with the theory of journalists using Twitter as a way to discuss journalistic norms and guidelines during the election. Journalists on Twitter tend to focus on other journalists or politicians (Bentivegna & Marchetti, 2018). Combined with the fact that discussions about journalistic routines will mainly attract other journalists, journalists might foster an even stronger discussion culture on Twitter by mostly focusing on other journalists when it comes to interactions.

6. Conclusion

The main aim of this study was to find out whether journalists normalized their behavior on Twitter during election times. This could not be observed in the results. Instead of normalizing their behavior, journalists even increased some activities, such as posting opinions, going against the assumed norm of objectivity. Other activities, such as interactions, had more mixed results, with journalists showing a preference for interactions with journalists over interactions with civilians and engaging less with civilians during the election. This study was limited in scope and therefore focused on only 20 journalists over eight weeks. Normalization effects might become more visible if more journalists are observed over a longer time period. This study, like many before it, also had some smaller problems with its definition of opinions. In some instances tweets were coded as not containing opinions since they only contained facts; however, the specific facts that were chosen did show an underlying agenda and a potential opinion on the part of the journalists. Future studies should find ways to also code such underlying opinions. Another limitation is that the study focused only on text and, due to time and resource constraints, did not also code images and videos. Images play a very important role on social media (Towner, 2017). At times, images in tweets might have put tweets into a different context, leading to different conclusions regarding their contents. Ignoring important functions of Twitter such as sharing videos does limit the scope of the study. Future studies should therefore try to also include images and videos in their analysis if possible.


One important finding of this study was that, instead of a normalization process during the election, journalists rather seemed to use Twitter as a platform to discuss journalistic norms and guidelines. Some studies have already looked at the fact that journalists tend to interact mostly with other journalists (Molyneux, 2015), and this study points in the direction of journalists using Twitter to navigate and renegotiate journalistic norms on social media, or potentially to create new ones. While many studies have noted that journalists have adapted their online behavior, the so-called hybrid-normalization with its mix of traditional norms and characteristics of social media, there has been less focus on how exactly journalists negotiate these adaptations among themselves. Qualitative analysis of journalistic interactions on social media might be a way to see how this adaptation process is perceived, discussed and evaluated inside the journalistic community. While many studies note that journalists interact mostly with each other on Twitter, few have actually taken a closer look at what journalists talk about. Journalists might use Twitter just to inform other journalists about political events or share extra information, but even the small qualitative analysis I have conducted points towards journalists also using Twitter to talk about journalistic work. Looking into exactly how journalists discuss those norms on Twitter is therefore an interesting and important research area. It could show whether only elite or mainstream media take part in such discussions, whether there are differences between the generation of journalists who have grown up with social media and those who have not, and whether journalists writing for online publications are more willing to discuss new norms than their colleagues from traditional media outlets. While the study did not find any increasing normalization processes during election times, it therefore still contributes to the academic discussion in this area.

7. List Of References

Banducci, S., & Hanretty, C. (2014). Comparative determinants of horse-race coverage. European Political Science Review, 6(4), pp. 621-640.


Benoit, W., Stein, K., & Hansen, G. (2005). New York Times coverage of presidential campaigns. Journalism and Mass Communication Quarterly, 82(2), pp. 356–376.

Bentivegna, S., & Marchetti, R. (2018). Journalists at a crossroads: Are traditional norms and practices challenged by Twitter? Journalism, 19(2), pp. 270–290.

Brems, C., Temmerman, M., Graham, T., & Broersma, M. (2017). Personal Branding on Twitter. Digital Journalism, pp. 443-459.

Brettschneider, F. (2009). Die “Amerikanisierung” der Medienberichterstattung über Bundestagswahlen. In O. W. Gabriel, & J. Falter, Die “Amerikanisierung” der Medienberichterstattung über Bundestagswahlen (pp. 510 - 536). Springer.

Davies, J. (2017). State of social platform use in Germany in 5 charts. Digiday, Available at: https://digiday.com/marketing/state-social-platform-use-germany-5-charts/.

Deuze, M. (2003). The web and its journalisms: considering the consequences of different types of newsmedia online. New Media & Society, 5(2), pp. 203-230.

Hanusch, F., & Bruns, A. (2017). Journalistic Branding on Twitter: A representative study of Australian journalists’ profile descriptions. Digital Journalism, 5(1), pp. 26-43.

Henn, P., Dohle, M., & Vowe, G. (2013). "Politische Kommunikation": Kern und Rand des Begriffsverständnisses in der Fachgemeinschaft. Ein empirischer Ansatz zur Klärung von Grundbegriffen. Publizistik, 58(4), pp. 367-387.

Holton, A., & Lewis, S. (2011). Journalists, social media, and the use of humor on Twitter. Electronic Journal of Communication, 21 (1&2).

Jürgens, P., & Jungherr, A. (2015). The Use of Twitter during the 2009 German National Election. German Politics, 24(4), pp. 469-490.

Kratzke, N. (2017). The #BTW17 Twitter Dataset–Recorded Tweets of the Federal Election Campaigns of 2017 for the 19th German Bundestag. Data Descriptor.

Lariscy, R., Avery, E., Sweetser, K., & Howes, P. (2009). An examination of the role of online social media in journalists’ source mix. Public Relations Review, 35(3), pp. 314-316.

Lasorsa, D., Lewis, S., & Holton, A. (2012). Normalizing Twitter: Journalism practice in an emerging communication space. Journalism Studies, 13(1), pp. 19-36.

Lawrence, R., Molyneux, L., Coddington, M., & Holton, A. (2014). Tweeting Conventions. Journalism Studies, 15(6), pp. 789-806.

Luo, T., Chen, S., Xu, G., & Zhou, J. (2013). Trust-based Collective View Prediction. Springer.

Margolis, M., & Resnick, D. (2000). Politics as Usual: The Cyberspace ‘Revolution’. Thousand Oaks: Sage.

Marwick, A., & Boyd, D. (2011). I Tweet Honestly, I Tweet Passionately: Twitter Users, Context Collapse, and the Imagined Audience. New Media & Society, 13(1), pp. 114-133.


McLeod, J., & Sotirovic, M. (2009). Media Coverage Of U.S. Elections: Persistence of Tradition. In J. Strömbäck, & L. Kaid, The Handbook of Election News Coverage Around the World (pp. 21-40). Routledge.

Mohammad, S., & Turney, P. (2010). Emotions Evoked by Common Words and Phrases: Using Mechanical Turk to Create an Emotion Lexicon. In Proceedings of the NAACL-HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text. LA.

Mohammad, S., & Turney, P. (2013). Crowdsourcing a Word-Emotion Association Lexicon. Computational Intelligence, 29(3), pp. 436-465.

Molyneux, L. (2015). What journalists retweet: Opinion, humor, and brand development on Twitter. Journalism, 16(7), pp. 920 - 935.

Molyneux, L., & Mourão, R. (2017). Political Journalists’ Normalization of Twitter. Journalism Studies, pp. 1-19.

Mourão, R., Diehl, T., & Vasudevan, K. (2016). "I Love Big Bird": How Journalists Tweeted Humor During the 2012 Presidential Debates. Digital Journalism, 4(2), pp. 211-228.

Muñoz-Torres, J. (2012). Truth and objectivity in journalism: Anatomy of an endless misunderstanding. Journalism Studies, 13(4), pp. 566-582.

Newman, N., Fletcher, R., Kalogeropoulos, A., Levy, D., & Nielsen, R. (2017). Digital News Report 2017. Reuters Institute for the Study of Journalism.

Nuernbergk, C. (2016). Political Journalists’ Interaction Networks. Journalism Practice, 10(7), pp. 868-879.

Oh, P. (2016). Horse-race coverage includes candidates’ policy positions. Newspaper Research Journal, 37(1), pp. 34 - 43.

Ravi, B. (2017). Modern Media, Elections and Democracy. SAGE Publishing India.

Reese, S., & Shoemaker, P. (2016). A Media Sociology for the Networked Public Sphere: The Hierarchy of Influences Model. Mass Communication and Society, 19(4), pp. 389-410.

Schudson, M. (2001). The objectivity norm in American journalism. Journalism, 2(2), pp. 149–170.

Schudson, M., & Anderson, C. (2008). Objectivity, Professionalism, and Truth Seeking in Journalism. In J. Wahl-Jorgensen, & T. Hanitzsch, Handbook of Journalism Studies (pp. 88-101). New York: Routledge.

Serrano-Guerrero, J., Olivas, J., Romero, F., & Herrera-Viedma, E. (2015). Sentiment analysis: A review and comparative analysis of web services. Information Sciences, 311, pp. 18-38.

Shoemaker, P., & Reese, S. (1996). Mediating the message: Theories of influences on mass media content. White Plains, NY: Longman.


Singer, J. (2005). The political j-blogger: 'Normalizing' a new media form to fit old norms and practices. Journalism, 6(2), pp. 173-198.

Strömbäck, J., & Shehata, A. (2007). Structural biases in British and Swedish election news coverage. Journalism Studies, 8(5), pp. 798–812.

Thimm, C., Einspänner, J., & Dang-Anh, M. (2012). Twitter als Wahlkampfmedium: Modellierung und Analyse politischer Social-Media-Nutzung. Publizistik, 57(3), pp. 293-313.

White, D. (1964). The 'Gatekeeper': A Case Study in the Selection of News. In D. Lewis, & D. White, People, Society and Mass Communications (pp. 160-172). London.

Wien, C. (2005). Defining Objectivity within Journalism: An Overview. Nordicom Review, 2.


Appendix 1

Codebook:

NOTE: Originally the paper contained hypotheses about the use of irony and jokes by journalists on Twitter. Due to a lack of space, the hypotheses and their results were deleted from the final version. Since the two variables were nevertheless coded for in all tweets, they remain in this codebook.

Formalities:

Tweet ID (Number of journalist/Publication Date/Number Of Tweet On That Day)

e.g. (1/29.01.2017/1)

Number of journalist: Every journalist will receive a number for data protection.

Publication Date: Date the tweet was published, given in DD.MM.YYYY

Number Of Tweet: Number of tweet published on given day

A: Political Tweet

A1: Does the tweet mention policy matters, a political actor, a political party, a political organization/institution, a political act or a person taking part in a political act, or is the tweet a reply to such a tweet?

YES (1) or NO (2)

If NO (2) then stop coding here.

A2: Does the tweet mention a politician or political party (1), a political process or event (2), a journalist or media outlet (3), a civilian (4), somebody else (5), more than one actor (6) or non-applicable (99) as the main actor or main event?

B: Polls

B1: Does the tweet mention poll numbers or percentages of at least one political party, politician or political decision?

YES (1) or NO (2)

C: Opinions, Irony And Jokes

C1: Does the tweet contain an opinion?

YES (1) or NO (2)

C2: Is the opinion in the tweet a strong opinion (1), a weak opinion (2) or non-applicable (99)?

C3: Is the opinion negative (1), neutral (2), positive (3) or non-applicable (99)?

C4: Is the opinion made about a politician or a political party (1), a political process or event (2), a journalist or media outlet (3), a civilian (4) or non-applicable (99)?

C5: Is the journalist being ironic in the tweet?

Yes (1) or No (2)


C6: Is the journalist trying to be funny in the tweet?

Yes (1) or No (2)

D: Interactions

D1: Is the tweet a retweet?

YES (1) or NO (2)

D2: Is the person retweeted a politician or political party (1), a journalist or media outlet (2), a civilian (3), somebody else (4) or non-applicable (99)?

D3: Did the twitter user add extra commentary to the retweet?

YES (1), NO (2) or NON-APPLICABLE (99)

D4: Is another user mentioned in the tweet?

YES (1) or NO (2)

D5: Is the person mentioned a politician or political party (1), a journalist or media outlet (2), a civilian (3), somebody else (4) or non-applicable (99)?

D6: Is the tweet a reply?

YES (1) or NO (2)

D7: Is the reply to a politician or political party (1), a journalist or media outlet (2), a civilian (3), somebody else (4) or non-applicable (99)?
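
For illustration only, the coding scheme above could be represented as one record per tweet. The following Python sketch is an assumption added here and was not part of the original coding procedure; the field names are invented, while the numeric codes follow the codebook (99 for non-applicable, 100 for tweets that are not codable).

from dataclasses import dataclass
from typing import Optional

NOT_APPLICABLE = 99   # codebook value for "non-applicable"
NOT_CODABLE = 100     # codebook value for tweets that must not be coded

@dataclass
class CodedTweet:
    # Tweet ID in the format "number of journalist/DD.MM.YYYY/number of tweet on that day"
    tweet_id: str
    a1_political: int                          # 1 = yes, 2 = no; if 2, coding stops here
    a2_main_actor: Optional[int] = None        # 1-6 or 99
    b1_poll: Optional[int] = None              # 1 = yes, 2 = no
    c1_opinion: Optional[int] = None
    c2_opinion_strength: Optional[int] = None
    c3_opinion_tone: Optional[int] = None
    c4_opinion_target: Optional[int] = None
    c5_irony: Optional[int] = None
    c6_humor: Optional[int] = None
    d1_retweet: Optional[int] = None
    d2_retweeted_actor: Optional[int] = None
    d3_added_commentary: Optional[int] = None
    d4_mention: Optional[int] = None
    d5_mentioned_actor: Optional[int] = None
    d6_reply: Optional[int] = None
    d7_reply_target: Optional[int] = None

Each field simply stores the numeric answer defined in the questions above; for tweets coded NO (2) on A1, all later fields stay empty.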


Appendix 2

Codebook Explanation:

General:

- Images, links and emoji will not be coded; only the text of tweets will be coded

- If a tweet does not contain any sort of text, it will not be counted and shall be coded as (100)

- If a tweet is written in a language in which the coder is not at least an advanced speaker, it shall be coded as (100)

- If a tweet has been deleted or is not readable, making it impossible to understand its context, the tweet will not be counted and must be coded as (100)

- If a tweet is a reply or a commentary added to a retweet, the original tweet must be used to infer the correct context for the tweet

- When in doubt whether or not a category applies, it should always be coded as not applying

- All coders must be advanced speakers of German and English and must be familiar with Twitter and its functions such as retweets or replies (see the illustrative sketch below)
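
The sketch below illustrates how the tweet ID scheme from the Formalities section and the not-codable rule (code 100) could be applied before manual coding. It is an assumption added for illustration; the dictionary keys 'text', 'deleted' and 'language' are invented here rather than taken from the thesis.

from datetime import date
from typing import Optional

NOT_CODABLE = 100

def make_tweet_id(journalist_number: int, published: date, tweet_of_day: int) -> str:
    # Builds an ID such as "1/29.01.2017/1": journalist number / DD.MM.YYYY / tweet of that day
    return f"{journalist_number}/{published.strftime('%d.%m.%Y')}/{tweet_of_day}"

def precode(tweet: dict) -> Optional[int]:
    # Returns 100 for tweets that must not be coded according to the rules above, otherwise None.
    text = (tweet.get("text") or "").strip()
    if not text:                                   # image-, link- or emoji-only tweet
        return NOT_CODABLE
    if tweet.get("deleted"):                       # deleted or unreadable tweet
        return NOT_CODABLE
    if tweet.get("language") not in ("de", "en"):  # language the coder is not proficient in
        return NOT_CODABLE
    return None

For example, make_tweet_id(1, date(2017, 1, 29), 1) returns "1/29.01.2017/1", matching the example given in Appendix 1.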

Explanation of questions:

A1 Policy Matters/Political Actors/Parties/Political Acts:

Policy matters: Policy matters are defined as principles or actions being taken in specific matters. This might happen in the form of laws, guidelines or in positions taken by governments. Those areas must be named explicitly (e.g. labor laws, immigration policies, health care). Implicit mentions will not be counted. This means tweets which, for example, talk about refugees will not be included unless they specifically talk about immigration. A tweet linking to an article or a website about a policy topic will not be counted as a mention of policy unless the tweet itself explicitly mentions it.

Political Actor: Political actors are defined as either a) named politicians (e.g. Angela Merkel), b) politicians addressed by their role (e.g. CDU parliamentary party leader) or c) members of the administrative system (e.g. members of the interior ministry).

Named politicians include:

a) The cabinet of Germany during the 2013 to 2017 term

b) Leading candidates of the seven major German parties (CDU, CSU, SPD, Grüne (The Green Party), Linke, FDP, AfD)


d) Current and former parliamentary party leaders of German parties

e) Current and former general secretaries and federal whips of parties

f) Current and former heads and mayors of the 16 German Federal States

g) Current and former foreign heads of state

h) Members of foreign governments

If a person not known to the coder is named without any context indicating a possible political position, further research must be done to determine whether they hold one.

Politicians addressed by their role: Politicians addressed by their role are defined as politicians who are only addressed by their political role (e.g. CDU parliamentary party leader, SPD members of parliament).

Members of the administrative system: People active inside the political system who are not politicians themselves, such as diplomats or people who work for ministries or political parties.

Political parties or political organizations: Political parties are defined as currently active political parties. The seven major parties in Germany include the CDU, the CSU, the SPD, the Grüne (Green Party), the Linke, the FDP and the AfD. In the case of the CDU and CSU, both parties may be mentioned as one party or as “the Union Parties/Unionsparteien”; this will still be defined as a mention of a political party. Smaller parties will be defined as having reached at least 40,000 votes in the 2013 election. This includes the Piraten, the Freie Wähler, the Tierschutzpartei, the ÖDP, the Republikaner, the NPD, the Partei and the Bayernpartei.

Political parties will also include coalitions formed out of political parties and their colloquial names, which include GroKo or Große Koalition (CDU/CSU and SPD), R2G or Rot-Rot-Grün (SPD, Linke and Grüne), Schwarz-Grün (CDU/CSU and Grüne), Schwarz-Gelb (CDU/CSU and FDP) and Jamaika (CDU/CSU, FDP and Grüne).

This definition also includes political parties currently active in other countries, such as the Republicans or the GOP in the United States or “En Marche” in France. If an unknown party is named without any context given, further research must be done to determine whether it is a currently active political party.

Political organizations will be defined as all institutions that are part of a government, such as its ministries, or unions with a political aim, such as the European Union or the United Nations (see the illustrative sketch below).
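
The party, coalition and organization labels listed above can be collected into a simple reference set. The sketch below is an illustration added here, not part of the coding procedure; a match only flags a tweet for closer manual inspection against the definitions, it does not decide A1.

import re

PARTY_TERMS = {
    # major parties
    "CDU", "CSU", "SPD", "Grüne", "Linke", "FDP", "AfD", "Union", "Unionsparteien",
    # smaller parties (at least 40,000 votes in the 2013 election)
    "Piraten", "Freie Wähler", "Tierschutzpartei", "ÖDP",
    "Republikaner", "NPD", "Die Partei", "Bayernpartei",
    # coalition labels
    "GroKo", "Große Koalition", "R2G", "Rot-Rot-Grün",
    "Schwarz-Grün", "Schwarz-Gelb", "Jamaika",
}
ORGANIZATION_TERMS = {"Europäische Union", "EU", "Vereinte Nationen", "UN"}

def possible_party_or_organization_mention(text: str) -> bool:
    # True if any known label appears as a separate word in the tweet text.
    terms = PARTY_TERMS | ORGANIZATION_TERMS
    pattern = r"\b(" + "|".join(re.escape(t) for t in terms) + r")\b"
    return re.search(pattern, text, flags=re.IGNORECASE) is not None

Very short labels such as “EU” or “Union” will also match unrelated words, which is another reason why a hit can only ever be a prompt for the coder.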

Political Acts or a person taking part in a political act: A political act is defined as an event or an action that is connected to politics and is conducted either by a politician or by a civilian. This can include a politician giving a speech in parliament or a citizen voting. A person taking part in a political act is defined as the person who is conducting this political act, such as a voter or the politician giving the speech.

If a tweet is a reply or a retweet, the original tweet will be coded for context to see if the tweet can be considered political.


A2 Main actor or main event: Using the definitions provided for A1 and the identification methods mentioned for D2/D5/D7, the coder must code into which category the main actor of the tweet or the main event described in the tweet falls.

Example for A1/A2:

A journalist tweets “Will interview Sigmar Gabriel tonight.”

A Twitter user replies “Looking forward to a good interview!”

The journalist replies to this “No pressure please, will talk at length about the upcoming elections”

There are two tweets in this exchange that must be coded, namely the first and the third tweet.

The first coded tweet would be “Will interview Sigmar Gabriel tonight.”

Sigmar Gabriel is mentioned. He is a German politician who is both a) a member of Merkel’s cabinet from 2013 to 2017 and b) a party leader of the SPD from 2009 to 2017.

A1 will therefore be coded as YES (1), since a politician is mentioned.

A2 will be coded as “politician or political party (1)” since Sigmar Gabriel is a politician.

The second tweet to be coded would be “No pressure please, will talk at length about the upcoming elections”. At first glance A1 does not seem to apply. However, since the tweet is a reply, the original tweet must be taken into account. This would be the first coded tweet, namely “Will interview Sigmar Gabriel tonight.” Category A1 therefore applies: a politician is mentioned.

For category A2, the main event mentioned in this tweet would be the “upcoming elections”. This is a political event and would therefore be coded as such.

If, however, the exchange read as follows, it would have to be coded differently:

A journalist tweets “Will interview Sigmar Gabriel tonight.”

A Twitter user replies “Looking forward to a good interview!”

The journalist replies to this “No pressure please.”

Since the first tweet did not change, nothing would change for it. For the second tweet, category A1 would also still apply, since the original tweet did not change. What would change is category A2 for the second tweet. There are no mentions of upcoming elections, and there is no main actor or main event named in the tweet. It would therefore be coded as “non-applicable” (99) in this case.
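
The reply rule shown in this example can be written down schematically as follows. This is a sketch added for illustration; 'judge' stands in for the coder's manual A1 judgement and is not an existing function in the coding procedure.

from typing import Callable, Optional

def code_a1(tweet_text: str,
            original_text: Optional[str],
            judge: Callable[[str], bool]) -> int:
    # A1: 1 = yes, 2 = no. A reply (or commented retweet) inherits the political
    # context of the original tweet it refers to.
    if judge(tweet_text):
        return 1
    if original_text is not None and judge(original_text):
        return 1
    return 2

# Applied to the worked example above:
# code_a1("No pressure please, will talk at length about the upcoming elections",
#         "Will interview Sigmar Gabriel tonight.",
#         judge=lambda t: "Sigmar Gabriel" in t)   # returns 1 (A1 = yes)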


B: Polls

Polls are survey results showcasing the answer or answers to a specific question. Only four kinds of polls are important for this analysis, namely polls that measure a) the popularity of politicians, b) the voting intentions for a specific party or a specific politician, c) the support of the German electorate for a specific policy and d) the winner of the televised debate between the Chancellor candidates.

To see if a given number falls into one of these categories when the tweet itself does not put the number into context, the coder must check for context clues such as the following (an illustrative sketch of these clues follows the list):

1) Is a polling institute mentioned? Polling institutes are responsible for creating opinion polls and are often named as a source in connection with poll numbers. If a polling institute is mentioned, a number or percentage will be considered to be part of a poll. The seven biggest German polling institutes that could be mentioned are: Forsa, Forschungsgruppe Wahlen, infas, infratest dimap, Allensbach Institut, Ipsos and TNS Emnid.

2) Does the tweet mention the word “Sonntagsfrage”? “Sonntagsfrage” is a colloquial term used in Germany to signify polls about voting intentions. The use of this word will therefore be considered to be connected to polls.
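
A minimal sketch of these two context clues, assuming the tweet text is available as a plain string; both function names are invented for this illustration, and the check only supports, not replaces, the manual decision.

import re

POLLING_INSTITUTES = [
    "Forsa", "Forschungsgruppe Wahlen", "infas", "infratest dimap",
    "Allensbach Institut", "Ipsos", "TNS Emnid",
]

def mentions_number(text: str) -> bool:
    # Rough check for a number or percentage in the tweet.
    return re.search(r"\d+([.,]\d+)?\s*%?", text) is not None

def poll_context_clue(text: str) -> bool:
    # Clue 1: a polling institute is named; clue 2: the word "Sonntagsfrage" appears.
    lowered = text.lower()
    if "sonntagsfrage" in lowered:
        return True
    return any(institute.lower() in lowered for institute in POLLING_INSTITUTES)

For the example tweet discussed below, mentions_number finds the “5%” and poll_context_clue matches “Sonntagsfrage”, so B1 would be coded as YES (1).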

Example for B1:

A journalist tweets “Sonntagsfrage: SPD gained 5% after making Schulz the leading candidate.”

Are poll numbers or percentages mentioned? Yes, a 5% increase is mentioned. No further context for the number is given.

The number is therefore checked for context clues to see if it is connected to polls.

Is a polling institute mentioned? No, no polling institute is mentioned.

Is the word “Sonntagsfrage” mentioned? Yes, it is mentioned. The number is therefore considered to be connected to polls. Since “Sonntagsfrage” is used for a poll measuring voting intentions, it falls into category b) polls measuring voting intentions.

The tweet must be coded as YES (1) for question B1.

C: Opinions, Irony And Jokes

NOTE: Originally the paper contained hypotheses about the use of irony and jokes by journalists on Twitter. Due to a lack of space, the hypotheses and their results were deleted from the final version. Since the two variables were nevertheless coded for all tweets, their definitions remain in this explanation of the codebook.

C1 Opinion: Opinions will be defined using a mix of the definitions by Serrano-Guerrero, Olivas, Romero and Herrera-Viedma (2015) and Lawrence, Molyneux, Coddington and Holton (2014). An opinion is an attitude, emotion or evaluation of a person, event or organization which offers commentary, not attributed to a source, that goes beyond mere facts. The definition of attitudes, emotions and evaluations will be based on Jurafsky and
