
What are the influences of emotional sentiments on online sharing behavior?

Understanding the dissemination of fake news on social media

Master Thesis, 12-01-2020

T.F. Kasper, S2487934

Master Marketing Intelligence, Rijksuniversiteit Groningen
First supervisor: A. Bhattacharya
Second supervisor: M. Gijsenberg

Abstract

The rise of social media usage and the dissemination of fake news offer both an opportunity and a threat for companies. By studying the dissemination of fake news on social media, companies can gain insight into the characteristics of the most-shared stories and act accordingly to protect themselves from harm. Traditional research on online content sharing has been concerned with positive and negative sentiments. This paper aims to give a new perspective on the sharing dynamics on social media and new insights into which sentiments perform best in terms of shares.

The results confirmed that certain sentiments get shared more often than others, namely 'disgust' and 'joy'. Another finding suggests that headlines


Table of Contents

Introduction
Literature review
  Fake news
  Social Media Usage
  Sentiments
Hypotheses development
  Title structure
  Sharing behavior
Data and methodology


Introduction

The rise of internet usage and social media can be called a double-edged sword. Social media is relatively easy to use and low cost, and it enables people and companies to rapidly share their thoughts and reach many people quickly. However, this trend also enables the widespread dissemination of so-called 'fake news', which can be described as intentionally false, realistic and fabricated stories (Visentin, Pizzi & Pichierri, 2019). Fake news is nothing new: during World War II, complete propaganda programs were used to alter people's opinions on certain subjects. These days, however, fake news in combination with social media is all around us and far more subtle. Due to the low barriers to entry and the low degree of control, producers of fake news have new incentives to produce more and more of it. They generate absurd and shocking headlines to generate clicks and traffic to their advertising-filled websites (Ormond et al., 2016). Nonetheless, it is not known which specific factors trigger people to share these articles. Therefore, research on the sharing behavior of fake news is needed.

The increasing amount of fake news being shared is due to the immense growth of the presence of social media in our lives. The growth of social media is remarkable: Facebook, for example, went from covering around 1.5% of the world population in 2008 to around 30% in 2018, and the usage of social media in the United States rose from 5% in 2005 to 79% in 2019 (Our World in Data, 2020). Accompanying this rapid growth in the usage of social media and the internet is the buzz around fake news. It is thus not surprising that for the past three years, three different dictionaries have chosen terms related to fake news as their word of the year. In 2018 Dictionary.com crowned 'misinformation' as its word of the year, the American Dialect Society chose 'fake news' as its word of 2017, and in 2016 the Oxford Dictionary named 'post-truth' as its winner (What's New in Publishing | Digital Publishing News, 2020).

Another growing trend is that people tend to use social media to get their news. A survey conducted by the Pew Research Center in 2016 shows that 62% of U.S. adults get their news on social media, and 18% of them do so often. Twitter, Facebook and Reddit are most often used for looking up news. The danger of using social media for news is that in most cases you are not actively looking for news, but just passively passing by news articles by chance. This results in 64% of people using only one article to verify their news (Pew Research Center's Journalism Project, 2020). This trend will further increase as the internet keeps replacing other information platforms such as printed newspapers or television (Xu et al., 2014). This passive style of browsing and news reading is perfectly suited for fake news producers. Because of the lack of interest and the lack of a need to verify, people will click on the most sensational headlines and thus feed the fake news market with more and more traffic.

This phenomenon can be seen as both a threat and an opportunity for companies. The understanding and detection of fake news is currently insufficient. Shu et al. (2017) suggest that a deeper understanding of fake news detection is needed, and that word embedding and deep neural networks are required to further improve knowledge regarding fake news recognition.

This new tactic of online advertising could give a company a competitive advantage. Other studies have concentrated on the influences of emotions on real-life behavior (Littrell, 2009; Bazarova et al., 2015; Heath, Bell & Sternberg, 2001). Stieglitz and Dang-Xuan (2013) researched positive and negative sentiments and the amount of buzz a tweet creates, but no research has been done on the influence of sentiments on the dissemination of fake news. Companies do not know why and how fake news disseminates, nor how to alter this process. By understanding this dissemination process, companies will be able to alter it and protect themselves from harmful content being shared across the internet, thus preventing their brand from being hurt by fake news. For example, in January 2019 a video surfaced of a self-driving Tesla car slamming into a 'promotion robot'. It did not take long for the video to go viral. The video was staged and thus fake news. However, with 3 in 4 Americans saying they are afraid of self-driving cars, this incident only strengthened the public's assumptions regarding self-driving cars, consequently hurting Tesla in its mission to get the public to accept them (NBC News, 2020).

Companies are now hiring third-party firms with armies of human monitors to trawl social media for harmful content regarding their brand. Using the results of this paper, companies can save money on such third-party spending and spend it elsewhere. Consumers, on the other hand, could also benefit because their online marketing environment could become less crowded with irritating and fake advertisements. Companies like Facebook could benefit because this paper aims to provide ways to distinguish real news from fake news, and thus further improve their fake news detection algorithms. This paper aims to clarify this gap in the current literature. We compare two datasets to see whether fake news and real news can be identified and sorted.

Literature review

Fake news

Fake news is defined by dictionary.com as: ‘False news stories, often of a sensational nature, created to be widely shared or distributed for the purpose of generating revenue, or promoting or

the drive for immediacy, due to social media platforms such as Twitter, which sees itself as a breaking-news platform. The Twitter environment is characterized by 'first come, first served'. On top of that, the outlet with the first article about breaking news is also served best, because it will be shown at the top of Twitter's search page and thus attract many retweets and likes. This pushes journalists into delivering news as fast as possible, which means they have less time for fact-checking. This increases the chance of news brands sharing and using fake PR material and reporting on these stories (Jackson and Moloney, 2016).

Another feature of fake news is clickbait. Clickbait exploits people's curiosity gap, which is easily triggered by absurd and provocative titles. Exciting headlines generate curiosity among readers, which tricks them into clicking on the link (Chakraborty et al., 2016). The current online advertising environment is ideally suited for the use of clickbait: the revenue and publicity created do not rely on the accuracy or truthfulness of the articles; it is all about attention and clicks. News outlets can thus use shocking headlines to attract consumers to their articles and websites, create a buzz around their articles, and thereby generate additional revenue.

Social Media Usage

Social media usage is depicted as the amount of time someone spends on a social networking site (SNS). Social media are constantly changing the way information is spread among people. Sites like Facebook and Twitter allow companies and people to interact freely with each other and thus share information and opinions, while sites like YouTube let people voice their opinions on certain subjects through videos (Greenwood et al., 2016).

Social media usage behavior is affected by extrinsic and intrinsic factors. Extrinsic motivation refers to the perceived usefulness of an action in achieving value, while intrinsic motivation can be specified as committing to an action out of self-interest, for example enjoyment (Lin & Lu, 2011).

The rapid diffusion of social media use can be explained through network externalities: when enough people use a product, others are motivated to join in as well (Katz and Shapiro, 1985). Therefore, when an SNS reaches a certain number of users, the benefits of network externalities emerge and growth accelerates.

With constantly developing technologies, the ease with which SNS can be accessed and used keeps increasing, enabling people to keep increasing their social media usage. Over the last years, this trend has shifted research on social media usage. The majority of that research is focused on theories like the theory of planned behavior (TPB) (Pelling & White, 2009) or the theory of reasoned action (Valente, Gallaher, & Mouttapa, 2004). These papers are becoming outdated because SNS usage is no longer an action with a planned goal in mind; it has shifted from planned behavior to spending leisure time, or just browsing out of boredom. These days, SNS offer different kinds of services to their users: interaction and communication, content sharing, information seeking and sharing, and entertainment (Koohikamali & Sidorova, 2019).


Sentiments

Previous research on emotions and their influence on behavior has found that titles containing emotional stimuli elicit a higher cognitive process, which in turn leads to more attention (Bayer, Sommer and Schacht, 2012). This increased level of cognitive involvement may lead to a higher likelihood of a behavioral response, e.g. clicking on the headline or even sharing it (Luminet et al., 2000). Stieglitz and Dang-Xuan (2013) confirm this phenomenon with their research on emotionally charged tweets: when tweets contain emotional triggers, they tend to be shared far more. However, previous research has only examined whether something contains emotional triggers and whether those triggers were positive or negative. Due to the growth of subjective and opinionated content on the internet, the popularity of sentiment analysis has risen. These developments in the field of Natural Language Processing offer a broad range of new research techniques and possibilities (Balahur and Jacquet, 2015). Instead of just looking at whether something is positive or negative, these new techniques are capable of extracting different emotional sentiments from online content. By looking at different sentiments in headlines, we can further improve our knowledge of online behavior regarding certain emotions.

The dictionary depicts a sentiment as follows: 'the thought or feeling intended to be conveyed by words, acts or gestures distinguished from the words, acts or gestures themselves'. The sentiment which an act or gesture evokes influences the acts or gestures which follow, so different sentiments should call for different kinds of response. Many different emotions can be present in fake news titles, but some are more interesting to look at than others. People often use anger, disgust or fear as an excuse to do things they would normally not do; therefore it is interesting to look at the influence of anger on online behavior. The influence of sadness is also interesting, because people who feel sad are more prone to seek comfort and thus change their behavior accordingly. People who experience joy are most often uplifted and therefore more unreserved. People who feel surprised often show a need to tell others in order to become less surprised, and are thus expected to behave in a certain way. The last sentiment of interest is trust: when people feel that they do not trust something, they feel the need to confirm this, so people behave differently when confronted with something they do or do not trust. Positive and negative sentiments are also examined, to see whether the findings of Stieglitz and Dang-Xuan (2013) also hold for online behavior regarding fake news.

Hypotheses development

Title structure

Burgoon et al. (2003) found that when people lie in a face-to-face setting, they are more prone to use shorter words and simpler, smaller sentences than people who tell the truth. This is because lying requires more mental 'heavy lifting' than telling the truth, and therefore creates more cognitive load. When people lie, their focus lies more on creating a coherent, plausible story and on their non-verbal behavior. Because of this shift in concentration, people are more prone to use simpler and shorter words, as their cognitive abilities are occupied by peripheral matters. This could suggest that titles of fake news stories should contain fewer and shorter words, because the people who create fake news are busier coming up with a good and believable story than with using their 'normal' complete vocabulary (Burgoon et al., 2003). From the reader's perspective, shorter titles are easier to parse than longer titles: they need less attention to be interpreted and are thus more likely to be consumed, which could lead to more shares

Shares are a major contributor to the dissemination of fake news over social media (Silverman, 2015). The main finding of Elyashar et al. (2017) suggests that clickbait titles are significantly shorter than non-clickbait titles: on average, a clickbait title contains two fewer words than a legitimate title. From these findings, we generate the following hypotheses:

H1: Fake news headlines contain fewer words than real news headlines

H2: The words used in fake news headlines are shorter than those in real news headlines
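As a minimal sketch, H1 and H2 amount to comparing two simple statistics per headline set: mean words per headline and mean characters per word. The example headlines below are invented for illustration; in the thesis these counts are computed over the Kaggle fake and real news datasets.

```python
# Sketch: headline length statistics for H1 (word count) and H2 (word length).
# The example headlines are invented; the thesis uses 12,999 titles per dataset.

def length_stats(headlines):
    """Return (mean words per headline, mean characters per word)."""
    words = [w for h in headlines for w in h.split()]
    mean_words = len(words) / len(headlines)
    mean_word_len = sum(len(w) for w in words) / len(words)
    return mean_words, mean_word_len

fake = ["You wont believe what she did", "Shock video goes viral"]
real = ["Senate committee approves the annual appropriations legislation"]

print(length_stats(fake))  # H1/H2 expect fewer, shorter words in fake titles
print(length_stats(real))
```

A formal comparison of the two distributions then follows in the Results section, once normality has been checked.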

Sharing behavior

Prior research has shown that people's liking and sharing behavior can be influenced by several factors. One of these factors is the structure of the title. Letchford et al. (2015) found that academic papers with shorter and clearer titles were cited more often than similar papers with long titles containing long, difficult words. We assume this also holds for the dissemination of fake news on the internet; thus, we propose the following hypothesis:

H3: The length of the title is positively associated with the quantity of shares

Another factor influencing sharing behavior is the emotion felt while reading. Stieglitz and Dang-Xuan (2014) found that tweets containing emotional value tended to be shared more often and more rapidly. Therefore, we focus our research on the different emotions which can be present in a fake news title.

One of the factors influencing sharing behavior is the intention of the message. Nagarajan et al. (2010) analyzed over 1 million tweets regarding three real-world events. They found that tweets which can be categorized as a 'call for action' or 'crowdsourcing' result in very sparse sharing graphs. On the other hand, tweets with the intention of information sharing (e.g. containing URLs) result in much denser retweet graphs. Another factor is the sentiment of the message. Stieglitz and Dang-Xuan (2014) analyzed two datasets of politically relevant tweets from a German election period. They found that the more sentiment a tweet contains, the more likely it is to be retweeted; the speed with which a tweet is retweeted also increases with the amount of sentiment it contains. They also found that tweets with a negative sentiment were shared more often than positive tweets, and that tweets containing @mentions (i.e. directed at someone else), hashtags or URLs tend to be shared more.

Rime et al. (1991) found in an offline study that when people experience intense emotions, they feel a greater need to regulate those emotions and thus share their information with more people and more often. Bazarova et al. (2015) added that when people experience happiness, the emotions felt are much more intense, leading to even more shares. The sentiments 'negative' and 'positive' are overarching groups comprising all sentiment words. Positivity, for example, is a broader category which can contain words from all sentiments; instead of focusing on one aspect (for example joy), the positive sentiment captures the overall scope of the headline. The same holds for the negative sentiment.

H4: Joyful headlines are positively associated with the quantity of shares

H5: Positive headlines are positively associated with the quantity of shares

not feel withheld to share certain news stories they otherwise would not share. Cotter (2008) depicted that when stories contain a high degree of believability, which can be achieved by using many trustworthy words, this can in turn contribute to the chance of people sharing their thoughts and stories with each other. Cotter (2008) also states that stories which trigger fear get shared more often. When people experience fearful thoughts, they tend to seek comfort with each other; this comfort can be achieved by sharing the story so others can comfort you. Another aspect of sharing information when feeling threatened is warning your friends and family, which also results in more shares. Thus, we can assume that titles which contain many fear-related words tend to be shared more.

H6: Trustworthy headlines are positively associated with the quantity of shares

H7: Fear-indulging headlines are positively associated with the quantity of shares

Heath, Bell and Sternberg (2001) explored the role of disgust in the sharing of urban legends. They conducted a study in which students had to read multiple urban legends and then share some of them with other students. Their research showed that urban legends which elicit a higher level of disgust got shared more often. They concluded that disgusting stories were much easier to recall, and that people felt more joy in telling disgusting things to significant others. Therefore, we assume that this offline feeling of enjoyment in sharing disgusting stories translates to the online environment. Their research also showed that on their monitored website, the most read and shared story contained a high level of disgust. Following is hypothesis 8.

H8: Disgusting headlines are positively associated with the quantity of shares

Bazarova et al. (2015) state that when people experience negative feelings, they tend to seek confirmation that they are not alone in feeling this negativity. Bazarova et al. (2015) found that the receivers of negative stories are more prone to respond than to neutral stories; when receivers respond, they most probably provide social support and show empathy. These responses in turn feed the storyteller with a feeling of satisfaction, which triggers them to share negative stories more often, because doing so offers them social attention and validation. It is thus assumed that negative headlines could trigger these emotions and thereby increase the probability of a headline being shared.

H9: Negative headlines are positively associated with the quantity of shares

Sharing emotions and stories with other people is often believed to serve a cathartic function, thereby dissolving the impact of certain emotional experiences. Using sharing as a remedy to dissolve emotions is most prevalent when feeling sadness (Littrell, 2009). Brans et al. conducted an experiment examining the influences of anger and sadness on sharing behavior in a real-life context. In line with the findings of Littrell (2009), they confirmed that when people experience sadness, they are prone to share their thoughts with others. The same holds for people experiencing anger: people tend to share their thoughts when they are angry to decrease the intensity of their subjective feeling and calm down somewhat. Therefore, it is assumed that when headlines make people angry, they feel the need to share them with friends to relieve those feelings of anger, while people who experience sadness may feel the need to dissolve their emotions by sharing with others. From these assumptions, H10 and H11 are derived.


Data and methodology

Data

For our analyses, we used two different datasets of news articles downloaded from kaggle.com. The first dataset contains text and metadata from 244 sites and represents 12,999 fake news posts. These articles were collected during the U.S. presidential elections (from the end of 2016 until the beginning of 2017). The data was pulled using a webhose.io API. The selected documents are tagged as "Bullshit" by the BS Detector Chrome extension by Daniel Sieradski. The plug-in uses a list of fake news sources as its reference point; when it spots a potentially fake story, the story gets tagged with a red banner saying: "This website is considered a questionable source". This dataset contains the title of each article, the text body, and the number of likes and shares it received on Facebook. The second dataset contains 50,000 news articles from United States news outlets. These articles are not classified as fake and thus represent real news from the fall of 2016 until the beginning of 2017. A random set of 12,999 articles is used for this paper. This dataset contains the title and text body of each article, but not likes and shares, because Facebook is not taken into consideration. The datasets contained only text; there were no visual or audio cues present. Both datasets were standardized, and English stop words were removed because they would influence the results. Table 3.1 gives a brief overview of the datasets.
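The standardization and stop-word removal step can be sketched as follows. This is a minimal illustration: the stop-word list below is a tiny invented subset, not the full English list used for the actual datasets.

```python
# Sketch of the preprocessing applied to both datasets: lowercase each title
# and drop English stop words. STOP_WORDS here is a small illustrative subset.

STOP_WORDS = {"the", "a", "an", "of", "to", "in", "and", "is", "on"}

def preprocess(title):
    tokens = title.lower().split()
    return [t for t in tokens if t not in STOP_WORDS]

print(preprocess("The FBI Reopens an Investigation"))
# -> ['fbi', 'reopens', 'investigation']
```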

Word embedding

Word embeddings are a powerful machine-learning technique that replaces every English word with a high-dimensional vector, in such a manner that the geometry of the vectors captures the semantic relations between words. Word embeddings are trained on large corpora of text, such as news articles or a large number of books (Garg et al., 2018). A pitfall of word embeddings is the presence of stereotypes. For example, research has shown that the vector for 'honorable' is closer to the vector for 'man', whereas the vector for 'submissive' is closer to 'woman'. These stereotypes are automatically learned by the algorithm, which can be an issue when working with sensitive applications (Bolukbasi et al., 2016).
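The "geometry captures semantic relations" idea can be made concrete with cosine similarity: related words have vectors pointing in similar directions. The 3-dimensional vectors below are invented toy values; real embeddings have hundreds of dimensions and are learned from large corpora.

```python
import math

# Toy illustration of word-embedding geometry: cosine similarity is high
# for related words and low for unrelated ones. Vectors are invented.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

emb = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.85, 0.75, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

print(cosine(emb["king"], emb["queen"]))  # high: semantically related
print(cosine(emb["king"], emb["apple"]))  # low: unrelated
```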

Skipgram

A state-of-the-art technique among word embeddings is skip-gram. This technique is embedded in the word2vec software. Skip-gram can produce useful word representations quickly; it is easy to train and able to handle huge corpora (billions of words) (Levy and Goldberg, 2014). In our model we have used the skip-gram neural embedding model introduced by Mikolov et al. (2013a). As described by Levy and Goldberg (2014): "In the skip-gram model, each word w ∈ W is associated with a vector vw ∈ Rd and similarly each context c ∈ C is represented as a vector vc ∈ Rd, where W is the words vocabulary, C is the contexts vocabulary, and d is the embedding dimensionality. The entries in the vectors are latent and treated as parameters to be learned."

Using data-mining techniques for linguistic analysis, we calculated the weighted most-used words in fake news headlines and real news headlines. This small statistic clearly shows the different intents with which fake and real news are made: just by looking at the 10 most-used words, you can already tell the difference between the news sources.

Hillary Clinton was accused of violating federal requirements because she never used a state.gov email during her four years as secretary of state, meaning she could send emails secretly without government oversight. Eleven days before the election, the FBI announced that it had newly discovered evidence of her violations (BBC News, 2020). This 'breaking news' was covered in the real news media. The word 'email', for example, makes up 0.04% of the words used in real news titles and 0.2% in fake news titles. Looking at table 3.2, 'FBI' covers 0.1% of real news titles and 0.5% of fake news titles. These differences show the malicious intent of fake news titles: they denigrate not only Hillary but also Trump during the election period. Following Hillary's email scandal, Trump had a scandal of his own: he was accused of using Russian hackers to denigrate Hillary Clinton and thus accepting the help of Russia (Harding, 2020). Again, both stories were reported on by real and fake news, but fake news puts much more emphasis on this story: the word 'Russia' made up 0.5% of the fake news titles and only 0.1% of the real news titles.

Using word2vec, skipgrams have been created to examine the different word-pair combinations in fake news titles and real news titles. The results of these skipgrams show the number of times a word appeared in a headline, the weighted amount, and the probability that a word appears within a distance of N words of our word of interest. A distance of three words before and after the target word is chosen. The target words chosen are 'Trump' and 'Hillary', because of the origin of the data, namely the US presidential elections.

The skipgrams shown in tables 3.3 and 3.4 list the number of times a word combination is used in our dataset (total) and the probability of the second word being used within three words before or after the target word (probability). 'Prob. of word 1' and 'Prob. of word 2' depict the weighted word counts of word 1 and word 2. Comparing the skipgrams of fake and real news, it is clear that the fake news skipgrams are much more about the elections and biased news. For example, the skipgrams 'trump win', 'trump anti' and 'trump victory' in fake news, and their total absence in real news, show that fake news is trying to influence people's opinions. Whereas the real news was reporting about Trump's family members ('trump melania', 'trump ivanka' and 'trump jr'), fake news is not reporting on this matter, because election-based news will probably result in more clicks.

The skipgrams for Hillary in tables 3.5 and 3.6 show the same phenomenon. In fake news, Hillary is more often associated with words such as 'FBI', 'email' and 'investigation'. Real news also covers those subjects, but far less: for example, 'FBI' and 'Hillary' have a probability of being near each other of 0.000136 in fake news and nearly ten times lower in real news, 0.000017.
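The window-based pair counting behind tables 3.3-3.6 can be sketched as follows. This is an illustrative reimplementation under stated assumptions (a ±3-word window around the target, counts over lowercased titles); the two example titles are invented, and the thesis runs the equivalent computation via word2vec over the full datasets.

```python
from collections import Counter

# Sketch of skip-gram pair extraction: for a target word, count every word
# that appears within a window of 3 words before or after it.

def window_pairs(headlines, target, window=3):
    pairs = Counter()
    for h in headlines:
        tokens = h.lower().split()
        for i, tok in enumerate(tokens):
            if tok != target:
                continue
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    pairs[(target, tokens[j])] += 1
    return pairs

titles = ["FBI reopens Hillary email investigation",
          "Hillary email probe shocks FBI insiders"]
print(window_pairs(titles, "hillary").most_common(3))
```

Dividing each pair count by the total number of extracted pairs would then give the co-occurrence probabilities reported in the tables.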

Sentiment analyses

positive and 70% negative, document-level analysis would report the document as negative, while sentence-level analysis gives the specific numbers on which parts were negative and positive. Aspect-level analysis is the most advanced of the possible sentiment analysis types. This technique splits a document into vectors of single words and assigns a sentiment to each word, giving you the ability to see a person's opinion on specific aspects of the text body, instead of per sentence or per text body (Gandomi and Haider, 2015).

Syuzhet package

To perform a sentiment analysis, the 'Syuzhet' package in R is used. The Syuzhet package allows you to extract sentiments and sentiment-derived plot arcs from text bodies, and tries to reveal latent structures of text bodies by means of sentiment analysis. The package comes with four standard emotional lexicons: the Syuzhet lexicon, the Afinn lexicon, the Bing lexicon and the NRC lexicon.

The Syuzhet lexicon is chosen by default and was developed by the Nebraska Literary Lab. It contains 10,748 words, of which 3,587 are coded as positive and 7,161 as negative. These words are rated on a scale of -1 to 1 and can take up to 16 different values.

The Afinn lexicon was developed by Finn Arup Nielsen. Afinn, in contrast to the other available lexicons, contains internet slang and obscene words. It started with a small set of obscene words and gradually grew by using data from Twitter and sets of words extracted from the Urban Dictionary. This lexicon contains only 2,477 words, of which 1,598 are negative and 878 positive. Afinn scores range from -5 to 5 and can take up to 11 values.

The Bing lexicon was developed by Hu and Liu and is better known as the opinion lexicon. It contains 6,789 words, of which 2,006 are positive and 4,783 negative. This lexicon assigns -1 to every negative word and 1 to every positive word.

The fourth lexicon is the NRC lexicon, which contains 13,889 words. This lexicon does not divide its words into two categories; instead, it assigns a sentiment type to each word. It contains eight sentiments: anger, anticipation, disgust, fear, joy, sadness, surprise and trust.

The NRC lexicon thus yields a sentiment score for each sentence it analyses (Naldi, 2019). Table 3.7 shows the distribution of words in the NRC lexicon. So instead of giving sentiment scores such as +1 or -2, this lexicon detects words belonging to a category and simply counts them. For example, if a sentence includes four joy words and two anger words, the sentence scores 2 on anger and 4 on joy. An example of a joyful title would be: 'Baby Bonds: A Plan for Black/White Wealth Equality Conservatives Could Love?'. An example of a disgusting title: 'Desecrating the Koran? Police Arrest 9-Year-Old Christian Boy, Torture Him for Days and Attempt to put Him on Death Row'. Examples for the rest of the categories are given in table 3.8. Comparing the NRC scores of fake and real news, we can conclude that fake news constantly mimics the trends in real news. For example, a total of 46,106 emotionally tagged words were found in the fake news titles and 48,781 in the real news titles. After weighting each of the present emotions, we cannot see a clear difference in the use of emotions in real and fake news: the biggest difference is 1.3 percentage points (tables 3.9 and 3.10), for the use of 'surprise words' (8.6% in real news and 7.3% in fake news). This is most probably done by fake news producers to further complicate the distinction between fake and real news.
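The NRC counting scheme can be sketched in a few lines. The mini-lexicon below is invented for illustration (the real NRC lexicon contains 13,889 words, and the thesis applies it via the Syuzhet package in R); the point is only the mechanism: each matching word increments the count of every emotion category it belongs to.

```python
from collections import Counter

# Sketch of NRC-style scoring: a headline's score per emotion is the number
# of its words tagged with that emotion. MINI_NRC is a tiny invented subset.

MINI_NRC = {
    "torture": {"anger", "disgust", "fear"},
    "arrest":  {"fear"},
    "love":    {"joy"},
    "wealth":  {"joy"},
}

def nrc_score(title):
    scores = Counter()
    for word in title.lower().split():
        for emotion in MINI_NRC.get(word, ()):
            scores[emotion] += 1
    return dict(scores)

print(nrc_score("Police arrest boy torture him"))
# 'fear' is counted twice (for 'arrest' and 'torture'), the others once
```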


So, words such as 'no' and 'not' are deleted and thus not present in the dataset. After obtaining these sentiment scores for each headline, we can analyze the data using regressions and see whether these sentiments make any difference to people's sharing behavior.
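The counting procedure described above can be sketched as follows (a minimal Python illustration; the tiny category word lists are hypothetical stand-ins for the full NRC lexicon):

```python
# Minimal sketch of NRC-style sentiment counting for headlines.
# The tiny category word lists below are illustrative stand-ins
# for the full 13889-word NRC lexicon.
import re
from collections import Counter

LEXICON = {
    "joy": {"love", "equality", "baby"},
    "anger": {"torture", "arrest"},
    "disgust": {"desecrating", "torture"},
}

def score_headline(title):
    """Count how many words of each sentiment category a title contains."""
    words = re.findall(r"[a-z']+", title.lower())
    scores = Counter({category: 0 for category in LEXICON})
    for word in words:
        for category, vocab in LEXICON.items():
            if word in vocab:
                scores[category] += 1
    return scores

scores = score_headline("Police Arrest Boy, Torture Him for Days")
# 'arrest' counts toward anger; 'torture' toward both anger and disgust
```

Note that a word can belong to several NRC categories at once (as 'torture' does here), which is why the result is a per-category count rather than a single polarity value.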

Results

Kruskal-Wallis Test

To test H1, whether the title lengths of fake and real news differ, we first check whether the title lengths are normally distributed. This is done by inspecting the skewness and kurtosis of the distributions; these statistics describe the shape of a distribution, from which its normality can be judged (Joanes and Gill, 1998). Skewness measures the asymmetry of the distribution. A skewness of 0 indicates that the data are symmetrical and thus most likely normally distributed. A skewness below -1 or above 1 indicates that the distribution is highly skewed and thus not normally distributed: below -1 the bulk of the observations lies at high values, above 1 at low values (Bulmer, 1979). The kurtosis measure depicts the height and sharpness of the central peak relative to a standard bell curve. A standard normal distribution has a kurtosis of 3 and is called mesokurtic. A kurtosis above 3 (leptokurtic) indicates a higher, sharper peak with heavier tails, whereas a kurtosis below 3 (platykurtic) indicates a broader, flatter 'bell' with thinner tails.

Looking at table 4.1, the skewness (0.2159) and kurtosis (3.1641) of the distribution of the real titles suggest that it is approximately normally distributed. The distribution of the fake titles, however, is not normally distributed: it is highly skewed, with a skewness greater than 1 (skewness = 1.05). We therefore cannot use a paired t-test or a Wilcoxon signed-rank test, because the populations are not normally distributed and the datasets are independent. Instead, a Kruskal-Wallis test is used to assess whether the title lengths differ significantly. As the p-value of the Kruskal-Wallis test is 0.3983, we conclude that there is no difference between the title lengths. An alternative way of dealing with skewed data is to log-transform it, after which an ordinary t-test can be performed. The p-value of this t-test is <0.001, indicating a statistical difference in title lengths on the log scale. After anti-logging the averages, fake news has an average title length of 6.88 words and real news of 7.63 words. We also performed a Kruskal-Wallis test on the average word length of the fake and real news headlines, again because normality cannot be assumed for the fake titles (skewness = 1.04, which is greater than 1). As the p-value of 0.56 is higher than 0.1, there is no evidence that the average word lengths differ significantly. Thus, H2 is not supported.
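The normality check and the Kruskal-Wallis comparison above can be reproduced with scipy. The sketch below uses synthetic title-length data, since the thesis dataset itself is not included; the fake-news lengths are drawn from a right-skewed lognormal distribution as an assumption:

```python
# Sketch of the normality check and Kruskal-Wallis test on synthetic
# title-length data; the real fake/real news lengths are not reproduced here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
real_lengths = rng.normal(loc=7.6, scale=2.0, size=1000)       # roughly symmetric
fake_lengths = rng.lognormal(mean=1.9, sigma=0.35, size=1000)  # right-skewed

# A skewness above 1 signals a strongly skewed, non-normal distribution.
skew_fake = stats.skew(fake_lengths)

# Kruskal-Wallis compares independent samples without assuming normality.
h_stat, p_value = stats.kruskal(real_lengths, fake_lengths)
```

The p-value of `stats.kruskal` is then compared against the chosen significance level, exactly as done for the title-length and word-length tests above.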


Regressions

Ordinary least squares (OLS) regressions were used to study the relationships between the different sentiment values and shares on social media. OLS regression estimates the mean change in a dependent variable for a one-unit change in the independent variable (Pohlmann and Leitner, 2003). This dataset contains skewed data, which means that the data are not normally distributed. The sentiment counts are positively (right-)skewed: most titles contain no words of a given category, so the mass of the distribution sits at zero with a long right tail and the mean exceeds the median (e.g. figure x, which shows the distribution of disgust terms in the titles of fake news; there are significantly more titles without disgust words than with them. Disgust, for example, has a skewness of 2.61 and a kurtosis of 10.23, so normality cannot be assumed). Ignoring this skewness would inflate the residual standard errors. To deal with this problem, the independent variables and the dependent variable have been log-transformed (Feng et al., 2014). Table 4.3 shows the skewness and kurtosis values of the predictor variables.

Another reason to log-transform the variables is a ceiling effect: assuming linearity would imply that every additional word generates proportionally more shares without limit, which will not hold in a real environment.

Shares is included as the dependent variable and the sentiment values as independent variables. This results in nine separate regressions, one for each predictor: title length (words), joy, positive, trust, fear, disgust, negative, sadness and anger. The regression used is shown below:

log(yᵢ) = α + β · log(independent varᵢ) + εᵢ
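A log-log regression of this kind can be sketched on synthetic data using only numpy (the simulated elasticity of 0.13 and the variable names are illustrative assumptions, not the thesis estimates):

```python
# Sketch of one log-log OLS regression on synthetic data. In this
# specification beta is an elasticity: a 1% rise in the predictor is
# associated with roughly a beta% rise in shares.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
word_counts = rng.integers(1, 20, size=n)  # sentiment word counts (>= 1)
noise = rng.normal(0.0, 0.5, size=n)
shares = np.exp(2.0 + 0.13 * np.log(word_counts) + noise)  # true elasticity 0.13

# Fit log(shares) = alpha + beta * log(word_counts) by least squares.
X = np.column_stack([np.ones(n), np.log(word_counts)])
alpha, beta = np.linalg.lstsq(X, np.log(shares), rcond=None)[0]
```

The recovered beta should be close to the simulated 0.13, illustrating why the coefficients in a log-log model are read as percentage-for-percentage effects.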

As seen in table 4.4, H3 is supported with a p-value of p<0.0001 and a beta of 0.12838. This confirms that title length affects the number of shares an article receives; moreover, the beta is positive, meaning longer titles generate more shares. Since both variables are log-transformed, the beta is an elasticity: a 1% increase in the number of words is associated with an expected increase of roughly 0.13% in the number of shares.

The number of joyful words is positively related to the number of shares (p=0.028), with a beta of 0.064: a 1% increase in joyful words in a title is associated with an expected 0.064% increase in shares. Thus, hypothesis 4 is supported.

The link between positivity and additional sharing is not supported by this research (p=0.452), so H5 is rejected.

The number of trust-related words in a title also does not increase the number of shares (p=0.341); therefore, H6 is not supported. H7 is likewise rejected (p=0.112): we cannot confirm that fear-inducing headlines increase the number of shares an article receives.

H8 is supported: disgust-laden headlines do increase the number of shares (p=0.061, significant at the 10% level), with a beta of 0.057. A 1% increase in disgust words in a title is thus associated with roughly 0.057% more shares.

H9 is rejected by our research (p=0.117): negative titles do not get shared more. H10 is also rejected (p=0.331): we cannot conclude that a sad sentiment positively influences the sharing of online articles.

The link between the number of angry words in a title and the number of shares is not significant (p=0.238). Therefore, H11 is not supported and we cannot state that an angry sentiment increases the number of shares an article receives.

Conclusion


ability. It is found that using too few words is not preferable with regard to shares. This study found no evidence that the sentiments trust, fear, sadness, anger, negativity and positivity influence the number of shares.

The results of this thesis can be used by companies that face a lot of fake news: they can concentrate on the sentiments that lead to the most shares and act accordingly, protecting themselves from harmful content and the damage that ensues when they fail to act. Currently, third-party companies are hired to tackle this problem; by using these results, that spending can be saved and used elsewhere. Social media platforms such as Facebook can use these results to further improve their algorithms for detecting and flagging fake news.

This paper offers a new way of looking at the dissemination of online content. By studying the dissemination of fake news, it has identified the sentiments most likely to make fake news go viral. Rather than only considering whether content carries emotional value, this paper also categorizes these emotions into distinct sentiments, which sets it apart from previous research on dissemination. This new approach can encourage other practitioners to build on these results. This paper illustrates the influence of different sentiments on fake news sharing in the United States; it also raises the question whether this translates to other countries and other content types, such as advertisements. Further research is therefore needed to generalize these outcomes. The approach can also be used to study the impact of sentiments on other aspects of social media: for example, instead of shares, the number of likes or comments a post generates can be studied.

A limitation of the current sentiment-analysis techniques is that they cannot take negation into account. For example, 'Donald Trump is not good' is scored the same as 'Donald Trump is good', so titles can be misinterpreted by our analysis.
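This limitation is easy to demonstrate with a bag-of-words scorer (a hypothetical two-word-per-polarity lexicon; the real Bing lexicon works the same way at larger scale):

```python
# Demonstration of negation blindness in bag-of-words sentiment scoring:
# "not good" and "good" receive the same score because "not" is ignored.
POSITIVE = {"good", "great"}
NEGATIVE = {"bad", "awful"}

def naive_score(text):
    """Bing-style scoring: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum((word in POSITIVE) - (word in NEGATIVE) for word in words)

a = naive_score("Donald Trump is good")
b = naive_score("Donald Trump is not good")
# a == b == 1: the negation word leaves the score unchanged.
```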


Literature

Balahur, A. and Jacquet, G. (2015). Sentiment analysis meets social media – Challenges and solutions of the field in view of the current information sharing context. Information Processing &

Management, 51(4), pp.428-432.

Bazarova, N., Sosik, V. S., Choi, Y. H. and Cosley, D. (2015). Social Sharing of Emotions on Facebook: Channel Differences, Satisfaction, and Replies. Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, pp.154-164.

Bayer, M., Sommer, W. and Schacht, A. (2012). Font Size Matters—Emotion and Attention in Cortical Responses to Written Words. PLoS ONE, 7(5), p.e36042.

Burgoon, J. K., Blair, J. P., Qin, T. and Nunamaker, J. F. (2003). Detecting deception through linguistic analysis. Conference paper.

Blau, P. M. (1964). Exchange and Power in Social Life. John Wiley and Sons, New York.

Bosson, J., Johnson, A., Niederhoffer, K. and Swann, W. (2006). Interpersonal chemistry through negativity: Bonding by sharing negative attitudes about others. Personal Relationships, 13(2), pp.135-150.

BBC News. (2020). Clinton emails - what's it all about? [online] Available at: https://www.bbc.com/news/world-us-canada-31806907 [Accessed 7 Jan. 2020].

BBC News. (2020). The city getting rich from fake news. [online] Available at: http://www.bbc.co.uk/news/magazine-38168281 [Accessed 8 Jan. 2020].

Bolukbasi, T., Chang, K. W., Zou, J. Y., Saligrama, V. and Kalai, A. T. (2016) Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Advances in Neural

Information Processing Systems. 29, pp 4349–4357.

Bulmer, M. (1979). Concepts in the analysis of qualitative data. The Sociological Review, 27(4).

Chakraborty, A., Paranjape, B., Kakarla, S. and Ganguly, N. (2016). Stop Clickbait: Detecting and Preventing Clickbaits in Online News Media. International Conference on Advances in Social Networks Analysis and Mining.

Cotter, E. M. (2008). Influence of emotional content and perceived relevance on spread of urban legends: A pilot study. Psychological Reports, 102(2), 623-629.

Degruyter.com. (2020). [online] Available at:

https://www.degruyter.com/downloadpdf/j/gfkmir.2018.10.issue-1/gfkmir-2018-0003/gfkmir-2018-0003.pdf [Accessed 7 Jan. 2020].

Elyashar, A., Bendahan, J. M. and Puzis, R. (2017). Detecting clickbait in online social media: you won’t believe how we did it. Telekom Innovation Laboratories.

Feng, C., Wang, H., Lu, N., He, H., Lu, Y. and Tu, X. (2014). Log-Transformation and its implications for data analysis. Shanghai Arch Psychiatry, 26(2): 105-109.

Flanagan, K. (2015). The Essential Guide to Creating a Successful Content Marketing Strategy. [Blog].

Gandomi, A. and Haider, M. (2015). Beyond the hype: Big data concepts, methods, and analytics. International Journal of Information Management, 35(2), pp.137-144.


Garg, N., Schiebinger, L., Jurafsky, D. and Zou, J. (2018). Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences, 115(16), pp.E3635-E3644.

Graham, D. (2020). Why Bogus News Stories Are So Hard to Stop. [online] The Atlantic. Available at: https://www.theatlantic.com/politics/archive/2017/07/why-bogus-stories-persist-because-they-work/533589/ [Accessed 7 Jan. 2020].

Greenwood, S., Perrin, A. and Duggan, M (2016). Social Media Update 2016.

Available at: http://downtowndubuque.org/wp-content/uploads/2017/01/Social-Media-Update-2016.pdf [Accessed 7. Jan. 2020]

Harding, L. (2020). What we know about Russia's interference in the US election. [online] the Guardian. Available at: https://www.theguardian.com/us-news/2016/dec/16/qa-russian-hackers-vladimir-putin-donald-trump-us-presidential-election [Accessed 7 Jan. 2020].

Heath, C., Bell, C. and Sternberg, E. (2001). Emotional selection in memes: The case of urban legends.

Journal of Personality and Social Psychology, 81(6), 1028-1041.

Jackson, D. and Moloney, K. (2015). Inside Churnalism. Journalism Studies, 17(6), pp.763-780.

Jalilvand, M., Esfahani, S. and Samiei, N. (2011). Electronic word-of-mouth: Challenges and opportunities. Procedia Computer Science, 3, pp.42-46.

Joanes, D. and Gill, C. (1998). Comparing measures of sample skewness and kurtosis. Journal of the

Royal Statistical Society: Series D (The Statistician), 47(1), pp.183-189.

Katz, M. L. and Shapiro, C. (1985). Network externalities, competition and compatibility. American Economic Review, 75, pp.424-440.

Khaldarova, I. and Pantti, M. (2016). Fake News. Journalism Practice, 10(7), pp.891-901.

Kimmel, A. and Smith, N. (2001). Deception in marketing research: Ethical, methodological, and disciplinary implications. Psychology and Marketing, 18(7), pp.663-689.

Koohikamali, M. and Sidorova, A. (2017). Information Re-Sharing on Social Network Sites in the Age of Fake News. Informing Science: The International Journal of an Emerging Transdiscipline, 20, pp.215-235.

Lazer, D. M. J. and Baum, M. A. (2018). The science of fake news: addressing fake news requires a multidisciplinary effort. Science, 359(6380), pp.1094-1096.

Lee, M. and Youn, S. (2009). Electronic word of mouth (eWOM). International Journal of Advertising, 28(3), pp.473-499.

Letchford, A., Preis, T. and Moat, H. S. (2015). The advantages of short paper titles. Royal Society Open Science, 2(8), 150266.

Levy, O. and Goldberg, Y. (2014) Dependency-Based Word Embeddings. Proceedings of the 52nd

Annual Meeting of the Association for Computational Linguistics. 23, pp.302-308


Social Capital Theory. Cyberpsychology, Behavior, and Social Networking, 14(10), pp.565-570.

Luminet, O., Bouts, P., Delie, F., Manstead, A. and Rimé, B. (2000). Social sharing of emotion following exposure to a negatively valenced situation. Cognition & Emotion, 14(5), pp.661-688.

Media.digitalnewsreport.org. (2020). [online] Available at: http://media.digitalnewsreport.org/wp-content/uploads/2018/11/Digital-News-Report-2016.pdf?x89475 [Accessed 8 Jan. 2020].

Mikolov, T., Chen, K., Corrado, G. and Dean, J. (2013). Efficient estimation of word representations in vector space. Computation and Language (cs.CL).

NBC News. (2020). Fake news is a 'tech suicide bomber' that can sink a company, say branding experts. [online] Available at: https://www.nbcnews.com/business/business-news/fake-news-can-cause-irreversible-damage-companies-sink-their-stock-n995436 [Accessed 12 Jan. 2020].

Nagarajan, M., Purohit, H. and Sheth, A. (2010) A qualitative examination of topical tweet and retweet practices. Proceedings of the Fourth International AAAI Conference on Weblogs and Social

Media. pp. 295–298.

Naldi, M. (2019). A review of sentiment computation methods with R packages. Department of civil

engineering and computer science.

Nyilasy, G. (2019). Fake news: When the dark side of persuasion takes over. International Journal of

Advertising, 38(2), pp.336-342.

Our World in Data. (2020). The rise of social media. [online] Available at: https://ourworldindata.org/rise-of-social-media [Accessed 7 Jan. 2020].

Opreana, A. and Vinerean, S. (2015). A new development in online marketing: introducing Digital Inbound Marketing. Expert Journal of Marketing, 3(1), pp.29-34.

Ormond, D., Warkentin, M., Johnston, A. and Thompson, S. (2016). Perceived deception: Evaluating source credibility and self-efficacy. Journal of Information Privacy and Security, 12(4), pp.197-217.

Pelling, E. and White, K. (2009). The Theory of Planned Behavior Applied to Young People's Use of Social Networking Web Sites. CyberPsychology & Behavior, 12(6), pp.755-759.

Pew Research Center's Journalism Project. (2020). News Use Across Social Media Platforms 2016. [online] Available at: https://www.journalism.org/2016/05/26/news-use-across-social-media-platforms-2016/ [Accessed 2 Jan. 2020].

Pohlmann, J. T. and Leitner, D. W. (2003). A comparison of ordinary least squares and logistic regression. Ohio Journal of Science, 103(5), pp.118-125.

Rimé, B., Mesquita, B., Boca, S. and Philippot, P. (1991). Beyond the emotional event: Six studies on the social sharing of emotion. Cognition & Emotion, 5(5-6), pp.435-465.


Shirsat, V.S., Jagdale, R.S. and Deshmukh, S.N. (2017). Document Level Sentiment Analysis from News Articles. 2017 International Conference on Computing, Communication, Control and Automation

(ICCUBEA), 1-4.

Shu, K., Sliva, A., Wang, S., Tang, J. and Liu, H. (2017). Fake News Detection on Social Media. ACM

SIGKDD Explorations Newsletter, 19(1), pp.22-36.

Silverman, C. (2015). Lies, Damn Lies, and Viral Content: How News Websites Spread (and Debunk) Online Rumors, Unverified Claims and Misinformation. http://towcenter.org/research/lies-damn-lies-and-viralcontent.

Statista. (2020). Frequency of fake news on online news websites U.S. 2018 | Statista. [online] Available at: https://www.statista.com/statistics/649234/fake-news-exposure-usa/ [Accessed 8 Jan. 2020].

Stieglitz, S. and Dang-Xuan, L. (2013). Emotions and Information Diffusion in Social Media—

Sentiment of Microblogs and Sharing Behavior. Journal of Management Information Systems, 29(4), pp.217-248.

Tandoc, E., Lim, Z. and Ling, R. (2017). Defining “Fake News”. Digital Journalism, 6(2), pp.137-153.

Thompson, M. (2020). 10 questions to help you write better headlines. [online] Poynter. Available at: https://www.poynter.org/reporting-editing/2011/10-questions-to-help-you-write-better-headlines/ [Accessed 12 Jan. 2020].

Visentin, M., Pizzi, G. and Pichierri, M. (2019). Fake News, Real Problems for Brands: The Impact of Content Truthfulness and Source Credibility on consumers' Behavioral Intentions toward the Advertised Brands. Journal of Interactive Marketing, 45, pp.99-112.

What’s New in Publishing. (2020). Fake or not? Decoding the authenticity of online news. [online] Available at: https://whatsnewinpublishing.com/fake-or-not-decoding-the-authenticity-of-online-news [Accessed 8 Jan. 2020].


Appendix

Dataset              Fake  Real
Title                Yes   Yes
Text body            Yes   Yes
Likes on Facebook    Yes   No
Shares on Facebook   Yes   No

Table 3.1 Description of both datasets

Word      Fake news  Real news
Trump     0.019      0.025
Hillary   0.013      0.005
Clinton   0.012      0.007
Election  0.007      0.002
Video     0.006      0.001
News      0.005      0.002
War       0.005      0.001
FBI       0.005      0.001
Russia    0.005      0.001
Comment   0.005      0.0001

Table 3.2 The 10 most used words in fake compared to real news

Word 1  Word 2          Total  Probability  Prob. of word 1  Prob. of word 2
Trump   Trump           19746  0.00821      0.0247           0.0247
Trump   Donald          3427   0.00142      0.0247           0.00708
Trump   Melania         112    0.0000465    0.0247           0.000261
Trump   Tower           76     0.0000316    0.0247           0.000180
Trump   Ivanka          90     0.0000374    0.0247           0.000229
Trump   Supporter       60     0.0000249    0.0247           0.000180
Trump   Supporters      193    0.0000802    0.0247           0.000607
Trump   Jr              39     0.0000162    0.0247           0.000133
Trump   Administration  149    0.0000619    0.0247           0.000528
Trump   Leads           81     0.0000337    0.0247           0.000297

Table 3.3 Skipgrams of the word ‘Trump’ in fake news

Word 1  Word 2      Total  Probability  Prob. of word 1  Prob. of word 2
Trump   Trump       3197   0.00600      0.0189           0.0189
Trump   Donald      411    0.00722      0.0189           0.00387
Trump   Supporter   48     0.0000901    0.0189           0.000560
Trump   Supporters  69     0.000130     0.0189           0.000906
Trump   Anti        76     0.000143     0.0189           0.00145
Trump   Win         62     0.000116     0.0189           0.00126
Trump   Wins        37     0.0000696    0.0189           0.000786
Trump   Victory     44     0.0000826    0.0189           0.00105
Trump   President   81     0.000152     0.0189           0.00243
Trump   Vote        45     0.0000845    0.0189           0.00216

Table 3.4 Skipgrams of the word ‘Trump’ in real news


Word 1   Word 2         Total  Probability  Prob. word 1  Prob. word 2
Hillary  Hillary        2357   0.00442      0.0133        0.0133
Hillary  Clintons       99     0.000186     0.0133        0.00203
Hillary  Clinton        567    0.00106      0.0133        0.0120
Hillary  Investigation  63     0.000118     0.0133        0.00194
Hillary  Email          62     0.000116     0.0133        0.00212
Hillary  Campaign       60     0.000113     0.0133        0.00254
Hillary  Emails         41     0.000070     0.0133        0.00263
Hillary  FBI            72     0.000135     0.0133        0.00474
Hillary  Election       38     0.0000713    0.0133        0.00721
Hillary  Trump          82     0.000154     0.0133        0.0189

Table 3.5 Skipgrams of the word ‘Hillary’ in fake news

Word 1   Word 2    Total  Probability  Prob. word 1  Prob. word 2
Hillary  Hillary   4759   0.00198      0.00586       0.00586
Hillary  Clintons  391    0.000163     0.00586       0.00117
Hillary  Clinton   1869   0.000777     0.00586       0.00693
Hillary  Email     64     0.0000206    0.00586       0.000582
Hillary  Obama     59     0.0000245    0.00586       0.00420
Hillary  Campaign  88     0.0000366    0.00586       0.00199
Hillary  Vote      46     0.0000191    0.00586       0.00116
Hillary  FBI       42     0.0000175    0.00586       0.00111
Hillary  Sanders   40     0.0000166    0.00586       0.00116
Hillary  Donald    125    0.0000520    0.00586       0.00708

Table 3.6 Skipgrams of the word ‘Hillary’ in real news

Category      Number of words
Anger         1247
Anticipation  839
Disgust       1058
Fear          1476
Joy           689
Sadness       1191
Surprise      534
Trust         1231
Positive      2312
Negative      3324

Table 3.7 Distribution of words in the NRC lexicon

Category  Title

Anger Veterans day is typical .01% rogue state inversion to ‘honor’ duped military serving forever lie-started illegal wars of aggression. Truly thank veterans by demanding arrests of US ‘leaders’ who treasoned them to attack, invade, occupy victims of Empire

Anticipation Will Rasmea Odeh’s appeal expose Israeli prison torture in a US court?

Fear Comment on gold medalist wrestler gets vioent with police – all 7 cops choose not to engage in deadly force by Buck Rogers

Sadness BELGIUM: Iranian Muslim invader found guilty of drugging, repeatedly raping and threatening to kill a 15-year-old Belgian schoolgirl, GETS NO JAIL TIME!

Surprise “Violent Revolution If Trump Lets Them Down”: People Remain Poised for Angry Revolt

Trust I just lost all faith in our deeply corrupt legal system and in the Rule Of Law in the United States

Negative Flash Mob of Black Teens Assault Whites, Media Silent On If It’s A Hate Crime

Positive Joint Way Forward Deal Does Lead to Peace and Progress for Afghans

Table 3.8 Exemplary titles

Category      Sum   Weighted average
Anger         4624  0.094
Anticipation  4080  0.084
Disgust       2249  0.046
Fear          5895  0.120
Joy           2607  0.053
Sadness       3621  0.074
Surprise      4175  0.086
Trust         5727  0.117
Positive      7697  0.156
Negative      8106  0.166

Table 3.9 Weighted averages of NRC words in real news titles


Category      Sum   Weighted average
Anger         4222  0.092
Anticipation  4068  0.088
Disgust       2278  0.049
Fear          5512  0.120
Joy           2603  0.056
Sadness       3320  0.072
Surprise      3388  0.073
Trust         5344  0.116
Positive      7362  0.160
Negative      8009  0.174

Table 3.10 Weighted averages of NRC words in fake news titles

Measure   Real title  Fake title
Skewness  0.2159      1.0519
Kurtosis  3.1641      8.2445

Table 4.1 Skewness and kurtosis of distributions of title length

Measure   Real title  Fake title
Skewness  0.3628      1.0413
Kurtosis  4.1954      9.4800

Table 4.2 Skewness and kurtosis of distributions of word length

Measure   Words   Positive  Joy      Trust   Fear    Disgust  Negative
Skewness  1.0519  1.4690    1.4690   2.4837  1.6648  2.6144   1.4254
Kurtosis  8.2445  5.3715    10.4131  6.1502  6.4032  10.2257  5.3235

Table 4.3 Skewness and kurtosis of distributions of predictor variables

Hypothesis     β         P-value    T-value  Adj. R²
H3 - Words     0.12838   0.0000***  5.687    0.002584
H4 - Joy       0.064035  0.028**    2.198    0.0002994
H5 - Positive  0.01464   0.452      0.752    -3.39e-05
H6 - Trust     0.02087   0.341      0.952    -7.34e-06
H7 - Fear      0.03407   0.112      1.59     0.0001194
H8 - Disgust   0.057454  0.061*     1.874    0.0001962
H9 - Negative  0.02973   0.117      1.567    0.0001138
H10 - Sad      0.02555   0.331      0.972    0.000043
H11 - Anger    0.0279    0.238      1.179    0.000053

Table 4.4 Influence of sentiments on sharing behavior
