
Music in Movies: The Emotional Effects of Tempo and Attention

Author: Jaëlle-Laurence Günther
Student number: 12263915
Master's Thesis
Graduate School of Communication
Track: Entertainment Communication
University of Amsterdam
Under the supervision of dr. Marlies Klijn
Date of Completion: 31.01.2020


Abstract

The current study explored the effect of music accompanying a cinematic scene on emotions, and more specifically the role of the lyrics, the melody, and the speed of the music in shaping the viewer's emotional responses. It aimed at answering the following research question: "How are people emotionally affected by music in movies, and does this effect depend on whether the musical cue is fast vs. slow and whether the individual exposed pays more attention to the lyrics or the melody?" Through an online experiment, participants were assigned to one of five conditions (four experimental conditions and one control condition). They were asked to watch a movie scene which was accompanied by either a slow song or a fast song while focusing on either the lyrics or the melody. No significant differences between the conditions in the emotions reported after watching the scene were found.

Keywords: Music, Lyrics, Melody, Tempo, Speed, Emotions

Introduction

The emotions conveyed by a movie scene can vary amongst individuals, and this is something that intrigued scholars as early as the beginning of the 20th century. Filmmaker Lev Kuleshov (1899-1970) exposed participants to a scene with a neutral face, to which a different object was added each time. He found that viewers interpreted the emotions of the neutral face differently depending on the added object. What is now known as the Kuleshov effect refers to the addition of an emotional context in order to influence the viewer's interpretation of the facial expression depicted in a scene (Baranowski & Hecht, 2017). Baranowski and Hecht analysed the possible existence of an auditory Kuleshov effect. By adding neutral, happy, and sad music to scenes that depicted neutral, happy, and sad facial expressions, the researchers found that faces received more positive and happy ratings when they were accompanied by happy music than when they were accompanied by sad music. While the role of music in movies on the viewer's emotional response has not been widely studied by scholars, the evaluation of a movie character's emotions according to the music has been investigated by Bouhuys, Bloem, and Groothuis (1995). The researchers found that music can influence a viewer's interpretation of the emotional state of a protagonist. Another study found that music could influence the likeability of a movie character (Hoeckner, Wyatt, Decety, & Nusbaum, 2011). The studies mentioned above have all shown that music can influence the perception of a scene and its protagonists. Through an online experiment, the present research thus investigates the influence of music in a cinematic scene on the viewer's emotions. The aim of this research is to investigate whether the emotional responses of an individual exposed to a cinematic scene depend firstly on whether the scene is accompanied by a musical cue or not, but also on the speed of the music. Researchers have focused on the effects of tempo and have found that slow-paced music tended to be experienced in a sad way while fast songs elicited happier feelings (Gundlach, 1935; Hevner, 1937; Watson, 1942; Dalla Bella, Peretz, Rousseau, & Gosselin, 2001; Larsen & Stastny, 2011). Additionally, this research focuses on the attention paid to a specific musical feature in order to determine whether focusing on the lyrics or on the melody of a song influences one's emotional responses. The main research question is thus: "How are people emotionally affected by music in movies, and does this effect depend on whether the musical cue is fast vs. slow and whether the individual exposed pays more attention to the lyrics or the melody?"

In a study on the effects of music, Baumgartner, Esslen, and Jäncke (2006) exposed individuals to facial expressions depicting different emotions. When the pictures were presented accompanied by music that was consistent with the depicted emotions, the participants' psychophysiological responses (heart rate and respiration, for example) increased. Blood and Zatorre (2001) found that music activates neural systems of reward and emotion and brain areas that are similar to those activated by food, sex, and drugs. These results indicate that music can be remarkably powerful: although it is not a biological stimulus like sex or food, it can induce comparable neural activity.

As mentioned above, previous studies have investigated the role of music in psychophysiological responses and in perceived emotions, but none have focused on the effects of music in movies on the viewer's emotions when paying attention to one specific feature of the musical cue. Some researchers have also focused their attention on the influences of lyrics and melody on emotions (Ali & Peynircioğlu, 2006), but the emotional effects of these features in combination with moving pictures remain to be investigated. The film industry already widely uses music to enhance the emotional impact of moving pictures (Baumgartner, Lutz, Schmidt, & Jäncke, 2006), but additional knowledge on whether emotions are triggered in different ways depending on the musical feature the viewer pays attention to could be valuable for creating scenes that are even more affective and captivating.

Music is not only used in the film industry; it is also considered an effective tool in the marketing industry. Findings have suggested that happy music is linked to more positive moods and thus leads to higher purchase intentions (Alpert & Alpert, 1986, 1988; Bruner, 1990). Furthermore, Simpkins and Smith (1974) found that the source of an advertisement was evaluated to be more credible when the background music was judged to be compatible with the ad than when it was judged to be incompatible. Insight into the effects of music, lyrics, and melodies on mood, advertisement recall, and purchase intentions can thus be valuable not only to the film industry and the scientific fields of communication and cognitive science, but also to the marketing industry.

Theoretical Background

The role of music in movies and in combination with pictures has been studied by several scholars (Baumgartner et al., 2006; Juslin & Västfjäll, 2008; Holbrook, 2008; Langkjær, 2015; Baranowski & Hecht, 2017). It has been demonstrated that music influences underlying neural mechanisms that have an impact on emotions (Juslin & Västfjäll, 2008). A study on the power of music in movies found that participants exposed to a scene combined with emotional music reported feeling some emotions to a more extreme extent: respondents exposed to a scene combined with emotional music rated their feelings of fear and sadness as more negative and their feelings of happiness as more positive than subjects exposed to the moving pictures alone (Baumgartner et al., 2006).

Emotions

As defined by Dieckmann and Davidson (2018), the notion of "emotions englobes a network of terms, which includes affects, feelings, sentiments, moods and passions" (p. 30). In music psychology, two different types of emotions can be distinguished: induced emotions, which concern felt and experienced emotions, and perceived emotions, which are conveyed and observed (Daly et al., 2014). However, researchers do not all agree on the effects of music on emotions. Some argue that music does not yield spontaneous emotional responses, while others believe that emotions are responses that cannot be disguised nor feigned (Juslin & Västfjäll, 2008). In their study, Juslin and Västfjäll mention the lack of research carried out on understanding the psychological mechanisms that play a role in music-induced emotions. The importance of this aspect is underscored by Koelsch (2010) in his article on the "neural basis of music-evoked emotions". Not only can brain areas like the amygdala, the ventral striatum, or the hippocampus be activated by musical cues, but early exposure to music has also been found to have other effects on the brain, like the expansion of the areas dedicated to music processing (Peretz & Hébert, 2000; Pantev et al., 1998).

In their Handbook of Music and Emotion, Juslin and Sloboda (2011) present the complex relationship between music and emotions through various studies. They cite several researchers who have identified a clear influence of music on the emotions of the listener. Not only can music cause listeners to be touched or to get "chills", it can also stimulate genuine emotions. To date, the effects of music in movies on emotions have mainly been investigated with respect to feelings of fear and anxiety (Gabrielsson & Juslin, 1996; Juslin, 2000; Baumgartner et al., 2006). Previous research thus leads to the first hypothesis:

H1: Music accompanying a cinematic scene influences an individual’s emotional responses to that scene.

Speed of music

According to Rakowski (2001), music is about "understanding a musical theme". The theme is made up of structured sounds and the listener has to interpret and understand them. In this sense, the listener is free to interpret the sounds, meaning that this understanding can differ widely across individuals. The effects of tempo have been studied for several decades, and a number of researchers found that fast music was recognized as happier, while slow music elicited more solemn and negative feelings (Gundlach, 1935; Hevner, 1937; Watson, 1942). More recently, when analysing the influence of music on mixed emotions, Larsen and Stastny (2011) exposed participants to different music conditions and found that the speed of the music played a role. Participants spent more time reporting "sadness" when exposed to slow songs than when exposed to fast songs, and more time reporting "happiness" when exposed to fast songs than during slow songs. Additionally, respondents reported more mixed emotions when they listened to slow songs than when they were exposed to fast-paced music. In a study on the ability to determine the sad or happy connotation of a song, researchers found that music with a fast tempo elicited more happy feelings than music with a slow tempo (Dalla Bella et al., 2001). In the present study, it is therefore hypothesized that exposing a respondent to a sad scene accompanied by a slow song will lead to the reporting of more negative and sad emotions than exposing them to a sad scene accompanied by fast music.

H2a: The speed of the music accompanying a scene influences an individual’s emotional response.

H2b: A slow song will trigger more negative and sad emotions than a fast song.

Attention

Auditory attention concerns the ability to selectively focus on or ignore aspects of one's surrounding environment (Spence & Santangelo, 2010). Lyrics and melody are processed separately in the brain, and the distinction between these two processes has previously been investigated by researchers, who found that aphasic patients were better at producing a melody than at producing lyrics (Saito et al., 2012; Hébert, Racette, Gagnon, & Peretz, 2003; Racette, Bard, & Peretz, 2006). In this study, participants will specifically be asked to focus on one of the two features, and it is hypothesized that emotional responses will differ depending on which feature the respondent pays most attention to.

Melody and Lyrics

A melody is characterized by a sequence of notes which are arranged in a certain rhythm with varying or similar tones and pitches (Watt, 1924). Combining a thrilling or scary scene with a fitting melody or sound effects can strongly impact the emotional response to the scene (Baumgartner et al., 2006). A prime example of this is the pattern of two alternating notes constituting the thrilling soundtrack of the movie Jaws (1975).1 The music added an entire dimension to the movie's villain and amplified the feelings of fear and anxiety when watching the movie.


More recently, the effects of lyrics on learning a melody have been studied by Cohen, Pan, Stevenson, and McIver (2015). The researchers investigated the effects of non-native lyrics when learning the melody of a song. They found that listening to a song containing native lyrics facilitated the memorization of the melody, and that participants thus struggled to learn a melody when the lyrics were not in their native language. This study suggests that lyrics play an important role in music, as they influence the recall of a melody.

The text accompanying a melody is a feature that has been proven to impact individuals' thoughts and behaviours (Greitemeyer, 2009; Ruth, 2017; Guéguen, Jacob, & Lamy, 2010). The effects of song lyrics on behaviours have been researched by Greitemeyer (2009) in a study on songs with prosocial lyrics. The scholar found that exposing participants to songs with prosocial lyrics, compared to songs with neutral lyrics, had an influence on their prosocial thoughts and their helping behaviour: prosocial lyrics increased participants' helping behaviour and fostered their prosocial thoughts. Similarly, Ruth (2017) hypothesized that music with prosocial lyrics would have an effect on prosocial purchase intentions. The results revealed that individuals exposed to music with prosocial lyrics were more likely to show prosocial consumer behaviour (in this case: buying a fair trade coffee) than individuals exposed to neutral lyrics.

These findings do not only apply to prosocial behaviour. Another study focused on the effects lyrics might have on compliance to courtship requests (Guéguen, Jacob, & Lamy, 2010). The study exposed a number of women to music with romantic lyrics and another group of women to neutral lyrics. The researchers found that, when exposed to romantic lyrics, the participants were more likely to accept dating requests than the participants who heard a song with neutral lyrics.

When analysing the effects of lyrics on emotions, studies have mostly yielded mixed and contradictory results. Stratton and Zalanowski (1994) identified negative effects on mood and emotions when participants were exposed to sad lyrics, as well as when they were exposed to sad lyrics accompanied by the slow melody of a sad song. The same effect was observed when the melody was played in an upbeat tempo but the lyrics were sad. Additionally, they found that when coupling a sad and slow melody with positive and joyful lyrics, participants reported positive and pleasant feelings, thus concluding that lyrics affected emotions more strongly than melodies. On the other hand, Sousou (1997) found that exposing participants to a sad melody led to reports of a negative mood, while participants exposed to a happy melody rated their mood as positive, regardless of the lyrical cues present in the songs. Sousou (1997) thus concluded that mood and emotions were mostly affected by melodies and not lyrics. In an experiment on the difference between lyrics and melodies, Ali and Peynircioğlu (2006) found that melodies were more dominant than lyrics when it came to eliciting emotions. However, they found that lyrics enhanced the emotions of sad music and thus impacted negative emotions more strongly than melodies alone. Taking into account previous research, and since the present study focuses on the enhancement of negative emotions, the following hypotheses2 are drawn:

H3a: Paying attention to a specific musical feature (lyrics or melody) results in different emotional responses.

H3b: Individuals paying attention to the lyrics will be emotionally affected in a stronger way than individuals paying attention to the melody.

H4a: Individuals exposed to a slow song and paying attention to the lyrics will be emotionally impacted in the most negative way and participants exposed to a fast song and paying attention to the melody will be emotionally impacted in the least negative way.

H4b: Individuals exposed to a fast song and paying attention to the lyrics will be emotionally impacted in a more negative way than participants exposed to a slow song and paying attention to the melody.

2 See Table 2 in the Appendices for a visual representation of the conditions which are hypothesized to induce the most negative or most positive emotions in H4a and H4b.


Method

Cinematic Scene

The cinematic scene chosen for this experiment is the ending scene of the award-winning movie Call Me By Your Name (2017), directed by Luca Guadagnino. The scene3 depicts a boy, the main character Elio played by Timothée Chalamet, hanging up the phone and making his way through the dining room to go sit by the fireplace. The scene does not contain any dialogue and the main focus is on the boy. As he sits by the fireplace, the orange light of the flames reflects on his face and he starts to cry. This specific scene was chosen because fairly little happens in it and because it is a sad scene. There is no dialogue and there are few movements, which allowed the music to be a prominent aspect of the scene, not just a background accompaniment.

While the movie was added to online streaming platforms shortly before this experiment, 83.2% of the participants had not watched Call Me By Your Name before taking part in the experiment. Even though only a small portion of the participants had seen the movie (16.8%), an independent samples t-test revealed that people who had seen the movie before (M = 2.29, SD = .49) reported slightly stronger emotions after watching the video than the people who had never seen the movie (M = 2.09, SD = .50). This mean difference of .21 was statistically significant, t(212) = 2.27, p = .024, 95% CI [.03; .38], and constituted a small effect size, d = .31, meaning that having seen the movie before did influence one's reported emotions after watching the video, but this effect was rather small.
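As a hedged illustration of this familiarity check, the sketch below runs an independent samples t-test and computes a pooled-variance Cohen's d in Python. The column names (seen_before, mean_emotion) are hypothetical placeholders; the thesis reports SPSS-style output, so this is only an equivalent reconstruction, not the analysis script that was actually used.

```python
# Illustrative sketch (not the original analysis script): independent samples
# t-test plus Cohen's d for the "seen the movie before" comparison.
# Column names are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats


def familiarity_check(df: pd.DataFrame) -> None:
    seen = df.loc[df["seen_before"] == 1, "mean_emotion"]
    not_seen = df.loc[df["seen_before"] == 0, "mean_emotion"]

    # Independent samples t-test, equal variances assumed (SPSS default).
    t, p = stats.ttest_ind(seen, not_seen, equal_var=True)

    # Cohen's d based on the pooled standard deviation.
    n1, n2 = len(seen), len(not_seen)
    pooled_sd = np.sqrt(((n1 - 1) * seen.var(ddof=1) +
                         (n2 - 1) * not_seen.var(ddof=1)) / (n1 + n2 - 2))
    d = (seen.mean() - not_seen.mean()) / pooled_sd

    print(f"t({n1 + n2 - 2}) = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
```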


Musical Cues

As done in previous research discussed in the theoretical background (Baumgartner et al., 2006; Larsen & Stastny, 2011), it was decided to expose participants to two audiovisual conditions (and an additional mute control condition) which only differed with regard to the music, as the aim was to elucidate whether slow music and fast music in cinematic scenes had a different effect on individuals' emotional responses. The control condition was added in order to test whether music played a role in influencing the participants' felt emotions or not.

The two songs chosen for the experimental conditions are Visions of Gideon by Sufjan Stevens4 and It's Only (feat. Zyra) by Odesza [Odesza VIP Remix]5. The original song accompanying the ending scene of Call Me By Your Name is Visions of Gideon by Sufjan Stevens. The song is in an F minor key and has a speed of 75 BPM, and the lyrics are of a sad and negative nature. It was therefore decided to keep the original song, as it fit the characteristics of the desired stimulus nicely: the tempo is slow and the lyrics are sad. The song by Odesza is in an E minor key and has a speed of 120 BPM; its lyrics are also of a sad and negative nature. This song was chosen because it also has negative lyrics, but the speed of the melody is completely opposed to that of Visions of Gideon. The speed is different, but the lyrics are similar and both songs are in a minor key. These songs were chosen because their characteristics fit the conditions as closely as possible. In order to investigate the difference in emotional responses depending on the speed of the melody, these two songs were fitting, since tempo is the main characteristic that differs between the two melodies. These characteristics also support the distinction between the two musical features (lyrics or melody), as reporting less sad emotions when exposed to a sad scene and focusing on the melody, regardless of the sad lyrics, would corroborate the hypotheses of the present study.

4 https://www.youtube.com/watch?v=IDgR3FNlsUM 5 https://www.youtube.com/watch?v=-_z8ySSYsnE


Design and Participants

The design of this study is a 2x2 between-subjects factorial design with a fifth condition: the control condition.

Table 1. Factorial design of the experiment.

                                          Speed of Music in a Cinematic Scene
Musical Feature paid most Attention to    Slow Music (Visions of Gideon –          Fast Music (It's Only (feat. Zyra) –
                                          Sufjan Stevens)                          Odesza [Odesza VIP Remix])
Lyrics                                    Condition 1                              Condition 2
Melody                                    Condition 3                              Condition 4

The first independent variable is the Speed of Music in a Cinematic Scene. This factor has two levels characterized by the rhythm of the music, which is either slow or fast. The cinematic scene used for the experiment thus incorporated either a slow song or a fast song. As these songs accompanied a sad and emotional scene, the lyrics in both conditions focused on sad themes. The experimental conditions consisted of the ending scene from Call Me By Your Name, which was accompanied either by the song "Visions of Gideon" by Sufjan Stevens or by "It's Only" (feat. Zyra) by Odesza (Odesza VIP Remix). The control condition consisted of the same cinematic scene as in the other four conditions but without any soundtrack; the video was muted.

The second independent variable is the Musical Feature paid most Attention to. This variable was chosen in order to help understand whether emotions are triggered in a stronger manner when an individual pays more attention to one specific aspect of the music. This factor has two levels, depending on what feature an individual focuses most on: the lyrics or the melody. The manipulation was done by instructing the participants, before they watched the scene, to focus either on the lyrics or on the melody during their viewing. In the condition where participants were asked to focus on the lyrics, subtitles were added to make sure that the lyrics were clearly perceived and understood. In the mute control condition, the participants were asked to focus on the visual aspects of the video for the purpose of conducting a manipulation check.

The dependent variable tested in this experiment is the Emotional Response to a Cinematic Scene. In order to measure this, an adapted version of the Differential Emotions Scale (Izard, Dougherty, Bloxom, & Kotsch, 1974), of the Eight State Questionnaire (Curran & Cattell, 1976), and of the Dispositional Positive Emotions Scale (Shiota, Keltner, & John, 2006) was used. The participants were asked to self-report, on a 5-point Likert scale, to what extent they felt certain emotions after their viewing of the scene. The items included from the Differential Emotions Scale were: Sadness, Joy, Interest, Anger, Disgust, Fear, and Guilt. The included items from the Eight State Questionnaire were: Anxiety and Depression. The items included from the Dispositional Positive Emotions Scale were: Amusement, Contentment, and Compassion6. To make sure that participants' pre-existing emotional states did not influence their subsequently reported emotions, they were asked to indicate, before watching the video, how they were feeling by selecting the emoticon that matched their current mood (ranging from 1 Very Happy to 5 Very Sad). This variable was used as a moderator in the analyses, in order to control for pre-existing emotional states when testing the different hypotheses.

6 See Table 3 in the Appendices.

The participants were contacted through convenience and snowball sampling among the researcher's network. This method was chosen in order to obtain the required number of participants and to achieve a sample consisting of a wide age range. A total of 296 participants were recorded; however, 3 did not agree to take part in the experiment, 4 were not above the age of 18, and 5 reported not understanding English. Additionally, 70 respondents did not fully complete the online survey and thus had to be excluded. This left 214 valid participants whose answers were used for the analyses.
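The exclusion steps described above amount to a simple filtering routine. The sketch below shows one possible way to apply them in Python; the column names (consent, age, understands_english, finished) are hypothetical, since the thesis does not describe the data-preparation tooling that was used.

```python
# Illustrative sketch of the exclusion criteria described above.
# Column names are hypothetical.
import pandas as pd


def apply_exclusions(raw: pd.DataFrame) -> pd.DataFrame:
    df = raw.copy()
    df = df[df["consent"] == 1]               # 3 participants did not consent
    df = df[df["age"] >= 18]                  # 4 were below the age of 18
    df = df[df["understands_english"] == 1]   # 5 did not understand English
    df = df[df["finished"] == 1]              # 70 did not complete the survey
    return df                                 # 214 valid participants remained
```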

Procedure

An online experiment was administered through a survey, using the Qualtrics platform. This platform was chosen because it provides clear design options and because it is accessible to university students. After providing informed consent, participants were required to be above the age of 18 and to understand English to be eligible to take part in the study. For the purpose of conducting randomization checks, participants were asked to fill out their age and gender.

As described in the design section, respondents' pre-existing emotional states were determined. Participants were then randomly assigned to one of the five conditions:
1. scene with Visions of Gideon by Sufjan Stevens, accompanied by subtitles, with the task to focus on the lyrics;
2. scene with It's Only by Odesza, accompanied by subtitles, with the task to focus on the lyrics;
3. scene with Visions of Gideon by Sufjan Stevens, with the task to focus on the melody;
4. scene with It's Only by Odesza, with the task to focus on the melody;
5. mute scene (control condition), with the task to focus on the visual aspects in order to make sure that the participants were indeed watching the video.

To test whether being familiar with the movie impacted one's emotions, people were asked if they had already seen the movie from which the scene was taken. Subsequently, respondents had to indicate on a 5-point Likert scale to what extent they felt the 12 different emotional states selected from the three scales7. Finally, some manipulation checks were inserted in order to make sure that the participants had completed their tasks correctly. To do so, questions on the speed of the melody, the instruments present in the melody, and the sentences present in the lyrics were asked. Participants were also required to recognize elements that were present on the table in the scene. These four questions were asked to all the respondents, regardless of which condition they were in, allowing participants in the mute condition to select the option "I did not hear anything". This served as a manipulation check, as it helped to determine whether focusing on a particular task created difficulties when participants were asked to recognize elements attributed to the other tasks.

Results

Randomization checks

To check whether participants' age was comparable across the experimental conditions, a one-way ANOVA was conducted. This ANOVA had Condition (Sufjan Stevens song and focus on lyrics, Odesza song and focus on lyrics, Sufjan Stevens song and focus on melody, Odesza song and focus on melody, and mute control condition) as independent variable, and Age (18 through 85 years old) as dependent variable. The ANOVA showed that participants' mean age was not significantly different across conditions with different audio-visual exposures, F(4, 209) = .66, p = .619, partial η² = .01. This means that randomization of participants across conditions was successful in terms of participants' age. Second, to check the distribution of gender between conditions, a Chi-square test was conducted. There were no significant differences between conditions on gender, χ²(4) = .62, p = .961. Randomization of gender was thus successful.
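For readers who want to reproduce such randomization checks outside SPSS, the sketch below shows a one-way ANOVA on age and a chi-square test on gender across the five conditions using SciPy. The column names (condition, age, gender) are hypothetical; this is an illustration of the procedure, not the original analysis.

```python
# Illustrative sketch of the randomization checks: age (one-way ANOVA) and
# gender (chi-square test of independence) across the five conditions.
import pandas as pd
from scipy import stats


def randomization_checks(df: pd.DataFrame) -> None:
    # One-way ANOVA: is mean age comparable across conditions?
    age_groups = [group["age"].values for _, group in df.groupby("condition")]
    f_stat, p_age = stats.f_oneway(*age_groups)
    print(f"Age: F = {f_stat:.2f}, p = {p_age:.3f}")

    # Chi-square test: is gender distributed similarly across conditions?
    crosstab = pd.crosstab(df["gender"], df["condition"])
    chi2, p_gender, dof, _ = stats.chi2_contingency(crosstab)
    print(f"Gender: chi2({dof}) = {chi2:.2f}, p = {p_gender:.3f}")
```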

7 Differential Emotions Scale (Izard, Dougherty, Bloxom, & Kotsch, 1974), Eight State Questionnaire (Curran & Cattell, 1976), Dispositional Positive Emotions Scale (Shiota, Keltner, & John, 2006).


Manipulation checks

A manipulation check was conducted in order to determine whether the participants were aware of the condition they were in and whether they had paid attention to the specific features. To check if the participants were aware of the condition they were assigned to, a one-way ANOVA was conducted with the correct lyrics from the Sufjan Stevens song as the dependent variable and the five different conditions as the independent variable. The dependent variable assessed whether the participants recognized the correct lyrics from the song Visions of Gideon by Sufjan Stevens. A significant effect of Condition on the correctly recognized lyrics was found, F(4, 78) = 10.81, p < .001, partial η² = .36.

Post-hoc Bonferroni tests showed that the participants exposed to the condition in which they were asked to focus on the lyrics of the Sufjan Stevens song Visions of Gideon (M = 1.69, SD = .47) recognized the correct lyrics significantly more often than participants exposed to the condition in which they had to focus on the melody of the Sufjan Stevens song Visions of Gideon (M = 1.18, SD = .39, p < .001) and than participants exposed to the mute condition (M = 1.00, SD = .00, p < .001). The latter two conditions did not differ significantly, p = 1.000. This analysis reveals that the manipulation was successful, as the participants in the lyrics condition recognized the correct lyrics significantly more often than the participants in the melody condition and in the mute condition. Similarly, a univariate ANOVA was conducted with the correct lyrics from the Odesza song as the dependent variable and the five different conditions as the independent variable. In this case, the dependent variable assessed whether the participants recognized the correct lyrics from the song It's Only by Odesza. A significant effect of Condition on the correctly recognized lyrics was found, F(4, 61) = 3.03, p = .024, partial η² = .17. However, post-hoc Bonferroni tests did not show significant differences across the five conditions. The manipulation was thus only partially successful.
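A hedged sketch of this type of manipulation check is given below: an omnibus one-way ANOVA on lyric recognition followed by Bonferroni-corrected pairwise comparisons, built with SciPy and statsmodels. The column names (condition, lyrics_correct) are hypothetical, and the sketch is only an approximation of the SPSS procedure reported above.

```python
# Illustrative sketch: omnibus ANOVA on lyric recognition plus
# Bonferroni-corrected pairwise t-tests between conditions.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import MultiComparison


def manipulation_check(df: pd.DataFrame) -> None:
    groups = [g["lyrics_correct"].values for _, g in df.groupby("condition")]
    f_stat, p = stats.f_oneway(*groups)
    print(f"Omnibus ANOVA: F = {f_stat:.2f}, p = {p:.3f}")

    # All pairwise comparisons with a Bonferroni correction.
    mc = MultiComparison(df["lyrics_correct"], df["condition"])
    summary = mc.allpairtest(stats.ttest_ind, method="bonf")[0]
    print(summary)
```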

For the mute condition, participants were asked to focus on the visual aspects of the video. In order to verify this, participants were shown a list of objects and were asked to report which ones they remembered seeing on the table in the scene. To check whether the participants were aware of the condition they were in, a univariate ANOVA was conducted with the correct elements present on the table as dependent variable and the conditions as independent variable. The effect of Condition on the recognition of the elements present on the table did not prove to be statistically significant, F(4, 205) = 1.39, p = .240, partial η² = .03; this manipulation was thus not successful.

Reliability Analysis

Participants’ emotions were measured using twelve items, each answered on a 5-point Likert scale ranging from 1 (Not at all) to 5 (To a very great extent). An exploratory factor analysis with Direct Oblimin rotation indicated that the emotions scale was tridimensional (three components with Eigenvalue above 1.00, EigenvalueFactor1= 3.38, EigenvalueFactor2= 1.89,

EigenvalueFactor3= 1.44), explaining 55.84% of the variance in the twelve original items. The

12-item scale also proved to be reliable as indicated by a Cronbach’s Alpha of .75 (M= 2.12, SD= .50). Subsequently, a mean scale for Emotions was computed.
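The factor and reliability analyses can be approximated outside SPSS as well. The sketch below uses the factor_analyzer and pingouin packages (both assumed to be installed) to extract eigenvalues, compute Cronbach's alpha, and build the mean emotion scale; the item column names are hypothetical and the sketch is only an illustration of the procedure.

```python
# Illustrative sketch of the reliability analysis for the 12 emotion items.
# Item names are hypothetical; factor_analyzer and pingouin are assumed
# to be available.
import pandas as pd
import pingouin as pg
from factor_analyzer import FactorAnalyzer

EMOTION_ITEMS = ["interest", "joy", "sadness", "amusement", "anger",
                 "compassion", "fear", "guilt", "anxiety", "depression",
                 "contentment", "disgust"]


def reliability_analysis(df: pd.DataFrame) -> pd.Series:
    # Eigenvalues indicate how many components exceed the 1.00 criterion.
    fa = FactorAnalyzer(rotation="oblimin")
    fa.fit(df[EMOTION_ITEMS])
    eigenvalues, _ = fa.get_eigenvalues()
    print("Eigenvalues:", eigenvalues.round(2))

    # Internal consistency of the full 12-item scale.
    alpha, _ = pg.cronbach_alpha(data=df[EMOTION_ITEMS])
    print(f"Cronbach's alpha = {alpha:.2f}")

    # Mean scale used as the dependent variable in the main analyses.
    return df[EMOTION_ITEMS].mean(axis=1)
```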

In order to differentiate negative emotions from positive emotions, separate factor and reliability analyses were conducted to determine whether the scales were reliable. For negative emotions (Sadness, Anger, Fear, Guilt, Anxiety, Depression, and Disgust), the exploratory factor analysis with Direct Oblimin rotation indicated that the negative emotions scale was bidimensional (two components with an Eigenvalue above 1.00: Factor 1 = 2.95, Factor 2 = 1.15), explaining 58.72% of the variance in the original seven items. The 7-item scale proved to be reliable, as indicated by a Cronbach's Alpha of .76 (M = 1.97, SD = .62). Subsequently, a mean scale for Negative Emotions was computed.

The exploratory factor analysis with Direct Oblimin rotation for positive emotions (Interest, Joy, Amusement, Compassion, and Contentment) indicated that the positive emotions scale was bidimensional (two components with an Eigenvalue above 1.00: Factor 1 = 2.07, Factor 2 = 1.14), explaining 64.18% of the variance in the original five items. The 5-item scale proved to be reliable, as indicated by a Cronbach's Alpha of .61 (M = 2.33, SD = .61). Subsequently, a mean scale for Positive Emotions was computed.

To make sure that participants had no prior emotional states that would influence their reports, the respondents' feelings were measured prior to exposing them to the video. This was measured on a 5-point Likert scale ranging from 1 (Very Happy) to 5 (Very Sad). Participants were divided into two groups, people who felt happy vs. people who felt rather sad, on the basis of the median of the sample (Median = 2.00). The Happy group thereafter consisted of 178 people, and the Sad group consisted of 36 people. An independent samples t-test revealed that people who felt happy before watching the video (M = 2.11, SD = .50) reported slightly weaker emotions after watching the video than the people who felt sad (M = 2.19, SD = .50); however, the mean difference of .08 was not statistically significant, t(212) = -.85, p = .397, 95% CI [-.26; .10]. There was thus no statistically significant difference in the felt emotions reported after watching the video between the group of people who felt happy before exposure and the group who felt sad before watching the clip.
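The median split and the subsequent comparison of the pre-exposure mood groups follow the same t-test logic as the familiarity check; a hedged sketch is shown below. The column names (mood_pre, mean_emotion) are hypothetical.

```python
# Illustrative sketch: median split on pre-exposure mood (1 = Very Happy,
# 5 = Very Sad) and an independent samples t-test on the mean emotion scale.
import pandas as pd
from scipy import stats


def mood_split_check(df: pd.DataFrame) -> None:
    median_mood = df["mood_pre"].median()  # reported median = 2.00
    happy = df.loc[df["mood_pre"] <= median_mood, "mean_emotion"]
    sad = df.loc[df["mood_pre"] > median_mood, "mean_emotion"]

    t, p = stats.ttest_ind(happy, sad, equal_var=True)
    print(f"Happy n = {len(happy)}, Sad n = {len(sad)}, "
          f"t({len(happy) + len(sad) - 2}) = {t:.2f}, p = {p:.3f}")
```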

Hypothesis 1

For H1, a two-way ANOVA was conducted to determine whether the different exposure conditions had an effect on emotions while controlling for the emotional state reported prior to the exposure. A non-significant interaction effect was revealed, F(4, 204) = 1.05, p = .382, partial η² = .02. Simple effects analyses that compared exposure conditions separately for participants with happy and sad pre-existing emotional states were performed. For participants who felt happy before watching the video, the simple effects analysis revealed a non-significant difference in emotions between exposure to the mute video (M = 2.10, SD = .42), exposure to the lyrics condition with the Sufjan Stevens song (M = 1.96, SD = .41, p = 1.000), exposure to the lyrics condition with the Odesza song (M = 2.33, SD = .54, p = .636), exposure to the melody condition with the Sufjan Stevens song (M = 2.06, SD = .42, p = 1.000), and exposure to the melody condition with the Odesza song (M = 2.16, SD = .67, p = 1.000). For participants who felt rather sad before watching the video, there was no significant difference between exposure to the mute video (M = 2.07, SD = .47), exposure to the lyrics condition with the Sufjan Stevens song (M = 2.35, SD = .55, p = 1.000), exposure to the lyrics condition with the Odesza song (M = 2.17, SD = .74, p = 1.000), exposure to the melody condition with the Sufjan Stevens song (M = 2.02, SD = .55, p = 1.000), and exposure to the melody condition with the Odesza song (M = 2.28, SD = .32, p = 1.000). This analysis thus does not support the hypothesis that music accompanying a cinematic scene influences an individual's emotional responses, and H1 is therefore rejected.

Hypothesis 2a

For H2a, a two-way ANOVA was conducted to determine whether the speed of the music had an effect on emotions while controlling for the emotional state reported prior to the exposure. The conditions in which participants were asked to focus on the melody were analyzed. Similar to H1, a non-significant interaction effect was revealed, F(4, 204) = 1.05, p = .382, partial η² = .02. Simple effects analyses that compared exposure conditions separately for participants with happy and sad pre-existing emotional states were performed. As previously demonstrated by the independent samples t-test, there were no statistically significant differences across the conditions. This analysis does not support the hypothesis that the speed of music accompanying a scene influences an individual's emotional response, and H2a is thus rejected.

Hypothesis 2b

For H2b, a two-way ANOVA was conducted with the Mean Negative Emotions scale (consisting of Sadness, Anger, Fear, Guilt, Anxiety, Depression, and Disgust) as dependent variable and Condition and Emotions Prior to Exposure as independent variables. There was no statistically significant effect of the condition on the reported negative emotions, F(4, 204) = 1.06, p = .378, partial η² = .02. Simple effects analyses that compared exposure conditions separately for participants with happy and sad pre-existing emotional states were performed. As previously demonstrated by the independent samples t-test, there were no statistically significant differences across the conditions. This analysis does not support the hypothesis that a slow song will trigger more negative emotions than a fast song, and H2b is thus rejected.

Hypothesis 3a and 3b

In order to test whether paying attention to one specific feature (lyrics or melody) had an effect on the reported emotions, a two-way ANOVA was conducted. The effect of condition on emotions was thus analysed while controlling for pre-existing emotional state. No statistically significant effects of the audio-visual conditions on emotions were found, F(4, 204) = 1.05, p = .382, partial η² = .02. Simple effects analyses that compared exposure conditions separately for participants with happy and sad pre-existing emotional states were performed. As previously demonstrated by the independent samples t-test, there were no statistically significant differences across the conditions. The hypotheses that paying attention to a certain feature influences one's emotional response and that paying attention to the lyrics will impact one's emotions in a stronger way than paying attention to the melody are not supported. H3a and H3b are thus rejected.

Hypothesis 4a and 4b

In order to test whether some conditions would convey more negative emotions than others, a two-way ANOVA was conducted with the Mean Negative Emotions scale as dependent variable and Condition and Emotions Prior to Exposure as independent variables. No statistically significant effect of the conditions on the reported negative emotions was found, p = .378, partial η² = .02. Simple effects analyses that compared exposure conditions separately for participants with happy and sad pre-existing emotional states were performed. As previously demonstrated by the independent samples t-test, there were no statistically significant differences across the conditions. The hypothesis that focusing on the lyrics of the Sufjan Stevens song would lead to the reporting of the strongest negative feelings and that focusing on the melody of the Odesza song would influence an individual's negative emotions the least is not supported. Similarly, the hypothesis that focusing on the lyrics of the Odesza song would lead to the reporting of more negative emotions than focusing on the melody of the Sufjan Stevens song was not supported. H4a and H4b are thus rejected.

Conclusion & Discussion

This study explored the effects of music in movies on emotions, depending on the speed of the song and on the musical feature paid most attention to (lyrics or melody). Although previous studies supported the effects of slow and fast music on emotions (Dalla Bella et al., 2001; Larsen & Stastny, 2011), this study did not observe significant differences in the reporting of emotions when participants were exposed to fast and slow songs. Previous research reported mixed and contradictory findings on the effects of lyrics and melody on felt emotions. As with tempo, the present study did not find any significant differences in felt emotions when participants focused on the lyrics or on the melody of the songs.

The present study hypothesized that, similar to Stratton and Zalanowski's (1994) findings, lyrics would have a greater impact on the viewer's negative emotions than the melody alone. In line with Larsen and Stastny's (2011) results, this research also hypothesized that a slow tempo would elicit more negative emotions than a fast tempo. Taken together, it was expected that a fast song would elicit more positive emotions than a slow song, but that this would not be the case when an individual paid attention to the lyrics specifically. When paying attention to the lyrics, participants were expected to report stronger negative emotions, regardless of the speed of the melody. This was not the case, as none of the results proved to be statistically significant. By failing to demonstrate a significant difference between the different conditions, the present study adds to the body of existing mixed findings (Stratton & Zalanowski, 1994; Sousou, 1997; Ali & Peynircioğlu, 2006).

The non-significant results may be explained by the unsuccessful manipulations: the manipulations present in the different conditions did not all prove to be successful, and since the manipulations did not work properly, the final results were non-significant as well. This could be explained by various distractions which may have occurred in the participants' environment, or by the poor performance of the attention task, as presenting participants with both melody and lyrics might have been problematic when they were asked to focus on one aspect specifically. The literature review revealed contradictory results on the effects of lyrics and melody. It was not clear whether one feature influenced felt emotions in a stronger way than the other. A more thorough investigation of previous studies might have led to different hypotheses from which significant results could have derived.

For practical reasons, the experiment was administered via convenience and snowball sampling among the researcher's network. Although a wide age range was represented in the sample (18 through 85 years old), a majority of the participants were of European origin and had completed a high level of education. This does not allow the results to be generalized to a wider population, as the socio-demographic characteristics are not representative of the world's population. Furthermore, the chosen scene from the movie Call Me By Your Name was never pre-tested in order to determine whether it was perceived as a sad scene. A participant's pre-existing emotional state could influence their viewing of the scene, and it might thus not be perceived as a sad scene by all of the respondents. Pre-testing the scene in order to find out how it is perceived by participants might be valuable for future research. Additionally, the group of participants who reported feeling happy before watching the clip consisted of 178 people, while the group of participants who reported feeling sad consisted of only 36 people. In future research, it would be better to aim for an approximately equal number of participants in each category, in order to hopefully distinguish a clear and significant difference in the emotions reported after being exposed to the video.

Since the manipulation was only partially successful, a laboratory experiment might be more appropriate, as it would allow the researcher to control for external factors and thus to supervise the manipulation. With a larger available timeframe, it would be interesting to pre-test participants before the experiment in order to determine whether they already unconsciously pay more attention to one musical feature (lyrics or melody). During this experiment, participants were exposed to both melody and lyrics in all conditions (except the mute control condition) and were asked to focus on either the lyrics or the melody. Exposing some participants to the melody only (without any lyrics) and others to the lyrics only might also increase the chances of a successful manipulation.

The scale used to assess participants' pre-existing emotional states (before watching the video) was purposely different from the scale used to assess the felt emotions after the viewing of the scene. This choice was made in order to avoid biasing the viewing of the video by giving away too much information about the purpose of the experiment. However, measuring these pre-existing emotions in a more thorough manner (with a different and more exhaustive scale, for example) might provide more information for the subsequent analyses of the participants' emotions.

To summarize, no significant effects on emotions were found. The speed of the music as well as paying attention to either lyrics or melody did not result in statistically significant differences in the participants' reported emotions. The results were thus not in line with the hypotheses and these all had to be rejected. These non-significant results could be explained by an only partially successful manipulation, but also by the contradictory results of previous studies. A more precise analysis of the literature on this topic might have led to the formulation of different hypotheses, which could have led to statistically significant results. The present results therefore do not shed light on theory and do not have practical or societal implications. Further research on this topic should take the present study's limitations into consideration in pursuance of finding significant results which could lead to additional and valuable knowledge in this field.

References

Ali, S., & Peynircioğlu, Z. (2006). Songs and emotions: Are lyrics and melodies equal partners? Psychology of Music, 34(4), 511–534. https://doi.org/10.1177/0305735606067168

Alpert, J., & Alpert, M. (1986). The effects of music in advertising on mood and purchase intentions (Unpublished working paper 85/86, 5-4). University of Texas.

Alpert, J., & Alpert, M. (1988). Background music as an influence in consumer mood and advertising responses. Advances in Consumer Research, 16. Retrieved from http://search.proquest.com/docview/1293571841/

Baranowski, A., & Hecht, H. (2017). The auditory Kuleshov effect: Multisensory integration in movie editing. Perception, 46(5), 624–631. https://doi.org/10.1177/0301006616682754

Baumgartner, T., Esslen, M., & Jäncke, L. (2006). From emotion perception to emotion experience: Emotions evoked by pictures and classical music. International Journal of Psychophysiology, 60(1), 34–43. https://doi.org/10.1016/j.ijpsycho.2005.04.007

Baumgartner, T., Lutz, K., Schmidt, C., & Jäncke, L. (2006). The emotional power of music: How music enhances the feeling of affective pictures. Brain Research, 1075(1), 151–164. https://doi.org/10.1016/j.brainres.2005.12.065

Blood, A. J., & Zatorre, R. J. (2001). Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion. Proceedings of the National Academy of Sciences of the United States of America, 98(20), 11818–11823. https://doi.org/10.1073/pnas.191355898

Bouhuys, A., Bloem, G., & Groothuis, T. (1995). Induction of depressed and elated mood by music influences the perception of facial emotional expressions in healthy subjects. Journal of Affective Disorders, 33(4), 215–226. https://doi.org/10.1016/0165-0327(94)00092-N

Bruner, G. (1990). Music, mood, and marketing. Journal of Marketing, 54(4), 94–104. https://doi.org/10.2307/1251762

Cohen, A., Pan, B., Stevenson, L., McIver, A., Gudmundsdottir, H., & Cohen, A. (2015). Does non-native language influence learning a melody? A comparison of native English and native Chinese university students on the AIRS Test Battery of Singing Skills. Musicae Scientiae, 19(3), 301–324. https://doi.org/10.1177/1029864915599598

Curran, J. P., & Cattell, R. B. (1976). Manual for the Eight State Questionnaire. Champaign, IL: Institute for Personality and Ability Testing.

Dalla Bella, S., Peretz, I., Rousseau, L., & Gosselin, N. (2001). A developmental study of the affective value of tempo and mode in music. Cognition, 80(3), B1–B10. https://doi.org/10.1016/S0010-0277(00)00136-0

Daly, I., Malik, A., Hwang, F., Roesch, E., Weaver, J., Kirke, A., … Nasuto, S. (2014). Neural correlates of emotional responses to music: An EEG study. Neuroscience Letters, 573, 52–57. https://doi.org/10.1016/j.neulet.2014.05.003

Dieckmann, S., & Davidson, J. (2018). Emotions. Music and Arts in Action, 6(2). Retrieved from http://search.proquest.com/docview/2162996783/

Gabrielsson, A., & Juslin, P. (1996). Emotional expression in music performance: Between the performer's intention and the listener's experience. Psychology of Music, 24, 68–91. https://doi.org/10.1177/0305735696241007

Greitemeyer, T. (2009). Effects of songs with prosocial lyrics on prosocial thoughts, affect, and behavior. Journal of Experimental Social Psychology, 45(1), 186–190. https://doi.org/10.1016/j.jesp.2008.08.003

Guéguen, N., Jacob, C., & Lamy, L. (2010). "Love is in the air": Effects of songs with romantic lyrics on compliance with a courtship request. Psychology of Music, 38(3), 303–307. https://doi.org/10.1177/0305735609360428

Gundlach, R. (1935). Factors determining the characterization of musical phrases. The American Journal of Psychology, 47(4), 624–643. https://doi.org/10.2307/1416007

Günther, J.-L. (2019). Assignment A. Research Methods Tailored to the Thesis. Unpublished paper, University of Amsterdam.

Günther, J.-L. (2019). Assignment B. Research Methods Tailored to the Thesis. Unpublished paper, University of Amsterdam.

Hébert, S., Racette, A., Gagnon, L., & Peretz, I. (2003). Revisiting the dissociation between singing and speaking in expressive aphasia. Brain, 126(8), 1838–1850. https://doi.org/10.1093/brain/awg186

Hevner, K. (1937). The affective value of pitch and tempo in music. American Journal of Psychology, 49(4), 621–630. https://doi.org/10.2307/1416385

Hoeckner, B., Wyatt, E., Decety, J., & Nusbaum, H. (2011). Film music influences how viewers relate to movie characters. Psychology of Aesthetics, Creativity, and the Arts, 5(2), 146–153. https://doi.org/10.1037/a0021544

Holbrook, M. (2008). Music meanings in movies: The case of the crime-plus-jazz genre. Consumption Markets & Culture, 11(4), 307–327. https://doi.org/10.1080/10253860802391326

Izard, C. E., Dougherty, F. E., Bloxom, B. M., & Kotsch, W. E. (1974). The Differential Emotions Scale: A method of measuring the subjective experience of discrete emotions. Unpublished manuscript, Vanderbilt University, Nashville, TN.

Juslin, P. (2000). Cue utilization in communication of emotion in music performance: Relating performance to perception. Journal of Experimental Psychology: Human Perception and Performance, 26(6), 1797–1812. https://doi.org/10.1037/0096-1523.26.6.1797

Juslin, P., & Sloboda, J. (Eds.). (2011). Handbook of music and emotion: Theory, research, applications. Oxford University Press.

Juslin, P., & Västfjäll, D. (2008). Emotional responses to music: The need to consider underlying mechanisms. Behavioral and Brain Sciences, 31(5), 559–575. https://doi.org/10.1017/S0140525X08005293

Koelsch, S. (2010). Towards a neural basis of music-evoked emotions. Trends in Cognitive Sciences, 14(3), 131–137. https://doi.org/10.1016/j.tics.2010.01.002

Langkjær, B. (2015). Audiovisual styling and the film experience: Prospects for textual analysis and experimental approaches to understand the perception of sound and music in movies. Music and the Moving Image, 8, 35–47. https://doi.org/10.5406/musimoviimag.8.2.0035

Larsen, J., & Stastny, B. (2011). It's a bittersweet symphony: Simultaneously mixed emotional responses to music with conflicting cues. Emotion, 11(6), 1469–1473. https://doi.org/10.1037/a0024081

Pantev, C., Oostenveld, R., Engelien, A., Ross, B., Roberts, L. E., & Hoke, M. (1998). Increased auditory cortical representation in musicians. Nature, 392(6678), 811–814. https://doi.org/10.1038/33918

Peretz, I., & Hébert, S. (2000). Toward a biological account of music experience. Brain and Cognition, 42(1), 131–134. https://doi.org/10.1006/brcg.1999.1182

Racette, A., Bard, C., & Peretz, I. (2006). Making non-fluent aphasics speak: Sing along! Brain, 129(10), 2571–2584. https://doi.org/10.1093/brain/awl250

Rakowski, A. (2001). What is music? Musicae Scientiae, 5(2_suppl), 125–130. https://doi.org/10.1177/10298649010050S217

Ruth, N. (2017). "Heal the world": A field experiment on the effects of music with prosocial lyrics on prosocial behavior. Psychology of Music, 45(2), 298–304. https://doi.org/10.1177/0305735616652226

Saito, Y., Ishii, K., Sakuma, N., Kawasaki, K., Oda, K., Mizusawa, H., & Hashimoto, K. (2012). Neural substrates for semantic memory of familiar songs: Is there an interface between lyrics and melodies? PLoS ONE, 7(9), e46354. https://doi.org/10.1371/journal.pone.0046354

Shiota, M. N., Keltner, D., & John, O. P. (2006). Positive emotion dispositions differentially associated with Big Five personality and attachment style. The Journal of Positive Psychology, 1(2), 61–71. https://doi.org/10.1080/17439760500510833

Simpkins, J. D., & Smith, J. A. (1974). Effects of music on source evaluations. Journal of Broadcasting & Electronic Media, 18(3), 361–367. https://doi.org/10.1080/08838157409363749

Sousou, S. D. (1997). Effects of melody and lyrics on mood and memory. Perceptual and Motor Skills, 85(1), 31–40.

Spence, C., & Santangelo, V. (2010). Auditory attention. In Oxford Handbook of Auditory Science: Hearing. https://doi.org/10.1093/oxfordhb/9780199233557.013.0011

Stratton, V. N., & Zalanowski, A. H. (1994). Affective impact of music vs. lyrics. Empirical Studies of the Arts, 12(2), 173–184. https://doi.org/10.2190/35T0-U4DT-N09Q-LQHW

Watson, K. (1942). The nature and measurement of musical meanings. Psychological Monographs, 54(2), i–43. https://doi.org/10.1037/h0093496


Appendices

Table 2. Visual representation of the conditions which are hypothesized to induce the most negative or most positive emotions in H4a and H4b.

                                          Speed of Music in a Cinematic Scene
Musical Feature paid most Attention to    Slow Music (Visions of Gideon –          Fast Music (It's Only (feat. Zyra) –
                                          Sufjan Stevens)                          Odesza [Odesza VIP Remix])
Lyrics                                    - - - - - +

Table 3. Emotion items and response options.

Please indicate to what extent you feel the different emotions listed below on a scale of 1 (Not at all) to 5 (To a very great extent):

Response options: Not at all, To a small extent, To a moderate extent, To a great extent, To a very great extent.

Items: Interest, Joy, Sadness, Amusement, Anger, Compassion, Fear, Guilt, Anxiety, Depression, Contentment, Disgust.
