Expressiveness in music performance and performer discrimination: The role of timing and loudness


Academic year: 2021

Share "Expressiveness in music performance and performer discrimination: The role of timing and loudness"

Copied!
9
0
0

Bezig met laden.... (Bekijk nu de volledige tekst)

Hele tekst

(1)

Expressiveness in music performance and performer discrimination: The role of timing and loudness

Yasmin Mzayek

University of Amsterdam
Master's Brain and Cognitive Sciences
Research Project I

Supervisor: Carlos Vaquero
Co-assessor: Henkjan Honing

Institute of Logic, Language, and Computation (ILLC)
Music Cognition Group


Abstract

The recognition of expressive timing and loudness in music performance as idiosyncratic features of the performer is a growing area of research. Gingras et al. (2011) and Koren & Gingras (2011) have shown that listeners can recognize pieces played by the same performer using expressive features of the performer's playing style, such as tempo and articulation. The aim of this study is to extend these findings by assessing whether individuals can recognize or differentiate between performers playing the same piece, and to test whether this capability is associated with timing and loudness. The performance of musicians versus non-musicians was also examined. Participants listened to a pair of musical excerpts and were asked whether the performer was the same or different. The excerpts were either left unaltered, altered in timing (only loudness information available), or altered in loudness (only timing information available). The results show that the effect of loudness on response accuracy depends on whether the performer is the same or not: when the performer is the same, loudness information is associated with correct responses, whereas when the performer is different, the effect of loudness only trends towards significance. No difference was found between musicians and non-musicians. These findings suggest that expressive timing and loudness may affect the way individuals perceive and compare musical performances.


Introduction

Expressiveness in music performance has been studied extensively to understand how performers interpret and play pieces and how listeners perceive these performances. Expressiveness can be defined as deviation from the musical score (Gabrielsson, 1974), and it can be articulated in many ways through a performer's use of performance features. Two particularly salient expressive features are timing and loudness. The amount of timing and loudness deviation from the score has been shown to affect perceived expressiveness, with greater deviations, up to a point, communicating more expressiveness (Bhatara et al., 2011). Further, performers have been shown to use these expressive features differentially (Vaquero, 2015): in general, they are more consistent in their use of timing than of loudness.

Research has shown that performers use expressive features to mark phrase structure and important events for the listener (Repp, 1995a; Palmer, 2006). Listeners, in turn, use these features to segment and group musical pieces (Drake & Palmer, 1993). Performers' playing styles and use of expressive features aid listeners in recognizing pieces played by the same performer (Gingras et al., 2011; Koren & Gingras, 2011). Gingras et al. (2011) investigated performer individuality by testing how listeners grouped expressive and inexpressive performances played by two pianists, and found that listeners were better at grouping expressive performances together as being played by the same performer. Koren & Gingras (2014) carried out a performer individuality study using pieces played on the harpsichord and showed that tempo and timing were relevant features for grouping the performances and recognizing the performer. However, because the harpsichord limits how much performers can deviate in loudness, the role of loudness was left unclear.

The difference in performance between musicians and non-musicians has also been of interest in these performer individuality studies, and several studies have arrived at contradictory conclusions. Gingras et al. (2011) showed that the musical expertise of the listeners did not affect their accuracy in grouping performances played by the same pianists; instead, performer expertise was more important. Other studies, however, showed that musicians did do better than non-musicians in similar grouping tasks: two studies found that musicians are more accurate than non-musicians at grouping two performances as being played by the same performer (Koren & Gingras, 2011; Koren & Gingras, 2014). Thus, both musicians and non-musicians need to be considered in addition to assessing the role of expressive music performance features. This will elucidate a possible interaction between musical expertise and expressiveness in participants' ability to discriminate between performers.

By investigating the role of expressive timing and loudness, we can better understand how listeners can recognize and discriminate between performers. This can be done by controlling for either loudness or timing information while asking participants to discriminate between performers. It is expected that when timing deviations are present and loudness deviations are taken out, participants will do better at discriminating between performers. This is because studies have shown that timing is a salient expressive feature and is used more consistently by performers (Koren & Gingras, 2014; Vaquero, 2015). We will also take musical expertise into account as a factor that could affect accuracy and we expect that musicians will be better at discriminating between performers compared to non-musicians only when timing deviations are present. It is likely that musicians are more familiar with the classical excerpts used in this experiment and their training might help them better recognize the consistencies in timing of individual performers. Thus, the current study aims to show how expressive timing and loudness affect perceptual discrimination between music performers and to clarify the role of listener musical expertise.


Methods

Participants

Fifteen musicians and fifteen non-musicians participated in this psychoacoustic listening experiment. Musicians were defined as those who had completed at least 1.5 years of undergraduate study in music performance and/or had 5 years of continuous performance practice; participants who did not meet these criteria were grouped as non-musicians. Participants were recruited using flyers and through email correspondence and were compensated for their participation. The pieces used in this experiment were recorded by two professional pianists; each pianist recorded five different pieces using a MIDI piano.

Materials

Five Chopin excerpts were chosen as stimuli for this experiment. Each performer recorded each excerpt four times. Reaper (http://www.reaper.fm/), a digital audio application, was used to cut and edit the excerpts. All of the excerpts had a clear beginning and end, and their length ranged from 13 to 23 seconds. The MATLAB Musical Instrument Digital Interface (MIDI) Toolbox was used to manipulate the excerpts for each of the conditions by controlling for either timing or loudness. Timing was controlled for by replacing the MIDI note onset values (in milliseconds) of the actual performances with the values of the scores; this will be referred to as the loudness condition because only loudness was allowed to deviate. Loudness was controlled for by setting all the MIDI velocity values to a single value; this will be referred to as the timing condition because only timing was allowed to deviate. The normal condition consisted of unaltered performances in which both timing and loudness deviated according to how they were performed.

The MIDI files were converted to WAV in Reaper using the Neo Piano VST plugin, which samples a Yamaha C7 concert grand piano (http://www.supremepiano.com/product/piano1.html). The experiment was implemented in the open-source software PsychoPy (Peirce, 2009) and was run on a laptop with headphones in a quiet room.

Design

Pairs of excerpts were presented in the normal, timing, and loudness conditions as shown in Table 1. Each pair was performed either by the same performer or by two different performers. The pairs were presented in five blocks, each comprising six trials, and each participant listened to all five blocks. The same Chopin piece was used within a block. Excerpts were randomly picked and paired from the four different performances of each piece, and the same performance was never paired with itself. The order of pairs was also randomized within each block.

Table 1. Conditions of the six trials in each block

Same performer / same piece
• Normal vs. normal (a)
• Loudness only vs. loudness only
• Timing only vs. timing only

Different performer / same piece
• Normal vs. normal
• Loudness only vs. loudness only
• Timing only vs. timing only

(a) In the normal vs. normal condition where the performer is the same, the excerpts were randomly picked from the four performances that the performer made of the same piece. The same performance was never paired with itself.
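The stimulus manipulations and trial-pairing logic described above can be sketched as follows. This is a minimal illustration, not the experiment's actual code (which used the MATLAB MIDI Toolbox): the note representation, the fixed velocity of 64, and all names are assumptions.

```python
import random

# A note is represented here as a dict with an onset time (ms) and a
# MIDI velocity (1-127); this layout is an assumption for illustration.

def make_loudness_condition(performance, score_onsets):
    """Neutralize timing: performed onsets are replaced with the score's
    nominal onsets, so only loudness still deviates."""
    return [dict(note, onset=onset)
            for note, onset in zip(performance, score_onsets)]

def make_timing_condition(performance, fixed_velocity=64):
    """Neutralize loudness: every velocity is set to one fixed value
    (64 is an assumed choice), so only timing still deviates."""
    return [dict(note, velocity=fixed_velocity) for note in performance]

def pick_pair(same_performer, n_takes=4):
    """Pick two recordings of the current piece for one trial.

    A recording is identified as (performer, take).  Each performer
    recorded every piece four times, and the same recording is never
    paired with itself.
    """
    if same_performer:
        performer = random.choice(["A", "B"])
        take1, take2 = random.sample(range(n_takes), 2)
        return (performer, take1), (performer, take2)
    return ("A", random.randrange(n_takes)), ("B", random.randrange(n_takes))
```

In the loudness condition both excerpts of a pair would be passed through `make_loudness_condition`; in the timing condition through `make_timing_condition`; the normal condition leaves both untouched.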


Procedure

The experiment entailed listening to pairs of excerpts and discriminating between performers by answering a question. Before beginning the experiment, participants went through a short familiarization session in which they listened to examples of the excerpts and were told what each condition was, to help them get acquainted with the different conditions. After listening to a pair of excerpts, they were asked: "Do you think the pair of excerpts was played by the same performer or two different performers?" and recorded their answer by pressing the corresponding key on the keyboard. Participants read and signed an informed consent form before starting the experiment and answered a demographics questionnaire after completing the task.

Analysis

A generalized linear mixed-effects analysis (lme4; http://cran.r-project.org/web/packages/lme4/index.html) in the R statistical language (http://www.r-project.org/) was performed to assess the effect of expressiveness and musical expertise on performer discrimination ability. Timing, loudness, musical expertise, performer, and age were included as fixed effects; performer x timing, performer x loudness, and performer x musical expertise were included as interaction terms. Timing and loudness were either altered or normal, musical expertise was either musician or non-musician, and the performer was either the same or different. The outcome, performer discrimination ability, will be referred to as response accuracy and measures whether participants were able to correctly discriminate between performers or recognize the same performer. Participants and excerpts were included as random intercepts. Signal Detection Theory was used to separate participants' ability to discriminate between performers from noise and to understand their response patterns.

Results

The average age of the participants was 26.98 years (SD = 4.69), ranging from 20 to 40 years. The sample was balanced for males and females (see Table 2).

Table 2. Demographic data

Group          Number of participants   Mean age (SD)   Male (%)   Female (%)
All            30                       26.98 (4.69)    14 (47)    16 (53)
Musician       15                       28.29 (5.46)    6 (40)     9 (60)
Non-musician   15                       26.27 (3.77)    8 (53)     7 (47)

Overall, musicians and non-musicians performed below chance level (40% and 41% correct, respectively). There was no effect of sex or age on response accuracy. There was a significant effect of performer on response accuracy (β = -0.77, p = 0.006). The effect of loudness on response accuracy in the overall sample trended towards significance (β = -0.40, p = 0.095). There was a significant interaction between loudness and performer (β = 0.97, p = 0.004; see Table 3).


Table 4 shows the main effects assessed separately in the two performer conditions. When the performer is the same, the effect of loudness is significant (β = 0.56, p = 0.019), suggesting that loudness is associated with responding correctly: in the loudness condition, participants are better able to recognize that the performer is the same. When the performer is different, loudness appears to influence response accuracy, though the effect is only marginally significant (β = -0.41, p = 0.091); in this case, loudness is associated with responding incorrectly. Whether the performer is the same or different, being a musician has no effect on response accuracy.

The frequency of correct to incorrect answers is shown in Figure 1. The significant effect of loudness compared to timing when the performer is the same is illustrated by the difference in the proportions of correct to incorrect answers. When the performer is different, participants are more likely to answer correctly in both conditions; however, they do worse in the loudness condition.

Table 4. Generalized linear mixed-effects analysis of the effect of expressiveness and musical expertise on response accuracy in the performer-same vs. performer-different conditions

                     Estimate   SE     z-value   p-value
Performer same
  Loudness            0.56      0.24    2.35     0.019**
  Timing              0.11      0.23    0.47     0.641
  Musician            0.22      0.21    1.03     0.303
  Age                -0.00      0.02   -0.19     0.852
Performer different
  Loudness           -0.41      0.24   -1.69     0.091*
  Timing             -0.09      0.25   -0.37     0.711
  Musician           -0.09      0.22   -0.42     0.675
  Age                -0.04      0.02   -1.65     0.099*

Significance levels: <0.01 '***', <0.05 '**', <0.1 '*'

Table 3. Generalized linear mixed-effects analysis of the effect of expressiveness, performer, and musical expertise on response accuracy

                       Estimate   SE     z-value   p-value
Performer              -0.77      0.28   -2.76     0.006***
Loudness               -0.40      0.24   -1.67     0.095*
Timing                 -0.09      0.24   -0.37     0.715
Musician               -0.14      0.20   -0.68     0.496
Age                    -0.02      0.02   -1.37     0.171
Loudness x performer    0.97      0.34    2.85     0.004***
Timing x performer      0.20      0.34    0.59     0.556
Musician x performer    0.40      0.28    1.45     0.148

Significance levels: <0.01 '***', <0.05 '**', <0.1 '*'


When looking at individual pieces, we found a significant interaction of performer and loudness in piece 4 (β = 2.27, p = 0.003) and a significant interaction of musician and performer in piece 5 (β = 1.58, p = 0.027); no significant effects or interactions were seen in any of the other pieces. The effect of musical expertise is not significant when looking at subsets of the data where the performer is the same or different.

Finally, Signal Detection Theory (SDT) was used to further interpret the results and to detect any response bias. This approach is commonly used in discrimination tasks where task difficulty might lead to uncertainty. In this experiment, SDT showed that participants did not clearly discriminate between the performers (d' = 0.44). Participants were also biased in their responses: when in doubt, they were more likely to answer that the performers were different (β = 1.00).
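For reference, d' and the likelihood-ratio bias β can be computed from hit and false-alarm rates as in the standard SDT sketch below. Treating a "different" response to a different-performer pair as a hit is our assumption, and the rates in the example are illustrative, not the study's data.

```python
from math import exp
from scipy.stats import norm

def sdt_measures(hit_rate, false_alarm_rate):
    """Sensitivity d' and likelihood-ratio bias beta from response rates.

    hit_rate:         P(respond "different" | performers were different)
    false_alarm_rate: P(respond "different" | performer was the same)
    """
    z_hit = norm.ppf(hit_rate)        # inverse normal CDF of each rate
    z_fa = norm.ppf(false_alarm_rate)
    d_prime = z_hit - z_fa            # separation of signal and noise
    criterion = -(z_hit + z_fa) / 2   # decision criterion c
    beta = exp(d_prime * criterion)   # likelihood ratio at the criterion
    return d_prime, beta

# Illustrative rates only (not the study's data):
d_prime, beta = sdt_measures(0.7, 0.3)
```

For these symmetric example rates the criterion sits midway between the two distributions, so β = 1 and d' is about 1.05.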

Discussion

The first hypothesis was not supported by the results of this study: there was no effect of timing on participants' response accuracy. This can be explained by inconsistency in expressive timing deviations between the two musical excerpts being compared, which result from two different performances of the same piece by the same performer. If performers are not consistent in their use of timing deviations, then participants might conclude that two different performers are playing.

The results show that the effect of loudness on response accuracy is modified by whether the performer is the same or not. If performer is the same and only loudness is allowed to deviate, then participants are more likely to respond correctly. However, if performer is different within the same condition, then participants are more likely to respond incorrectly. This could be because loudness information does not vary enough to help participants discriminate between different performers.

Another explanation takes into consideration that, in the loudness condition, timing information is still present but altered to sound inexpressive. Thus, it is also possible that, if participants are paying more attention to timing, the similarly altered timing leads them to incorrectly respond that the performer is the same. This is also supported by the pattern of responses when the performer is different: in this case, participants more often respond incorrectly that the performer is the same, which could be the result of the similarly altered timing profiles of the two excerpts. This interpretation is merely speculative, but it would be supported by our first hypothesis.

Figure 1. Number of correct/incorrect responses in the timing-only vs. loudness-only conditions when the performer is the same (left) and when the performer is different (right)

Concerning the proportion of correct answers when the performer is the same compared to when the performers are different, the results showed that participants answered correctly about as frequently in both conditions. This contradicts the findings of Gingras et al. (2011), which showed that musical excerpts are more likely to be grouped together correctly when they are performed by the same performer. The results of this study suggest that this may be task dependent.

Musicians were expected to perform significantly better than non-musicians in the timing condition, but this was not observed. A small effect of musical expertise on response accuracy was found that might be modified by performer: when the performer is the same, musicians are marginally more likely than non-musicians to respond correctly. However, there was no effect when the performer is different, though age might have had a small influence on response accuracy. Because most of the musicians were older than the non-musicians, it is possible that assessing the number of years of experience of the participants categorized as musicians would yield a stronger effect.

Familiarity with the excerpts can also play a role. Honing & Ladinig (2006) showed that familiarity helps with discriminating between unaltered and tempo-transformed pieces. It is not clear how this finding directly relates to discriminating between performers, however it is likely that listeners familiar with a piece can better pick up on the expressive features and, thus, may be better able to pay attention to timing and loudness. A focus of future studies can be on the amount of experience in musical performance that the listeners have had and their familiarity with the musical stimuli.

Piece effects were also seen when analyzing the different excerpts individually. This is supported by Koren & Gingras (2014), who showed that participants perform differently in the same task depending on the piece being used. This suggests that listeners' ability to discriminate between performers on the basis of timing and loudness is partly affected by other features inherent in the piece itself. It may also be that the way pianists perform a piece, and how they use expressive features, is affected by the pieces they perform.

Finally, the SDT results make it difficult to draw any firm conclusions. Several steps can be taken to improve on the current experiment. Firstly, the task could be made easier in order to reduce the noise evident from the SDT analysis. This could be done by using the exact same performance when the performer is the same, rather than randomly picking from four different performances of the same piece. Though less ecologically valid, this would serve as a better control to show that participants can indeed group the same performance as being played by the same performer. Another approach would be to instruct the pianists to play as consistently as possible.

Secondly, the sound plugin used to convert the MIDI files to WAV did not reproduce the exact sound that the pianists heard when recording the pieces. This could have affected the way the pianists played while recording in a way that is not reflected in the sound that the participants heard. Thirdly, the requirements for classifying participants as musicians could be made stricter.

The results from this experiment suggest that there may be an effect of expressive timing and loudness on the ability to discriminate between performers, and they yield several interpretations. However, it is difficult to draw strong conclusions from these interpretations. This study provides a platform for future studies to build on in order to clarify the role of both of these expressive features in performer discrimination.


References

Bates, D., Maechler, M., Bolker, B., Walker, S. (2015). Fitting Linear Mixed-Effects Models Using lme4. Journal of Statistical Software, 67(1), 1-48. doi:10.18637/jss.v067.i01.

Bhatara, A., Tirovolas, A. K., Duan, L. M., Levy, B., & Levitin, D. J. (2011). Perception of emotional expression in musical performance. Journal of Experimental Psychology: Human Perception and Performance, 37(3), 921.

Drake, C., & Palmer, C. (1993). Accent structures in music performance. Music Perception, 343-378.

Eerola, T., & Toiviainen, P. (2004). MIDI Toolbox: MATLAB Tools for Music Research. University of Jyväskylä: Kopijyvä, Jyväskylä, Finland. Available at http://www.jyu.fi/musica/miditoolbox/.

Gabrielsson, A. (1974). Performance of rhythm patterns. Scandinavian Journal of Psychology, 15(1), 63– 72.

Gingras, B., Lagrandeur-Ponce, T., Giordano, B. L., & McAdams, S. (2011). Perceiving musical individuality: Performer identification is dependent on performer expertise and expressiveness, but not on listener expertise. Perception, 40(10), 1206-1220.

Honing, H., & Ladinig, O. (2006). The effect of exposure and expertise on timing judgments in music: Preliminary results. 9th International Conference on Music Perception and Cognition.

Koren, R., & Gingras, B. (2011). Perceiving individuality in musical performance: Recognizing harpsichordists playing different pieces. In A. Williamon, D. Edwards, & L. Bartel (Eds.), Proceedings of the International Symposium on Performance Science 2011 (pp. 473-478). Utrecht: European Association of Conservatoires.

Koren, R., & Gingras, B. (2014). Perceiving individuality in harpsichord performance. Frontiers in Psychology, 5, 1-13.

Palmer, C. (2006). The nature of memory for music performance skills. Music, motor control and the brain, 39-53.

Peirce, J.W. (2009) Generating stimuli for neuroscience using PsychoPy. Frontiers in Neuroinformatics, 2:10.

R Core Team (2016). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL https://www.R-project.org/.

Reaper v5.15. http://www.reaper.fm/

Repp, B. H. (1995a). Detectability of duration and intensity increments in melody tones: A partial connection between music perception and performance. Perception & Psychophysics, 57(8), 1217-1232.

Sound Magic. http://www.supremepiano.com/product/piano1.html

Vaquero, C. (2015). A quantitative study of seven historically informed performances of Bach's bwv1007 Prelude. Early Music, 43(4), 611-622.
