The Multidimensional Role of Timbre in Musical Stimuli for Beat Perception Studies


The Multidimensional Role of Timbre in Musical Stimuli for Beat Perception Studies

Olivier Teerling

Henkjan Honing & Fleur Bouwer

University of Amsterdam

29-03-2020

Author Note

Olivier J. Teerling, student in the Master's programme Brain and Cognitive Sciences at the Institute of Interdisciplinary Sciences of the University of Amsterdam. Literature thesis written under the supervision of Henkjan Honing and Fleur Bouwer. Final version handed in on the 29th of March 2020.


Abstract

Beat perception has been studied for over three decades. Studies often use different kinds of musical stimuli, which makes comparing their results not always straightforward. It has been shown that beat perception is affected by the exact composition of musical stimuli, and that the distinct musical aspects composing these stimuli, like pitch, meter and rhythm (complexity), are interrelated and often covary. One musical aspect, however, is often overlooked: timbre. The current review takes a critical look at old and new research that demonstrates a possible role of timbre in beat perception. The role of timbre in beat perception has rarely been studied, even though timbre is present in every audible sound. Timbre represents the 'sound color' of a sound and is defined as a complex group of auditory perceptual attributes that contribute to the recognition and identification of a sound. Next to this perceptual definition, timbre is also defined by a set of acoustic correlates, which relate to the frequency spectrum of the sound. As timbre is not easy to define, it has long been assumed that changing timbres does not affect beat perception. This assumption has not been subjected to much direct testing, and this review therefore unifies diverse streams of behavioral and neurological evidence to elucidate the potential role of timbre in beat perception. It turns out that while a change in beat perception is often attributed to one distinct musical aspect, this change often relates to general changes in the frequency spectrum of the sound, which is a characteristic of musical timbre. Moreover, correct judgements of beat and tempo seem to depend more on lower than on higher frequency ranges, often represented by specific instruments like the bass drum. Finally, an extensive body of literature is addressed that demonstrates neural overlap between brain areas that process beat perception and timbre. This overlap is mostly anatomical, and future research should combine neuroscientific methods to investigate whether functional overlap also exists and whether it can explain perceptual effects of timbre on beat perception.


The Multidimensional Role of Timbre in Musical Stimuli for Beat Perception Studies

For humans, in contrast to the majority of animals, music plays a major role in daily life. Music has been shown to occur in many societies and to serve similar types of behavior (Mehr et al., 2019; Stevens, Tardieu, Dunbar-Hall, Best, & Tillmann, 2013). The earliest forms of music stem from over 42,000 years ago (Higham et al., 2012). Music has been studied for over four decades and is still an interesting topic of debate. Musical sounds have the tendency to recur in a regular temporal pattern (Grahn, 2012; Nguyen, Gibbings, & Grahn, 2018; Simon & Sumner, 1968). This phenomenon of isochronous, metrically salient points in time is termed 'the beat'. As studies have shown that the beat does not need to be physically present, it is generally thought that humans are able to internally generate it while listening to music (Grahn, 2009; Henry, Herrmann, & Grahn, 2017; Merchant, Grahn, Trainor, Rohrmeier, & Fitch, 2015). However, beat perception studies have used many different stimuli to investigate the sense of beat, which makes generalizing their results difficult. Some used real songs (London, Burger, et al., 2019), which makes the translation of experimental results to the real world more reliable, while others used more synthetic stimuli (Bouwer, Werner, Knetemann, & Honing, 2016), which makes controlling the tested variables easier. Studies have shown that the distinct musical aspects that make up these stimuli, like rhythm (Bouwer, Burgoyne, Odijk, Honing, & Grahn, 2018; Henry et al., 2017; Povel & Essens, 1985), tempo (Cameron & Grahn, 2014; Apel, 1970), meter (Grahn, 2012; Henry et al., 2017; Witek, Clarke, Kringelbach, & Vuust, 2014), and pitch (Boltz, 2011; Hove, Marie, Bruce, & Trainor, 2014) affect beat perception. Hove, Marie, Bruce, and Trainor (2014), for example, found that the success of tapping along to a musical piece depended more on the lower than on the higher pitch frequencies. However, while the relationship of the aforementioned musical aspects with beat perception is relatively well studied, this relationship remains unclear for timbre, a musical aspect that is so far rather understudied, yet present in almost every musical stimulus.

The current literature review will therefore address the question whether the timbre of musical stimuli can affect beat perception. Second, this review will critically examine why timbre has been underrepresented in the literature as a possible musical aspect that can affect beat perception.

In order to answer these questions, this literature review will briefly explain the concept of beat perception by discussing recent advances in the field, and will then dive into the exact definition and terminology of timbre and timbre research, both on a psycho-acoustic and an acousto-analytical level. This review will explore the old and new literature that does exist on the potential relationship between timbre and beat perception, and it will discuss studies that addressed the relationship between beat perception and other musical aspects, like pitch and tempo, as timbre often plays an (unrecognized) role in these studies. Finally, this review will look thoroughly at where in the brain timbre and beat perception are processed, to see if their neural underpinnings show overlap. By looking at the role of timbre in beat perception from these different angles, a clear overview can be given of the potential role of timbre in beat perception studies. If there are reasons to believe that timbre affects beat perception, it should be better controlled for when designing stimuli for beat perception studies.

Beat perception: a concise overview

Beat perception is the perception of a recurring regular temporal pattern in music, which can be felt even though there are no physical temporal accents that emphasize the presence of the beat, nor is it necessary that the onset of a sound aligns with the onset of the beat (Bouwer et al., 2016; Grahn, 2009; Winkler, Háden, Ladinig, Sziller, & Honing, 2009a). Research so far therefore suggests that 'the beat' can be internally generated (Grahn & Watson, 2013). This trait seems highly developed in humans compared to animals, even though some animals do show signs of beat perception (Hattori & Tomonaga, 2020; Merchant et al., 2015; Wilson & Cook, 2016). In order to properly compare the results of different beat perception studies, it is important to ask which variables have been altered and which have been kept the same. With music, however, this is sometimes difficult to achieve, as music consists of many aspects that can be individually varied. These include, amongst others, the number of instruments (Wallmark, Iacoboni, Deblieck, & Kendall, 2018), pitch (Boltz, 2011), rhythm (Bouwer et al., 2018), metrical structure (Witek et al., 2014), and tempo (London, Burger, et al., 2019; London, Burger, Thompson, & Toiviainen, 2016), all of which can be present in musical stimuli and can all be varied. The relationship of many of these musical aspects with beat perception has been studied, but the study of timbre and beat perception remains scarce (Siedenburg et al., 2019). Timbre is a more difficult musical attribute to define, as differences in timbre, in contrast to differences in pitch or loudness, cannot be described by one single scale. Over the last 40 years of music research this has led to an underrepresentation of data on timbre, while there is little reason to believe that it is a less important musical feature to study (Plomp, 1970; McAdams & Bregman, 1979; Clarkson, Clifton, & Perris, 1988).

Timbre: What is it and how to study?

To understand why it is difficult to study timbre, it is important to ask the question: what is timbre exactly? Different definitions are used to describe timbre. Some call it the 'sound color' (Slawson, 1985), but the most general definition of timbre, also accepted by the Acoustical Society of America (ASA), is: 'That attribute of auditory sensation in terms of which a listener can judge that two sounds similarly presented and having the same loudness and pitch are dissimilar' (Smalley, 1994). However, Smalley criticized this definition as technically incorrect, and because it does not imply that there should be a source that produces the sound (Smalley, 1994). A better definition, according to Smalley, would be that timbre is: 'A general, sonic physiognomy whose spectromorphological ensemble permits the attribution of an identity.' This is in line with the latest ideas on timbre by the most prominent researchers in the field (Siedenburg et al., 2019). The many definitions of timbre do, however, make studying timbre a difficult matter, and to reflect the awkward position of timbre in music research, McAdams once even called it 'the psycho-acoustician's multidimensional waste-basket category for everything that cannot be labeled pitch or loudness'. So where do we find timbre and how do we measure it?

Every group of instruments has its own timbre, and within those groups every instrument has its own unique timbre. Timbre is therefore present in every musical study that is performed, as every individual sound or mixture of sounds has its unique timbre. Timbre is thus a perceptual construct of a certain blended auditory event, either produced by a single acoustic sound source (monophonic timbre) or by multiple acoustic sound sources (polyphonic timbre) (Alluri, 2018). As timbre covaries with other musical aspects like pitch (Allen & Oxenham, 2014; Boltz, 2011), loudness (Alluri & Kadiri, 2019; Reiterer, Erb, Grodd, & Wildgruber, 2008) and accentuation (Halpern & Mullensiefen, 2008), even a single instrument can have a whole palette of timbres. Timbre is a rather abstract term for a collection of concrete perceptual characteristics, which has been studied for almost 150 years already (Helmholtz, 1877; Plomp, 1970; Wessel, 1973). McAdams and Siedenburg (2019) recently described two widely accepted characteristics of timbre that contribute to the understanding of sound and music. Firstly, timbre is a multifarious set of perceptual attributes that are used to describe a sound. These attributes are used in psycho-acoustic questionnaires, in which sounds are for example described as more or less bright, full, rough or active, as sharp or rather soft in attack, or in many other ways. Secondly, timbre is the main characteristic used to recognize, identify and thus categorize sound sources.


On the other hand, timbre is also described in a more acousto-analytical way. To this end, independent acoustic correlates have been determined (Grey & Gordon, 1978; Kendall et al., 1999; McAdams et al., 1995). The general name for these correlates is audio descriptors. Audio descriptors are physical attributes of sound that can be derived from the audio signal and that describe aspects of the frequency spectrum of the sound (McAdams & Siedenburg, 2019). While reading the literature it became clear to me that different terms exist for the same phenomenon. Therefore I deem it essential to give a good overview of the terminology used in timbre and beat perception research. McAdams and Siedenburg (2019) recently described the four most prominent acoustic correlates. I will sum these up and include a couple of other important audio descriptors. Besides, I will show how these timbral audio descriptors relate to the aforementioned perceptual attributes of timbre.

One of the most used audio descriptors is the spectral centroid, which marks the centre of gravity of the spectrum: the amplitude-weighted average of the low and high frequencies that make up the sound. The temporal change of the spectral centroid seems to be related to the perception of the material of struck objects (McAdams & Giordano, 2008). Spectral brightness (or nasality/sharpness), which describes the clarity of the sound, is widely accepted as a perceptual attribute of the spectral centroid (Alluri & Toiviainen, 2009; Wallmark et al., 2018; McAdams & Siedenburg, 2019). The link between brightness and spectral centroid seems to reside in the number of upper partials and the energy state of the sound (Alluri, 2018): sounds with more upper partials are perceived as brighter, just as sounds with higher energy (Alluri & Toiviainen, 2009; Duke, 1990; Alluri, 2018). A second, often used spectral feature is the logarithm of the attack time. This feature describes the time needed for the volume to reach its maximum (McAdams, Winsberg, Donnadieu, De Soete, & Krimphoff, 1995; McAdams & Siedenburg, 2019). It distinguishes more abrupt instruments that are plucked or struck from smoother, more enduring instruments like horns, so vital information for instrument identification is present in the logarithm of the attack time (Saldanha & Corso, 1964; McAdams & Siedenburg, 2019). Spectral flatness and timbral complexity are two terms used to describe the same phenomenon. This spectral feature contains information about the flatness and the spread of the spectrogram (Wiener entropy). Generally, it shows how tonal, in contrast to how noise-like, the sound is. Another important spectral feature is the spectral flux, which describes how much the spectral shape evolves over a tone's duration. This is often measured over different frequency sub-bands, as every instrument is more present in certain parts of the frequency spectrum (Burger, Thompson, Luck, Saarikallio, & Toiviainen, 2013). Especially lower-frequency spectral flux has been associated with rhythm, with perceptual features like brightness (Burger et al., 2013), and with tempo judgements of music (London, Burger, et al., 2019). Fullness and activity, two perceptual attributes often used in music studies to describe timbre, are associated with the spectral flux of the lower and middle end of the spectrum, respectively (Alluri & Kadiri, 2019). Lastly, researchers sometimes address a spectral feature called the zero-crossing rate, which describes how often the waveform crosses the zero line. This feature is used a lot to discriminate between percussive timbres (Gouyon, 2000), but also for other instruments (Banchhor & Khan, 2012). It needs to be said that many other spectral features exist, like inharmonicity, high-frequency energy and spectral deviation, but those will not be addressed in this review.
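To make these descriptors concrete, the sketch below shows one way they could be computed from a stimulus sound file. This is a minimal illustration assuming Python with the numpy and librosa libraries; the file name is hypothetical, and the log-attack-time estimate is a deliberately simplified stand-in for the descriptor described above.

```python
import numpy as np
import librosa

# Load a (hypothetical) stimulus at its native sampling rate.
y, sr = librosa.load("stimulus.wav", sr=None)

# Spectral centroid: amplitude-weighted mean frequency per analysis frame.
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)

# Spectral flatness (Wiener entropy): close to 1 for noise-like frames,
# close to 0 for tonal frames.
flatness = librosa.feature.spectral_flatness(y=y)

# Zero-crossing rate: how often the waveform crosses the zero line.
zcr = librosa.feature.zero_crossing_rate(y)

# Spectral flux: frame-to-frame change of the magnitude spectrum
# (half-wave rectified, so only increases in energy are counted).
S = np.abs(librosa.stft(y))
flux = np.sqrt((np.maximum(np.diff(S, axis=1), 0.0) ** 2).sum(axis=0))

# Log attack time (simplified): time for the amplitude envelope to rise
# from 10% to 90% of its maximum, on a log10 scale.
envelope = np.abs(y)
peak = envelope.max()
t10 = np.argmax(envelope >= 0.1 * peak) / sr
t90 = np.argmax(envelope >= 0.9 * peak) / sr
log_attack_time = np.log10(max(t90 - t10, 1e-6))
```

Values like these can then be compared across stimuli, for example to check whether two rhythms that are meant to differ only in rhythmic structure also differ in their timbral descriptors.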

The above sections gave an overview of the definition of timbre and of the psycho-acoustic and technical terminology used in timbre studies. With this knowledge in hand, the next section will turn to old and new studies on beat perception, timbre and their relation. This section will also address studies on the relationship between timbre and other musical features, like pitch, as well as studies on the effect of timbre on tempo estimation, which can reveal possible effects of timbre on beat perception.


Timbre & beat perception: The importance of the musical stimulus

As mentioned earlier, the stimuli used in music studies, and especially beat perception studies, consist of many distinct musical aspects that can all be individually varied. These aspects include, amongst others, tempo, pitch, melody, loudness, rhythm and, the aspect this review is focusing on, timbre. In controlled scientific paradigms, an effect is typically tested by manipulating a single independent variable while keeping every other variable the same between conditions. A change in the result can then be attributed to the change in that variable. In music studies, however, this golden rule of science is not that easy to apply. This section will explain why, and it will describe the pitfalls and difficulties of 'designing' musical stimuli in the context of beat perception. It will especially highlight how timbre plays an important, yet unrecognized role in this.

Musical stimuli come in many different forms. Some studies use real songs (London, Burger, et al., 2019), but often stimuli are synthetic and produced by a computer. Piano melodies or general noise are often used (Boltz, 2011), but for beat perception studies the most used stimuli are computer-produced drum or percussion rhythms (Bouwer et al., 2016; Lenc, Keller, Varlet, & Nozaradan, 2018a). One study that looked at beat perception used drum rhythms in which individual notes, either the bass drum, hi-hat, snare or a combination of these, were omitted (Winkler et al., 2009a). This allowed the researchers to change only minor parts of the rhythm and therefore to relate significant differences in beat perception to these changes in rhythm. These 'omissions' were expected to be more or less salient depending on where in the rhythm they happened. Omission studies often make use of a specific component of the event-related potential (ERP): the mismatch negativity (MMN), a response that signals a violation of sensory expectancy. The study by Winkler, Háden, Ladinig, Sziller and Honing (2009) was conducted with fourteen 37-40 week old infants. As the infants had hardly any experience with music and beat, it was expected that a sense of beat could be fully attributed to an innate perception of beat rather than a learned response. It was found that omitting tones related to the onset of rhythmic cycles (the downbeat) led to a stronger MMN response than omitting tones unrelated to the downbeat. It was thus suggested that these infants had an innate sense of beat, which could not be the result of a learning process. However, critique of this interpretation also exists. It has for example been suggested that such differences do not represent beat perception, but rather statistical learning (Bouwer et al., 2016; Honing, Bouwer, Prado, & Merchant, 2018; Winkler, Háden, Ladinig, Sziller, & Honing, 2009b). So the position of the omitted tone played an important role, but the type of omitted tone could play an equally important role. This was not addressed in Winkler's study, but other studies do show that some tones, depending on their frequency, play a more significant role in beat perception than others (Hove et al., 2014; Lenc et al., 2018a; Van Dyck et al., 2011). One of the most recent studies, by Tomas Lenc (2018), showed that low-frequency sounds, like the bass drum, contain more information about the rhythm and the beat than high-frequency sounds like the hi-hat and snare drum (Lenc, Keller, Varlet, & Nozaradan, 2018b). He suggested that there might be a neurophysiological explanation for this, whereby low-frequency sounds boost locking to the beat, which in turn shapes the neural representation of the rhythm. In the study of Winkler (2009) there were three bass drums in the whole rhythm, but only the bass drum on the downbeat was left out. There were no controls that tested the effect of omitting a bass drum in a different position. Moreover, in the condition where the downbeat was omitted, both the hi-hat and the bass drum were omitted, leaving open the question whether the found MMN response was caused by the position (downbeat or offbeat) or the type (hi-hat or bass drum) of omission, or both. Based on the study of Lenc (2018) one can speculate that the results of Winkler's study might not only be caused by the salience of the position of the omission (on the downbeat) but also by the salience of the frequency of a specific tone (the bass drum). As the spectral shape, a timbral characteristic, of a bass drum is very different from that of a hi-hat, it could be that this difference has played a role in the results mentioned above. More research is however needed to investigate the role of timbre characteristics in these kinds of studies.

The above line of reasoning can also be applied when looking at complex versus simple rhythms. Beat perception studies often make use of this paradigm, in which complex rhythms are composed of a rhythmic structure containing more tones, more syncopation and more off-beat accentuation or on-beat omissions (Bouwer et al., 2018). The complexity of a rhythm is well captured by the 'complexity score' designed by Povel and Essens (1985), which describes the relationship between the temporal accent structure and the perceived beat (see the sketch below). However, even though it seems that this difference is only attributable to the complexity of the rhythm, the question should be asked how the frequency of the tones plays a role in this. It has already been shown that bass syncopation is more salient than hi-hat syncopation (Witek et al., 2014) and that rhythmically complex bass patterns are judged as more danceable than rhythmically simple bass patterns (Wesolowski & Hofmann, 2016). Taking into account the results by Lenc and several other authors showing that low-frequency sounds, like the bass, contain more information about the rhythm and the beat than high-pitched sounds (Hove et al., 2014; Lenc et al., 2018a; Wesolowski & Hofmann, 2016), one can speculate that timbre in this case has an effect on the results and should therefore be better considered when designing musical stimuli for beat perception studies. However, more research is needed to actually establish a causal link between timbre (frequency spectrum) and beat perception.
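To illustrate the core idea behind this score, the sketch below implements a simplified version of the Povel and Essens counterevidence computation in Python. It is an illustrative sketch only: the accent-assignment rules of the full model are omitted (accents are taken as given), and the weights used are the commonly cited ones, so this should not be read as a reimplementation of the original model.

```python
# Simplified sketch of the Povel & Essens (1985) counterevidence idea.
# A rhythm is a binary grid (1 = onset, 0 = silence); accents are assumed
# to be given, whereas the full model derives them from grouping rules.

def counterevidence(onsets, accents, period, phase, w_silence=4, w_unaccented=1):
    """Counterevidence for a clock (period, phase), in grid positions.

    Clock ticks on silence count heavily against the clock; ticks on
    unaccented onsets count lightly; ticks on accented onsets count 0.
    """
    c = 0
    for tick in range(phase, len(onsets), period):
        if onsets[tick] == 0:
            c += w_silence
        elif accents[tick] == 0:
            c += w_unaccented
    return c

def best_clock(onsets, accents, periods=(2, 3, 4)):
    # Try all (period, phase) clocks; the clock with the lowest
    # counterevidence is the one the rhythm is assumed to induce.
    return min(
        ((p, ph, counterevidence(onsets, accents, p, ph))
         for p in periods for ph in range(p)),
        key=lambda clock: clock[2],
    )

# A 16-position example rhythm with hypothetical onsets and accents.
onsets  = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0]
accents = [1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0]
print(best_clock(onsets, accents))  # prints (period, phase, counterevidence)
```

The higher the counterevidence of even the best-fitting clock, the more complex the rhythm; this captures, in miniature, why rhythms with on-beat omissions and off-beat accents score as more complex.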

One other recent study addressed the apparent existence of a general assumption that the acoustic features of the tones making up a stimulus do not affect beat perception (Henry et al., 2017). The researchers looked at the effects of changing some acoustic features (on-/offset ramp duration and tone length) of a tone on the frequency spectrum of simple versus complex stimuli, with the goal of investigating whether this impacts direct comparisons between stimulus frequency spectra and neural frequency spectra measured with EEG. This was done because it has recently been proposed that the correspondence between the stimulus spectrum and the neural spectrum can be seen as a neural marker for beat perception, a.k.a. the 'frequency-tagging' method (Nozaradan, Peretz, Missal, & Mouraux, 2011; Nozaradan, Peretz, & Mouraux, 2012; see the sketch below). The goal was to see if this is indeed a valid method to study beat perception. The complexity of the rhythm was determined by the number of on-beat accents at the beginning of a group of four tones. They found that changing the on-/offset ramp duration and tone length did significantly change the stimulus spectrum, but did not significantly affect beat perception. Moreover, when the rhythms were rotated (so that the last eight tones were now the first eight tones), beat perception did change, even though the frequency spectrum of the stimulus stayed the same. They therefore concluded two things: (1) conclusions in the context of beat perception cannot be drawn from comparing the frequency-domain spectrum of different stimuli with the neural frequency spectrum, and (2) because significant changes in acoustic features (on-/offset ramp duration and tone duration) did not affect beat perception, this 'confirmed the long-standing assumption in the literature that the acoustic features of tones making up the rhythm should have little consequence for beat perception'. However, the 'significant changes' in these acoustic features were characterized by the significant change in the frequency spectrum of the stimulus. So even though their study showed that using differences in the frequency spectrum is not a valid way of testing beat perception, these differences in frequency spectrum were the exact parameter used in their own research to study the effects of changing acoustic features. Moreover, claiming that changing acoustic features in general does not affect beat perception is a particularly strong claim, as in their study only two acoustic features, the on-/offset ramp duration and tone length, were changed. It is also understandable that changing the ramp duration and tone length of the stimuli did not affect beat perception, as changing these acoustic features does not impact the general structure or sound of the stimulus. Lenc showed that some features of the stimulus frequency spectrum do play a significant role in beat perception, namely the lower frequencies, often represented by the bass patterns. Therefore it cannot be excluded that the frequency spectrum of the stimulus does play a role in beat perception. Nevertheless, the overall question these studies try to answer is a valid and important one: what is a way of testing whether the acoustic features of a tone changed 'enough' to make claims about their effect on beat perception? These are both recent studies and more research is needed to identify the role of acoustic features and the frequency spectrum of a stimulus in beat perception.
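The logic of the frequency-tagging comparison can be made explicit with a small sketch. The version below is a minimal numpy illustration with random stand-ins for real recordings: it computes the spectral amplitude of both the stimulus envelope and an EEG channel at beat-related frequencies. The sampling rate, trial duration and beat frequency are hypothetical, and this is not the authors' analysis pipeline.

```python
import numpy as np

fs = 512.0                 # sampling rate in Hz (assumed)
n = int(fs * 60)           # one 60 s trial (duration arbitrary)

# Stand-ins for real data: the stimulus amplitude envelope and one EEG channel.
stimulus_envelope = np.random.rand(n)
eeg = np.random.rand(n)

def amplitude_at(signal, fs, freq):
    """Amplitude of the FFT bin closest to `freq` (in Hz)."""
    spectrum = np.abs(np.fft.rfft(signal)) / signal.size
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

beat_hz = 2.0  # hypothetical beat frequency (120 BPM)
# Compare stimulus and neural energy at meter-, beat- and subdivision-related
# frequencies; frequency tagging asks whether the brain enhances beat-related
# frequencies beyond what is present in the stimulus itself.
for f in (beat_hz / 2, beat_hz, beat_hz * 2):
    print(f"{f:.2f} Hz  stimulus: {amplitude_at(stimulus_envelope, fs, f):.4f}"
          f"   EEG: {amplitude_at(eeg, fs, f):.4f}")
```

Henry et al.'s critique, in these terms, is that the left-hand column of such a comparison can change substantially with acoustic manipulations that leave the perceived beat untouched.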

The previous section demonstrated the importance of choosing the right paradigm and stimulus properties for conducting beat perception research. Particular emphasis was placed on the role of the stimulus frequency spectrum, which is a characteristic of musical timbre. The research by Lenc (2018) showed that low-frequency sounds convey more information about the beat than higher-frequency sounds. It is therefore important to consider the role of spectral frequency, especially when using drum rhythms with a prominent role for the bass drum, as in Winkler's study (2009). Besides, statistical learning should be controlled for, as it can obscure the findings of beat perception studies. Finally, studies that claim to exclude a role of acoustic features in beat perception, like the one by Henry et al. (2017) addressed above, often focused on a specific subset of these acoustic features. They should therefore be interpreted with caution, and more research is needed to elucidate the role of acoustic features, and especially timbre, in beat perception.

The next section will continue unraveling a possible role of timbre in beat perception by looking into old and recent studies on the effect of timbre on tempo detection, which can hold information about the role of timbre in beat perception. First, a number of older papers will be addressed, before continuing to exciting new research by Justin London (2019) and Marilyn Boltz (2011).


Tempo, timbre and beat perception

Many studies don’t explicitely look at beat perception but rather at tempo perception. These studies often change certain musical aspects, while keeping the tempo the same in order to test the influence of those musical aspects on tempo perception. Although beat and tempo perception are not the exact same, they are closely related. Some researchers therefore claimed that tempo perception is a valid measure for beat perception (Fujii & Schlaug, 2013). As previously described, beat perception is understood as the perception of an underlying recurrent pulse in an audio fragment or sound and in order to judge the tempo of a song you need to identify the underlying recurrent beat. Justin London is a researcher that performed many studies with tempo perception. In personal contact he explained me that tempo- and beat perception indeed show many commonalities, but that there are precarious differences. These were demonstrated in one of his studies, which showed that even though participants are successful in relative beat ratings of original compared to ‘Time-stretched’ versions of the same song (5 BPM faster or slower), the actual BPM of the time stretched songs was often still over- or underestimated compared to time stretched- and original versions of other songs (London et al., 2016). So the actual BPM could not be identified, even when tapping along to the beat was allowed (London, Thompson, Burger, Hildreth, & Toiviainen, 2019). However, as the differences are relatively small it is interesting to look at what has been done in this field and to see how it can implicitly tell us something about beat perception. Besides, the effect of external musical aspects, including timbre, on tempo perception has been a topic of recent research. Therefore the below section will describe first old and then new relevant research, which will aid in further understanding the effects of timbre on beat perception.

An early study had participants listen to musical excerpts that varied in tempo and melodic activity and that had a present or absent underlying beat (Kuhn & Booth, 1988). The study showed that a change in melodic activity led to a bigger change in tempo perception (while the actual tempo was kept the same) than an actual change in tempo. Even with an underlying beat present, melodic activity led to a large change in tempo perception. Melodic activity, which was either plain or ornamented (see Figure 1), therefore served as the most crucial factor in tempo determination. An ornamented melody was interpreted as much faster than a plain melody, even though the tempo stayed the same (see Figure 1). So the underlying recurrent pulse was not correctly judged when a melody was ornamented compared to plain.

Figure 1: An example of a musical piece with a plain (top) and an ornamented (bottom) melody. During the study, the actual tempo always stayed the same (Kuhn & Booth, 1988).

This finding was supported by a series of studies by Geringer, Madsen and Duke (1993), who also found that melodic activity was the most prominent factor determining the tempo perception of participants. Changing a musical piece's melody from plain to ornamented, with seventeen extra tones, does not only change the melodic activity; it also greatly affects the timbre of the piece, which implies that timbre could also have played a role in this study. However, as timbre is harder to define than melodic activity, the change in tempo perception was fully attributed to a change in melodic activity.

In two other studies by Duke, Madsen and Geringer from the early 1990s, the effects of changing timbre on beat perception were directly tested. These studies looked at the point in time at which participants felt a change in beat while listening to a rhythm with slowly decreasing or increasing tempo (3 BPM per change). In both studies the researchers also changed the sequence in which the cowbell, hi-hat and rimshot, each having a unique timbre, were played. The results of both studies showed that on average participants reported four changes of beat, and that the order of timbres had an effect on the moment in time at which participants felt a change in beat and on the interval between felt changes in beat (Geringer, Duke, & Madsen, 1993; Geringer, Duke, & Madsen, 1992). The researchers suggested that the different order of timbres led to a different perception of spectral frequency, which could have directed the attention of the participants to a specific timbre (instrument). These studies thus showed that the order in which participants heard the different instruments (timbres) affected beat perception. A third study, by Robert Duke (1990), addressed the same question.

This study showed that the perception of the tempo of the underlying beat is dependent on timbral changes during stimulus presentation. Three different timbres were used, with low, medium or high brightness, characterized by the number of upper partials. The participants were told that the tempo could change during the stimulus presentation, but that it could also stay the same. The results showed that even when the actual tempo stayed the same, participants perceived an increase in tempo when the sound became brighter and a decrease in tempo when the sound became less bright. I think a possible confound in this study was that participants were told that a change in tempo could be expected, without being told that a change in timbre or loudness could also occur. It could therefore be that the participants were primed for a tempo change and thus interpreted every observed change as a possible tempo change. How studies pose their psychometric questions is a different, but very important topic by itself, which is still relevant for modern research (Flowers & Antonio, 1983). Nevertheless, changing the timbre led to a wrong interpretation of the underlying tempo and therefore of the underlying beat. These studies thus showed early on that timbre is not a musical aspect that can simply be ignored when it comes to beat perception. However, these studies are more than twenty years old. So how do modern studies look at the effects of timbre on tempo and beat perception?

One recent study by London et al. (2019) argued that for tempo-shifted versions of songs, tempo estimations are better for songs with higher beat salience and lower salience of other musical aspects (pitch, melody, timbre, etc.). They therefore used three types of stimuli: Motown songs, disco songs and drum rhythms, respectively increasing in beat salience and decreasing in salience of other musical aspects. They argued that tempo estimation would be similar for all stimuli when tempo-shifting did not occur, but that for the Motown and disco songs an over- and underestimation would occur for tempo up-shifted and tempo down-shifted versions, respectively. They indeed found that for the Motown songs, and to a lesser extent the disco songs, participants often overestimated an increase in tempo and underestimated a decrease in tempo, while for the drum rhythms this effect disappeared. They therefore argued that the perception of tempo and beat is not only dependent on the BPM and beat salience, but also on the presence of external cues like pitch, melody, harmony and timbre. Although this study did not specifically look at timbre alone, it does show a possible influence of external musical aspects like timbre on beat perception. Besides, there was one unexpected but interesting result in which timbre did play a clear role. In one of the experiments, tempo ratings on a 1-7 Likert scale had to be given for BPM-paired songs that did not change in tempo. A significantly anomalous tempo rating was found for a song (Song A) at 110 BPM (see Figure 2).


Figure 2: Mean tempo ratings on a 1-7 Likert scale for paired songs (Song A and Song B) at a specific BPM. An anomalous rating can be seen for Song A at 110 BPM. This song is 'Keep it Coming Love' (London, Burger, et al., 2019).

The explanation given for this result is that this particular song, 'Keep it Coming Love', had the highest sub-band (100-200 Hz) spectral flux of all stimuli, which is related to fullness (Alluri, 2018). However, this was not clear from the table that was referred to, as the spectral flux of this song was 7008, with other songs reaching flux values of up to 10650 (see Table 1). I contacted Justin London and Birgitta Burger to clarify this matter. They told me that this was indeed a flaw in the study, due to different flux calculations in an earlier version of the study, and that an erratum will be sent out to correct this. London is, however, highly convinced that spectral flux, an important aspect of musical timbre, affects tempo ratings and beat perception. He therefore also suggested that it would be interesting for future research to look at flux values and their effect on tempo ratings in other frequency sub-bands, as this paper only looked at the 100-200 Hz sub-band, which captures low-frequency cues for tempo change. Interestingly, an effect of low-frequency sound was also found on movement to the beat (Burger et al., 2013) and beat perception (Lenc et al., 2018a), as previously described.


Table 1: Spectral flux values of the different songs used in the study of London et al. (2019).
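To make the sub-band flux measure concrete, the sketch below computes spectral flux restricted to the 100-200 Hz band discussed above. It is a minimal illustration assuming Python with numpy and librosa, and it mirrors the kind of measure used by Burger et al. (2013) and London et al. (2019) rather than their exact implementation; the file name is hypothetical.

```python
import numpy as np
import librosa

def subband_flux(y, sr, f_lo=100.0, f_hi=200.0, n_fft=2048, hop_length=512):
    """Mean spectral flux within one frequency sub-band (f_lo to f_hi, in Hz)."""
    S = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop_length))
    freqs = librosa.fft_frequencies(sr=sr, n_fft=n_fft)
    band = (freqs >= f_lo) & (freqs <= f_hi)   # keep only the 100-200 Hz bins
    # Frame-to-frame change of the band-limited magnitude spectrum.
    diff = np.diff(S[band, :], axis=1)
    return float(np.mean(np.sqrt((diff ** 2).sum(axis=0))))

y, sr = librosa.load("keep_it_coming_love.wav", sr=None)  # hypothetical file
print(subband_flux(y, sr))
```

Repeating this computation with other values of f_lo and f_hi is exactly the kind of sub-band exploration London suggested for future research.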

One other recent study, by Boltz (2011), also looked at the effects of timbre and pitch on tempo perception. She presented sound pairs with a stable tempo that differed in pitch or timbre, with the goal of seeing whether a pitch or timbre change induced a change in tempo perception. Sound pairs with a pitch change from low to high were interpreted as displaying an acceleration in tempo, while a high-to-low pitch change was interpreted as displaying a reduction in tempo. Sound pairs with a timbre change from bright to dull were interpreted as displaying a reduction in tempo, while pairs with a dull-to-bright timbre change were interpreted as displaying an acceleration in tempo. An interaction effect between timbre and pitch was also found, namely that congruent pairs (high-bright and low-dull) showed an additive effect, leading to the perception of a bigger decrease or increase in tempo. This shows that changing the timbre (and pitch) of musical sounds while maintaining a stable tempo does affect tempo/beat perception. One possible explanation of the misjudgement of tempo change is that a multitude of musical aspects covary within the auditory spectrum. This covariation is thought to be characterized by the energy state of the sound (Boltz, 2011). Lower pitch, lower amplitudes and decreasing tempo characterize decreased energy states. Higher energy states, on the other hand, are characterized by faster tempo, higher pitch and higher amplitudes (Black, 1961). The existence of an automatic association of fast vs. slow tempi with high or low energy states has also been shown in a paper by Shibuya and Hasegawa (2016), who found that musicians who were asked to play a musical piece either in a dull timbre or in a bright timbre, without any further restrictions, had the tendency to play the brighter timbre at a faster pace and the darker timbre at a slower pace. That this interaction effect between pitch, timbre and tempo is universal is shown by studies demonstrating that it is independent of one's musical background (Allen & Oxenham, 2014; Vurma, Raju, & Kuuda, 2011). Interestingly, Siedenburg and McAdams (2019) recently described the energy states of sound as a characteristic of musical timbre. The above-discussed papers show that in music there is a natural perceptual relationship between tempo and timbre, but in order to fully understand the contribution of timbre changes to tempo and beat perception, further research is required.

The above section showed old and new research demonstrating that changing musical aspects of a stimulus affects tempo and beat perception, possibly through changes in the stimulus frequency spectrum, a characteristic of timbre (Boltz, 2011; Duke, 1990; Geringer, Madsen, & Duke, 1993; Kuhn & Booth, 1988; London, Burger, et al., 2019). These studies often did not explicitly look at timbre, but rather at a multitude of musical aspects including timbre, and at the interaction between these musical aspects and tempo/beat perception. What these studies show is that changing musical aspects affects objective tempo and beat perception. What remains unclear is the exact unique impact of every individual musical aspect (loudness, pitch, timbre, melodic activity, etc.) on beat perception. Many of these musical aspects, however, are at least partially tied to the same physical properties of sound, namely frequency, intensity, duration, attack and decay time (Houtsma, 1997; Wallmark et al., 2018). While it is now clear that the choice of stimulus design is important for studying beat perception, the effect of every individual musical aspect of the stimulus on beat perception might not be unique, but rather often interactive with other musical aspects.

This review has so far shown that beat perception can be affected by the exact composition of the musical stimuli, and that musical aspects like pitch (Hove et al., 2014), meter (Iversen, Repp, & Patel, 2009), melodic activity (Kuhn & Booth, 1988) and also timbre (Boltz, 2011; Duke, 1990; Geringer et al., 1992; Henry et al., 2017; London, Burger, et al., 2019; London et al., 2016; London, Thompson, et al., 2019), amongst others, can affect beat perception. Moreover, on a physical level these musical aspects seem to be at least partially tied to the same physical attributes of sound, like frequency, intensity, duration, attack and decay time. The fact that these musical factors are not physically independent makes it tempting to think that the effect of one individual musical aspect on beat perception, say pitch for example, is not completely attributable to what we understand as the psychometric term 'pitch', but rather to the physical attributes behind it. This creates a new way of understanding the effects of musical aspects on beat perception, where the effects are probably not caused just by 'pitch', 'loudness' or 'timbre', but rather by an interaction between those distinct musical aspects (Boltz, 2011; Houtsma, 1997). The interrelatedness of distinct musical aspects on a physical level raises the question whether these musical aspects are processed in the brain in distinct or overlapping ways. If on a neural level these musical aspects are processed in distinct rather than overlapping brain areas, this holds implications for the way they can be perceived. Therefore the next, and final, section will continue with the neural aspect of the relation between timbre and beat perception.

Timbre, beat perception and the brain

The following section will give an overview of how beat, tempo, timbre and other musical factors are processed in the brain. The previous section has shown that the effects of different musical aspects are often studied separately, but that these musical aspects share the same physical components. It is therefore interesting to understand how these psychometrically distinct musical aspects are processed in the brain, in order to see if they show neural overlap or are rather processed separately. The first subsection will dive into research trying to answer the question which brain areas and processes are essential for the internal generation of a beat when the beat is not clearly present in the rhythm. The second subsection will address the question where and how timbre is processed in the brain, and will also point out which brain areas are serious candidates for possible neural overlap between timbre and beat perception processing.

Beat/tempo processing

Multiple studies have found evidence that a cortico-subcortical network plays an essential role in beat perception (Grahn & Rowe, 2009; Kung, Chen, Zatorre, & Penhune, 2013). Moreover, some studies find that the basal ganglia (BG) are more active for rhythms with a beat than for rhythms without a beat (Fujioka, Zendel, & Ross, 2010; Kung et al., 2013). Especially bilateral putamen activity was higher for rhythms with a lower presence of external beat cues (Grahn & McAuley, 2009; Kung et al., 2013). Besides, higher connectivity of the supplementary motor area (SMA), premotor cortex (PMC) and auditory cortex (AC) with the putamen was observed regardless of how clear the beat was (Grahn, 2009; Grahn & Rowe, 2009). Other studies found that especially superior temporal gyrus (STG) and ventrolateral prefrontal cortex (VLPFC) activation and connectivity increased when internal beat generation was needed more (Kung et al., 2013). These results suggest the existence of a motor cortico-basal-ganglia-thalamo-cortical (mCBGT) network, including the putamen, SMA and PMC, essential for the prediction of the beat, especially when this beat has to be internally generated (Grahn & Rowe, 2009; Kung et al., 2013; Merchant et al., 2015). On a cellular level, beat synchronization seems to depend on the dynamics of cell populations in the mCBGT, which are tuned to the serial order and duration of the intervals of the beat (Merchant et al., 2015). However, these are not the only areas activated during beat perception.


Another area of activation is the cerebellum. Previous studies have suggested a connection between cerebellar activity and beat perception (Schwartze & Kotz, 2013). However, where some studies link this activity to absolute timing and fine motor responses rather than to actual beat perception (Merchant et al., 2015), more recent studies interpret it differently. Paquette et al. (2017) found a correlation between performance on the Beat Finding and Interval Test (BFIT) of the H-BAT test battery and gray matter volume in the left cerebellar lobule IX and the bilateral crus I (Fujii & Schlaug, 2013; Paquette, Fujii, Li, & Schlaug, 2017). Another, clinical, study found that in cerebellar patients neural tracking of the beat frequency is worse for beat rhythms with fast tempi (Nozaradan, Schwartze, Obermeier, & Kotz, 2017). This is also clear from patients with cerebellar damage, who experience issues with the conscious perception of changes in tempo, but not with tapping to the beat (Molinari et al., 2005). These studies suggest that the role of the cerebellum is to precisely encode a perceived temporal structure, which is needed to improve predictive adaptation of behavior, and that the ability to detect changes in beat and tempo is linked to cerebellar volume.

Taken together, these results indicate that the BG, and especially the putamen, play an active role in internal beat generation (some even call it the 'pacemaker'). At the same time, the BG are thought to be in direct contact both with the cerebellum, which aids in the precise timing and prediction of the beat, and with motor areas, which play a role in beat production. Overall, the research on beat perception in the brain suggests that more basic sensorimotor areas work together with higher-order frontal areas in order to perceive, maintain and produce a beat.

The next section will dive into the literature on timbre processing in the brain and will, where possible, draw a link between timbre and beat processing in order to identify possible neural overlap.


Timbre processing

Timbre is the most understudied musical aspect when it comes to its neural correlates. This is mostly due to the multidimensionality of timbre. However, in the last decade there has been a rise in research either explicitly or implicitly looking at the neural correlates of timbre. This section will describe these recent advances in timbre research and will try to draw a link between beat and timbre processing in the brain.

When it comes to timbre processing, it is important to understand how sound is processed in the brain. Sound is thought to be processed in a hierarchical manner, starting from the separation of the 'where' and 'what' of sound, which are processed by the dorsal and ventral pathway, respectively. Timbre identification thus mostly takes place in the ventral pathway (see Figure 3). Subsequently, timbre processing in the brain is separated depending on different abstraction levels of timbre, as properly described in a recent review by Alluri and Kadiri (2019). 'Low- and middle-level' timbral features describe the acoustic structure of a sound, represented by earlier mentioned physical components like, amongst others, the spectral centroid, attack time and spectral shape. 'High-level' timbral features relate to sound source identification. The distinction between these features becomes clear when listening to a plucked versus a bowed violin. Both sounds will, on a higher timbral level, be identified as a violin sound, but on a lower abstraction level there are crucial timbral differences. These different levels of timbre seem to be, at least in the brain, processed in different ways.


Figure 3: A graphic overview of the dorsal (red) and ventral (green) pathway of sound processing in the brain. Timbre is thought to be processed by the ventral pathway (picture taken from Alluri & Kadiri, 2019).

Low-level timbre processing already starts in the cochlea, with processes of frequency decomposition (Alluri & Kadiri, 2019; Alluri, 2018). Information then continues to the inferior colliculi and the superior olivary complex. The spectral shape of the sound, an important characteristic of timbre, is encoded in a V-shaped high-low-high frequency manner along the posterior-anterior axis around Heschl's gyrus (Formisano et al., 2003; Humphries, Liebenthal, & Binder, 2010). Fine changes in spectral shape seem to correspond with changes in response amplitude in Heschl's gyrus and the anterior STG (Güçlü, Thielen, Hanke, & Van Gerven, 2016), while more coarse spectral modulations were located posterior-laterally to Heschl's gyrus, at the planum temporale (especially the planum polare) and posterior STG (Angulo-Perkins et al., 2014; Caclin, McAdams, Smith, & Giard, 2008; Halpern, Zatorre, Bouffard, & Johnson, 2004; Ogg & Slevc, 2019; Santoro et al., 2014; Toiviainen et al., 1998). There is an overall right-hemispheric bias for spectral shape processing (F. Samson, Zeffiro, Toussaint, & Belin, 2011; Santoro et al., 2014; Zatorre & Belin, 2002). Temporal changes are processed around the same areas, with slow temporal modulations being processed along Heschl's gyrus and the STG (Belin et al., 1998; Boemio, Fromm, Braun, & Poeppel, 2005), while fast temporal modulations were found at the planum temporale and the posterior/medial side of Heschl's gyrus, with a bias for the left hemisphere (S. Samson, 2003; S. Samson, Zatorre, & Ramsay, 2002; Alluri & Kadiri, 2019). However, temporal processing has also been observed in the right hemisphere, as lesions of the right hemisphere led to difficulties with stimulus onset timing, which is crucial for proper beat perception (S. Samson, 2003; S. Samson & Zatorre, 1994; S. Samson et al., 2002). Higher activity was measured in the bilateral STG, HG and medial temporal gyrus with changing brightness, fullness, spectral flatness and activity, which was related to moments in the music where the music was played faster, often with multiple instruments playing at the same time (Alluri et al., 2012). As described, STG activation has also been found for beat perception (Kung et al., 2013). Besides, Vinoo Alluri told me that so far unpublished research has found a clear activation of the inferior colliculus for temporal modulation and timing (Alluri, personal communication). So activation of the auditory pathway and the primary and secondary auditory cortices seems to occur in response to both timbre changes and temporal changes. However, how timbral and temporal changes exactly interact in these areas, and what this could mean for the relationship between timbre and beat processing, remains elusive, but anatomical and possibly even functional neuronal overlap seems evident here (Santoro et al., 2014; Alluri & Kadiri, 2019).

The existence of neural overlap between distinct musical aspects is not surprising, as interactions between timbral features and features of other musical aspects are not uncommon and have already been studied in some depth. Caclin et al. (2006, 2008), for example, found that three dimensions of timbre, spectral centroid, attack time and spectral fine structure, are individually processed in the initial stages of sound processing but interact at a later point in time.


Allen, Burton, Olman, and Oxenham (2017) found a large spatial overlap in Heschl's gyrus for the processing of timbre and pitch, and another study found, using multi-voxel pattern analysis (MVPA), that timbre processing, but not pitch processing, extended further along the superior temporal gyrus (Staeren, Renvall, De Martino, Goebel, & Formisano, 2009). Moreover, when looking at activity patterns within those shared anatomical areas, timbre and pitch seemed to be processed in functionally distinct ways (Allen, Burton, Olman, & Oxenham, 2017; Bizley, Walker, Silverman, King, & Schnupp, 2009; Walker, Bizley, King, & Schnupp, 2011). This neural evidence is corroborated by the previously described psychoacoustic evidence of perceptual interactions between pitch and timbre (Allen & Oxenham, 2014; Boltz, 2011; Houtsma, 1997). Activation of distinct areas in the V-shaped tonotopic map around Heschl's gyrus also seems crucial for stimuli with a certain degree of brightness and percussiveness, a timbral feature (Alluri & Toiviainen, 2009).

Mid- and high-level timbre is rather processed in the middle superior temporal sulcus (STS), which plays an important role both in processing spectral shape information and in relaying this information from the primary auditory cortex to temporal lobe regions that are more engaged in source recognition (Warren, Jennings, & Griffiths, 2005; Alluri, 2018). Next to the STS, the processing and identification of spectrally more complex sounds seem to happen in the posterior part of the STG (Güçlü et al., 2016) and the insular cortex (Alluri et al., 2012; McAdams et al., 1995; Menon et al., 2002). Interestingly, insular areas are also activated by rhythmic features of sound (Toiviainen, Alluri, Brattico, Wallentin, & Vuust, 2014). Other areas activated by timbral changes are the somatosensory cortex, the default mode network and the cerebellum (Alluri et al., 2012), an area also activated by beat perception (Paquette et al., 2017). Frontal areas seem not to be activated by timbre processing, although Alluri et al. (2012) did find activation of the right medial frontal gyrus with changing brightness and activity. Besides, when a task requires attention, frontal areas are activated (Degerman, Rinne, Salmi, Salonen, & Alho, 2006; Petersen & Posner, 2012; Reiterer et al., 2008). Neural overlap between timbre and beat perception in these higher cortical areas should be interpreted with caution, though, as it remains unclear how exactly these areas are involved in both processes.

Finally, timbre also seems to be processed by subcortical/limbic areas. Alluri et al. (2012) found that the putamen is especially active during the processing of timbral brightness, a claim which is supported by other researchers (Wallmark et al., 2018). As previously described, putamen activity has also been reported for beat perception, especially when internal beat generation was required (Grahn & Rowe, 2009; Nozaradan et al., 2017). This evidence is corroborated by the finding that reduced pulse clarity also led to putamen activity (Alluri et al., 2012). However, these results should be interpreted with caution, as the currently available techniques mostly show anatomical overlap. Besides, the subcortical/limbic areas are rather small and not every study reports these findings. More research is thus needed to elucidate this question.

It must be taken into account that the results described above were often found using EEG, MEG and fMRI, techniques that mostly say something about the anatomical/geographical location of sound processing and that are often rather noisy. They do not reveal much functional proof of how timbre, beat and music in general are processed. It could thus well be that some musical aspects seem to be processed by the same brain areas, while the underlying neuronal activation is unique to each process, as has been found in some timbre and pitch research (Allen et al., 2017; Ogg, Moraczewski, Kuchinsky, & Slevc, 2019; Peretz, Vuvan, Lagrois, & Armony, 2015). So far the study of timbre has been approached in a rather deterministic way, using controlled settings and often synthetically produced stimuli in which only monophonic timbral features were manipulated. Recently, there has been a rise of a more holistic and naturalistic approach studying real sounds and polyphonic musical pieces. If, in addition, a multimodal approach is used in which multiple neuroscientific methods are combined (MEG+EEG or fMRI+EEG), or alternative methods like electrocorticography are used, then the study of timbre and beat perception can develop even further (for an explanation of ECoG, see Mesgarani & Chang, 2012; Ogg, Slevc, et al., 2019).

In summary, the section above discussed the existing research on the neural underpinnings of both beat perception and timbre and, where possible, described how these two processes are related or even interact. Clear neural overlap has been found along the auditory pathway (e.g., the inferior colliculus) and in primary and secondary auditory cortex areas such as Heschl's gyrus and the STG. As both timbre and rhythm information are extracted in these areas, it is not surprising to find neural overlap here. However, owing to the available techniques, this overlap is so far mostly anatomical. Neural overlap has also been found in insular areas, the cerebellum and the putamen. As these areas are small and sit higher up in the sound processing pathway, it is unclear how exactly timbre and rhythm information are processed there, so these findings should be interpreted with caution. This is especially so because studies so far have found little functional overlap, which is partly due to the limitations of the available methods (EEG, MEG, fMRI), but perhaps mostly due to the multidimensionality of timbre. Further research should make use of more holistic and unified stimuli, without too much variation in the musical aspects that make up the stimuli, or should acknowledge that such differences can themselves be an explanatory factor. This was already proposed in a review by Grahn (2009), yet the variety in musical stimuli remains abundant. If this change can be made in parallel with combining different neuroscientific methods, then the relationship between timbre and beat perception specifically, but also the field of music studies in general, can move forward in the coming decade.

Conclusion

For a long time, timbre research was not a popular branch of music science, and it has long been assumed that timbre, though a distinct musical aspect, does not play a role in beat perception studies. Timbre remains underrepresented, but over the last decade more research has been done at both the psychometric and the neuroscientific level.

This review has discussed the most relevant older and newer literature in which the relationship between timbre and beat perception has been studied either explicitly or implicitly. One of the main reasons that timbre is difficult to study is that it is difficult to define on a physical level. It is a multidimensional musical aspect that is present in every audible sound. On a technical level, timbre is more clearly defined through audio descriptors (McAdams, 2019), of which the spectral centroid and spectral flux are the ones most related to beat perception (see the sketch below). Recent studies found these audio descriptors to have a potential effect on beat and tempo perception (Lenc et al., 2018a; London, Burger, et al., 2019). Lenc et al. (2018a) found that low-frequency sounds contain more information about the beat and rhythm of a sound. It is therefore suggested that musical stimuli should be carefully designed, especially when making use of drum-like stimuli. Furthermore, it has been shown that changing the pitch, melodic content, or order of timbres of a stimulus changes not only the tempo and beat perception of a song but also its frequency spectrum, suggesting a potential role for timbral effects (Boltz, 2011; Geringer et al., 1993; Geringer et al., 1992; Hove et al., 2014). Studies that attempted to show that changing acoustic features of a stimulus does not affect beat perception manipulated only a limited set of acoustic features, while their conclusions were generalized to all acoustic features. More research is therefore needed to understand which acoustic and musical features of a stimulus do and do not affect beat perception (Henry et al., 2017; Lenc et al., 2018a).

The behavioral observation that musical aspects like timbre and pitch interrelate with beat perception is supported by the finding that the brain areas responsible for processing these features overlap, at least anatomically. For timbre and beat perception, this neural overlap seems to exist mostly along the auditory pathway (the inferior colliculi) and in primary and secondary auditory cortex areas such as Heschl's gyrus and the superior temporal gyrus. Overlap has also been found in the basal ganglia, the putamen in particular, and to a lesser extent in the cerebellum and possibly frontal areas; but as these areas sit higher up in the auditory processing hierarchy and the techniques are limited, these results should be interpreted with caution. So far, because of the multidimensionality of timbre and the temporal and spatial limitations of the neuroscientific methods applicable to human research, a functional or causal relation between timbre and beat perception has been difficult to establish. However, given that there are, at both the perceptual and the neural level, serious reasons to believe that timbre and beat perception are related, future research should at least be careful when interpreting and comparing beat perception studies. Most importantly, as suggested by previous researchers (Grahn, 2009; Grahn & Watson, 2013), in order to get a trustworthy and true idea of the origins of beat perception and how it relates to other musical aspects, it is important to carefully choose and design musical stimuli, taking into account the multidimensionality of music itself.
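
To make the two descriptors named above concrete, the sketch below shows one common way to compute them. This is a minimal, illustrative implementation, not the analysis pipeline of any study cited here; the function name spectral_descriptors, the frame size and the hop size are hypothetical choices for the example.

```python
# A minimal sketch (assumed parameters, not any cited study's pipeline):
# computing the spectral centroid and a half-wave rectified spectral flux
# of a mono signal using only numpy/scipy.
import numpy as np
from scipy.signal import stft


def spectral_descriptors(y, sr, n_fft=2048, hop=512):
    # Short-time Fourier transform; rows are frequency bins, columns frames.
    freqs, _, Z = stft(y, fs=sr, nperseg=n_fft, noverlap=n_fft - hop)
    mag = np.abs(Z)

    # Spectral centroid: magnitude-weighted mean frequency per frame,
    # a standard acoustic correlate of perceived brightness.
    centroid = (freqs[:, None] * mag).sum(axis=0) / (mag.sum(axis=0) + 1e-12)

    # Spectral flux: frame-to-frame spectral change. Keeping only energy
    # increases (half-wave rectification) emphasizes onsets, which is what
    # ties this timbral descriptor to beat and rhythm information.
    diff = np.diff(mag, axis=1)
    flux = np.sqrt((np.maximum(diff, 0.0) ** 2).sum(axis=0))

    return centroid, flux


# Usage on a synthetic stimulus: a 220 Hz tone with a click every 0.5 s.
sr = 22050
t = np.arange(2 * sr) / sr
y = 0.5 * np.sin(2 * np.pi * 220 * t)
y[::sr // 2] += 1.0  # periodic clicks create peaks in the flux
centroid, flux = spectral_descriptors(y, sr)
print(f"mean centroid: {centroid.mean():.1f} Hz, max flux: {flux.max():.2f}")
```

A brighter timbre (more high-frequency energy) raises the centroid, while sharp attacks such as those of drum-like stimuli produce peaks in the flux, which is why these two descriptors are the ones most often linked to beat and tempo judgements.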

We all enjoy music, and the ability to feel the beat and dance along to it is one of the world's great pleasures. Understanding why humans, compared to other species, so easily feel the beat is an intriguing question. As music is already a very multidimensional concept, our ways of studying it should likewise be multidimensional. With a more holistic approach to studying music, the next decade might become an important one for the musical sciences.

Acknowledgements

I would foremost like to thank my direct supervisor, Henkjan Honing, for his valuable feedback and inspiring talks during the process of writing this literature thesis. I would also like to thank Fleur Bouwer for being the second assessor of my thesis and grading the final version. I then want to thank Justin London and Birgitta Burger, whom I contacted with questions about their 2019 article ‘Motown, Disco and Drumming…’, for their interesting answers to my questions and for Professor London’s generous offer to send out an erratum, crediting me, about a small flaw in the study. I also thank Molly Henry for answering the questions I had about her 2017 paper ‘What can we learn about beat perception by comparing brain signals and stimulus envelopes’. Finally, I would like to thank Vinoo Alluri for the valuable Skype session we had about her recent 2019 review of the neural correlates of timbre, which appeared as a chapter in the book ‘Timbre: Acoustics, Perception and Cognition’ by Professor Kai Siedenburg and Stephen McAdams.

References

Allen, E. J., Burton, P. C., Olman, C. A., & Oxenham, A. J. (2017). Representations of pitch and timbre variation in human auditory cortex. Journal of Neuroscience, 37(5), 1284–1293. https://doi.org/10.1523/JNEUROSCI.2336-16.2016

Allen, E. J., & Oxenham, A. J. (2014). Symmetric interactions and interference between pitch and timbre. The Journal of the Acoustical Society of America, 135(3), 1371–1379. https://doi.org/10.1121/1.4863269

Alluri, V., & Toiviainen, P. (2009). Exploring perceptual and acoustical correlates of polyphonic timbre. Music Perception, 27(3), 223–242.

Alluri, V., Toiviainen, P., Jääskeläinen, I. P., Glerean, E., Sams, M., & Brattico, E. (2012). Large-scale brain networks emerge from dynamic processing of musical timbre, key and rhythm. NeuroImage, 59(4), 3677–3689. https://doi.org/10.1016/j.neuroimage.2011.11.019

Angulo-Perkins, A., Aubé, W., Peretz, I., Barrios, F. A., Armony, J. L., & Concha, L. (2014). Music listening engages specific cortical regions within the temporal lobes: Differences between musicians and non-musicians. Cortex, 59, 126–137. https://doi.org/10.1016/j.cortex.2014.07.013

Apel, W. (1970). Harvard dictionary of music. Cambridge, MA: Belknap Press of Harvard University Press.

Belin, P., McAdams, S., Smith, B., Savel, S., Thivard, L., Samson, S., & Samson, Y. (1998). The functional anatomy of sound intensity discrimination. Journal of Neuroscience, 18(16), 6388–6394. https://doi.org/10.1523/jneurosci.18-16-06388.1998

Bizley, J. K., Walker, K. M. M., Silverman, B. W., King, A. J., & Schnupp, J. W. H. (2009). Interdependent encoding of pitch, timbre, and spatial location in auditory cortex. Journal of Neuroscience, 29(7), 2064–2075. https://doi.org/10.1523/JNEUROSCI.4755-08.2009

Boemio, A., Fromm, S., Braun, A., & Poeppel, D. (2005). Hierarchical and asymmetric temporal sensitivity in human auditory cortices. Nature Neuroscience, 8(3), 389–395. https://doi.org/10.1038/nn1409

Boltz, M. (2011). Illusory Tempo Changes Due to Musical Characteristics. Music Perception, 28(4), 367–386.

Bouwer, F. L., Burgoyne, J. A., Odijk, D., Honing, H., & Grahn, J. A. (2018). What makes a rhythm complex? The influence of musical training and accent type on beat perception. PLoS ONE, 13(1), 1–26. https://doi.org/10.1371/journal.pone.0190322

Bouwer, F. L., Werner, C. M., Knetemann, M., & Honing, H. (2016). Disentangling beat perception from sequential learning and examining the influence of attention and musical abilities on ERP responses to rhythm. Neuropsychologia, 85, 80–90. https://doi.org/10.1016/J.NEUROPSYCHOLOGIA.2016.02.018

Burger, B., Thompson, M. R., Luck, G., Saarikallio, S., & Toiviainen, P. (2013). Influences of rhythm- and timbre-related musical features on characteristics of music-induced movement. Frontiers in Psychology, 4, 1–10. https://doi.org/10.3389/fpsyg.2013.00183

Caclin, A., McAdams, S., Smith, B. K., & Giard, M. H. (2008). Interactive processing of timbre dimensions: An exploration with event-related potentials. Journal of Cognitive Neuroscience, 20(1), 49–64. https://doi.org/10.1162/jocn.2008.20001

Cameron, D. J., & Grahn, J. A. (2014). Enhanced timing abilities in percussionists generalize to rhythms without a musical beat. Frontiers in Human Neuroscience, 8, 1–10. https://doi.org/10.3389/fnhum.2014.01003

Clarkson, M. G., Clifton, R. K., & Perris, E. E. (1988). Infant timbre perception: Discrimination of spectral envelopes. Perception & Psychophysics, 43(1), 15–20.

Degerman, A., Rinne, T., Salmi, J., Salonen, O., & Alho, K. (2006). Selective attention to sound location or pitch studied with fMRI. Brain Research, 1077(1), 123–134. https://doi.org/10.1016/j.brainres.2006.01.025

Duke, R. A. (1990). Beat and Tempo in Music: Differences in Teachers’ and Students’ Perceptions. Update: Applications of Research in Music Education, 9(1), 8–12. https://doi.org/10.1177/875512339000900103

Duke, R. A. (1994). When Tempo Changes Rhythm: The Effect of Tempo on Nonmusicians’ Perception of Rhythm. Journal of Research in Music Education, 42(1), 27–35. https://doi.org/10.2307/3345334

Ellis, R. J., & Jones, M. R. (2009). The role of accent salience and joint accent structure in meter perception. Journal of Experimental Psychology: Human Perception and Performance, 35(1), 264–280. https://doi.org/10.1037/a0013482

Flowers, P. J. (1983). The effect of instruction in vocabulary and listening on nonmusicians’ descriptions of changes in music. Journal of Research in Music Education, 31(3), 179–189.

Formisano, E., Kim, D. S., Di Salle, F., Van De Moortele, P. F., Ugurbil, K., & Goebel, R. (2003). Mirror-symmetric tonotopic maps in human primary auditory cortex. Neuron, 40(4), 859–869. https://doi.org/10.1016/S0896-6273(03)00669-X

Fujii, S., & Schlaug, G. (2013). The Harvard Beat Assessment Test (H-BAT): A battery for assessing beat perception and production and their dissociation. Frontiers in Human Neuroscience, 7, 1–16. https://doi.org/10.3389/fnhum.2013.00771

Fujioka, T., Zendel, B. R., & Ross, B. (2010). Endogenous neuromagnetic activity for mental hierarchy of timing. Journal of Neuroscience, 30(9), 3458–3466. https://doi.org/10.1523/JNEUROSCI.3086-09.2010

Geringer, J. M., Madsen, C. K., & Duke, R. A. (1993). Perception of beat note change in modulating tempos. Bulletin of the Council for Research in Music Education, (119).

Geringer, J. M., Duke, R. A., & Madsen, C. K. (1992). Musicians’ perception of beat note: Regions of beat change in modulating tempos. Bulletin of the Council for Research in Music Education, (114), 21–33.

Grahn, J. A. (2009). Neuroscientific investigations of musical rhythm: Recent advances and future challenges. Contemporary Music Review, 28(3), 251–277. https://doi.org/10.1080/07494460903404360

Grahn, J. A. (2012). Neural Mechanisms of Rhythm Perception: Current Findings and Future Perspectives. Topics in Cognitive Science, 4(4), 585–606. https://doi.org/10.1111/j.1756-8765.2012.01213.x

Grahn, J. A., & McAuley, J. D. (2009). Neural bases of individual differences in beat perception. NeuroImage, 47(4), 1894–1903. https://doi.org/10.1016/j.neuroimage.2009.04.039

Grahn, J. A., & Rowe, J. B. (2009). Feeling the beat: Premotor and striatal interactions in musicians and nonmusicians during beat perception. Journal of Neuroscience, 29(23), 7540–7548. https://doi.org/10.1523/JNEUROSCI.2018-08.2009
