
Pre-attentive beat perception and musical abilities:

An ERP-study on the influence of attention and musical abilities on beat perception and statistical learning.

- Master thesis of Carola M. Werner -

Internship period: Oct. 2014 – Oct. 2015
Student: C.M. Werner BSc
Student number: 10529462
Master: MSc Brain and Cognitive Sciences
Track: Cognitive Science
Organization: University of Amsterdam
Unit: MCG, ILLC
Daily supervisor: Mw. F.L. Bouwer MSc MMus
Co-assessor: Prof. Dr. H. Honing
Amount of ECTs: 42


Abstract

Dancing and music are important for social interaction, and beat perception is crucial for both dancing and making music; it has therefore been investigated extensively. Previous studies on beat perception, however, have yielded contradictory results. First, it remains to be elucidated whether beat perception is possible in the absence of attention, because previous studies that claimed the existence of pre-attentive beat perception may have measured statistical learning instead of beat perception. Second, studies that investigated the effect of musical training on beat perception show mixed results, owing to the many different classifications of musicians and non-musicians. Because beat perception might be innate, possibly even a fundamental trait, it might be better to investigate the effect of beat perception abilities, rather than of musical training, on beat perception. The current study aimed to replicate previous findings while disentangling beat perception and statistical learning. In addition, we investigated the effect of beat perception abilities on beat perception and compared this to the effect of musical training. For these purposes, 34 healthy adults participated in an ERP-study in which we focused on three main ERP components: the MMN, N2b and P3a. To disentangle beat perception from statistical learning, participants were presented with a regular condition, with constant inter-onset-intervals, and a jittered condition, with variable inter-onset-intervals. In the jittered condition the pattern of loud ('beat') and soft ('offbeat') tones was the same as in the regular condition. This makes statistical learning possible in the jittered condition, whereas the variable inter-onset-intervals make beat perception impossible there. In the regular condition beat perception is possible, hence the combined effect of beat perception and statistical learning in the regular condition minus the effect of statistical learning in the jittered condition yields a measure of beat perception. Furthermore, to see how attention affects beat perception, we presented the stimuli in an attended as well as an unattended condition. Musical training and beat perception abilities were measured with the Goldsmiths Musical Sophistication Index. The results show, for the first time, that beat perception is indeed possible without attention, even when controlling for statistical learning. In addition, we show that people with extensive musical training show a larger effect of attended beat perception in ERPs than people with little musical training, whereas we found no effect of beat perception abilities in the attended condition. In contrast, people with high beat perception abilities show a larger effect of unattended beat perception in ERPs than people with low beat perception abilities, whereas musical training had no effect here. Hence, our study substantiates the claim that beat perception may be innate and possibly even a fundamental human trait, but also shows that, although it is already present at birth, beat perception can be trained by musical training.


Table of contents

Abstract
Introduction
Method
    Ethics Statement
    Participants
    Stimuli
    Procedure
    EEG recording
    EEG analysis
    Statistical analysis
Results
    Gold-MSI
    N2b
    MMN
    P3a
    Expertise
Discussion
    Main results
    Beat perception, statistical learning and attention
    Musical training versus beat perception abilities
    Future directions
Conclusions
References


Introduction

Beat perception is important for social interaction, allowing us to dance and make music together. Music contains rhythm, which is defined as a pattern of durations that can be represented on a discrete, symbolic scale (Honing, 2013, p. 371). When listening to a rhythm, one assigns a metrical structure to it. A metrical structure is a hierarchically organized interpretation of two or more levels of the beat (Honing, 2013, p. 371). The beat is often defined as a regularly recurring salient moment in time (Cooper & Meyer, 1960). Figure 1 illustrates these three concepts based on a rhythm noted down in common music notation (score). Beat perception can be explained by the dynamic attending theory (DAT) (Large & Jones, 1999). DAT states that temporal fluctuations in attention entrain to a rhythm, making it possible to predict future events. These fluctuations in attentional resources peak on the beat, implying that unexpected deviant events on metrically strong positions should be detected better than unexpected deviant events on metrically weak positions. Confirming this, an ERP-study on beat perception in adults shows that omissions in a rhythm are detected better on metrically strong positions (the beat) than on metrically weak positions (the offbeat) (Ladinig, Honing, Háden, & Winkler, 2009). A more recent study, however, shows that while unexpected intensity decrements are indeed detected better on metrically strong positions, unexpected increments are detected better on metrically weak positions (Bouwer & Honing, 2015). Apparently, the temporal fluctuations in attentional resources (temporal attending) are not the only factor contributing to beat perception; temporal prediction also plays a major role.

The latter study, as well as several other studies measuring event-related potentials (ERPs), indicates that beat perception is possible without attention (Bouwer & Honing, 2015; Bouwer, Van Zuijen, & Honing, 2014; Geiser, Sandmann, Jäncke, & Meyer, 2010; Ladinig et al., 2009). A study with newborns even hints that beat perception might be innate (Winkler, Háden, Ladinig, Sziller, & Honing, 2009), substantiating the claim that beat perception is a fundamental human trait (Honing, 2012). However, caution is warranted with the claim that beat perception can be pre-attentive.

Fig. 1. Clarification of a rhythm in four different ways. First, the rhythm is noted down in common music notation (score). Second, the rhythm is explained with dashes for sound and dots for silence. Third, the perceived beat is indicated with bullets and fourth, a possible metrical structure is displayed (the length of the branches indicates the saliency of that note). The second level of the meter indicates the position of the perceived beat in the rhythm (adapted from Honing, Bouwer, & Háden, 2014).

Possibly, some of these studies (e.g. Bouwer et al., 2014; Ladinig et al., 2009; Winkler et al., 2009) did not measure beat perception but statistical learning. Statistical learning can be defined as the ability to learn patterns of stimuli, such as music, unfolding in time (Daltrozzo & Conway, 2014, p. 3). It is known that the auditory cortex is an expert in statistical learning of tone sequences (Saffran, Johnson, Aslin, & Newport, 1999). Due to statistical learning one is able to predict what the next tone will be, while beat perception also helps to predict when the next tone will occur. Therefore, it is important to disentangle beat perception and statistical learning in order to investigate beat perception alone.

For example, Bouwer et al. (2014) might have measured statistical learning instead of beat perception in their ERP-study. They presented their participants with a continuous stream of varying rock-like rhythms that induced a beat. The rhythms were constructed from hi-hat, snare and bass drum sounds. There were four standard patterns that accounted for 90% of the whole stream; these patterns are illustrated in figure 2. In S1 all positions of the rhythm were filled, whereas in S2-S4 sounds were omitted on metrically weak positions. The remaining 10% of the stream consisted of three different deviants. In D1 and D2 sounds were omitted on metrically strong positions (the beat), whereas in D3 sounds were omitted on metrically weak positions (the offbeat). The positions of the omitted sounds differed between the standard patterns and the deviants. The problem with the stimuli in this study is the transitional probabilities of the deviants. Each deviant occurs in 3.3% of the stream, resulting in a 6.6% chance of silence after a hi-hat (silence follows a hi-hat in both D1 and D2: 3.3% + 3.3% = 6.6%). However, the chance of silence after a bass drum sound is 25.8%, because this is the case in D3 and in S2 (assuming the four standards occur equally often, S2 accounts for 22.5% of the stream: 3.3% + 22.5% = 25.8%). Hence, the transitional probabilities are not the same for all deviants (6.6% versus 25.8%). Because there is a relatively high chance of silence after a bass drum, the mismatch negativity (MMN) response to deviants on metrically weak positions following a bass drum may be much smaller than the response to deviants on metrically strong positions following a hi-hat. Therefore, the difference between the MMN responses to deviants on metrically weak and metrically strong positions may be due to statistical learning and not to beat perception. This example stresses the importance of disentangling beat perception and statistical learning.

Fig. 2. A schematic overview of the stimuli that Bouwer et al. (2014) used in their experiment. The rhythms consisted of eight positions. A possible metrical structure is shown on top of the figure.

Another important topic of current debate is the influence of expertise on beat perception. On the one hand, research showed that newborns already display a form of beat perception, suggesting that beat perception is innate (Winkler et al., 2009). These results substantiate the claim that beat perception is a fundamental human trait (Honing, 2012). If this really is the case, beat perception should be present regardless of musical training, so non-musicians may perform as well as musicians in a beat perception task. Indeed, Bouwer et al. (2014) did not show a difference between musicians and non-musicians in a beat perception task: the difference in size of the MMN response to deviants on metrically weak positions and deviants on metrically strong positions did not differ between musicians and non-musicians. However, as described earlier, it might be that they investigated statistical learning instead of beat perception. On the other hand, Geiser et al. (2010) investigated the MMN response to metre-congruent and metre-incongruent deviants consisting of unexpected intensity accents. They did find a larger difference in MMN response between metre-incongruent and metre-congruent deviants for musicians than for non-musicians. The discrepancy between these two studies may be explained by the length of the inter-onset-intervals (IOIs) in the stimuli that were used. The IOI in S1 of Bouwer et al. (2014) (see figure 2) was 150 ms, and occasionally 300 ms in S2-S4 due to the omitted sounds. For the stimuli of Geiser et al. (2010) the standard IOI was 300 ms and occasionally even longer (600 ms). An ERP-study investigating omissions in regular tone series with varying IOI lengths showed that musicians have a longer and more precise window of temporal integration than non-musicians: while musicians showed reliable MMNs to omissions regardless of the IOI length of the sound stream, non-musicians barely showed an MMN response to omitted sounds in streams with IOIs of 180 and 220 ms, but did show an MMN response in streams with shorter IOIs (100 ms and 120 ms) (Rüsseler, Altenmüller, Nager, Kohlmetz, & Münte, 2001). Hence, the IOI differences between the stimuli of Bouwer et al. (2014) and Geiser et al. (2010) might explain their contrasting results.

A more recent study focused on the influence of expertise on temporal and numerical regularity (van Zuijen, Sussman, Winkler, Näätänen, & Tervaniemi, 2005). This study did not focus on beat perception and statistical learning as such, but one could argue that its paradigm tested something similar. The paradigm consisted of a sequence of tones with the same pitch that switched to a new tone sequence with a different pitch. This could happen in two different ways: either after 750 ms (temporal regularity) or after four tones (numerical regularity). In the temporal regularity condition, the tone sequences consisted of a variable number of tones with the same pitch in a fixed period of time (750 ms). The deviant in this condition was a sequence duration longer than 750 ms. This violation of temporal regularity can be seen as a violation of the beat, because participants built an expectation of a pitch change every 750 ms. Hence, every 750 ms there was a pitch change, which is a salient moment in time, and thus a 'beat' according to Cooper & Meyer (1960). The deviant, a sequence duration longer than 750 ms, violates this regular pattern, resulting in an MMN. On the other hand, one could argue that the numerical regularity condition represented statistical learning. In this condition each sequence contained four tones of the same pitch, but the duration of the sequences was variable. So, it was possible to predict what the next pitch would be, i.e. the same as or different from the previous tone, but it was not possible to find a regularly recurring salient moment in time. This is not beat perception but statistical learning. The results show no effect of expertise for violations of temporal regularity. For violations of numerical regularity an effect of expertise was found: there was no MMN response to violations of the numerical regularity in non-musicians at all (van Zuijen et al., 2005). So, where temporal regularity might be guided by beat perception and, according to these results, is not influenced by musical training, numerical regularity (and perhaps statistical learning) could be influenced by musical training.

In sum, some studies suggest that beat perception can be trained (Geiser et al., 2010; Rüsseler et al., 2001), while other studies show that beat perception exists regardless of musical training (Bouwer et al., 2014; van Zuijen et al., 2005). While the length of the IOIs is one possible explanation for this difference, another is the way musicians and non-musicians are selected. Some studies select only professional musicians and people who never had any music lessons at all, while other studies use less strict selection criteria. This makes it hard to explain the contrasting results. Possibly, musical ability, and in this case beat perception ability, is a much better predictor of the results of a beat perception experiment than musical training (Levitin, 2012).

With the current ERP-study we aim to address the issues of previous beat perception studies described above. First, we will investigate the effect of attention on beat perception, taking statistical learning into account. To induce a beat we will present our participants with a pattern of loud ('beat') and soft ('offbeat') tones. To disentangle beat perception and statistical learning we will present regular and jittered versions of this pattern. In the jittered condition the pattern will be the same as in the regular condition, but whereas in the regular condition the IOIs will be identical between all tones, in the jittered condition the IOIs will be random. Because the pattern in the jittered condition is consistent, statistical learning will be possible, but due to the lack of temporal regularity beat perception will be impossible. The difference between the ERP responses to deviants on the beat and deviants offbeat in the regular condition, minus the same difference in the jittered condition, will be considered a measure of beat perception. To create identical contexts for all deviants, the pattern will also contain loud offbeat tones. Second, we will investigate whether the effect of beat perception in the ERPs is affected more by musical training or by beat perception abilities. Beat perception abilities will be measured with the Beat Alignment Task (BAT) (Iversen & Patel, 2008; Müllensiefen, Gingras, Musil, & Stewart, 2014) and musical training with a subscale of the Gold-MSI questionnaire developed by Müllensiefen et al. (2014). We will then divide our participants in two different ways: once based on the score on the musical training subscale of the Gold-MSI questionnaire and once based on the BAT-score. With a regression analysis we will investigate whether the effect of beat perception in the ERPs is affected more by musical training or by beat perception ability. In this study we will focus on three main ERP components: the MMN, the N2b and the P3a.

MMN

The MMN is an ERP response elicited by violations of acoustic expectations (Winkler, 2007) and is often found using oddball paradigms. An oddball paradigm consists of a standard tone sequence in which unexpected deviants are introduced. The standards are regular and expected tones, whereas the deviants are infrequent and therefore unexpected changes. A deviant will elicit an MMN. The MMN can be recognized as a negative-going wave that peaks between 100 and 200 ms after the onset of the stimulus (Luck, 2005), and its size scales with the magnitude of the deviation (Winkler, 1997). Another important feature of the MMN is that it is independent of attention (Honing, Bouwer, & Háden, 2014; Luck, 2005). This makes the MMN an attractive component for studying beat perception without attention, for example in newborns (Winkler et al., 2009) or monkeys (Honing, Merchant, Háden, Prado, & Bartolo, 2012).

N2b

The N2b component is similar to the MMN but, in contrast to the MMN, it is affected by attention (Honing et al., 2014). The N2b is elicited by task-related as well as task-irrelevant deviants (Schröger & Wolff, 1998). The N2b usually overlaps with the MMN, making it difficult to study the MMN in an attended task. Because the N2b is so similar to the MMN, it is a good candidate for studying attended beat perception.

P3a

The P3 family consists of the frontally located P3a and the more parietally located P3b. The P3a is often called the novelty P3, because it is elicited in classic oddball paradigms (Polich, 2007). The P3b is only elicited when deviants are task-relevant (Luck, 2005). A common interpretation of the P3b is that it is related to context updating (Donchin, 1981), while the P3a is more related to focal attention (Polich, 2007). The P3a is a positive frontal ERP component that peaks around 300 ms (Luck, 2005) and is known to be independent of attention (Muller-Gass, Macdonald, Schröger, Sculthorpe, & Campbell, 2007). In this study we will focus on the P3a, because we only test task-irrelevant deviants.

We expect to find larger ERP responses to deviants on the beat than offbeat in general, because of dynamic attending. Because we control for statistical learning, we expect to find this effect in the jittered as well as the regular condition. However, because we hypothesize that beat perception plays a role in listening to music, we expect the difference in the size of ERPs to deviants on the beat and offbeat to be larger in the regular condition than in the jittered condition. Furthermore, we hypothesize, in line with Levitin (2012), that beat perception abilities affect beat perception more than musical training does.

Method

Ethics Statement

All participants gave written informed consent before the study. The experiment was approved by the Ethics Committee of the Faculty of Humanities at the University of Amsterdam.

Participants

Thirty-four healthy adults (23 female) participated in this study, with an average age of 26.2 years (SD = 5.3 years, range = 19-46 years). None of them had neurological or hearing problems. All participants had college education or higher. In selecting participants we aimed for a wide range of years of formal instrumental training, in order to create a diverse group (mean = 9.7 years, SD = 9.6 years, range = 0-34 years).

Stimuli

In our study a continuous rhythm was played, consisting of bass drum and hi-hat sounds created with QuickTime's drum timbres (Apple Inc.). A visual representation of the stimuli is shown in figure 3. The black accented tones consist of bass drum plus hi-hat sounds; the grey unaccented tones consist of hi-hat sounds only. The unaccented tones were 16.5 dB softer than the accented ones, creating a clear pattern of loud and soft tones that induces a beat. Deviants were attenuated accented sounds that were 25 dB softer than normal accented tones. The standard pattern, which accounted for 90% of the total stream, was formed out of S1 (60%) and S2 (30%). S2 was designed to create equal transitional probabilities for beat deviants and offbeat deviants. This means that a deviant was always preceded by an accented sound, i.e. D1 was always preceded by S2. Also, to create identical contexts for D1 and D2, D1 had an accented offbeat. Together, D1 and D2 occurred in 10% of all trials. To disentangle beat perception and statistical learning we designed two different streams based on the pattern illustrated in figure 3, resulting in four deviants (D1 and D2, each in a regular and a jittered version). In the attended condition there were 150 deviant trials for each condition, so 600 deviant trials in total. In the unattended condition there were 180 deviant trials for each condition, so 720 deviant trials in total. In the regular condition beats and offbeats followed each other with an IOI of 225 ms, resulting in a tempo of 133 beats per minute (450 ms from beat to beat). This tempo is within the range of preferred tempi for beat perception in humans (London, 2012). The control condition was a jittered condition: the IOIs differed from tone to tone and were drawn from a flat distribution, i.e. every IOI duration between 150 ms and 300 ms was equally likely. This makes the rhythm very unpredictable, hence it is impossible to find a beat in it. However, the pattern of accented and unaccented tones was the same as in the regular condition, making statistical learning possible. The IOIs around the deviants in the jittered condition were fixed at 225 ms in order to create an acoustic context similar to that of the regular deviants. These two conditions allow us to disentangle statistical learning and beat perception, because the difference between the ERPs elicited by the beat and offbeat deviants in the jittered condition is a measure of statistical learning. When the difference in MMN response to the beat and offbeat deviants in the regular condition is larger than the corresponding difference in the jittered condition, we have probably measured beat perception.
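As an illustration, a minimal sketch of how such a jittered onset sequence could be generated (our own reconstruction in Python; the stimuli were actually built with Presentation software, and everything beyond the flat 150-300 ms distribution and the fixed 225 ms IOIs around deviants is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)  # seed chosen arbitrarily for reproducibility

def jittered_onsets(n_tones, deviant_idx):
    """Tone onset times (ms): IOIs drawn from a flat 150-300 ms
    distribution, with the IOIs around the deviant fixed at 225 ms."""
    iois = rng.uniform(150, 300, size=n_tones - 1)
    # fix the intervals directly before and after the deviant tone,
    # so its local acoustic context matches the regular condition
    for k in (deviant_idx - 1, deviant_idx):
        if 0 <= k < len(iois):
            iois[k] = 225.0
    return np.concatenate(([0.0], np.cumsum(iois)))

onsets = jittered_onsets(n_tones=16, deviant_idx=8)
print(np.diff(onsets)[7:9])  # -> [225. 225.]
```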

Beat perception abilities were measured with the BAT designed by Iversen and Patel (2008) and adapted by Müllensiefen et al. (2014). In this task participants listen to 17 music samples from different musical styles. Along with each piece of music a beep track is played, and participants have to indicate whether the beeps are on or off the beat. The accuracy on this task (the BAT-score) can be seen as a measure of beat perception ability (Iversen & Patel, 2008; Müllensiefen et al., 2014). In addition, musical training was measured with the Gold-MSI questionnaire, designed to measure musical sophistication (Müllensiefen et al., 2014). One of the six subscales of this questionnaire is a measure of musical training. The score on this subscale is a better measure of musical training than years of formal music lessons alone, because it also contains questions regarding hours of practice, the number of instruments the participant plays and years of formal theory lessons. Because many of the participants were native Dutch speakers, we translated the questionnaire into Dutch. For the non-Dutch speakers the original English questionnaire was used.

Fig. 3. Visual representation of the stimuli, with S1 and S2 being the standard patterns and D1 and D2 being the deviant patterns. Black indicates the accented bass drum + hi-hat sounds and grey the unaccented hi-hat sounds. The pattern is the same for the regular and jittered conditions; the only difference between the two conditions is the IOIs. This results in four different deviants, i.e. D1r, D2r, D1j, D2j.

Procedure

First, the participant was informed about the goals of the study. Then the informed consent was signed, and after detailed instructions the experiment started. The stimuli were presented with Presentation® software (Version 17.4, www.neurobs.com) through speakers at a sound level of 60 dB. The stimuli were divided into blocks of five minutes, each containing five separate sequences of 54 seconds. After each block the participant could take a break if needed. The experiment consisted of two parts: an unattended and an attended part. During the unattended part (approximately 65 min.) the participant watched a self-chosen silenced movie with subtitles while listening to the stimuli; in total 12 five-minute blocks were presented. In the attended part (approximately 55 min.), the participant had to press a button upon hearing a temporal perturbation in the rhythm. Each of the 10 five-minute blocks contained a maximum of two temporal perturbations, but we told the participants that up to three perturbations were possible, so that they would pay full attention to the stimuli until the end of each block. In the regular condition this temporal perturbation was an IOI of 185 ms; in the jittered condition it was an IOI of 110 ms. Each participant performed the experiment in this order (first unattended, then attended). The experiment ended with the BAT (approximately 10 min.) to measure beat perception abilities (Müllensiefen et al., 2014). Afterwards, the Gold-MSI questionnaire (see appendix 1) and a short questionnaire about the experiment were filled in. Participants received appropriate monetary compensation.

EEG recording

The EEG signal was recorded with a 64-channel Biosemi ActiveTwo reference-free EEG system (Biosemi, Amsterdam, The Netherlands), with electrodes positioned according to the 10/20 system. Seven additional electrodes were placed on both mastoids, to the left and right of the eyes, above and below the right eye, and on the tip of the nose, to monitor eye movements and to measure electrical signals due to environmental noise. The signal was recorded at a sampling rate of 8 kHz.


EEG analysis

The analysis of the EEG data was performed with Matlab (The Mathworks, Inc.). We started with pre-processing using EEGLAB (Delorme & Makeig, 2004). First, we re-referenced the raw EEG data to the linked mastoids. Then, the data were high-pass filtered at 0.5 Hz to minimize slow drifts. Next, the data were visually inspected to determine whether channels had to be removed and interpolated; in total, 10 channels in 8 subjects were removed and interpolated. After interpolation of bad channels an independent component analysis was performed, and for each subject the eye-blink component was removed. Subsequently, the data were low-pass filtered at 20 Hz to minimize noise caused by muscle activity, and down-sampled to 256 Hz. Epochs were created starting 300 ms before stimulus onset¹ and ending 700 ms after stimulus onset, and were baseline corrected with a 200 ms baseline starting 200 ms before stimulus onset. Finally, an automatic trial rejection removed all trials containing an amplitude smaller than -75 µV or larger than 75 µV.
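For illustration, the same pipeline could be sketched in Python with MNE-Python (the actual analysis used Matlab/EEGLAB; the file name, mastoid channel names, number of ICA components and excluded component index below are hypothetical):

```python
import numpy as np
import mne

# load raw Biosemi data and re-reference to the linked mastoids
raw = mne.io.read_raw_bdf('subject01.bdf', preload=True)  # hypothetical file
raw.set_eeg_reference(['M1', 'M2'])                       # hypothetical channel names

raw.filter(l_freq=0.5, h_freq=None)   # high-pass at 0.5 Hz against slow drifts

# ICA for eye blinks (bad-channel inspection/interpolation omitted for brevity)
ica = mne.preprocessing.ICA(n_components=20, random_state=0).fit(raw)
ica.exclude = [0]                     # blink component, chosen by visual inspection
raw = ica.apply(raw)

raw.filter(l_freq=None, h_freq=20.0)  # low-pass at 20 Hz against muscle noise
raw.resample(256)                     # down-sample to 256 Hz

events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, tmin=-0.3, tmax=0.7,
                    baseline=(-0.2, 0.0), preload=True)

# reject trials whose absolute amplitude exceeds 75 µV on any EEG channel
data = epochs.get_data(picks='eeg')
keep = np.nonzero((np.abs(data) <= 75e-6).all(axis=(1, 2)))[0]
epochs = epochs[keep]
```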

After pre-processing, the ERPs were created by averaging over trials for all time points and electrodes, for each condition (regular and jittered) and participant separately. From these raw ERPs, difference waves were calculated (deviant minus regular standard), resulting in four difference waves (regular beat, regular offbeat, jittered beat and jittered offbeat). We used the regular standards rather than the jittered standards to create these difference waves, because for the regular standards the acoustic context was always the same; this was not the case for the jittered standards due to the variable IOIs. Figure 4 shows the difference waves calculated with the jittered standard and the regular standard respectively. As illustrated there, using the regular standard results in a flat baseline of the difference wave, whereas using the jittered standard causes a lot of noise in the baseline.

Fig. 4. Construction of the difference waves. In the left panel the difference wave is calculated by subtracting the jittered standard from the jittered deviant; in the right panel it is calculated by subtracting the regular standard from the jittered deviant.

¹ Here, stimulus onset is the onset of the single sound. So, for D1 this is the sound on a metrically strong position and for D2 this is the sound on a metrically weak position. For the standards, only the onsets of the accented sounds were used and compared to the deviant sounds on the exact same positions.
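A minimal sketch of this averaging step, continuing from the `epochs` object above (the condition labels are our assumptions, not the authors' actual event codes):

```python
# average over trials per condition; the event labels are hypothetical
conditions = ['standard/regular', 'deviant/regular/beat', 'deviant/regular/offbeat',
              'deviant/jittered/beat', 'deviant/jittered/offbeat']
erp = {c: epochs[c].average() for c in conditions}

# difference waves: each deviant minus the *regular* standard,
# whose acoustic context is constant across trials
std = erp['standard/regular'].data
diff_waves = {c: erp[c].data - std for c in conditions[1:]}
```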


Statistical analysis

All statistical analyses were performed with SPSS version 21 (IBM Inc.). Because we translated the Gold-MSI questionnaire into Dutch, we first validated our results from this questionnaire: we compared our scores on the questionnaire with those of Müllensiefen et al. (2014), and we ran a correlation test of all subscales of the Gold-MSI questionnaire with the BAT-score.

For the analysis of the ERPs we defined a window of analysis for each ERP component separately. These windows were defined based on the peaks of the components, which were determined from the average of the difference waves of all conditions (regular beat, regular offbeat, jittered beat and jittered offbeat). For each component we took a 60 ms window centred on its peak. Because the unattended MMN is a negative-going mid-frontal wave (Luck, 2005), we averaged over electrodes FCz and Cz. The negative peak at these electrodes lay at 134 ms, hence the window of analysis for the MMN ranged from 104 ms to 164 ms after stimulus onset at electrodes FCz and Cz. For the attended N2b the same electrodes were used, as this component is similar to the MMN in scalp distribution (Honing et al., 2014). Its negative peak lay at 161 ms, so the window of analysis for the N2b ranged from 131 ms to 191 ms after stimulus onset at electrodes FCz and Cz. For the P3a we used electrode FCz, because the P3a is a frontal ERP component (Polich, 2007). A distinction was made between beat and offbeat, because the P3a peaked much later in the offbeat condition than in the beat condition (see table 1). The positive peak of the attended beat condition lay at 247 ms, giving a window from 217 ms to 277 ms after stimulus onset; for the attended offbeat condition the peak lay at 290 ms, giving a window from 260 ms to 320 ms. For the unattended beat condition the P3a peaked at 231 ms, so the window ranged from 201 ms to 261 ms; for the unattended offbeat condition the peak lay at 247 ms, giving a window from 217 ms to 277 ms after stimulus onset. An overview of all peak latencies can be found in table 1.
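For illustration, the window definition could be sketched as follows (a sketch assuming waves sampled at 256 Hz from -300 ms; the 50-250 ms search range and the synthetic demo wave are our assumptions, not stated in the text):

```python
import numpy as np

SFREQ, TMIN = 256.0, -0.3  # sampling rate (Hz) and epoch start (s)

def peak_window(grand_diff, t_from=0.05, t_to=0.25, negative=True):
    """Locate the peak of a 1-D grand-average difference wave within a
    search range and return a 60 ms (+/- 30 ms) analysis window."""
    times = TMIN + np.arange(grand_diff.size) / SFREQ
    mask = (times >= t_from) & (times <= t_to)
    seg = grand_diff[mask]
    idx = np.argmin(seg) if negative else np.argmax(seg)
    peak_t = times[mask][idx]
    return peak_t - 0.03, peak_t + 0.03

# demo with a synthetic wave peaking (negatively) around 134 ms
t = TMIN + np.arange(256) / SFREQ
wave = -np.exp(-((t - 0.134) ** 2) / (2 * 0.02 ** 2))
print(peak_window(wave))  # -> approximately (0.104, 0.164)
```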

For each condition we then averaged the difference wave over the time window of the specific component, resulting in one difference wave value per component per condition. To study whether the participants used beat perception, and not statistical learning, to recognize the beat, a repeated measures ANOVA (RM-ANOVA) was performed for each component separately, with within-subject factors regularity (regular versus jittered) and position (beat versus offbeat).
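The analyses were run in SPSS; an equivalent RM-ANOVA could be set up in Python as follows (a sketch with a hypothetical long-format data file, one window-mean amplitude per subject and condition):

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# hypothetical columns: subject, regularity (regular/jittered),
# position (beat/offbeat), amplitude (mean difference-wave value)
df = pd.read_csv('mmn_window_means.csv')

res = AnovaRM(df, depvar='amplitude', subject='subject',
              within=['regularity', 'position']).fit()
print(res)  # the regularity x position interaction indexes beat perception
```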


Table 1. An overview of the peak latencies of the N2b, MMN and P3a for each condition.

                                        Attended (N2b)    Unattended (MMN)
Regular Beat                            161 ms            134 ms
Regular Offbeat                         171 ms            130 ms
Jittered Beat                           145 ms            134 ms
Jittered Offbeat                        No peak           130 ms
Average over all conditions             161 ms            134 ms

                                        Attended (P3a)    Unattended (P3a)
Regular Beat                            243 ms            227 ms
Regular Offbeat                         286 ms            243 ms
Jittered Beat                           255 ms            235 ms
Jittered Offbeat                        298 ms            251 ms
Beat - average over all conditions      247 ms            231 ms
Offbeat - average over all conditions   290 ms            247 ms

NB. The averages of these peak latencies can differ slightly from the peak values used to define the windows of analysis, because for those values we first averaged over all conditions and then defined the peak of that average difference wave. Conditions with a more pronounced peak weigh more heavily in this average.

To study whether musical training and BAT-score affect beat perception differently, a value representing beat perception and a value representing statistical learning were calculated for each component. We defined statistical learning as the difference in ERP response between jittered beat and jittered offbeat deviants. Because we want to control for statistical learning, beat perception was calculated as the difference in ERP response between regular beat and regular offbeat deviants minus the difference in ERP response between jittered beat and jittered offbeat deviants. Next, the participants were divided into two groups in two different ways: first, each participant was categorized as musician or non-musician based on their score on the musical training subscale of the Gold-MSI questionnaire; second, each participant was categorized as low or high beat perceiver based on their BAT-score. Finally, a linear stepwise regression on beat perception was performed with these groups as independent variables.
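Written out, statistical learning (SL) and beat perception (BP) per participant, plus the subsequent regression, could look like this sketch (simulated data; we use an ordinary least-squares fit with both group codings entered at once, rather than SPSS's stepwise procedure):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 34  # participants

# simulated mean difference-wave amplitude per participant and condition
reg_beat, reg_off = rng.normal(-1.0, 1, n), rng.normal(-0.3, 1, n)
jit_beat, jit_off = rng.normal(-0.4, 1, n), rng.normal(-0.3, 1, n)

sl = jit_beat - jit_off              # statistical learning
bp = (reg_beat - reg_off) - sl       # beat perception, controlled for SL

# simulated median-split codings (1 = musician / high beat perceiver)
musician = rng.integers(0, 2, n)
high_bat = rng.integers(0, 2, n)

X = sm.add_constant(np.column_stack([musician, high_bat]))
print(sm.OLS(bp, X).fit().summary())  # which grouping predicts BP?
```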

Results

Gold-MSI

The Gold-MSI questionnaire consists of six subscales: active engagement, perceptual abilities, musical training, emotions, singing abilities and general sophistication. The scores on these subscales are displayed in table 2A (our results) and table 2B (the results of Müllensiefen et al. (2014)). The scores found in the current study fall well within the range of the scores of Müllensiefen et al. (2014). Furthermore, we performed a correlation analysis of the subscale scores with the BAT-score; the results are shown in table 3. The correlation between the musical training subscale and BAT accuracy matches that of Müllensiefen et al. (2014), hence we can use these two measures to see which of them affects beat perception more.

Table 2A. Overview of the scores on the subscales of the Gold-MSI questionnaire in the current study (N = 34).

        Active      Perceptual   Musical    Emotions   Singing     General
        Engagement  Abilities    Training              Abilities   Sophistication
Min     15          29           7          29         7           31
Max     58          61           46         42         45          117
Mean    38.59       47.91        27.77      34.62      30.79       78.97
SD      8.83        9.14         14.45      3.43       9.61        24.25

Table 2B. Overview of the scores on the subscales of the Gold-MSI questionnaire from the study of Müllensiefen et al. (2014) (N = 136,924).

        Active      Perceptual   Musical    Emotions   Singing     General
        Engagement  Abilities    Training              Abilities   Sophistication
Min     9           9            7          6          7           18
Max     63          63           49         42         49          126
Mean    41.52       50.20        26.52      34.66      31.67       81.58
SD      10.36       7.86         11.44      5.04       8.72        20.62

Table 3. Correlations of the scores on the subscales of the questionnaire with the accuracy score on the Beat Alignment Task. The "BAT accuracy" columns show the correlations of the present study (N = 34); the "Müllensiefen et al. (2014)" columns show the correlations Müllensiefen et al. (2014) found in their test-retest sample (N = 34).

                          BAT accuracy                  Müllensiefen et al. (2014)
                          Pearson's R   p (2-tailed)    Pearson's R   p (2-tailed)
Active Engagement         0.280         0.108           0.216         > 0.05
Perceptual Abilities      0.699         < 0.001         0.325         > 0.05
Musical Training          0.493         < 0.01          0.354         < 0.05
Emotions                  0.229         0.193           0.308         > 0.05
Singing Abilities         0.552         0.001           0.353         < 0.05
General Sophistication    0.514         < 0.01          0.379         < 0.05


N2b

To study beat perception in the attended condition an RM-ANOVA on the N2b data was performed. An interaction involving electrode was found (F(1,33) = 18.094, p < 0.001, η² = 0.354). Post-hoc tests revealed an effect of position in the regular condition (p < 0.001) at electrode FCz, but not in the jittered condition (p = 0.985). At electrode Cz there was also an effect of position in the regular condition (p < 0.001), but not in the jittered condition (p = 0.184). So, there was no statistical learning yet in the early attended N2b component. These effects are illustrated in figure 6. Figure 9A shows the N2b component in the difference wave and the scalp distribution of the N2b per condition. As illustrated in figure 6, the interaction between regularity and position in the N2b was much larger at electrode FCz than at electrode Cz; therefore, beat perception and statistical learning were calculated at this electrode for the N2b.

Fig. 6. Results of the RM-ANOVA on the N2b at electrodes FCz and Cz. Position is indicated on the x-axis; the lines indicate the average N2b values per regularity condition.

MMN

To investigate whether the participants performed beat perception or statistical learning in the unattended condition, we analysed the MMN data with an RM-ANOVA. It showed no interaction with electrode (F(1,33) = 1.369, p = 0.250, η² = 0.040), but there was an interaction between regularity and position (F(1,33) = 14.762, p = 0.001, η² = 0.309). This interaction is illustrated in figure 7. Post-hoc pairwise comparisons show a difference between positions in the regular condition (p = 0.001), but not in the jittered condition (p = 0.640). This means that there was no statistical learning yet in the early unattended component. Figure 9B shows the MMN component and the scalp distribution of the MMN per condition.

Fig. 7. Interaction between position and regularity in the MMN. Position is indicated on the x-axis; the bars indicate the average MMN values over electrodes FCz and Cz.

P3a

For the P3a we performed two separate RM-ANOVAs: one for the attended condition and one for the unattended condition. For the attended condition we found a main effect of position (F(1,33) = 7.408, p = 0.010, η² = 0.183) as well as a main effect of regularity (F(1,33) = 6.192, p < 0.05, η² = 0.158), but no interaction between position and regularity (F(1,33) = 1.465, p = 0.235, η² = 0.043). The P3a was larger in the beat condition than in the offbeat condition, as illustrated in figure 8A, indicating that statistical learning was going on. This suggests that although there was no interaction between regularity and position, and thus no beat perception, some regularity detection took place. In the unattended condition there was a main effect of position (F(1,33) = 13.696, p = 0.001, η² = 0.278), but not of regularity (F(1,33) = 0.823, p = 0.371, η² = 0.024), as illustrated in figure 8B. No interaction between position and regularity was found (F(1,33) = 3.905, p = 0.057, η² = 0.106).

The plots of the difference waves and scalp distributions of the attended and unattended P3a are illustrated in figure 9.

Fig. 8. Results of the RM-ANOVAs on the attended (A) and unattended (B) P3a at electrode FCz.

Expertise

To study the effect of musical training and beat perception abilities on beat perception, we divided the 34 participants in two different ways: first into groups based on the score on the musical training subscale of the Gold-MSI questionnaire, and second based on their BAT-score. The median score on the musical training subscale was 63.3%. This resulted in a group of seventeen musicians (12 females) between 19 and 46 years old (mean age = 27 years, SD = 6.7 years), who scored between 67.4% and 93.9% (mean score = 83.1%, SD = 7.7%). The seventeen non-musicians (11 females) were between 21 and 35 years old (mean age = 25.4 years, SD = 3.6 years) and scored between 14.3% and 59.2% (mean score = 30.3%, SD = 15.9%). There was no age difference between the people with a high score on musical training and the people with a low score (t(32) = -0.900, p = 0.058). For the BAT-score the median was 79.4%. The seventeen high beat perceivers (13 females) were between 19 and 46 years old (mean = 27.4 years, SD = 6.3 years); their BAT-scores ranged from 82.4% to 100% (mean = 93.1%, SD = 7.0%). The low beat perceivers (10 females) were between 20 and 35 years old (mean = 25.0 years, SD = 4.0 years) and scored between 47.1% and 76.5% (mean = 64.0%, SD = 10.4%). Again, there was no age difference between the high and low beat perceivers (t(32) = -1.371, p = 0.194). There was 75% overlap between the two divisions: 75% of the people who scored low on musical training were also low beat perceivers.

For all components we performed a separate regression analysis on beat perception and on statistical learning, with the groups based on musical training and BAT-score as independent variables. Figure 10 shows the results of the regression analysis on the N2b and MMN components, i.e. attended and unattended beat perception. As figure 10 illustrates, people who score high on musical training show a large effect of beat perception in the attended condition (t(32) = -2.900, p < 0.01); this effect is not visible for the BAT-score (t(32) = -0.246, p = 0.807). For unattended beat perception, however, this pattern is reversed: here a high BAT-score results in a large effect of beat perception (t(32) = -2.231, p < 0.05), but musical training does not (t(32) = 0.359, p = 0.722). Attended statistical learning is affected neither by musical training (t(32) = -0.023, p = 0.982) nor by BAT-score (t(32) = 1.484, p = 0.148). Unattended statistical learning is likewise unaffected by musical training (t(32) = 1.074, p = 0.291) and BAT-score (t(32) = 0.438, p = 0.665).

Because the earlier results showed an indication of statistical learning in the P3a component of the attended condition and a hint of beat perception in the P3a component of the unattended condition, we also performed regression analyses for beat perception and statistical learning as represented in the P3a component. In the attended condition we found no effect of musical training (t(32) = -0.129, p = 0.898) or BAT-score (t(32) = 1.839, p = 0.076) on beat perception in the P3a. There was also no effect of musical training (t(32) = -0.009, p = 0.993) or BAT-score (t(32) = 0.276, p = 0.784) on attended statistical learning in the P3a. In the unattended condition beat perception was not affected by musical training (t(32) = -1.282, p = 0.210) or BAT-score (t(32) = 0.527, p = 0.602) either. Unattended statistical learning in the P3a component was also not affected by musical training (t(32) = 1.262, p = 0.216) or BAT-score (t(32) = -0.137, p = 0.892). The effects of musical training and BAT-score on the P3a are illustrated in figure 11.

Fig. 9. Difference waves and scalp distributions per condition: (A) the attended N2b, (B) the unattended MMN, and the attended and unattended P3a.

Fig. 10. Panel A shows the average beat perception scores (x-axis) calculated from the N2b, i.e. attended beat perception, for the BAT-score and the musical training (MT) score from the Gold-MSI subscales. Panel B shows the same scores calculated from the MMN, i.e. unattended beat perception. The x-axes are inverted, as in the ERP plots.
* p < 0.05, ** p < 0.01


Fig. 11. Panel A shows the average beat perception scores (x-axis) calculated from the attended P3a for the BAT-score and the musical training (MT) score from the Gold-MSI subscales. Panel B shows the same scores for the unattended P3a. The x-axes are inverted, as in the ERP plots.
* p < 0.05, ** p < 0.01


Discussion

Main results

The aim of this study was to examine whether beat perception is possible without attention. Because previous studies possibly measured statistical learning instead of beat perception (e.g. Bouwer et al., 2014; Ladinig et al., 2009; Winkler et al., 2009), we controlled for statistical learning in the current study. Furthermore, previous studies divided participants on the basis of years of musical training (Bouwer et al., 2014; Geiser et al., 2010; Rüsseler et al., 2001; van Zuijen et al., 2005), which makes it difficult to compare studies with each other. Additionally, beat perception ability in general might be much more important than years of musical training (Levitin, 2012). Therefore, a second aim of the current study was to investigate which of these factors is the better predictor of beat perception. For these purposes 34 adults participated in a beat perception experiment consisting of an attended and an unattended part; they also performed the Beat Alignment Task (Iversen & Patel, 2008; Müllensiefen et al., 2014). Our MMN and N2b results show that the difference in ERP response to deviants on the beat and deviants offbeat is larger in the regular condition than in the jittered condition; hence, beat perception is possible without attention. Furthermore, the regression analyses show that musical training affects attended beat perception more than beat perception abilities do, whereas beat perception abilities affect unattended beat perception more than musical training does. Statistical learning is affected neither by musical training nor by beat perception abilities. Below, we discuss these main findings in more detail.

Beat perception, statistical learning and attention

A major aim of this study was to disentangle beat perception from statistical learning, because earlier studies may have measured statistical learning instead of beat perception (e.g. Bouwer et al., 2014; Ladinig et al., 2009; Winkler et al., 2009). In the jittered condition, we did not find any signs of statistical learning in the early components (N2b and MMN); signs of statistical learning were found only in the late P3a component in the attended condition. This means that statistical learning does play a role in the perception of the beat and that it is important to control for it.

Our results show that beat perception is possible both with and without attention, confirming the claims of previous studies that beat perception is possible without attention (Bouwer & Honing, 2015; Bouwer et al., 2014; Geiser et al., 2010; Ladinig et al., 2009). In both conditions the early components (N2b and MMN) show an interaction between position and regularity, in other words beat perception, while no statistical learning was measured yet in these early components. This might confirm that previous studies did measure beat perception, because they only studied the early MMN component (Bouwer et al., 2014; Ladinig et al., 2009; Winkler et al., 2009). In the attended condition statistical learning was present in the late P3a component, but beat perception was not. In contrast, the late P3a component in the unattended condition hints at beat perception, but shows no signs of statistical learning. These results suggest that beat perception is an early cognitive process, while statistical learning is a late cognitive process that requires attention. That statistical learning might depend on attention confirms an earlier linguistic study showing that attention is needed to perform speech segmentation via statistical learning (Toro, Sinnett, & Soto-Faraco, 2005). However, our results contrast with findings that statistical learning of auditory regularities can in fact occur implicitly (Daltrozzo & Conway, 2014). This contrasting result could be explained by the way we analysed the P3a components. The time windows for the P3a all overlapped with the onset of the next sound, so early components elicited by the next sound fell within these windows, resulting in a noisy P3a. It is difficult to disentangle the P3a from the early components elicited by the next sound, because the P3a peaks around 300 ms while the next sound in our stimuli occurs at 225 ms. Taking an earlier time window is not a good solution either, because the P3a peaks later than 225 ms and we would then miss the peak entirely. Although our results suggest that the P3a is not involved in attended beat perception, the results of the unattended condition show a marginal effect of beat perception. Hence, beat perception may be present in the late P3a component but difficult to measure; the same holds for unattended statistical learning. Slowing the tempo of our stimuli seems an obvious solution, because it avoids overlapping sounds, but it would not work: a slower tempo makes it more difficult for non-musicians to identify the deviants (Rüsseler et al., 2001).

Another issue might be that our jittered condition was too unpredictable for statistical learning to occur, due to the flat distribution that was used to create it. Maybe a certain amount of regularity needs to be present before statistical learning can take place. When the jittered condition becomes more predictable, participants may be able to identify some salient moment in time, i.e. a 'beat'. It would be interesting to study whether the difference in ERP response to beat deviants and offbeat deviants becomes larger as the temporal predictability of the stimuli increases. If so, beat perception may just be an enhanced form of statistical learning, and it would not be possible to disentangle the two processes. On the other hand, if the difference in ERP response to beat and offbeat deviants in the most predictable jittered condition is still much smaller than in the regular condition, beat perception and statistical learning are indeed two different cognitive processes.


Musical training versus beat perception abilities

The second aim of this study was to examine which factor affects beat perception more: musical training or beat perception abilities. For the first time we show that musical training affects attended beat perception more than beat perception abilities do, whereas beat perception abilities affect unattended beat perception more than musical training does. This suggests that musical training provides us with skills that enhance our attended beat perception. Those skills do not play a role when we pre-attentively perceive a beat, however; there our innate beat perception ability takes over. We also found that neither musical training nor beat perception ability affects statistical learning in these early components. In line with this, musical training and beat perception ability do not affect the size of the P3a, because there we mainly found statistical learning and not beat perception.

These results could explain why Bouwer et al. (2014) did not find any differences between musicians and non-musicians in an unattended beat perception task. They divided their participants on the basis of musical training. If they indeed measured statistical learning instead of beat perception, our results confirm that this is not predicted by musical training. If, on the other hand, they measured beat perception, they might find a difference when dividing their participants on the basis of beat perception ability. So, if they divided their participants on the basis of beat perception ability and still found no differences between high and low beat perceivers, this would once more suggest that they measured statistical learning; if they did find a difference, it would confirm that they measured beat perception and not statistical learning.

Future directions

As described earlier, future studies should investigate whether beat perception and statistical learning are two separate processes or whether beat perception is just an enhanced form of statistical learning. This can be done by creating different jittered conditions that range in predictability. Furthermore, previous work suggests that there is neuronal entrainment to the beat (Nozaradan, Peretz, Missal, & Mouraux, 2011; Nozaradan, Peretz, & Mouraux, 2012). Time-frequency analyses of the EEG data might therefore be useful for identifying differences in neuronal entrainment to the beat between the jittered and the regular conditions, i.e. differences in neuronal entrainment induced by statistical learning and by beat perception.

Apart from this, it would also be interesting to investigate how much information, and what kind of information, is needed to recognise a beat. It seems, for example, that it is more difficult to tap along when a rhythm becomes more complex (Fitch & Rosenfeld, 2007), so the complexity of a rhythm might play a major role in recognising a beat. In addition, another study showed that infants prefer the musical meter they are familiar with (Soley & Hannon, 2010), suggesting that it might be more difficult for Western non-musicians to recognise the beat in irregular rhythms. Making the stimuli more complex, for example by using an irregular meter, may therefore be a good way to investigate whether beat perception abilities indeed affect unattended beat perception more than musical training does.

Conclusions

In sum, we can conclude that beat perception is possible without attention, although future studies should investigate whether beat perception may be an enhanced form of statistical learning. Furthermore, it seems that beat perception ability is a skill that is used pre-attentively, whereas musical training provides skills that can be used consciously. Our study therefore substantiates the claim that beat perception may be innate and possibly even a fundamental human trait, but also shows that, although it is already present at birth, beat perception can be trained by musical training.


References

Bouwer, F. L., & Honing, H. (2015). Temporal attending and prediction influence the perception of metrical rhythm: Evidence from reaction times and ERPs. Frontiers in Psychology, 6, 1–14. doi:10.3389/fpsyg.2015.01094

Bouwer, F. L., Van Zuijen, T. L., & Honing, H. (2014). Beat processing is pre-attentive for metrically simple rhythms with clear accents: An ERP study. PLoS ONE, 9(5), e97467. doi:10.1371/journal.pone.0097467

Cooper, G., & Meyer, L. B. (1960). The Rhythmic Structure of Music. University of Chicago Press. Retrieved from http://books.google.com/books?id=V2yXrIWDTIQC&pgis=1

Daltrozzo, J., & Conway, C. M. (2014). Neurocognitive mechanisms of statistical-sequential learning: What do event-related potentials tell us? Frontiers in Human Neuroscience, 8, 437. doi:10.3389/fnhum.2014.00437

Delorme, A., & Makeig, S. (2004). EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods, 134(1), 9–21. doi:10.1016/j.jneumeth.2003.10.009

Donchin, E. (1981). Surprise!? Surprise? Psychophysiology, 18(5), 493–513. doi:10.1111/j.1469-8986.1981.tb01815.x

Fitch, W. T., & Rosenfeld, A. J. (2007). Perception and production of syncopated rhythms. Music Perception: An Interdisciplinary Journal, 25(1), 43–58. doi:10.1525/mp.2007.25.1.43

Geiser, E., Sandmann, P., Jäncke, L., & Meyer, M. (2010). Refinement of metre perception: Training increases hierarchical metre processing. The European Journal of Neuroscience, 32(11), 1979–1985. doi:10.1111/j.1460-9568.2010.07462.x

Honing, H. (2012). Without it no music: Beat induction as a fundamental musical trait. Annals of the New York Academy of Sciences, 1252, 85–91. doi:10.1111/j.1749-6632.2011.06402.x

Honing, H. (2013). Structure and interpretation of rhythm in music. In D. Deutsch (Ed.), The Psychology of Music (3rd ed., pp. 369–404). Elsevier. doi:10.1016/B978-0-12-381460-9.00009-2

Honing, H., Bouwer, F. L., & Háden, G. P. (2014). Perceiving temporal regularity in music: The role of auditory event-related potentials (ERPs) in probing beat perception. In H. Merchant & V. de Lafuente (Eds.), Neurobiology of Interval Timing. Springer. doi:10.1007/978-1-4939-1782-2

Honing, H., Merchant, H., Háden, G. P., Prado, L., & Bartolo, R. (2012). Rhesus monkeys (Macaca mulatta) detect rhythmic groups in music, but not the beat. PLoS ONE, 7(12), e51369. doi:10.1371/journal.pone.0051369

Iversen, J. R., & Patel, A. D. (2008). The Beat Alignment Test (BAT): Surveying beat processing abilities in the general population. In K. Miyazaki, Y. Hiraga, M. Adachi, Y. Nakajima, & M. Tsuzaki (Eds.), Proceedings of the 10th International Conference on Music Perception and Cognition (ICMPC 10).

Ladinig, O., Honing, H., Háden, G. P., & Winkler, I. (2009). Probing attentive and preattentive emergent meter in adult listeners without extensive music training. Music Perception, 26(4), 377–386.

Large, E. W., & Jones, M. R. (1999). The dynamics of attending: How people track time-varying events. Psychological Review, 106(1), 119–159.

Levitin, D. J. (2012). What does it mean to be musical? Neuron, 73(4), 633–637. doi:10.1016/j.neuron.2012.01.017

London, J. (2012). Hearing in Time: Psychological Aspects of Musical Meter. Retrieved from https://books.google.com/books?hl=nl&lr=&id=8vUJCAAAQBAJ&pgis=1

Luck, S. J. (2005). An Introduction to the Event-Related Potential Technique. Cambridge, MA: MIT Press.

Müllensiefen, D., Gingras, B., Musil, J., & Stewart, L. (2014). The musicality of non-musicians: An index for assessing musical sophistication in the general population. PLoS ONE, 9(2), e89642. doi:10.1371/journal.pone.0089642

Muller-Gass, A., Macdonald, M., Schröger, E., Sculthorpe, L., & Campbell, K. (2007). Evidence for the auditory P3a reflecting an automatic process: Elicitation during highly-focused continuous visual attention. Brain Research, 1170, 71–78. doi:10.1016/j.brainres.2007.07.023

Nozaradan, S., Peretz, I., Missal, M., & Mouraux, A. (2011). Tagging the neuronal entrainment to beat and meter. The Journal of Neuroscience, 31(28), 10234–10240. doi:10.1523/JNEUROSCI.0411-11.2011

Nozaradan, S., Peretz, I., & Mouraux, A. (2012). Selective neuronal entrainment to the beat and meter embedded in a musical rhythm. The Journal of Neuroscience, 32(49), 17572–17581. doi:10.1523/JNEUROSCI.3203-12.2012

Polich, J. (2007). Updating P300: An integrative theory of P3a and P3b. Clinical Neurophysiology, 118(10), 2128–2148. doi:10.1016/j.clinph.2007.04.019

Rüsseler, J., Altenmüller, E., Nager, W., Kohlmetz, C., & Münte, T. F. (2001). Event-related brain potentials to sound omissions differ in musicians and non-musicians. Neuroscience Letters, 308(1), 33–36. doi:10.1016/S0304-3940(01)01977-2

Saffran, J. R., Johnson, E. K., Aslin, R. N., & Newport, E. L. (1999). Statistical learning of tone sequences by human infants and adults. Cognition, 70(1), 27–52. doi:10.1016/S0010-0277(98)00075-4

Schröger, E., & Wolff, C. (1998). Behavioral and electrophysiological effects of task-irrelevant sound change: A new distraction paradigm. Cognitive Brain Research, 7(1), 71–87. doi:10.1016/S0926-6410(98)00013-5

Soley, G., & Hannon, E. E. (2010). Infants prefer the musical meter of their own culture: A cross-cultural comparison. Developmental Psychology, 46(1), 286–292. doi:10.1037/a0017555

Toro, J. M., Sinnett, S., & Soto-Faraco, S. (2005). Speech segmentation by statistical learning depends on attention. Cognition, 97(2), B25–B34. doi:10.1016/j.cognition.2005.01.006

van Zuijen, T. L., Sussman, E., Winkler, I., Näätänen, R., & Tervaniemi, M. (2005). Auditory organization of sound sequences by a temporal or numerical regularity: A mismatch negativity study comparing musicians and non-musicians. Cognitive Brain Research, 23(2–3), 270–276. doi:10.1016/j.cogbrainres.2004.10.007

Winkler, I. (1997). Two separate codes for missing-fundamental pitch in the human auditory cortex. The Journal of the Acoustical Society of America, 102(2), 1072. doi:10.1121/1.419860

Winkler, I. (2007). Interpreting the mismatch negativity. Journal of Psychophysiology, 21(3–4), 147–163. doi:10.1027/0269-8803.21.34.147

Winkler, I., Háden, G. P., Ladinig, O., Sziller, I., & Honing, H. (2009). Newborn infants detect the beat in music. Proceedings of the National Academy of Sciences, 106(7), 2468–2471.


Appendix 1: Gold-MSI questionnaire

For each statement, circle the option that applies most:
1 = Completely disagree, 2 = Strongly disagree, 3 = Disagree, 4 = Neutral, 5 = Agree, 6 = Strongly agree, 7 = Completely agree.

1. I spend a lot of my free time doing music-related activities.
2. I sometimes choose music that can send shivers down my spine.
3. I enjoy writing about music, for example on blogs and forums.
4. When someone starts singing a song I do not know, I can usually join in.
5. I am able to judge whether someone is a good singer or not.
6. I usually know when I am hearing a song for the first time.
7. I can sing or play music from memory.
8. I am fascinated by musical styles I am not familiar with and want to find out more about them.
9. Pieces of music rarely evoke emotions in me.
10. I am able to hit the right notes when I sing along with a recording.
11. I find it difficult to spot mistakes in a performance of a song, even if I know the melody well.
12. I am able to compare two performances or versions of the same piece of music and discuss the differences.
13. I find it difficult to recognise a familiar song when it is performed in a different way or by a different artist.
14. I have never been complimented on my talents as a musician.
15. I often read about or look up things on the internet that are related to music.
16. I often pick out particular music to motivate or excite me.
17. I am not able to sing a second voice along when somebody is singing a familiar melody.
18. I am able to judge whether someone is playing or singing out of time.
19. I am able to identify what is special about a given piece of music.
20. I am able to talk about the emotions that a piece of music evokes in me.
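A minimal scoring sketch for the questionnaire above: negatively worded items are reverse-scored before averaging. The reverse-scored set (items 9, 11, 13, 14, and 17) is inferred from the item wording in this appendix; the official Gold-MSI scoring key and subscale assignments should be consulted for actual analyses.

```python
import numpy as np

def score_gold_msi(responses: dict[int, int]) -> float:
    """Average a participant's 1-7 ratings over the 20 items above.

    Negatively worded items are reverse-scored (8 - rating) first. The
    reverse-scored set below is an assumption based on item wording and
    should be checked against the official Gold-MSI scoring key.
    """
    reverse = {9, 11, 13, 14, 17}
    scored = [8 - r if item in reverse else r for item, r in responses.items()]
    return float(np.mean(scored))

# Example: a participant who answered "5" on every item.
print(score_gold_msi({item: 5 for item in range(1, 21)}))
```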
