Time-frequency analysis of word-in-noise processing in musicians and nonmusicians

Abstract:

Musicians are better able than nonmusicians to understand speech in noisy environments. It is hypothesized that this is partly due to more robust brainstem encoding, which in turn facilitates lexical access. This study is the first to investigate attention-dependent cortical oscillations during speech-in-noise processing, focusing on the alpha (8-12 Hz) and beta (16-24 Hz) frequency bands in particular. Musicians (N=12) and nonmusicians (N=12) were presented with single words in three active noise conditions, in which they had to repeat each word aloud while brain activity was monitored using EEG. A time-frequency analysis was then performed. In general, alpha synchronization increased with increasing noise; alpha increases have been associated with distractor/noise suppression in both the visual and the auditory domain. The alpha increase was most prominent in musicians and is thought to reflect greater acoustic suppression of the noise. Beta synchronization was suppressed with decreasing noise. Various possible explanations for this beta suppression are explored.

Information:

Name: Merwin Eward Olthof
Student ID: 5974097
Master: Brain & Cognitive Sciences
Amount of EC: 40
Research period: 15-09-2014 until 15-07-2015
Supervisor: Benjamin Zendel
Affiliate university: Université de Montréal
Research group: BRAMS
UvA representative: Paula Roncaglia Denissen
Co-assessor: Paula Roncaglia Denissen


Introduction

It is well established that musicians have enhanced auditory processing abilities compared to non-musicians (e.g., Beauvois & Meddis, 1997; Rammsayer & Altenmüller, 2006; Zendel & Alain, 2009). Interestingly, these enhancements transfer to speech understanding: musicians are better able to understand speech in multi-talker babble noise. One source of this benefit is more robust encoding of speech signals in noise in the brainstem (Parbery-Clark, Skoe, & Kraus, 2009), which in turn could shape auditory mechanisms in a top-down fashion. That is, musical training can strengthen cognitive functions, which in turn benefit auditory skills (Strait et al., 2010) such as, in our case, word-in-noise processing.

These enhancements are reflected not only in better task performance (e.g., Strait et al., 2011), but also in differences in cortical EEG responses between musicians and non-musicians. For example, in ERP experiments, musicians have shown an enhanced P1 while processing speech in noise compared to nonmusicians (Musacchia et al., 2008; Zendel et al., 2015). Additional evidence for enhanced auditory processing comes from N400 studies. The N400 component is usually associated with lexical access (Kutas & Federmeier, 2011). In a word-in-noise context, such as in our study, the N400 likely reflects the comparison between an incoming word and a stored lexical representation of that word. The N400 amplitude usually increases with increased effort. Zendel et al. (2015) found that the N400 amplitude increased for nonmusicians as noise increased, and this increase was related to decreased task performance. Zendel et al. hypothesized that the early P1 enhancement, associated with more robust subcortical encoding of the target signal, facilitates lexical access in musicians, which results in a reduced impact of noise on the N400.

Most studies aimed at better understanding speech-in-noise processing have focused on stimulus-evoked activity. In this study, however, we focus on induced activity instead. When averaging ERPs, one implicitly assumes that background EEG activity is random (Bishop et al., 2011). However, background EEG activity is not random; it consists of an ensemble of oscillations at different frequencies that are not necessarily phase-locked to the onset of the stimulus. Time-frequency analysis allows us to measure the relative power change at each of these frequencies following a stimulus. Different frequencies are associated with different (functional) processes. For example, in a speech-in-noise situation, alpha oscillations (8-12 Hz) have been hypothesized to reflect suppression of irrelevant acoustic streams (e.g., Klimesch et al., 2007; Foxe & Snyder, 2011; Strauss et al., 2014).


In this study, we aim to unveil the oscillatory activity in a word-in-noise context by performing a time-frequency analysis. The addition of background noise during a lexical decision task likely increases attentional and cognitive effort (Zendel et al., 2015). Strauss et al. (2014) noted that, when auditory materials are presented against background noise, a prominent alpha power increase is usually detected. They proposed that alpha oscillations could support auditory selective inhibition. Although encoding of acoustic features is generally regarded as an automatic process (e.g., Chandrasekaran & Kraus, 2010a), according to Strauss et al. (2014) the selective suppression of auditory channels is regulated by a top-down attentional mechanism. For example, in the famous 'cocktail-party' situation, the listener can consciously decide which auditory stream to attend to. This process requires suppression of the noise, and we therefore expect an increase in alpha synchronization when noise is increased during a speech-in-noise task.

The role of beta oscillations (13-30 Hz) in language processing is still heavily debated. Engel and Fries (2010) have proposed that beta activity signals the status quo: if the status of a stimulus is as intended or predicted, beta oscillations are expressed more strongly than when an unexpected change occurs. In the language domain, however, results on beta activity are tentative. Increases in beta desynchronization have been reported for open-class compared to closed-class words (Bastiaansen et al., 2005), words versus pseudowords (Supp et al., 2004), and emotional versus neutral words (Hirata et al., 2007). Because the task requirements of these studies differ considerably, predictions about beta activity in our study are hard to make.

An additional question is whether there is a difference in oscillatory activity between musicians and non-musicians. If the alpha effect reflects acoustic suppression of the noise, following Zendel et al.'s (2015) hypothesis, we expect a larger increase in alpha synchronization for musicians. If the suppression is lexical, however, we expect a larger increase in alpha synchronization for non-musicians.

Below, we describe the materials and methods used, as well as the demographics of the participants. We then present our results, and in the final section we discuss them.


Materials & Methods

Participants

Twenty-four participants (13 males; mean age, 22.5 years; SD, 3.6 years) participated in the study and provided formal informed consent in accordance with the Research Ethics Board of the Quebec Neuroimaging Group. The participants were recruited from the university community through on-campus advertisement. Musicians had formal training and at least 10 years of musical experience as a vocalist and/or instrumentalist; a further requirement for being classified as a musician was practicing a minimum of 10 hours per week in the year of testing. Participant demographics are presented in Table 1. All participants were right-handed, native French speakers, and proficient French-English bilinguals. All participants had normal audiometric thresholds (i.e., below 25 dB HL for frequencies between 250 and 8000 Hz). A 2 (musician, non-musician) by 2 (left ear, right ear) by 6 (frequency: 250, 500, 1000, 2000, 4000, 8000 Hz) ANOVA revealed that pure-tone thresholds were similar for musicians and non-musicians.

Table 1. Participant demographics.

| Group | Age | Gender | Education (years) | Music training onset (age) | Music experience (years) | Music practice (hours per week) |
|---|---|---|---|---|---|---|
| Musicians | 18-35 (M = 23.5; SD = 4.5) | 7 female; 5 male | 13-21 (M = 16.7; SD = 2) | 2-15 (M = 7.8; SD = 3.8) | 10-28 (M = 15.7; SD = 5.3) | 11-85 (M = 31.3; SD = 19) |
| Nonmusicians | 19-27 (M = 21.5; SD = 2.2) | 4 female; 8 male | 14-18 (M = 15; SD = 1.2) | n/a | n/a | n/a |

Stimuli

The stimuli consisted of 150 French words spoken by a male native speaker of Quebec French. These were originally from a test used to measure audiometric speech thresholds (Benfante et al., 1966). Words were presented through insert earphones (Etymotic ER-2) at ~75 dB sound pressure level (SPL).

The noise consisted of multi-talker babble. Four native speakers of Quebec French (gender balanced) each read a rehearsed monologue in a silent room for 10 minutes. The readings were recorded at a sampling rate of 44.1 kHz (16 bit) using an Audio-Technica 4040 condenser microphone, and the recordings were then combined into a single sound file. This multi-talker babble was looped repeatedly in two of the conditions, at ~60 and ~75 dB SPL respectively.
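As an illustration of this construction, the sketch below mixes the four monologue recordings into a single babble track in Python. The file names, the soundfile library, the assumption of mono recordings, and the RMS normalization are illustrative choices, not the authors' actual pipeline.

```python
# A minimal sketch of the babble construction, assuming mono WAV files
# named talker_1.wav ... talker_4.wav (hypothetical names).
import numpy as np
import soundfile as sf

FS = 44100  # recordings were made at 44.1 kHz, 16 bit

# Load the four 10-minute monologues (two male, two female speakers).
talkers = [sf.read(f"talker_{i}.wav")[0] for i in range(1, 5)]

# Trim to the shortest recording and sum into multi-talker babble.
n = min(len(t) for t in talkers)
babble = np.sum([t[:n] for t in talkers], axis=0)

# Normalize to unit RMS so the track can later be scaled to a target
# presentation level (~60 or ~75 dB SPL after earphone calibration).
babble /= np.sqrt(np.mean(babble ** 2))

# Rescale to avoid clipping when writing to disk.
sf.write("babble.wav", babble / np.max(np.abs(babble)), FS)
```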

Procedure

All 150 words were presented in random order at each of three levels of multi-talker babble noise. In the 'None' condition, words were presented without multi-talker babble. In the 'SNR-15' condition, words were presented with multi-talker babble at ~60 dB SPL (i.e., a 15 dB signal-to-noise ratio [SNR]). In the 'SNR-0' condition, words were presented with multi-talker babble at the same level as the word (i.e., 0 dB SNR).
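In digital terms, the two babble conditions reduce to a simple RMS ratio between word and noise. The helper below is a hedged sketch with an assumed function name; it scales the babble so that a given SNR is obtained, while absolute dB SPL values ultimately depend on earphone calibration.

```python
import numpy as np

def scale_noise(word, babble, snr_db):
    """Scale `babble` so the word-to-babble RMS ratio equals `snr_db` dB."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    # SNR(dB) = 20*log10(rms(word) / rms(noise))  =>  solve for the gain.
    gain = rms(word) / (rms(babble) * 10 ** (snr_db / 20.0))
    return babble * gain

# None:   no babble at all.
# SNR-15: scale_noise(word, babble, 15)  -> babble ~15 dB below the word.
# SNR-0:  scale_noise(word, babble, 0)   -> babble at the word's level.
```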

These three noise levels were presented in both an active and a passive listening condition. In the passive condition, a self-selected silent movie was played; this has been shown to be an effective attentional manipulation that does not interfere with auditory processing (Pettigrew et al., 2004). In this condition, participants were told to ignore the words. In the active condition, participants were told to repeat each word aloud, and did not watch a movie.

Words were presented with a stimulus onset asynchrony (SOA) randomized between 2500 and 3500 milliseconds. Word correctness was judged online by a native Quebec French speaker.

The active SNR-0 condition was always presented first, to ensure that prior exposure to the words in one of the other conditions could not influence performance. Following the SNR-0 condition, the active SNR-15 and active None conditions were presented. Prior exposure to the words in the SNR-0 condition could not influence performance in these other conditions, because performance in both was at ceiling.

Recording and averaging of electrical brain activity

Neuroelectric brain activity was digitized continuously from 71 active electrodes at a sampling rate of 1024 Hz, with a high-pass filter set at 0.1 Hz, using a Biosemi ActiveTwo system (Biosemi, Inc., Netherlands). Six additional electrodes were placed bilaterally at mastoid, inferior ocular, and lateral ocular sites (M1, M2, IO1, IO2, LO1, LO2). During active listening, trials in which the participant did not correctly repeat the word were excluded from the analysis. The preprogrammed artifact-correction algorithm in BESA Research (v6.0) was then used to remove artifacts such as eye blinks and muscle activity. Bad channels were interpolated before moving on to the time-frequency analysis.
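The preprocessing itself was performed in BESA Research, whose algorithms are proprietary. As a rough, hedged sketch of the analogous steps (high-pass filtering, marking and interpolating bad channels) in an open-source toolchain, one could proceed as follows in MNE-Python; the file name, montage choice, and bad-channel list are placeholders.

```python
import mne

# Biosemi ActiveTwo data are stored as .bdf files (file name hypothetical).
raw = mne.io.read_raw_bdf("subject01.bdf", preload=True)
raw.filter(l_freq=0.1, h_freq=None)                # 0.1 Hz high-pass, as recorded
raw.set_montage("biosemi64", on_missing="ignore")  # approximate electrode layout
raw.info["bads"] = ["C1"]                          # hypothetical bad channel
raw.interpolate_bads()                             # interpolate before TF analysis
```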

Time-frequency analysis (Electrophysiological)

The preprocessed EEG data were transformed into the time-frequency domain using the complex demodulation algorithm implemented in BESA Research (v6.0). This algorithm multiplies the raw time-domain data by a periodic exponential function with a frequency equal to the frequency under investigation (Papp & Ktonas, 1977). Subsequently, a low-pass finite impulse response filter of Gaussian shape is applied in the time domain; this filter is related to the envelope of the moving window in wavelet analysis (Hoechstetter et al., 2004). The time-frequency sampling was set at 2 Hz / 25 ms. The frequency range was 0-32 Hz, because we were primarily interested in low-to-medium frequency bands; this range covers, among others, alpha (8-12 Hz) and beta (12-30 Hz) activity. The analysis epoch included 200 milliseconds of pre-stimulus activity (to determine the baseline) and 1000 milliseconds of post-stimulus activity. The evoked signal was subtracted in order to examine induced brain oscillations only. We then analyzed the resulting percentage power change relative to baseline (pre-stimulus activity) rather than spectral density.
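To make the algorithm concrete, the sketch below reconstructs a generic complex demodulation from the description above: each epoch is multiplied by a complex exponential at the frequency of interest, low-pass filtered with a Gaussian FIR kernel, and the evoked response is subtracted before power is expressed as percent change from the pre-stimulus baseline. This is an analogous reconstruction, not BESA's actual implementation; the filter width `lp_sigma_ms` is an assumed parameter.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.signal.windows import gaussian

def complex_demodulate(epoch, fs, freq, lp_sigma_ms=50.0):
    """Instantaneous power of `epoch` at `freq` Hz via complex demodulation."""
    t = np.arange(len(epoch)) / fs
    # Multiplying by a complex exponential shifts `freq` down to 0 Hz.
    demod = epoch * np.exp(-2j * np.pi * freq * t)
    # Gaussian FIR low-pass: plays the role of the moving-window envelope
    # in wavelet analysis (cf. Hoechstetter et al., 2004).
    sigma = lp_sigma_ms / 1000.0 * fs
    win = gaussian(int(6 * sigma) | 1, sigma)
    env = fftconvolve(demod, win / win.sum(), mode="same")
    return np.abs(env) ** 2

def induced_power_change(epochs, fs, freqs):
    """Percent power change vs. baseline with the evoked signal removed.

    `epochs` is (n_trials, n_samples), starting 200 ms before word onset.
    """
    induced = epochs - epochs.mean(axis=0)     # subtract the evoked response
    tfr = np.array([[complex_demodulate(ep, fs, f) for f in freqs]
                    for ep in induced]).mean(axis=0)
    n_base = int(0.2 * fs)                     # 200 ms pre-stimulus baseline
    base = tfr[:, :n_base].mean(axis=1, keepdims=True)
    return 100.0 * (tfr - base) / base         # (n_freqs, n_samples)
```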

Statistical analysis

We focused our analysis on induced activity near the vertex, which is thought to represent activity from sources near the auditory cortex along the superior temporal plane. To ensure a stable and reliable estimate of the induced activity, we averaged the percentage of spectral power change in each frequency bin across nine fronto-central electrodes (FC1, FC2, FCz, C1, C2, Cz, CP1, CP2, and CPz). The data were then averaged into time bins of 300 ms and frequency bins of 4 Hz. These bins were statistically analyzed using SPSS (v22): a series of ANOVAs was performed on the six frequency bins (8-32 Hz) and three time bins (0-900 ms) to determine the effects of noise on induced activity. The results of these tests are reported in the next section.
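The binning and the noise ANOVA can be summarized in a short sketch. The statistics were actually run in SPSS; the Python/statsmodels version below is an assumed equivalent in which `tfr` is the electrode-averaged percent power change and the synthetic DataFrame stands in for the real per-subject values.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

def bin_power(tfr, times, freqs):
    """Average % power change into 4 Hz x 300 ms bins (8-32 Hz, 0-900 ms)."""
    out = {}
    for f0 in range(8, 32, 4):                 # six 4 Hz frequency bins
        for t0 in (0.0, 0.3, 0.6):             # three 300 ms time bins
            sel = ((freqs[:, None] >= f0) & (freqs[:, None] < f0 + 4) &
                   (times[None, :] >= t0) & (times[None, :] < t0 + 0.3))
            out[(f0, t0)] = tfr[sel].mean()
    return out

# With one row per subject x noise condition for a given bin, the noise
# effect is a one-way repeated-measures ANOVA (synthetic data shown):
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "subject": np.repeat(np.arange(24), 3),
    "noise": np.tile(["None", "SNR-15", "SNR-0"], 24),
    "power": rng.normal(size=72),
})
print(AnovaRM(df, depvar="power", subject="subject", within=["noise"]).fit())
```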

Results

Behavioral results

Figure 1 shows the number of correct responses by nonmusicians and musicians in the three noise conditions (None, SNR-15, and SNR-0). A repeated-measures ANOVA revealed a significant effect of noise on performance, F(1.071, 23.567) = 270.07, p < .001, as well as an interaction between noise and musicianship, F(1.071, 23.567) = 5.125, p = .031. Musicians (N=12) performed significantly better than non-musicians in the SNR-0 condition, t(22) = 2.186, p = .04. In the remaining conditions, SNR-15 (p = .76) and None (p = .19), there was no significant difference between the two groups; participants in both groups performed at ceiling in the SNR-15 and None conditions.

Time-frequency results

We further assessed the stimulus-induced responses of non-musicians and musicians in all six active conditions by visualizing them in Matlab (v6.0); these visualizations are shown in Figure 2. We then decided to focus on relative alpha and beta modulation, as these frequencies showed the most robust relative power change upon visual inspection. Repeated-measures ANOVAs were performed for the alpha (8-12 Hz) and beta (16-24 Hz) frequency ranges to assess differences between conditions and groups. The results of these statistical tests are reported below.

Figure 2. Visualizations of the relative power change across the analyzed frequency range (0-32 Hz). Red indicates neuronal synchronization; blue indicates neuronal desynchronization. Based on visual inspection, alpha synchronization appears to increase with increasing noise, and beta desynchronization appears to increase with decreasing noise, especially in nonmusicians.


Alpha synchronization with increasing noise

Based on visual inspection of the time-frequency power plots (see Figure 2), we first investigated the relative alpha power change in three equally spaced time bins (0-300 ms, 300-600 ms, and 600-900 ms). Alpha activity was then compared across noise conditions and experimental groups. Figure 3 shows the relative alpha power change in all three active conditions for both musicians and non-musicians.

For the first epoch (0-300 ms, Figure 3A), there was no significant main effect of noise on relative alpha power change (p = .148). For the second epoch, there was a trend towards a significant effect of noise, F(2,44) = 3.092, p = .055; non-musicians showed no change in alpha power in the SNR-15 condition, which could have influenced this result. In the 600-900 ms range, there was a significant effect of noise on relative alpha power, F(2,44) = 6.487, p = .003.

Although musicians generally showed more alpha synchronization with increasing noise than non-musicians (see Figure 3), these group differences were not significant in any of the conditions (None, SNR-15, SNR-0).

Beta (16-24 Hz) desynchronization with decreasing noise

Based on visual inspection of the time-frequency plots (see Figure 2), we decided to average activity in the 16-24 Hz frequency range. Relative beta (16-24 Hz) power change was separated into three equally spaced time bins (0-300 ms, 300-600 ms, and 600-900 ms). Beta activity was then compared across noise conditions and experimental groups. Figure 4 shows the relative beta power change in all three active conditions for both musicians and non-musicians.

For the first epoch (0-300 ms, Figure 4A), there was a small but significant main effect of noise on relative beta power change, F(2,44) = 3.772, p = .031. For the second epoch, there was a significant effect of noise, F(1.419, 41.331) = 10.635, p = .001. In the 600-900 ms range, there was also a significant effect of noise on relative beta power, F(1.459, 35.267) = 7.929, p = .004.


Figure 3. Relative alpha (8-12 Hz) power change for the three analyzed epochs: 0-300, 300-600, and 600-900 ms. Increased noise leads to increased alpha synchronization, especially in the later epochs. Musicians show an overall larger increase in alpha synchronization than nonmusicians, which is interpreted as greater acoustic suppression.


Figure 4. Relative beta (16-24 Hz) power change for the three analyzed epochs: 0-300, 300-600, and 600-900 ms. Increased noise leads to decreased beta desynchronization, especially in the later epochs. Nonmusicians show an overall larger increase in beta desynchronization than musicians, which is interpreted as more lexical processing.


Discussion

The main finding of this study is that both alpha and beta oscillations are affected by noise level in a speech-in-noise task. Interestingly, the alpha and beta effects are, respectively, amplified and reduced in musicians, paralleling their improved ability to understand speech in background noise. Importantly, these results are the first to demonstrate how background noise affects oscillatory activity related to speech processing, and they suggest that musical training can improve these abilities.

First of all, alpha synchronization increased with increasing noise, and this increase was strongest in the last two epochs (300-900 ms). This is in line with Strauss et al.'s (2014) hypothesis that alpha synchronization reflects the (acoustic) suppression of irrelevant auditory streams. The relatively late emergence of the increase agrees with the assumption that this suppression is top-down rather than bottom-up. In general, the increase in alpha synchronization was more prominent in musicians than in non-musicians. This is likely because musical training improves auditory abilities, one of which is the ability to suppress irrelevant acoustic streams.

Alpha synchronization reflecting distractor suppression in the service of stimulus processing has been found in both the auditory and the visual domain (Mazaheri et al., 2014). For example, Kelly et al. (2006) found increased alpha activity during sustained visuospatial attention when subjects had to suppress a distractor. In the visual domain, alpha activity has also been hypothesized to reflect the switch from one stimulus to another (Sauseng et al., 2005).

Secondly, we found that relative beta desynchronization was attenuated with increasing noise. Brennan et al. (2014) found a similar beta desynchronization, although at a slightly lower frequency, for words that were unrelated to a prime. In our case, the participants were not primed, but similarly had no expectations about which stimulus would be presented. In Brennan et al.'s study, this desynchronization was attenuated when words were primed by a related word, and they hypothesized that the desynchronization is weakened when lexical activation is facilitated. Our results suggest that this is not necessarily the case: in our study, beta desynchronization was attenuated when lexical access was impeded by noise, rather than facilitated. This is also reflected in the fact that musicians generally showed less beta desynchronization, which could, again, be a result of auditory expertise.

Wang et al. (2012) reported high correlations between the N400m and beta suppression. We did not find a comparable correlation in our study. This lack of significant correlations can be explained by the many factors that influence the N400, such as task requirements: the task in our study differed substantially from the one used by Wang et al. Nevertheless, this underlines the importance of time-frequency studies in general; in time-frequency analyses, we are clearly measuring activity that differs from the activity measured in ERP studies.

This is the first study to map the cortical oscillations of word-in-noise processing, and the first to investigate differences in induced cortical activity between musicians and non-musicians in this context. Although not always significant, differences in induced activity in certain frequency bands are clearly present between musicians and non-musicians. This adds to the existing evidence supporting the hypothesis that the effects of musical expertise transfer to linguistic auditory processing.

A better understanding of the effects of music on the cortical mechanisms involved in word-in-noise processing contributes to knowledge about the effects of musical training on auditory perception. We already know that musicianship may lead to enhanced attention-dependent activity in older adults (Zendel & Alain, 2013a).

Such studies also highlight the importance of musical education in children, as numerous studies have demonstrated wide-reaching benefits of being a musician across the lifespan (Kraus & Chandrasekaran, 2010b; Zendel & Alain, 2012; Zendel & Alain, 2013b). Some of these benefits may be related to the enhanced auditory abilities described previously. For example, musicians have repeatedly shown improved second-language learning (Delogu, Lampis, & Belardinelli, 2006) and increased working memory capacity (Bidelman, Hutka, & Moreno, 2013).

Honing (2011) has proposed that music might simply be a very effective form of cognitive play: it constantly challenges a large variety of cognitive functions in both hemispheres. This is in itself an implicit incentive to keep practicing music, as practice results in increased neuroplasticity (Schlaug et al., 2009; Wan & Schlaug, 2010). It is therefore important to keep expanding our knowledge of the effects of musical training on auditory abilities and their underlying neural activity.


References

Bastiaansen, M. C. M., van der Linden, M., ter Keurs, M., Dijkstra, T., & Hagoort, P. (2005). Theta responses are involved in lexical-semantic retrieval during language processing. Journal of Cognitive Neuroscience, 17(3), 530-541.

Beauvois, M. W., & Meddis, R. (1997). Time decay of auditory stream biasing. Perception & Psychophysics, 59(1), 81-86.

Benfante, H., Charbonneau, R., Areseneault, A., Zinger, A., Marti, A., & Champoux, N. (1966). Audiométrie vocale. Hôpital Maisonneuve, Montréal.

Bidelman, G. M., Hutka, S., & Moreno, S. (2013). Tone language speakers and musicians share enhanced perceptual and cognitive abilities for musical pitch: Evidence for bidirectionality between the domains of language and music. PLoS ONE, 8(4).

Brennan, J., Lignos, C., Embick, D., & Roberts, T. P. L. (2014). Spectrotemporal correlates of lexical access during auditory lexical decision. Brain & Language, 133, 39-46.

Basar, E., Basar-Eroglu, C., Karakas, S., Parnefjord, R., Rahn, E., & Schürmann, M. (1992). Evoked potentials: Ensembles of brain induced rhythmicities in the alpha, beta and gamma ranges. In E. Basar & T. H. Bullock (Eds.), Induced rhythms in the brain (pp. 155-181). Boston, MA: Birkhäuser.

Chandrasekaran, B., & Kraus, N. (2010). The scalp-recorded brainstem response to speech: Neural origins and plasticity. Psychophysiology, 47, 236-246.

Delogu, F., Lampis, G., & Belardinelli, M. O. (2006). Music-to-language transfer effect: May melodic ability improve learning of tonal languages by native nontonal speakers? Cognitive Processing, 7, 203-207.

Foxe, J. J., & Snyder, A.C. (2011). The role of alpha-band brain oscillations as a sensory suppression mechanism during selective attention. Frontiers in Psychology, 2, article 154.


Hoechstetter, K., Bornfleth, H., Weckesser, D., Ille, N., Berg, P., & Scherg, M. (2004). BESA Source Coherence: a new method to study cortical oscillatory coupling. Brain Topography, 16(4), 233-238.

Honing, H. (2011). Musical cognition: A science of listening. New Brunswick, NJ: Transaction Publishers.

Kelly, S. P., Lalor, E. C., Reilly, R. B., & Foxe, J. J. (2006). Increases in alpha oscillatory power reflect an active retinotopic mechanism for distractor suppression during sustained visuospatial attention. Journal of Neurophysiology, 95(6), 3844-3851.

Klimesch, W., Sauseng, P., & Hanslmayr, S. (2007). EEG alpha oscillations: the inhibition-timing hypothesis. Brain Research Reviews, 53, 63-88.

Kraus, N., & Chandrasekaran, B. (2010). Music training for the development of auditory skills. Nature Reviews Neuroscience, 11(8), 599-605.

Kutas, M., & Federmeier, K. D. (2011). Thirty years and counting: finding meaning in the N400 component of the event-related brain potential (ERP). Annual Review of Psychology, 62, 621-647.

Mazaheri, A., van Schouwenburg, M. R., Dimitrijevic, A., Denys, D., Cools, R., & Jensen, O. (2014). Region-specific modulations in oscillatory alpha activity serve to facilitate processing in the visual and auditory modalities. NeuroImage, 87, 356-362.

Musacchia, G., Strait, D., & Kraus, N. (2008). Relationships between behavior, brainstem and cortical encoding of seen and heard speech in musicians and non-musicians. Hearing Research, 241(1-2), 34-42.

Papp, N., & Ktonas, P. (1977). Critical evaluation of complex demodulation techniques for the quantification of bioelectrical activity. Biomed. Sci. Instrum., 13, 135-143.


Parbery-Clark, A., Skoe, E., & Kraus, N. (2009). Musical experience limits the degradative effects of background noise on the neural processing of sound. The Journal of Neuroscience, 29(45), 14100-14107.

Rammsayer, T., & Altenmüller, E. (2006). Temporal information processing in musicians and nonmusicians. Music Perception, 24(1), 37-48.

Sauseng, P., Klimesch, W., Stadler, W., Schabus, M., Doppelmayr, M., Hanslmayr, S., Gruber, W. R., & Birbaumer, N. (2005). A shift of visual spatial attention is selectively associated with human EEG alpha activity. European Journal of Neuroscience, 22(11), 2917-2926.

Schlaug, G., Forgeard, M., Zhu, L., Norton, A., & Winner, E. (2009). Training-induced neuroplasticity in young children. Annals of the New York Academy of Sciences, 1169, 205-208.

Strait, D., Kraus, N., Parbery-Clark, A., & Ashley, R. (2010). Musical training shapes top-down auditory mechanisms: Evidence from masking and auditory attention performance. Hearing Research, 261, 22-29.

Wan, C. Y., & Schlaug, G. (2010). Music making as a tool for promoting brain plasticity across the life span. The Neuroscientist, 16(5), 566-577.

Wang, L., Jensen, O., van den Brink, D., Weder, N., Schoffelen, J. M., Magyari, L., Hagoort, P., & Bastiaansen, M. (2012). Beta oscillations relate to the N400m during language comprehension. Human Brain Mapping, 33, 2898-2912.

Zendel, B. R., & Alain, C. (2009). Concurrent sound segregation is enhanced in musicians. Journal of Cognitive Neuroscience, 21(8), 1488-1498.

Zendel, B. R., & Alain, C. (2012). Musicians experience less age-related decline in central auditory processing. Psychology and Aging, 27(2), 410–417.


Zendel, B. R., & Alain, C. (2013a). Enhanced attention-dependent activity in the auditory cortex of older musicians. Neurobiology of Aging, 35(1), 55-63.

Zendel, B. R., & Alain, C. (2013b). The influence of lifelong musicianship on neurophysiological measures of concurrent sound segregation. Journal of Cognitive Neuroscience, 24(5), 503-516.

Zendel, B. R., Tremblay, C. D., Belleville, S., & Peretz, I. (2015). The impact of musicianship on cortical mechanisms related to separating speech from background noise. Journal of Cognitive Neuroscience.
