

Mastering of two languages with different rhythmic properties enhances musical rhythmic perception.

A Master's Thesis submitted to the Graduate School of Humanities, University of Amsterdam, in partial fulfillment of the requirements for the Degree of Master of Arts in the Department of Musicology

By Drikus Antonio Roor

Amsterdam, the Netherlands

2015


Table of Contents

1. Purpose of Study

2. Acknowledgements

3. Introduction

Musical aptitude

Musical aptitude enhances linguistic abilities

OPERA

Linguistic abilities enhance musical aptitude

Second-language learning and musical aptitude

Language rhythm classes

Hypothesis

Research question

Hypothesis 1 (research hypothesis)

Hypothesis 2 (null-hypothesis)

Hypothesis 3 (negative effect)

4. Experiments

Subjects

Experiment 1 – Dutch L2 English speakers

Experiment 2 – Turkish monolinguals and L2 English speakers

Experiment 3 – Chinese L2 English speakers

Tests and questionnaires

Musical Ear Test

Backward Digit Span

Mottier Test

Lexical Tone Test

Self-Reported Language Background Questionnaires

Music & Social Background Questionnaire

Procedures

Experiment 1 – Dutch L2 English speakers

Experiment 3 – Chinese L2 English speakers

Data analysis

5. Results

Between-group comparison

ANOVA

ANCOVA

Within-group comparison

Turkish monolinguals

Dutch L2 speakers of English

Turkish L2 speakers of English

Chinese L2 speakers of English

Multiple regression

Musical rhythm aptitude

Melodic aptitude

6. General discussion

Discussion of results

Hypothesis supported

On the role of working memory

Implications for existing literature

Limitations and points of attention

Implications for future research

Conclusion

7. References

8. Appendices

Appendix A – Backward Digit Span

A.1.1 Instructions – Dutch group

A.1.2 Instructions – Turkish groups

A.1.3 Instructions – Chinese group

A.2.1 Answer sheet – Dutch group

A.2.2 Answer sheet – Turkish groups

Appendix B – Mottier Test

B.1.1 Instructions – Dutch

B.1.2 Instructions – Turkish groups

B.1.3 Instructions – Chinese group

B.2.1 Answer sheet – Dutch and Turkish group

B.2.2 Answer sheet – Chinese group

Appendix C – Musical Ear Test

C.1.1 Instructions – Dutch group

C.1.2 Instructions – Turkish groups

Appendix D – Lexical Tone Test

D.1.1 Instructions – Turkish groups

Appendix E – Self-reported Language Questionnaires

E.1.1 Questionnaire in English – Chinese group

Appendix F – Music & Social Background Questionnaire


Purpose of Study

The idea that there is a relationship or a comparability between language and music can be traced back to ancient times. Socrates, for example, saw music as an art with mimetic qualities, to which he tried to apply the same rules of imitation as in narrative discourse and poetry (Woerther, 2008). Music was to be purified of harmonies and rhythms that promoted drinking, or that expressed sorrow or grief. For the education of the soul, music, just like poetry, should imitate the sounds and rhythm of courageous and temperate men. Vice versa, music could serve as a model of expressivity for rhetoric (Quintilian in McCreless, 2002).

Later, in the Renaissance, as classical texts were studied, rhetoric and music came to be seen as related disciplines (Gibson, 2008). Music, and especially vocal parts, would be composed according to the rules of rhetoric and to the qualities of the language it was sung in (McCreless, 2002). In this we can recognize the ideal that sung melody should mimic natural speech. According to Baragwanath (2014), musical theorists in the 17th and 18th centuries started paying attention to how music and language influence or even determine each other. Melodies and rhythmic groupings in music were seen as a direct result of the rhythm and accents of a given language (Baragwanath, 2014). The suggestion that language rhythm influences musical rhythm persists and is still being studied today (Abraham, 1974; Wenk, 1987; Patel, 2002; Sadakata, 2003).

Since the 20th century the comparison between music and language has also been studied from a cognitivist perspective, with studies on cross-domain perception and aptitude between music and language. This is possibly a bidirectional process: training or enhancement of musical aptitude could have a positive effect on language ability (Patel et al., 1998; Liu et al., 2010; Milovanov & Tervaniemi, 2011), and conversely, language ability might also have an effect on musical aptitude (Bidelman, Hutka & Moreno, 2013; Milovanov & Tervaniemi, 2011). According to Patel's OPERA-hypothesis (2013), linguistic experience, specifically with tone languages or bilingualism, can increase musical aptitude because it demands precision and attention in acoustical processing. However, the effect of bilingualism or L2 (second-language) acquisition on musical aptitude has not been studied extensively yet (Asaridou & McQueen, 2013).

A recent study (Roncaglia-Denissen et al., 2013) demonstrated enhanced musical rhythm perception in Turkish L2 German learners as compared to German L2 English learners, suggesting that the mastering of two languages with different rhythmic properties (i.e. stress-timed and syllable-timed) has an enhancing effect on musical rhythm perception. The purpose of the present quantitative experimental study was to investigate whether this finding extends to different combinations of languages. This study examined the effect of bilinguality in languages with different rhythmic properties on musical rhythm perception skill between and within Dutch, Turkish and Chinese native speakers, at the University of Amsterdam in Amsterdam, the Netherlands, and the Middle East Technical University in Ankara, Turkey. Data were obtained through behavioral experiments, in which participants were tested on melodic and musical rhythm perception skill, lexical tone perception skill, working memory and phonological memory. Additionally, participants filled in several questionnaires in which they reported on their social, educational and language background.

The second chapter contains a literature review that examines research on defining and measuring musical aptitude. Empirical literature on the relation between musical aptitude and language ability, multilingualism, and several other factors will be examined as well. Furthermore, by combining the OPERA-hypothesis and the Language Rhythm Class theory, a framework will be given that explains why there could be a relation between multilingualism in languages with different rhythmic properties and musical rhythm aptitude, and how this relation could work. Lastly, the research question and hypotheses of this study will be introduced, along with expectations and possible explanations.


Acknowledgements

For her patient effort in helping me with almost everything, even though she was no longer obliged to do so after I exceeded the one-year Master's thesis limit, I would like to thank Dr. Makiko Sadakata. For teaching me how to set up and run an experiment, and for providing me with so many of her materials, I would like to thank Dr. Paula Roncaglia-Denissen. For providing me the opportunity, data, recordings, translations, and contacts to expand my study, I would like to thank Dr. Ao Chen. For giving me the opportunity to perform my experiments and for showing me around at the Middle East Technical University in Ankara, Turkey, I would like to thank Annette Hohenberger. For sending her whole class of students to my experiment, I would like to thank Kamuran Özlem. For providing me with materials, recordings, translations, and/or other assistance, I would like to thank Özge Yüzgeç, Gizem Çelik, Gözde Nasuhbeyoğlu, Ece Takmaz, Gökhan Gönül, and Zoyi Zuo. And for giving me their feedback, I would like to thank Leonie "Looney" van Breeschoten and Robert "R0kus" Zomers. I would also like to thank my girlfriend Adriana Matsufuji for her love and moral support. And lastly, for creating me, and for making it possible for me to go to university, I would like to thank my parents, Karin van Rijnbach and Henk Roor.


Introduction

Musical aptitude

As this study investigates effects of bilingualism on musical rhythm perception, we first need to establish what musical rhythm perception is. Below I will discuss the different viewpoints on musical rhythm perception and musical rhythm aptitude, and explain why I will use the term musical rhythm aptitude instead of musical rhythm perception.

As aptitude can be defined as a talent, one might think that musical aptitude is a talent for producing music. In the popular definition, the talent for music, or musical aptitude, has often been associated with the talent for learning how to play musical instruments or how to sing. One of the first to propose a set of measurements of musical talent was Seashore (1915). Seashore made a distinction between the ability to hear music (Sensory), to sing music (Motoric), to think in music (Associational), and to feel music (Affective). Within the Sensory and Motoric abilities, he distinguished between Pitch, Loudness and Time. In 1965, a test was designed to measure musical aptitude, defined there as a combination of aural perception, kinesthetic musical feeling, and musical expression (Gordon, 1965). Similarly, in an attempt to make musical ability quantifiable, Levitin (2012) argued for approaching musical ability (also using the word musicality) as a concept with many aspects, paying attention to playing, composing & arranging, perceiving, and thinking or writing about music.

These definitions are all quite similar: all pay attention to motoric reproduction, creativity, and intellectual skills. However, little attention is given to the neural processes that underlie the perception of music. Peretz & Kolinsky (1993) assessed two models of music processing that had been mainstream up until then, which they refer to as the "two-component" model and the "single-component" model. In the two-component model, melody and rhythm are seen as two separate dimensions, in which pitch and rhythm are processed independently of one another (Krumhansl & Castellano, 1983). In the single-component model, melody and rhythm are regarded as one dimension in music processing. Jones, Boltz & Kidd (1982) found that pitch or key violations were more easily detected when tones that confirmed the key environment were (rhythmically) accented. Similarly, rhythm was found to influence the recognizability of melodic patterns and the detection of melodic violations (Jones & Ralston, 1991), and also the perception of melodic or tonal completion (Boltz, 1989a; Boltz, 1989b). This does not necessarily mean that melody and rhythm cannot be perceived without each other. For example, brain damage can affect the ability to perceive rhythm but leave pitch perception intact (Peretz & Kolinsky, 1993; Liégeois-Chauvel et al., 1998). Conversely, brain damage can affect the ability to perceive pitch but leave rhythm perception intact (Vignolo, 2003).

It should be noted, however, that these studies use the concept of pitch instead of melody (Liégeois-Chauvel et al., 1998), or use pitch almost as a synonym for melody (Peretz & Kolinsky, 1993), thereby partly ignoring the temporal aspects of melodies and making comparisons difficult. Still, in spite of this ambiguity between melody, pitch and musical aptitude, melody (or pitch variation) and rhythm (sometimes referred to as the temporal dimension) are still regarded today as the main dimensions of music and musical aptitude (Krumhansl, 2000; Wallentin et al., 2010; Jerde et al., 2011).

Broadly speaking, we can thus say that musical aptitude was first defined as including motoric skills, while more recent definitions focus more on perception. Within musical perception there have been two models, in which pitch and rhythm were seen either as two independent dimensions of processing or as one dimension of shared processing. My position here is to emphasize perception in musical aptitude and to acknowledge that musical rhythm and pitch perception are at least partly processed separately from each other. It should then also be possible to measure both abilities separately. I will therefore define musical rhythm aptitude as the ability to perceive rhythm in music. Similarly, musical melodic aptitude is defined here as the ability to perceive pitch and melody in music. The reasons for using the term aptitude instead of perception in this study are mostly practical. As a musical aptitude test will be used in the experiments, it makes more sense to be consistent in word use. Also, musical aptitude has been linked to (pre-)motoric and intellectual abilities. The term musical aptitude might therefore be more useful with regard to possible external influences like linguistic ability, which can also be regarded as partly motoric and/or intellectual.

Musical aptitude enhances linguistic abilities

As this study investigates the effect of linguistic ability on musical (rhythm) aptitude, one might expect an overview of past research on this topic. However, this topic is still relatively uninvestigated, and most of the earlier research on transfers between the musical and linguistic domains of processing actually focused on the opposite: the effect of musical training on linguistic abilities. Now, musical training is not the same as musical aptitude, but the two have been shown to correlate strongly (Wallentin et al., 2010). The concept of linguistic ability is used here as a container for various linguistic abilities, such as reading ability or speech perception abilities such as pitch or prosody processing. This section will summarize the research that has been done on the effect of musical training or aptitude on several linguistic abilities. This is useful to see which linguistic abilities possibly share processing resources with which forms of musical aptitude. Attention will also be given to the role of working memory as a third variable that can influence, and be influenced by, both musical aptitude and linguistic ability.

The idea that musical aptitude can affect language skills (reading, perceiving speech, producing speech, or language learning), or vice versa, is quite recent (Thompson et al., 2004). Not too long ago, in the 1980s and 1990s, the cognitive processing of music and speech were contrasted as being fundamentally different (Sergent, 1993), or at least on the semantic level (Marin, 1989). The processing of verbal information has a semantic goal, which "presupposes" previous experience, a mental representation: a word has to be recognized, while this is not the case in music, which supposedly has no mental representation.

Patel et al. (1998), however, compared the musical domain and the linguistic domain on their similarities. As melodic contour and speech intonation are similar, and as both domains are similar in temporal grouping, it was to be expected that both domains share some processing and neurological resources. This was reflected in an experiment with amusic participants, in which participants scored either well or poorly in both musical and linguistic processing (Patel et al., 1998). These results were similar to those of an earlier experiment by Peretz et al. (1994), which also showed impairment in both speech prosody perception and pitch contour perception in participants with brain damage.

The possible links between the musical and the linguistic domain have since been studied in various combinations. In a study on the effect of musical training on recognizing speech prosody, it was found that musically trained participants were better at recognizing emotion through speech prosody (Thompson et al., 2004). This enhancement in the perception of speech was found not only for the recognition of emotion, but also for pitch processing, both in adults (Schön, Magne & Besson, 2004; Besson et al., 2007) and in children (Besson et al., 2007; Moreno et al., 2009). Even for a foreign language unknown to the participant, pitch perception in speech was found to be enhanced by musical training (Marques et al., 2007). Additionally, musicians were found to outperform non-musicians in the perception of speech-in-noise (Parbery-Clark, Skoe & Kraus, 2009), although working memory and frequency discrimination might also play a big role in this (Parbery-Clark et al., 2009).

Aside from possible enhancements in speech perception through musical training, a possible link between reading ability and musical ability has been suggested as well (Barwick et al., 1989; Lamb & Gregory, 1993). Several experimental studies found positive relationships between musical aptitude and reading ability, both in "normal" children (Anvari et al., 2002; Moreno et al., 2009) and in children with reading difficulties (Douglas & Willats, 1994).

But although there are correlations between musical training or aptitude and linguistic ability, what is it specifically that seems to enhance linguistic skills such as prosody and pitch processing, speech-in-noise processing, or reading ability? Indeed, some studies work with musical training, contrasting results of musicians and non-musicians (Marques et al., 2007; Parbery-Clark et al., 2009; Parbery-Clark, Skoe & Kraus, 2009; Thompson, 2004), or by giving one group musical training next to a control group (Besson et al., 2007). But although musical training often correlates with musical aptitude test scores, using musical training or contrasts between musicians and non-musicians is problematic if we want to specify what could cause enhancements in linguistic skills. For reading ability, Lamb & Gregory (1993) suggest awareness of pitch changes as a specific influence; Anvari et al. (2002) suggest both musical rhythmic skill and musical pitch perception as possible factors, but remain uncertain; and Douglas & Willats (1994) suggest rhythmic ability as a possible factor.

As speech perception and reading ability might both be enhanced by musical training or musical aptitude, second-language skills and second-language learning might be influenced as well. Several earlier studies found no links between (self-reported) musical ability and spoken foreign accent in a second language (Tahta, Wood & Lowenthal, 1981; Thompson, 1991), although such links were found by Tanaka & Nakamura (2004). Slevc and Miyake (2006) found that musical ability positively predicted second-language phonological ability, both in reception and in production, while this was not the case for syntactical or lexical knowledge. However, although Delogu, Lampis & Belardinelli (2010) found no enhanced phonological processing, they did find enhanced tonal processing (intonation and lexical) in L2 Mandarin Chinese for Italian native speakers with higher musical aptitude. A higher sensitivity to second-language linguistic timing was found as well, for musicians compared to non-musicians (Sadakata & Sekiyama, 2011).


Other work, however, found no relation between musical ability and second-language acquisition; it should be noted that this musical ability was self-reported and not tested. Morgan (2003) did find a positive relation between musical aptitude and success in second-language learning, with rhythm perception and music production as specific predictors. Sadakata & Sekiyama (2011) found an increased ability for musicians in learning linguistic timing information.

The possible importance of working memory in second-language acquisition should also be considered. Several studies have pointed to a possible link between verbal working memory and later vocabulary and reading skills (Gathercole et al., 1992; Service, 1992; Gathercole et al., 1994). As it is likely that musical training enhances working memory (Bergman-Nutley, Darki & Klingberg, 2014), it could be possible that musical training enhances foreign-language learning through working memory.

Although there is an extensive amount of evidence for transfer effects from the musical domain to the linguistic domain, the relation between musical aptitude and linguistic skills remains unclear, mainly because of methodology, and sometimes because of mixed or contradictory results. And again, even though musical aptitude often correlates strongly with musical training, it is not yet clear which musical aptitudes are related to which specific linguistic abilities.

OPERA

Suppose there is an effect of bilingualism, or any other linguistic ability, on musical aptitude, or more specifically, on musical rhythm aptitude. It is important that we have ways to explain why there could be an effect and how this effect works.

In an attempt to explain how musical aptitude and linguistic skills are related, Patel (2011) proposed the OPERA-hypothesis, arguing that enhancements in linguistic skills through higher musical aptitude are driven by adaptive plasticity in speech processing. This plasticity depends on five conditions: Overlap, Precision, Emotion, Repetition, and Attention. Music and speech overlap in the sense that they both place demands on acoustical processing, such as the processing of waveform periodicity and amplitude envelope. Because music places higher demands on these forms of acoustical processing than speech, in which semantic processing drives attention away from acoustical processing, musical training is more likely to benefit speech perception than vice versa. The effect of linguistic training on musical aptitude would then be absent or not as great. However, the OPERA-hypothesis does not so much rule out bidirectionality as it tries to offer a framework that explains the benefits of musical training on linguistic ability. Moreover, and this is important because it relates to this study, it also argues that more demanding forms of linguistic experience, such as learning a tone language or learning a second language, could in fact have an impact on musical aptitude.

Linguistic abilities enhance musical aptitude

There has not been a lot of research on the effect of bilingualism on musical rhythm aptitude. Therefore, this section will mostly pay attention to similar and comparable topics, such as the effects of bilinguality on specific forms of auditory processing, which could also be beneficial for musical aptitude, as well as language-specific effects on different musical aptitudes or forms of auditory processing.

Although the transfer effects of musical aptitude and musical training on linguistic skills have been studied extensively, this has, until recently, not been the case for transfer effects of linguistic skills on musical aptitude, and then mostly for language-specific effects (Bidelman, Hutka & Moreno, 2013; Milovanov & Tervaniemi, 2011). However, there is evidence to suggest that these transfer effects might be bidirectional, both in the temporal (rhythmic) domain and in pitch.

A common argument for the possibility of language-to-music transfers comes from studies of tone languages. As tone languages use tone to distinguish words from one another, it has been hypothesized that speakers of these languages are more sensitive to pitch and contour in language, and thus also in music. In a study of absolute pitch, Deutsch et al. (2006) found a greater prevalence of absolute pitch when the age of onset of musical training is lower. However, among the groups that were tested, Chinese native speakers (Mandarin Chinese being a tone language) scored higher on absolute pitch at all ages of onset than American non-tone-language speakers. This suggests that learning a tone language at a young age facilitates the acquisition of absolute pitch, and that early musical training enhances this chance. In a similar study by Bidelman, Hutka and Moreno (2013), Cantonese non-musicians (Cantonese being a tone language as well) scored significantly higher on a musical melody discrimination test than English-speaking non-musicians, and also scored higher on pitch discrimination and working memory capacity.

Different from enhanced absolute pitch or melody discrimination, and although not directly related to musical aptitude, Marie, Kujala & Besson (2012) found enhanced pre-attentive and attentive processing of duration deviants for Finnish musicians compared to French non-musicians. It was argued that because Finnish is a quantity language, a language in which phonemic duration is used to distinguish meaning (as lexical tone does in tone languages), Finnish native speakers would be more sensitive to duration deviants than non-quantity-language speakers. Although participants were not tested on musical aptitude in this experiment, Finnish non-musicians and French musicians were similar in Mismatch Negativity results (pre-attentive processing) and behavioral discrimination scores (attentive processing) for duration deviants when compared to French non-musicians. Similarly, although likewise not controlled for musical aptitude (or contrasting musicians with non-musicians), Tervaniemi et al. (2006) found enhanced duration discrimination in non-speech sounds for quantity-language speakers (Finnish) compared to non-quantity-language speakers (Germans).

According to the OPERA-hypothesis, speech processing does not demand as much precision as music does, so linguistic experience should not benefit musical aptitude, or only slightly. However, the reported enhancements in acoustical and musical processing were found only for languages in which acoustic features such as pitch contour (tone languages) or phoneme duration (quantity languages) are used to distinguish meaning. Because meaning is conveyed through pitch contour or phoneme duration, processing such speech requires a higher sensitivity to these acoustic features, which indeed places higher demands on acoustical processing. These higher demands might entrain acoustical processing in such a way that it would also benefit musical aptitude, both in melodic processing and in rhythm processing. As these findings (especially the ones on tone languages) support the OPERA-hypothesis, they suggest that transfers from the linguistic to the musical domain are also possible through second-language learning.

Second-language learning and musical aptitude

Second-language learning can be regarded as a process that demands precision in acoustical processing: one needs to learn how to recognize new words and sentences based on acoustic features that might be new to the learner. We can therefore expect an enhancement in acoustical processing, and maybe even in musical aptitude. To the best of my knowledge there has not been any research on this topic, apart from the study by Roncaglia-Denissen et al. (2013), which will be introduced at the end of this section. The rest of this section, like the previous section, will therefore focus mostly on the effects of second-language learning on linguistic abilities.


Bilingualism has been shown to affect a range of cognitive processes and abilities (Bialystok, Craik & Luk, 2012). Positive effects include increased intelligence (Peal & Lambert, 1962), more advanced processing of verbal material (Ben-Zeev, 1977), and enhanced working memory (Morales, Calvo & Bialystok, 2013). But there are negative effects as well: bilinguals have been found to score lower on some verbal abilities, such as vocabulary size (Bialystok et al., 2010), and on the speed or accuracy of understanding and producing words (Randsdell & Fischler, 1987; Ivanova & Costa, 2008).

The relation between second-language acquisition and musical aptitude has not been studied extensively yet (Asaridou & McQueen, 2013). However, there have been studies of the effects of bilingualism on auditory and speech processing, some of which could suggest enhancements in the musical domain as well. Compared to monolingual Spanish native speakers, Spanish L2 English speakers were found to show enhanced encoding of the fundamental frequency (F0) of speech, a feature that also underlies pitch perception (Krizman et al., 2012). As for vowel discrimination, which has been suggested to be related to pitch contour perception (Trainor & Desjardins, 2002), Bosch & Sebastián-Gallés (2005) found that sensitivity to certain vowel contrasts is lower for 8-month-old bilingual infants than for 4-month-old bilingual infants. However, Sundara & Scutellaro (2011) found no significant differences in discrimination of the vowels /e – ε/ between monolingual and bilingual 4-month-old infants, nor between 4-month-old and 8-month-old bilingual infants. Similarly, Sundara, Polka & Molnar (2008) found no differences in vowel discrimination between 6-8- and 10-12-month-old monolingual and bilingual infants. So although the evidence for enhanced vowel discrimination through second-language learning has been meager, the evidence for enhanced encoding of fundamental frequency is not. And as encoding of fundamental frequency underlies pitch perception, this enhancement might possibly extend to musical aptitude as well.

With regard to rhythm, both newborns and adults are sensitive to language rhythm and are capable of discriminating different language rhythm classes (Nazzi, Bertoncini & Mehler, 1998; Ramus, Dupoux & Mehler, 2003) – a concept that will be explained in the next section of this chapter. Moreover, bilingual infants growing up with two languages of the same rhythm class (Spanish and Basque) were found to be better at discriminating the corresponding language rhythms than Spanish and Basque monolingual infants (Molnar & Gervain, 2014).

Roncaglia-Denissen et al. (2013) found enhanced musical rhythm aptitude for Turkish native speakers who learned German as a second language, compared to Germans who learned English as a second language. Turkish and German differ in their rhythmic and metric properties (syllable-timed vs. stress-timed speech rhythm), while German and English are similar in this regard (both are stress-timed). Turkish speakers who learned German as a second language would therefore be more sensitive to different speech rhythms, which would also be reflected in their musical rhythm aptitude. As OPERA assumes that demanding forms of acoustical processing in linguistic experience might be beneficial for musical aptitude, learning to perceive these (differences in) speech rhythms might be one of those more demanding forms that could benefit musical (rhythm) aptitude.

Language rhythm classes

The dichotomy between stress-timed and syllable-timed languages has become a common way to classify languages rhythmically. Pike (1945) and Abercrombie (1967) were among the first to classify languages rhythmically based on differences in isochronous timing, i.e. the isochronous recurrence of a speech unit. According to this theory, in stress-timed languages the intervals between stressed vowels or syllables would be of equal duration, and in syllable-timed languages all syllables would be of equal length (Pike, 1945; Abercrombie, 1967). Additionally, several researchers proposed the mora-timed language (a mora is a sub-unit of a syllable consisting of a short vowel and any preceding consonants) as a third type, in which every mora is of equal length (Bloch, 1950; Ladefoged, 1975). Germanic languages such as English, German and Dutch have often been regarded as stress-timed, while Romance languages such as French and Spanish have often been regarded as syllable-timed (Pike, 1945; Hockett, 1958; Abercrombie, 1967; Ladefoged, 1975; Smith, 1976; Catford, 1977); Japanese has been proposed as an example of a mora-timed language (Bloch, 1950; Han, 1962).

Although several studies have tried to find an empirical basis for this hypothesis, finding evidence for differences in isochronous timing between languages has proved problematic, to say the least (Beckman, 1992; Grabe & Low, 2002). In none of the proposed stress-, syllable-, or mora-timed languages were inter-stress intervals (intervals between stressed syllables), syllables, or morae found to be of equal length (Pointon, 1980; Roach, 1982; Wenk & Wioland, 1982; Dauer, 1983). This has led some researchers to alter the hypothesis. Some argued that true isochrony still underlies speech but gets disrupted by distinct properties of individual languages (Beckman, 1992). Others turned the dichotomy (Pike, 1945; Abercrombie, 1967) into a continuum on which all languages have stress-timed and syllable-timed qualities to varying degrees (Dauer, 1983), depending on several phonological properties of a language, such as the amount of vowel reduction or the syllable structure (Bertinetto, 1989; Dasher & Bolinger, 1982; Dauer, 1983).

Departing from (true) isochrony, Ramus, Nespor & Mehler (1999) found that the percentage of vocalic intervals (%V), together with the average standard deviation of consonantal intervals per sentence, seemed to be the best predictor of rhythm class. Similarly, Grabe & Low (2002) measured the durations of vowels and of the intervals between vowels and computed PVI values (pairwise variability index) for both measurements, which express the level of variability between successive measurements. They found that the prototypical stress-timed and syllable-timed languages (English vs. French, etc.) are best distinguished using nPVI values (normalized PVI) for vocalic intervals. Both studies, as well as earlier work (Nespor, 1990), noted that some languages, such as Catalan and Polish, fall into both the stress-timed and the syllable-timed category: Catalan combines a syllabic structure associated with syllable-timed languages with the vowel reduction associated with stress-timed languages, while Polish shows the exact opposite combination.
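Grabe & Low's normalized PVI lends itself to a compact illustration. The following sketch computes the nPVI of a list of vocalic interval durations; the function name and input format are my own, but the computation follows the definition just described:

```python
def npvi(durations):
    """Normalized Pairwise Variability Index (Grabe & Low, 2002).

    For each pair of successive interval durations, take the absolute
    difference divided by the pair's mean, then average over all pairs
    and scale by 100. Higher values indicate more durational variability
    (stress-timed-like); lower values indicate more even, syllable-timed-
    like timing.
    """
    pairs = list(zip(durations, durations[1:]))
    terms = [abs(a - b) / ((a + b) / 2) for a, b in pairs]
    return 100 * sum(terms) / len(terms)

# Perfectly isochronous intervals yield an nPVI of 0.
print(npvi([100, 100, 100, 100]))  # 0.0
# Strongly alternating long and short intervals yield a high nPVI.
print(npvi([150, 50, 150, 50]))    # 100.0
```

This is why vocalic nPVI separates the prototypical stress-timed languages (with vowel reduction producing alternating long and short vocalic intervals) from syllable-timed ones.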

Evidence for rhythm classes in language has been found at the perceptual level as well. Using low-pass filtered speech, Nazzi, Bertoncini & Mehler (1998) found that infants could discriminate between English (stress-timed) and Japanese (mora-timed), but not between Dutch and English (both stress-timed). When using combinations of English, Dutch, Spanish, and Italian, infants could only discriminate between a combination of English and Dutch (both stress-timed) and a combination of Italian and Spanish (both syllable-timed). Similar results were found for adults (Ramus, Dupoux & Mehler, 2003): Dutch and English could not be discriminated from each other, while English could be discriminated from both Spanish (syllable-timed) and Catalan. Polish could be discriminated from English, Spanish, and Catalan. As Catalan and Spanish could not be discriminated from each other, the authors concluded that Catalan is a syllable-timed language, while the results for Polish might serve as an argument for a new rhythm class.

Language rhythm classes might even be reflected in music and musical rhythm. Patel & Daniele (2002) compared nPVI values of English and French speech rhythm with nPVI values of 19th-century instrumental music by British and French composers, and found significant differences both between English and French speech and between British and French instrumental music, with English showing higher variability in both speech and music. However, the sample of composers and works that served as examples of English and French instrumental music was relatively small and could therefore reflect individual differences more than differences in linguistic background.

Isochrony was long thought to be the empirical basis for the rhythm class hypothesis for languages. More recent studies, however, have shown that the languages assigned to particular rhythm classes can indeed be distinguished from each other using other measures of rhythm and duration in language (%V, and nPVI for vocalic intervals). As mentioned earlier, it has also been found that both newborns and adults are sensitive to language rhythm and are better at discriminating languages from different rhythm classes (stress-timed vs. syllable-timed) than languages from the same rhythm class.

Hypothesis

Evidence for effects of musical aptitude or musical training on (second) language skills is thus widespread. Apart from language-specific effects of tonal languages such as Mandarin and Cantonese Chinese on pitch perception, and enhanced duration discrimination in speakers of quantity languages, evidence for an effect of second-language learning and bilingualism on musical aptitude is lacking. The primary aim of this study is to examine how bilingualism affects musical aptitude, or more specifically:

Research question:

How does bilingualism in languages with different rhythmic properties affect musical aptitude?

One of the arguments in OPERA is that musical training enhances speech processing because music processing is more demanding in precision and attention. OPERA argues that linguistic experience does not enhance musical aptitude because speech processing is not that demanding. However, linguistic experience in tone and quantity languages, as well as bilingualism, might actually increase acoustic sensitivity, as acoustic information is more essential in these cases for extracting meaning from speech. Indeed, speakers of tone and quantity languages seem to be more sensitive to (musical) pitch or phonological duration, while bilingual infants are better at discriminating language rhythm classes than monolingual infants. This would suggest that, through an enhancement of acoustic processing, the rhythmic sensitivity of bilinguals in languages with different rhythmic properties could extend to musical rhythm sensitivity as well, and thus to musical rhythm aptitude. Therefore, the following hypotheses are proposed:


Hypothesis 1 (research hypothesis):

Mastering two languages with different rhythmic properties enhances musical rhythm aptitude.

Note that several outcomes are possible under this hypothesis. An enhancing effect can be found when comparing bilinguals in rhythmically different languages to bilinguals in rhythmically similar languages. An enhancing effect can also be found when comparing bilinguals to monolinguals. However, this study assumes that learning different speech rhythms is more demanding than learning similar speech rhythms or only one, thereby enhancing acoustic processing. It is therefore expected that only the mastering of two languages with different rhythmic properties enhances musical rhythm aptitude. This hypothesis is thus founded upon two testable premises: 1) Mastering two languages with different rhythmic properties enhances musical rhythm aptitude, as compared to mastering one language. 2) Mastering two languages with different rhythmic properties enhances musical rhythm aptitude, as compared to mastering two languages with similar rhythmic properties.

Hypothesis 2 (null-hypothesis):

Mastering two languages with different rhythmic properties has no effect on musical rhythm aptitude.

Again, two explanations are possible under this hypothesis. The first is in comparison to monolinguals: mastering two languages with different rhythmic properties has no effect on musical rhythm aptitude, as compared to mastering one language. The second is in comparison to bilinguals in rhythmically similar languages: mastering two rhythmically different languages has no effect on musical rhythm aptitude, as compared to mastering two languages with similar rhythmic properties. In other words, mastering two languages may still enhance musical rhythm aptitude, but rhythmic properties are not a (significant) factor in this.

Hypothesis 3 (negative effect):


Although unlikely, it could be the case that mastering two languages with different rhythmic properties decreases musical rhythm aptitude. Similarly, two explanations, which are not mutually exclusive, can be derived from this. The first is again in comparison to monolingualism: mastering two languages with different rhythmic properties impairs musical rhythm aptitude, as compared to mastering one language. The second assumes that mastering two languages with different rhythmic properties impairs musical rhythm aptitude, as compared to mastering two languages with similar rhythmic properties.

In order to test these hypotheses, four groups were tested on their musical rhythm aptitude in three experiments: Turkish monolinguals, Turkish L2 English speakers, Chinese L2 English speakers, and Dutch L2 English speakers. As Turkish and Chinese (Mandarin) are syllable-timed languages (Mok, 2009) and Dutch and English are stress-timed languages (Grabe & Low, 2002), the Turkish and Chinese L2 English speakers represent bilinguals in languages with different rhythmic properties, while the Dutch L2 English speakers represent bilinguals in languages with similar rhythmic properties. According to the research hypothesis, the Turkish and Chinese L2 English learners should therefore show enhanced musical rhythm aptitude compared to the Dutch L2 English learners and the Turkish monolinguals. Furthermore, all participants were assessed on their L1 and L2 language skills: if exposure to and experience with a rhythmically different second language are higher, higher musical rhythm aptitude can also be expected.

Although Dutch and English are similar with regard to speech rhythm, the Dutch L2 English speakers are expected to score higher on musical rhythm aptitude than the Turkish monolinguals, as second-language acquisition has also been suggested to enhance working memory capacity, which in turn has been suggested to play a role in both musical aptitude and linguistic abilities (Tanaka & Nakamura, 2004). On the one hand, working memory is suggested to be enhanced by musical training (Nutley, Darki & Klingberg, 2014), with visuo-spatial, verbal, and auditory working memory as well as long-term memory all seeming to benefit (Jakobson et al., 2008; Degé et al., 2011; Chan, Ho & Cheung, 1998; Franklin et al., 2008; Groussard et al., 2010). On the other hand, working memory is also suggested to enhance linguistic abilities such as speech-in-noise perception, reading ability, speech production (e.g., reasoning and producing words in context), and (foreign) language acquisition (Parbery-Clark et al., 2009; Daneman & Carpenter, 1980; de Jonge & de Jong, 1996; Kyllonen & Christal, 1990; Daneman & Green, 1986; Ellis & Sinclair, 1996; Gathercole et al., 1992; Gathercole et al., 1994; Service, 1992). As linguistic abilities thrive upon working, verbal, and auditory memory, and as second-language learning demands precision and accuracy (remembering and repeating new words, etc.), it could be suggested that second-language learning trains these forms of memory, which in turn would be beneficial for musical aptitude.

To control for the possible effect of such variations in working memory on musical aptitude outcomes, participants were also tested on their phonological memory and working memory capacity. In order to obtain more detailed information about their language skills than merely their monolingual or bilingual status, participants were asked to report their language background. In addition, as musical training and musical aptitude are often strongly correlated and can therefore distort results, participants were also assessed on their musical background through a questionnaire.

As this study is highly similar to, and to a large extent based on, Roncaglia-Denissen et al. (2013) – for example, the research hypothesis is the same – it can be regarded as a follow-up study. In order to make the results of both studies comparable, the same tests and questionnaires were used, and the statistical analysis and presentation of results are highly similar. This study can therefore also be regarded as an endorsement of this method of investigating the effect of bilingualism on musical aptitude and as an invitation to explore this topic further.

In the following chapters, the procedures, results, and conclusions of these experiments will be described. In the next chapter, Experiments, a general overview of the experiments will be given. Then the subjects and subject groups will be described. Next, all of the tests and questionnaires used in the experiments will be described and motivated. After the tests section, the procedures for each of the three experiments will be described. Lastly, the method of data analysis will be elaborated. In the Results chapter, the results of each of the groups will be presented and compared to each other, as well as the results of the groups combined. Using the data analysis methods described in the Experiments chapter, the validity and the statistical and practical significance of the results will be assessed. Finally, in the Discussion chapter, these results will be interpreted in the light of the literature reviewed in this chapter – the OPERA hypothesis and Roncaglia-Denissen et al. (2013) in particular. Furthermore, a conclusion will be given on what this could mean for future research and for how the relation between the musical and language domains could be understood.


Experiments

This chapter describes the experiments that have been carried out for this study, as well as the participants and testing materials that have been used in these experiments. First, a summary of all experiments will be given, followed by a description of each of the language-groups. Then every test and questionnaire used in this study will be described. Lastly, each of the three experiments will be described.

To test whether mastering two languages with different rhythmic properties enhances musical rhythm aptitude, compared to mastering two languages with similar rhythmic properties or only one language, four groups were tested. These four groups represent three categories:

1. Monolinguals

a) Turkish monolinguals group

2. Bilinguals in which the native and second language have similar rhythmic properties

a) Dutch L2 English speakers group

3. Bilinguals in which the native and second language have different rhythmic properties

a) Turkish L2 English speakers group

b) Chinese L2 English speakers group

All participants took three tests and filled in two questionnaires. All participants except the Dutch L2 English speakers group took a fourth test, the Lexical Tone Test. The most important test, the Musical Ear Test, measures melody and musical rhythm perception skill (separately from each other). The Backward Digit Span measures working memory, and the Mottier Test measures phonological memory. The Lexical Tone Test measures lexical tone perception. The Self-Reported Language Background Questionnaire asks participants to report their own native and second language history and background. The Music & Social Background Questionnaire asks participants to report general information, along with information about education, musical training, and music consumption behavior.

Every group was tested in one of the three experiments. The first experiment was carried out in Amsterdam, the Netherlands, in the summer of 2014, in which the Dutch L2 English speakers group was tested. The second experiment was carried out in Ankara, Turkey, in the autumn of 2014, in which the Turkish monolinguals group and the Turkish L2 English speakers group were tested. The third experiment was carried out in the winter of 2014-2015 in the Netherlands, in which the Chinese L2 English speakers group was tested.

1. Experiment 1. Amsterdam, 2014.

a) Dutch L2 English speakers group (all tests & questionnaires minus the Lexical Tone Test)

2. Experiment 2. Ankara, 2014.

a) Turkish monolinguals group (all tests & questionnaires)

b) Turkish L2 English speakers group (all tests & questionnaires)

3. Experiment 3. the Netherlands, 2014-2015.

a) Chinese L2 English speakers group (all tests & questionnaires)

In this section, more detailed information will be given about the groups and their participants, the tests and questionnaires, and the experiments, as well as about the method of data analysis used for the results.

Subjects

Experiment 1 – Dutch L2 English speakers

The Dutch L2 English speakers group consisted of fourteen participants (nine female, five male) with a mean age of 24.71 (SD = 4.23). All participants were enrolled in or had completed higher education (e.g., HBO or university in the Netherlands). All participants also reported not to be professional musicians and to have had no or only sporadic musical instrument training, with a mean of 1.42 years of musical instrument training (SD = 1.02). Lastly, no participant reported any neurological impairment or hearing deficit.

All participants were born in the Netherlands and were native speakers of Dutch who had learned English as their second language, with a mean age of onset of 8.79 (SD = 3.45) for English. For their participation, participants were paid an amount of money comparable to approximately 3.5 hours of the Dutch minimum hourly wage in 2014.


Ethics statement for experiment 1:

This experiment was approved by the ethics committee of the Faculty of Humanities of the University of Amsterdam and was carried out in the summer of 2014. All participants signed the informed consent form approved by this ethics committee. In this form, participants were notified that they were free to withdraw from the experiment at any time and that the confidentiality of all information and their anonymity were guaranteed.

Experiment 2 – Turkish monolinguals and L2 English speakers

The Turkish monolingual group consisted of 15 participants (8 female, 7 male) with a mean age of 18.93 (SD = 1.94), who were born in Turkey and were native speakers of Turkish. All participants were preparing for university by learning English at a preparatory school at the Middle East Technical University in Ankara, Turkey. Before testing, the participants had had only one or two weeks of English classes and did not have sufficient skill to converse in English. Their exposure to English is therefore considered limited.

The Turkish L2 English learner group consisted of 7 participants (6 female, 1 male) with a mean age of 23.71 (SD = 1.11), who were native speakers of Turkish and born in Turkey. All participants were enrolled in or had completed higher education (e.g., had followed or finished a university education).

All participants also reported not to be professional musicians and to have had no or only sporadic musical instrument training, with a mean of 1.2 years of musical instrument training (SD = 1.58) for the Turkish monolinguals and 1.43 years (SD = 1.13) for the Turkish L2 English speakers. Lastly, no participant reported any neurological impairment or hearing deficit. All participants reported to be native Turkish speakers who learned English as a second language, with a mean age of onset of 8.43 (SD = 2.82) for English. No participant was paid for participating in this experiment.

Ethics statement for experiment 2:

This experiment was approved by the Middle East Technical University Human Subjects Ethics Committee and was carried out in the fall of 2014. All participants signed the informed consent form approved by this ethics committee. In this form, participants were notified that they were free to withdraw from the experiment at any time and that the confidentiality of all information and their anonymity were guaranteed.


Experiment 3 – Chinese L2 English speakers

The Chinese L2 English learner group consisted of 20 participants with a mean age of 25.05 (SD = 1.61). All participants were enrolled in or had completed higher education (e.g., had followed or finished a university education). All participants also reported not to be professional musicians and to have had no or only sporadic musical instrument training, with a mean of 0.81 years of musical instrument training (SD = 1.10). Lastly, no participant reported any neurological impairment or hearing deficit. All participants reported to be native Chinese speakers who had learned English as a second language, with a mean age of onset of 10.8 (SD = 2.70) for English. No participant was paid for participating in this experiment.

Ethics statement for experiment 3:

This experiment was approved by the ethics committee of the Faculty of Humanities of the University of Amsterdam and was carried out in the winter of 2014/2015. All participants signed the informed consent form approved by this ethics committee. In this form, participants were notified that they were free to withdraw from the experiment at any time and that the confidentiality of all information and their anonymity were guaranteed.

Tests and questionnaires

Musical Ear Test

The Musical Ear Test (Wallentin et al., 2010) is a test specifically designed to measure music perception aptitude, or, as its creators call it, "musical competence". In their study, the test was shown to correlate highly with musical expertise: it clearly distinguishes non-musicians, amateur musicians, and professional musicians from each other. Furthermore, it correlates strongly with age of onset and weekly playing hours for musicians. Lastly, it also correlates highly with the imitation test used as an entrance exam at Danish music academies. This test was also used in Roncaglia-Denissen et al. (2013).

The Musical Ear Test consists of two subtests: a melodic subtest and a rhythmic subtest. This division between melodic and rhythmic skill in one balanced test is very useful with regard to the hypothesis, as it allows the (absence of a) relation between bilingualism in rhythmically different languages and melodic and rhythmic skills to be examined separately. In the Musical Ear Test, participants are presented with pairs of melodies or rhythms. After the presentation of a pair, the participant has to judge whether its members are the same or different. The whole test consists of 104 pairs (or trials), of which 52 belong to the melodic subtest and an equal number to the rhythmic subtest, and lasts approximately 20 minutes (break not included). The melodic subtest consists of 26 "same" and 26 "different" trials. All "different" trials contain one pitch violation, 25 of them contain non-diatonic tones, and 13 also contain a contour violation. The melodies consist of 3 to 8 tones, are one measure long, and were recorded at 100 bpm on a piano. The rhythmic subtest likewise consists of 26 "same" and 26 "different" trials. All "different" trials contain one rhythmic violation and 21 of them also contain triplets. The rhythms consist of 4 to 11 beats, are also one measure long, and were recorded at 100 bpm on wood blocks. In both subtests, the order of all these features was randomized.

In this project, the original audio recordings by the creators of the test were used, although the recorded instructions were excluded in favor of custom text instructions due to a different way of presenting the test (Appendix C – Musical Ear Test). The Musical Ear Test was presented using the software Presentation, using a modified version of the script used in Denissen et al. (2013).

Participants were asked to sit on a comfortable chair in front of a computer monitor in a quiet room. Stimuli were presented via the monitor and through headphones, and participants answered the trials using a button device. The participant would hear a trial; afterwards, for 2 seconds, the monitor displayed the word "same" on one side of the screen and "different" on the other (the Turkish version used the words Aynı and Farklı), in a white font on a black background. To control for left or right preferences, 50% of the participants in a group saw "same" on the left and "different" on the right, while the other 50% saw the reverse; the two versions were assigned to participants in a pseudo-randomized manner. Participants then answered by pressing the button that corresponded to the right or left side of the screen. No feedback was given on whether the trial had been answered or on the correctness of the answer. Before the test started, participants could practice with two samples (one "same" and one "different") that were not included in the test, until they felt ready to begin.

Reaction time and response type (correct, incorrect, no response) were collected per trial. Reaction time was used only to determine outliers, and both "incorrect" and "no response" were treated as incorrect. For each subtest (melodic and rhythmic), the score is interpreted as the percentage of correct answers over all trials, and this is taken as a representation of rhythmic or melodic aptitude.
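As an illustration, this scoring rule can be sketched as follows (the function name and response labels are my own, not part of the original test materials):

```python
def met_subtest_score(responses, n_trials=52):
    """Percentage of correct answers on one MET subtest (melodic or rhythmic).

    responses: one label per trial, 'correct', 'incorrect', or 'no response';
    'no response' is treated as incorrect, as in the procedure above.
    """
    correct = sum(1 for r in responses if r == "correct")
    return 100 * correct / n_trials

# 40 correct trials out of 52 give a subtest score of about 76.9%.
print(round(met_subtest_score(["correct"] * 40 + ["incorrect"] * 12), 1))  # 76.9
```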


Backward Digit Span

The Backward Digit Span is a test that measures working memory. It consists of a number of trials of increasing difficulty. In every trial, the participant is presented with a sequence of digits. Once the presentation of the sequence is complete, the sequence has to be reversed and returned by the participant. Both the presentation of the stimuli and the returning of the reversed sequence can be done in written or spoken form.

In this project, only recordings of spoken digits were used, and participants answered the trials through speech as well. The test started with a trial of two digits. After every two trials, the number of digits per sequence was increased by one. For example, the third trial consisted of a sequence of three digits, while the eighth trial consisted of a sequence of five digits. The test finished after the fourteenth trial, with a sequence of eight digits.

Instructional and accompanying texts were written in Dutch, Turkish, and English for the corresponding language versions of the Backward Digit Span test. Auditory stimuli were recorded in Dutch, Turkish, and Chinese (Mandarin) using a Samson C01U USB condenser microphone at a recording distance of approximately 10-30 centimeters. Each language version was recorded by a native speaker of the corresponding language. To eliminate the chance of prosody having an effect on perception and memory, each digit (0 to 9) was recorded separately instead of recording each sequence in one take. Using Adobe Audition, each recording was edited to improve intelligibility. First, if needed, the recording was compressed using dynamics processing. After that, parametric equalization was used to attenuate the lower frequencies and boost the higher frequencies (high-pass at 100 Hz, -10 dB; 200 Hz band, -6 dB, Q = 1.5; 10 kHz band, +3 dB, Q = 2). Then the recording was normalized to -0.1 dB so that all recordings were approximately equal in loudness. Finally, where necessary, pitch processing was applied to make recordings more monotonous. The sequences, each using the same unique recording of every digit, were later stitched together with exactly 1 second between the onsets of successive digits. To present the instructions and stimuli in an accessible way, I wrote a web application in HTML, CSS, PHP, and JavaScript. It can be found at http://drikusroor.nl/pilot/bds_app.php

After starting the web application, the participant immediately saw the instructions. After reading them, the participant could start the test by clicking the corresponding button. On the next page, the participant could click the play button to present the stimulus of the trial. Once clicked, this button disappeared, as the participant was not allowed to listen to the fragment again. After the presentation of the stimulus, the participant gave the answer by saying it out loud in their native language (Appendix A – Backward Digit Span). The researcher then wrote down this answer on an answer sheet, and the participant clicked the next button to go to the next trial. This process was repeated until the test was finished.

Based on the participant's answers, a level was calculated reflecting how far the participant progressed in the test, and a score was calculated based on the number of correctly answered trials. A trial group consists of two trials, each containing a sequence of digits of equal length. A trial group counts as passed if the participant answered 50% or more of its trials correctly. The level was determined by the last passed trial group before the first failed trial group (correctly answered trials < 50%). If the last passed trial group was answered 100% correctly, the participant's level equals the number of digits of that trial group; if it was answered less than 100% correctly, the level equals the number of digits per trial in that group minus 0.5. For example, if a participant passed the trial group of 5-digit sequences by answering all of its trials correctly, the level would be 5; if the participant passed this trial group by answering only 1 out of 2 trials (50%) correctly, the level would be 4.5. The score is calculated by counting the correctly answered trials in the passed trial groups. Both the level and the score will be used to represent a participant's working memory, but more weight is given to the score, as it reflects all trials rather than only the last passed trial group.
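The level and score rules above can be summarized in a short sketch (an illustration under my own naming and input conventions, not the actual analysis code):

```python
def score_digit_span(trial_groups):
    """Compute (level, score) from Backward Digit Span results.

    trial_groups: list of (sequence_length, [trial_correct, ...]) tuples,
    two trials per group, ordered by increasing sequence length.
    A group is passed if at least 50% of its trials are correct; the level
    is taken from the last passed group before the first failed one, minus
    0.5 if that group was not answered fully correctly. The score counts
    the correct trials within all passed groups.
    """
    level, score = 0.0, 0
    for length, trials in trial_groups:
        n_correct = sum(trials)
        if n_correct / len(trials) >= 0.5:  # group passed
            score += n_correct
            level = length if n_correct == len(trials) else length - 0.5
        else:  # first failed group ends the test
            break
    return level, score

# Fully correct at 2 digits, 1 of 2 at 3 digits, failure at 4 digits:
print(score_digit_span([(2, [True, True]), (3, [True, False]), (4, [False, False])]))
# (2.5, 3)
```

The same rule, with six trials per group and syllables instead of digits, yields the level and score of the Mottier Test described below.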

Mottier Test

The Mottier Test is a pseudoword repetition test that measures phonological memory. It consists of meaningless pseudowords built from consonant-vowel syllables (for example, na, po, or gadibelosu). The spoken pseudowords are presented one by one, and after each pseudoword the participant has to repeat it, pronouncing the same word. The test starts with a trial group of six trials, each containing one pseudoword. The first trial group contains only two-syllable pseudowords; the second contains six three-syllable pseudoword trials. With every following trial group, the number of syllables per pseudoword increases by one. The test stops after the set of six seven-syllable pseudowords.


For every language group (the Dutch, Turkish, and Chinese group), a slightly modified version of the Mottier Test was used (Appendix B – Mottier Test). Although the same written stimuli (the meaningless pseudowords) were used, a small variation was made for the Chinese group so that all pseudowords would be consistent with properties of the language, such as its pronunciation and alphabet; for the Chinese version this meant that, where necessary, some of the R's were replaced by L's. Based on the text version of the Mottier Test, recordings were made for every language group. Native speakers of Turkish, Dutch, and Chinese each recorded the Mottier pseudowords, pronouncing them as they would in their own native language. To eliminate the chance of prosody having an effect on perception and memory, the voice actors were asked to keep their voice monotonous and to keep a steady pulse for every syllable in the pseudoword. Pseudowords were recorded in one take instead of recording the syllables separately. Each recording was then edited in Adobe Audition using the same processing chain as for the Backward Digit Span recordings: dynamics compression where needed, parametric equalization (high-pass at 100 Hz, -10 dB; 200 Hz band, -6 dB, Q = 1.5; 10 kHz band, +3 dB, Q = 2), normalization to -0.1 dB, and, where necessary, pitch processing to make the recordings more monotonous.

This task was presented in the same web application framework that I wrote in HTML, CSS, PHP, and JavaScript for the Backward Digit Span (http://drikusroor.nl/pilot/mot_app.php), and the same procedure was used: the participant opened the web page, read the instructions, and started the test. The participant could then click the play button to hear the pseudoword trial. After the sound fragment stopped playing, the participant attempted to repeat the pseudoword aloud, and the researcher wrote this answer down on an answer sheet. The participant then clicked the next button to go to the next trial. This process was repeated until the test was finished.

Based on the participant's answers, two measures were calculated: a level, based on how far the participant progressed in the test, and a score, based on the number of correctly answered trials. A trial group consisted of six trials, each trial containing one pseudoword. A trial group counted as successfully answered if the participant answered 50% or more of its trials correctly. The level was determined by the last successfully answered trial group before the first failed trial group (fewer than 50% of trials correct). If that last successful trial group was answered 100% correctly, the participant's level equaled the number of syllables per pseudoword in that group; if it was answered less than 100% correctly, the level equaled that number of syllables minus 0.5. For example, a participant who passed the five-syllable trial group by answering all of its trials correctly received a level of 5, whereas a participant who passed it with only four out of six trials correct (66.7%) received a level of 4.5. The score was calculated by counting the correctly answered trials in all successfully answered trial groups. Both the level and the score will be used to represent a participant's phonological memory, but more weight is given to the score, as it reflects performance on all trials rather than only on the last successful trial group.
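The level and score rules can be sketched as a small function. This is an illustrative reconstruction, not the actual analysis code used in this study; the function name and the input format (a list with the number of correct trials per trial group, in test order) are my own assumptions.

```python
def mottier_level_and_score(correct_per_group, first_syllables=2, trials_per_group=6):
    """Compute the Mottier level and score from per-group counts of correct trials.

    correct_per_group: correct trials (0-6) in each trial group, in order,
    starting with the two-syllable group.
    """
    level = 0.0
    score = 0
    for i, correct in enumerate(correct_per_group):
        syllables = first_syllables + i
        if correct / trials_per_group >= 0.5:  # trial group passed
            score += correct
            # A perfect group yields the full syllable count; otherwise minus 0.5.
            level = syllables if correct == trials_per_group else syllables - 0.5
        else:
            break  # the first failed group ends the progression
    return level, score
```

With the worked example from the text: passing the five-syllable group with all six trials correct after three perfect groups gives a level of 5, while passing it with four out of six correct gives 4.5.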

Lexical Tone Test

The Lexical Tone Test measures the participant's ability to recognize and discriminate lexical tones in spoken language (for the instructions, see Appendix D – Lexical Tone Test). The test was included for a third-party study on lexical tone perception and musical pitch perception; in that study, the Chinese L2 English speakers had already been tested with the Lexical Tone Test. The Lexical Tone Test was also included in the experiment with Turkish monolinguals and Turkish L2 English speakers (Experiment 2).

In the Lexical Tone Test, participants have to discriminate pairs of recorded spoken words on the basis of lexical tone. The test consists of two subtests: a monosyllabic subtest, in which the participant discriminates pairs of monosyllabic words (for example, ma, do, or bo), and a bisyllabic subtest, in which the participant discriminates pairs of bisyllabic words. The monosyllabic subtest consisted of 120 word pairs and the bisyllabic subtest of 180 word pairs, each presented in a random order.
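The same/different design of the subtests can be sketched as follows. This is a hypothetical simplification, not the actual ZEP presentation script: it treats a pair as "same" when its two items are identical strings, shuffles the pairs, and tallies the proportion of correct responses (diacritics in the response labels are omitted in the code).

```python
import random

def run_tone_subtest(pairs, respond, seed=0):
    """Present word pairs in random order and return the proportion correct.

    pairs:   list of (word_a, word_b) tuples; a pair counts as 'same' when
             both items are identical (a simplification of the real stimuli).
    respond: callback taking (word_a, word_b) and returning 'Ayni' (same)
             or 'Farkli' (different).
    """
    rng = random.Random(seed)      # fixed seed for a reproducible order
    order = list(pairs)
    rng.shuffle(order)
    correct = 0
    for a, b in order:
        truth = "Ayni" if a == b else "Farkli"
        if respond(a, b) == truth:
            correct += 1
    return correct / len(order)
```

A participant who always clicks "Aynı" would score exactly the proportion of identical pairs in the subtest, which is why the discrimination score, not a single response bias, is informative.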

The test was presented using the software application ZEP on a Linux operating system, using the same presentation script as in the third-party study. Participants were asked to sit on a comfortable chair in front of a computer monitor in a quiet room. Stimuli were presented on the monitor and through headphones, and participants responded to the trials using a mouse. After each trial, the monitor briefly displayed the words “Aynı” (Same) on the left side of the screen and “Farklı” (Different) on the right side. The buttons were gray-colored and the background color was yellow-khaki. Participants then answered with the mouse by clicking the “Aynı” button on the
