Faculteit Letteren en Wijsbegeerte Departement Taalkunde

Identifiability and intelligibility of the speech of children with a cochlear implant: a comparison with normally hearing children and children with an acoustic hearing aid

Identificeerbaarheid en verstaanbaarheid van de spraak van kinderen met een cochleair implantaat: een vergelijking met normaalhorende kinderen en kinderen met een akoestisch hoortoestel

Dissertation submitted in fulfilment of the requirements for the degree of Doctor in Linguistics at the University of Antwerp, to be defended by Nathalie Boonen

Supervisor: prof. dr. Steven Gillis
Supervisor: dr. Hanne Kloots

Antwerpen, 2020


Table of contents

Dankwoord – Acknowledgements

1. General introduction
1.1 Hearing devices: cochlear implant vs. acoustic hearing aid
1.2 Speech and language characteristics of children with a cochlear implant and an acoustic hearing aid
1.2.1 Speech characteristics of children with CI and HA
1.2.2 Language characteristics of children with CI and HA
1.3 Factors influencing the speech of children with CI and HA
1.4 Intelligibility
1.4.1 Defining intelligibility and its place in the speech chain
1.4.2 Importance of speech intelligibility and factors influencing speech intelligibility measurements
1.4.2.1 Role of the speaker
1.4.2.2 Role of the listener
1.4.2.3 Type of utterance
1.4.2.4 Type of measurement
1.4.3 Studies on the intelligibility of hearing-impaired children
1.5 Identifiability
1.5.1 Markers of identity in speech
1.5.2 Markers of hearing status in hearing-impaired children
1.5.3 Labelling identification task
1.5.4 Speech quality judgements
1.6 Aims of this dissertation

2. Measuring spontaneous speech intelligibility using entropy: early cochlear implanted children versus their normally hearing peers at seven years of age
2.1 Introduction
2.1.1 Intelligibility in hearing-impaired children
2.1.2 Intelligibility testing: imitated vs. spontaneous speech tasks
2.1.3 Measuring speech intelligibility scores
2.1.4 Aims of this study
2.2 Method
2.2.1 Stimuli
2.2.2 Selection of the stimuli
2.2.3 Procedure
2.2.4 Data analysis
2.2.4.1 Alignment of the transcriptions
2.2.4.2 Entropy calculations
2.2.4.3 Statistical analyses
2.3 Results
2.3.1 Intelligibility scores for children with CI and NH
2.3.2 Individual differences between children with CI
2.4 Discussion
2.5 Conclusion

3. Spontaneous speech intelligibility: effect of the type of sample and the year of implantation
3.1 Introduction
3.1.1 Intelligibility of hearing-impaired children: effect of the length of the utterance
3.1.2 Intelligibility of hearing-impaired children: effect of the year of implantation
3.1.3 Aims and hypotheses of this study
3.2 Method
3.2.1 Stimuli
3.2.2 Selection of the stimuli
3.2.3 Procedure
3.2.4 Data analysis
3.3 Results
3.3.1 Intelligibility: main effects analysis
3.3.2 Intelligibility: individual variability analysis
3.4 Discussion
3.5 Conclusion

4. Identifiability of the speech of hearing-impaired children and normally hearing children: a categorisation task
4.1 Introduction
4.1.1 Acoustic measurements
4.1.2 Perceptual judgements
4.1.3 Objectives of this study
4.2 Method
4.2.1 Stimuli
4.2.1.1 Audio recordings
4.2.1.2 Selection of the experimental stimuli
4.2.2 Listeners
4.2.3 Procedure
4.2.4 Data analysis
4.3 Results
4.3.1 Analysis 1: NH versus HI children
4.3.2 Analysis 2: NH versus HA and CI children
4.3.3 Analysis 3: HI children labelled as NH
4.4 Discussion
4.5 Conclusion
4.6 Supplementary materials

5. Rating the overall speech quality of hearing-impaired children by means of comparative judgements
5.1 Introduction
5.1.1 Acoustic studies on the speech characteristics of HI children
5.1.2 Perceptual studies and the role of listeners’ experience
5.1.3 Alternative perceptual approach: comparative judgement
5.1.4 Hypotheses
5.1.4.1 Ranking: which child sounds better?
5.1.4.2 Effect of listener group
5.2 Method
5.2.1 Stimuli
5.2.1.1 Audio recordings
5.2.1.2 Selection of the experimental stimuli
5.2.2 Listeners
5.2.3 Procedure
5.2.4 Data analysis
5.3 Results
5.3.1 Overall speech quality of NH and HI children
5.3.2 Overall speech quality of CI and HA children
5.3.3 Effect of the degree of listeners’ experience
5.4 Discussion
5.5 Conclusion
5.6 Appendix

6. Native and non-native listeners’ judgements on the overall speech quality of hearing-impaired children
6.1 Introduction
6.1.1 Acoustic and perceptual studies on the speech of HI children
6.1.2 Non-native perception
6.1.3 Overall speech quality
6.1.4 Native perception of overall speech quality
6.1.5 Aims of this study
6.2 Method
6.2.1 Listeners
6.2.2 Stimuli
6.2.3 Procedure
6.2.4 Data analysis
6.3 Results
6.3.1 Place in ranking
6.3.1.1 Place in ranking for NH and HI children
6.3.1.2 Place in ranking for children with CI and HA
6.3.2 Pairwise comparisons
6.4 Discussion
6.5 Conclusion
6.6 Supplementary materials

7. General discussion and conclusion
7.1 Main outcomes
7.1.1 Intelligibility
7.1.2 Identifiability
7.1.3 Intelligibility vs. identifiability
7.2 Relevance of this dissertation and directions for future research
7.2.1 Speech intelligibility
7.2.2 Speech identifiability

8. Summary
8.1 English summary
8.2 Nederlandse samenvatting

References


Dankwoord – Acknowledgements

Over the past years, I have had the privilege of devoting myself daily to the wonderful language acquisition of normally hearing children and children with hearing loss. With the help of several people, this doctorate ultimately came into being. I would like to thank them at the beginning of this dissertation.

First and foremost, I would like to thank my two supervisors, prof. dr. Steven Gillis and dr. Hanne Kloots. From the FWO application to the submission of this dissertation, they supported me enormously, both academically and personally. They gave me the freedom to do fieldwork, to make recordings and to run experiments with participants. Steven, thank you for sharing your expertise and your love of science. Without your sense of perspective, your feedback and your countless pushes in the right direction, this doctorate would never have existed. Hanne, thank you for standing for four years at the crossroads of your own expertise and the world of child language. Your ability to think along, your attention to practical implementation and your precision have shaped this doctorate and improved it immeasurably.

Next, I would like to thank the members of the doctoral committee, prof. dr. Jo Verhoeven and dr. Leo De Raeve. Jo, thank you for your motivating conversations, your interest in the research and your contribution to the first paper. Leo, thank you for showing me around the world of children with hearing loss and for welcoming me with open arms at your school, KIDS. In addition, I would like to thank external jury member prof. dr. Mieke Beers for reading this dissertation.


A heartfelt thank you also to the lovely colleagues at CLiPS, and in particular to the fantastic people of the third floor. Everyone's door was literally open for a work-related question or for a serious or light-hearted conversation. Jolien, you were there from my first day to my last, and you know exactly why you doubly deserve this mention here. Thank you! To everyone else who spent time on the third floor in recent years, Ilke, Lotte, Michèle, Pietro, Bénédicte and Liesbeth: a big thank you! And to the CLiPS colleagues who took part in my experiments as pilot participants and gave me very useful feedback: thank you!

During this project, I had the privilege of doing fieldwork. To all the audiologists and speech-language therapists from Limburg who took part in my experiments: thank you! Your enthusiasm for the topic and your practical knowledge meant that we often kept talking long after the experiment had ended. Many of the things we discussed then formed the basis of how I look at some of the results today.

A number of primary school teachers also took part in the same experiments. First of all, I would like to thank the management of the primary schools of Lanklaar and Dilsen for making it possible for their teachers to participate in these experiments. So hereby, Danny Pass: thank you for letting me return to "my" old school after all those years, both for the experiments with the teachers and for the recordings of a number of pupils. Thank you, Lutgarde Aerts, for receiving me with open arms, and of course thank you to all the teachers who cooperated in the experiments, not once but twice.

To my "inexperienced" listeners, aka friends, acquaintances and […] an experiment that was often miles removed from your lives and interests.

Thank you, D-PAC team, and especially Maarten Goossens and prof. dr. Sven De Maeyer, for thinking outside the box and for turning your software into something that could work with audio fragments. Sven, thank you also very much for explaining all the other statistical ambiguities to me in your familiar calm manner.

When you and your supervisors come up with the idea of setting up an experiment with Italian, French and German participants, a few accomplices abroad are no unnecessary luxury. A massive thank you to prof. dr. Marianne Kilani-Schoch and Giovanni Cassani for organising the French and Italian participants. This experiment would not have been possible without you.

I could also count on a great deal of help in recruiting students for the intelligibility studies. For this, I would like to thank prof. dr. Dominiek Sandra, dr. Guy De Pauw and dr. Sarah Bernolet. To their students who took part in the experiment: thank you for your enthusiasm and commitment.

No scientific research on the speech of children with hearing loss without these children, of course. Therefore, a big thank you to the children with hearing loss and their parents for their hospitality and their enthusiasm for the research.

The past years were dominated by this research, but friends and family (and the two loveliest feline companions, Proetskie and Cico) made sure that I put the laptop aside every now and then. A few of them I would like to thank explicitly here.

Mum and Dad, thank you for supporting me in everything I do and, during this project even more than usual, for simultaneously letting me go and always being there for me. Your unshakeable confidence, listening ear and encouraging words have ensured that this dissertation now lies here.

Robby, you have always been my big strong brother, but in recent months you have been the very definition of perseverance. Your persistence and fighting spirit are admirable beyond words and helped push me to the finish line of this thesis.

Rowena, thank you for pulling me away from my desk at regular intervals and providing the necessary relaxation. We go way back, and there are no words for how grateful I am for our unconditional friendship.

Finally, dearest André, thank you for always believing in me, motivating me and being patient with me. You gave me the freedom and time to focus entirely on this project, celebrated the small successes with me and comforted me in moments of frustration. Your unrelenting support and love are and were invaluable to me.


Chapter 1


1. General introduction

This dissertation reports on the speech and language of hearing-impaired primary-school-aged children. They were born with a hearing impairment and their hearing was aided very early in life. Two issues pertinent to their speech development are studied. The first relates to the speech intelligibility of these hearing-impaired children: is their speech as intelligible as that of their normally hearing peers? The second can be briefly described as follows: when these children enter primary school, is their speech distinguishable from the speech of normally hearing peers? In other words, is their speech still identifiable as that of hearing-impaired individuals? Several perceptual studies are reported in which the speech of hearing-impaired children with cochlear implants and acoustic hearing aids is compared to that of normally hearing peers.

Hearing is crucial for the development of oral language and speech. This simple observation has led to numerous studies on the speech of hearing-impaired individuals. In the 1970s and 1980s, research on hearing-impaired children focused on within-group comparisons of their speech production (Ferguson, 1981; Markides, 1970; McGarr, 1983; Mencke et al., 1983; Monsen, 1978). Gradually, as the technology of hearing devices evolved, studies started to compare normally hearing and hearing-impaired children's speech, and more recently it has increasingly been suggested that hearing-impaired children can reach normally hearing children's speech and language proficiency (Boons et al., 2013; Geers & Nicholas, 2013; Levine et al., 2016; Nicholas & Geers, 2007; Uchanski & Geers, 2003). That this is a possible outcome for congenitally hearing-impaired children is due to several health care related and technological advances that will be discussed in the next paragraphs.

First, early identification of hearing loss is crucial for an early start of rehabilitation and, ultimately, better outcomes. A major step in early identification is the Universal Neonatal Hearing Screening (Kretschmer & Kretschmer, 2010; Patel & Feldman, 2011; Verhaert et al., 2008; Yoshinaga-Itano, 2003). The goal of this screening is to assess the hearing of newborns in the first few weeks of their lives. In Flanders, i.e. the Dutch-speaking part of Belgium, the screening was introduced in 1998 and the Flemish infant welfare service, Kind & Gezin, now reaches around 98% of all newborns (Van Kerschaver & Stappaerts, 2012). The test is administered through an automated auditory brainstem response test (AABR) or otoacoustic emissions (OAE). In Flanders, 98 children were born with a hearing loss in 2018, which corresponds to 1.66 in 1000 newborns. Of these 98 children, 59 had a bilateral hearing loss. This finding is consistent with other studies on the prevalence of hearing loss in children (Korver et al., 2017; Russ et al., 2009).
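As a quick plausibility check (a derived figure, not one reported by Kind & Gezin), the two numbers above jointly imply the size of the screened 2018 birth cohort and the share of bilateral losses:

$$\frac{98}{1.66/1000} \approx 59{,}000 \ \text{newborns screened}, \qquad \frac{59}{98} \approx 60\% \ \text{bilateral}$$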

Early identification of the hearing loss enables an early diagnosis. Hearing loss differs in terms of the locus and the severity of the loss. Concerning the severity of hearing loss, six degrees are distinguished: slight (10-25 dB HL (decibels hearing level)), mild (26-40 dB HL), moderate (41-55 dB HL), moderately severe (56-70 dB HL), severe (71-90 dB HL) and profound (> 90 dB HL) (Clark, 1981). Of the 59 newborn children that were diagnosed with a bilateral hearing loss in 2018 in Flanders, the children with a moderate or moderately severe hearing loss constitute the largest […] severe hearing loss (20%) and a mild hearing loss (14%) (Van Kerschaver & Stappaerts, 2012). The remaining 10% of the children with a bilateral hearing loss displayed different degrees of hearing loss in both ears. Slight hearing losses are not reported by Kind & Gezin. This distribution of the hearing levels is in line with prevalence scores of other countries (Fortnum et al., 2002; Russ et al., 2009).
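For illustration, the Clark (1981) severity bands quoted above can be written as a small lookup function. This is a minimal sketch: the function name and the handling of thresholds below 10 dB HL are assumptions for illustration, not part of the classification itself.

```python
def clark_severity(db_hl: float) -> str:
    """Map a hearing threshold (dB HL) onto Clark's (1981) six degrees."""
    if db_hl < 10:
        return "normal"  # below the 'slight' band; not one of the six degrees
    bands = [
        (25, "slight"),             # 10-25 dB HL
        (40, "mild"),               # 26-40 dB HL
        (55, "moderate"),           # 41-55 dB HL
        (70, "moderately severe"),  # 56-70 dB HL
        (90, "severe"),             # 71-90 dB HL
    ]
    for upper_bound, label in bands:
        if db_hl <= upper_bound:
            return label
    return "profound"               # > 90 dB HL

# Example: the 85 dB HL Belgian CI candidacy threshold mentioned in section 1.1
print(clark_severity(85))  # -> severe
```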

The locus of the hearing loss determines the type of hearing loss. Three types are distinguished: conductive, sensorineural and mixed hearing loss. In the case of conductive hearing loss, the hearing loss is situated in the outer and/or the middle ear. For example, this type of hearing loss is caused by an obstruction that blocks the sound vibrations or a defect in the middle ear ossicular chain. In a sensorineural hearing loss, the inner ear or neural system shows a defect. Apart from the vestibular system, which is dedicated to balance, the inner ear consists of the cochlea. Most sensorineural hearing losses are thus due to a defect in the cochlea. A combination of a conductive and sensorineural hearing loss is called a mixed hearing loss (Kral & O'Donoghue, 2010; Stachler et al., 2012).

1.1 Hearing devices: cochlear implant vs. acoustic hearing aid

When audiometry has provided information on the locus and severity of the hearing loss, rehabilitation can start. There are different hearing devices, but this dissertation focuses on two specific devices: acoustic hearing aids and cochlear implants.


The acoustic hearing aid (henceforth: HA) is the traditional behind-the-ear hearing aid. It is most effective in conductive hearing losses. The basic principle behind the acoustic HA is to amplify sound. The cochlear implant (henceforth: CI), as the name suggests, is used for particular anomalies in the cochlea. For example, it is provided when the hair cells in the cochlea that are responsible for transmitting the sounds to the hearing nerve are defective or absent, resulting in a severe-to-profound hearing loss or complete deafness. A CI has a more complex structure than a HA and consists of an external and an internal part. The function of a CI is to bypass the defective cochlea. First, the microphone in the external part picks up the sound signal, which the speech processor converts into an electronic code. Through the transmitter, the code is transferred to the internal part, where an electrode array is surgically inserted in the cochlea. Depending on the frequency of the incoming signal, particular electrodes are activated and send the code as electric pulses to the hearing nerve. Finally, the brain recognizes the signal as sound.
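To make the signal path just described concrete, the following sketch mimics the analysis step in code: the signal is split into frequency bands, and the slowly varying envelope of each band stands in for the stimulation level of one electrode. It is a toy illustration under assumed parameters (a 12-band log-spaced filterbank between 200 Hz and 7 kHz), not the coding strategy of any actual implant.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def electrode_envelopes(signal, fs, n_electrodes=12, fmin=200.0, fmax=7000.0):
    """Split the signal into log-spaced bands, one per electrode, and
    return the envelope of each band (a stand-in for stimulation level)."""
    edges = np.logspace(np.log10(fmin), np.log10(fmax), n_electrodes + 1)
    envelopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)
        envelopes.append(np.abs(hilbert(band)))  # band envelope
    return np.array(envelopes)  # shape: (n_electrodes, n_samples)

# Toy input: a 500 Hz tone should mainly drive one of the lower-frequency bands.
fs = 16000
t = np.arange(fs) / fs
env = electrode_envelopes(np.sin(2 * np.pi * 500 * t), fs)
print(env.mean(axis=1).round(3))  # mean drive per band, low to high frequency
```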

For each of these devices, important remarks have to be made. Concerning the CI, it is important to note that the incoming signal is electronic. Hence, there is no certainty about how natural the incoming signal sounds for children who, prior to the implant, did not have any hearing. Recent research on CI users with single-sided deafness is starting to provide insights into this matter, since they can compare the sound signal through an implanted and a NH ear (Dorman et al., 2017). Moreover, the CI does not completely resolve the deafness or the severe-to-profound hearing loss. Overall, the hearing level (HL) after implantation is around […], and residual hearing that was present prior to the implantation can oftentimes not be preserved (O'Donoghue, 2013). In some patients, who have residual hearing in the lower frequencies, this issue can be solved by the use of a hybrid cochlear implant. Here, high frequencies are electrically stimulated and low frequencies are acoustically amplified (Woodson et al., 2010).

For an acoustic HA, different aspects apply. Especially for severe hearing losses, restoring the hearing loss with an acoustic HA is not straightforward and the quality of the incoming signal varies individually. The high degree of required amplification can lead to distortion of the sounds, discomfort or even damage to residual hearing (Agnew, 1998; Arehart et al., 2007; Souza, 2002). The spectrum of what a severely hearing-impaired individual can actually hear is so limited that all frequencies have to be squeezed into this limited area (Gregory & Drysdale, 1976). Because of issues like sound distortion, discomfort or damage, a CI, which has better frequency restoration, could be the better option for hearing-impaired individuals with a severe hearing loss (Leigh et al., 2016).

For the reimbursement of a CI, several candidacy criteria apply. These criteria differ between countries and between children and adults (Vickers et al., 2016). In Belgium, the most important requirement for children is a mean hearing threshold of at least 85 dB HL. Moreover, possible candidates have to wear acoustic HAs for a period of time prior to the implantation in order to demonstrate that these do not provide a sufficient perceptual benefit to the child. Increasingly, the – rather strict – candidacy criteria for cochlear implantation are under debate: there is a growing consensus that children with less severe unaided hearing levels of, for example, 65 dB HL can reach better outcomes with a CI than with an acoustic HA (Carlson et al., 2015; Leigh et al., 2016). With this evolution, the group of children with an acoustic HA will continue to shrink (Geers, 2006; Gillis, 2017).

In summary, the speech and language development of children who are born with a hearing impairment or acquire a hearing loss at a very young age is affected in at least three ways. First, the start of their lives is characterised by a period of auditory deprivation. Even when the hearing loss is identified in the first month of life, it can take several months until the most appropriate hearing device is fitted. Secondly, in many cases, the hearing loss cannot be fully restored and the child is left with a remaining hearing loss. Lastly, the incoming signal – especially for severe hearing losses – is degraded. Nonetheless, providing these children with a hearing device has been shown to improve their speech and language perception (Tyler et al., 1997). Consequently, "auditory information potentiates the development of specific principles of articulatory organization" (Tye-Murray et al., 1995: 336), resulting in the development of speech and language production.

1.2 Speech and language characteristics of children with a cochlear implant and an acoustic hearing aid

In the previous paragraphs, it was argued that a congenital hearing loss has implications for early auditory experience and, consequently, for speech and language perception and production. Considering that the focus of this dissertation lies on production, the following section only discusses studies on this topic. More specifically, this section will elaborate on the specific speech (§1.2.1) and language (§1.2.2) characteristics of hearing-impaired (henceforth: HI) children with CI and HA.

1.2.1 Speech characteristics of children with CI and HA

Studies on the speech of HI children have meticulously examined subsystems such as articulation, phonation, resonance and prosody.

Concerning articulation, the consonants and vowels in HI children’s speech have been the topic of numerous investigations. For vowels, Verhoeven et al. (2016) found a reduced vowel space, smaller acoustic differentiation between vowels, and more centralised vowels in the speech of HI children. For consonants, errors in voicing, place and manner of articulation were reported (Van Lierde et al., 2005).

Concerning phonation, children with CI as well as children with HA exhibit a slightly hoarse, rough and strained vocal quality, which, however, did not significantly differ from normally hearing (henceforth: NH) children (Baudonck et al., 2011a). Also in the study of Van Lierde et al. (2005), both HI groups displayed phonation in the normal range.

In contrast, deviances in the speech of HI children were found with respect to resonance. In the study of Lenden and Flipsen (2007), resonance quality has even been recognised as one of the most problematic aspects in these children’s speech. More specifically, the nasality of nasal sounds is considered too low, whereas oral sounds are judged to be hypernasal (Baudonck et al., 2015; Van Lierde et al., 2005). Specifically for children with HA, this nasality of oral sounds is significantly higher than that of NH children (Baudonck et al., 2015).


Prosody includes aspects such as rhythm, stress, pauses, and intonation. Overall, HI children seem to have poorer prosodic skills than their NH peers (Lyxell et al., 2009). For example, with respect to intonation, children with CI exhibit less appropriate intonation than their NH peers (Peng et al., 2008). Moreover, mastering the small nuances of word stress has been shown to be difficult for HI children (De Clerck, 2018; Lenden & Flipsen, 2007).

1.2.2 Language characteristics of children with CI and HA

The language development of HI children has also been the topic of many investigations. Since it is beyond the scope of this dissertation to fully document the impact of a hearing impairment on all linguistic aspects, this section will only briefly discuss the main findings.

Concerning the phonological development of HI children, studies have shown that children initially lag behind their NH peers (Ertmer & Goffman, 2011). More specifically, they demonstrate lower phonological accuracy and higher variability in their word productions. However, by age five, early rehabilitated children seem to catch up (Faes et al., 2016; Faes & Gillis, 2018). In a comparison between children with CI and HA, the latter group shows more phonological processes such as cluster reductions and substitutions (Van Lierde et al., 2005). Similarly, morpho-syntactic development initially shows some discrepancies between NH and HI children. For example, the mean length of utterance (MLU) of HI children is significantly lower, and they are shown to have difficulties with grasping inflectional morphological systems such as gender, tense or plural marking (Faes et al., 2015; Hammer, 2010; Nicholas & Geers, 2007). Then again, between the ages of five and seven, HI children tend to reach age-appropriate scores. However, the group of children with CI is characterised by a large degree of interindividual variation, meaning that some children with CI reach speech and language outcomes that are comparable with those of children with NH, whereas others do not (Duchesne et al., 2009).

In this dissertation, early rehabilitated HI children with seven years of device use will be investigated. As far as the linguistic aspects are concerned, HI children of this age seem to have mostly caught up with their NH peers. In contrast, with respect to speech-related aspects, children with several years of device use still exhibit acoustic deviances in several characteristics in comparison to their NH peers. These acoustic deviances gave rise to two research questions that introduce the main topics of this dissertation: (1) Do these acoustic cues affect the children's speech intelligibility? and (2) Do these acoustic cues render the children's speech identifiable for listeners?

1.3 Factors influencing the speech of children with CI and HA

In the previous paragraphs, the effects of hearing loss on the speech and language development of children with a hearing impairment were briefly discussed. Which factors influence hearing-impaired children's developmental path? Age at implantation and length of device use are often mentioned. However, there are many more factors that affect children's speech and language outcomes. The multitude of factors influencing language and speech outcomes makes the group of HI children rather heterogeneous. Before introducing the main topics of intelligibility (§1.4) and identifiability (§1.5), these characteristics will be discussed. The factors can be subdivided into three categories: auditory related, child related and environment related characteristics (Boons et al., 2013).

Auditory related factors

First, numerous hearing related variables have to be differentiated. Many of these variables are linked to the unique trajectory of an individual with a hearing loss. The first important factor is the onset of hearing loss. The onset can be prior to birth (congenital hearing loss), prior to acquiring the first language (prelingual hearing loss) or later in life (postlingual hearing loss). After detecting hearing loss, two additional factors determine further development: the aetiology and the degree of hearing loss. Other than an unknown aetiology (which is quite common, even after extensive testing), the two most frequent aetiologies for congenital hearing loss are a CMV (cytomegalovirus) infection of the child's mother during pregnancy or a genetic (for example connexin 26) mutation (Gillis, 2017). There is still some uncertainty about the effect of the aetiology, but overall, it seems that a hearing loss resulting from a CMV infection has less favourable outcomes with respect to speech and language than, for example, a connexin 26 associated hearing loss (Ramirez Inscoe & Nikolopoulos, 2004). Concerning the degree of hearing loss, a lower degree of hearing loss leads to better speech and language outcomes; vice versa, severe hearing losses have a far more negative impact on children's speech outcomes (Ching et al., 2018; Svirsky et al., 2000a; Tseng et al., 2011).

In addition to the degree of hearing loss and the aetiology, the type of hearing device and the time at which the HI child receives the device are important contributors to variability (Svirsky et al., 2000b). The type of hearing device that is provided depends on the type and the degree of hearing loss (as discussed in §1.1). Consequently, small hearing losses that are treated with an acoustic HA result in relatively good speech and language outcomes, whereas children with a severe hearing loss treated with an acoustic HA would possibly reach better results with a CI (Leigh et al., 2016). Another main influential factor is the age at which HI children receive their hearing device (Castellanos et al., 2014; Peng et al., 2004; Svirsky et al., 2007). Obviously, this factor depends on the onset of the hearing loss, i.e. when the hearing loss occurred. For congenitally HI children, the general agreement is that an earlier activation of a hearing device leads to the best results. For children with CI, the most appropriate age to implant has long been under debate, considering that the pros (providing auditory stimulation in the sensitive period and, hence, access to oral communication) have to be weighed against the medical risks of surgery at a very young age (Bruijnzeel et al., 2016; Holman et al., 2013; Moreno-Torres et al., 2016; Szagun & Stumper, 2012). However, in recent years, there is growing consensus on a so-called sensitive period, which is characterised by the high plasticity of the auditory system (Kral & Sharma, 2012). Therefore, implantation before the child's second birthday is recommended and has been shown to lead to better speech and language outcomes than later implantation (Habib et al., 2010; Nicholas & Geers, 2007; Ruben, 2018; Schafer & Utrup, 2016; Svirsky et al., 2007). For acoustic HA users, the situation is less complex and the device is provided at a very young age in order to keep the period of auditory deprivation as short as possible. Also, the fitting of a contralateral device, either simultaneously or sequentially, has a positive impact not only on directional hearing but also on speech and language development (Boons et al., 2013; Litovsky et al., 2006; Sadadcharam et al., 2016).

Moreover, some technical aspects of the cochlear implantation itself can be a factor as well. The questions of interest here are: was the electrode array fully inserted and could all electrodes be activated? Negative answers to those questions are indicative of poorer speech and language outcomes.

Also, the domain of cochlear implantation is still developing. Since the beginning of cochlear implantation, there have been technical advances as well as a shift in the candidacy criteria, leading to, for example, more and younger pediatric implant users. With respect to their speech and language outcome, children who were implanted in the early stages of pediatric implantation received a less advanced implant than children receive nowadays, which could lead to differences in their speech. Therefore, the calendar year of implantation has to be considered (Montag et al., 2014; Ruffin et al., 2013). Moreover, the aided hearing threshold, i.e. the remaining hearing loss while wearing the device, has been shown to affect speech outcomes (Laccourreye et al., 2015). Finally, the length of device use, i.e. the period of time starting at the activation of the hearing device until the moment of testing, is of importance. Because other factors, such as the onset of the hearing loss and the chronological age at implantation, vary between children, the length of device use indicates how long the child actually has been wearing a device. Considering that HI children have to get used to the auditory experiences, it is generally assumed that a longer length of device use equals better speech and language outcomes (Khwaileh & Flipsen, 2010; Szagun & Stumper, 2012). This is especially the case for children with CI, since they oftentimes have very minimal or no hearing prior to implantation and, thus, the change is the largest.

Child related factors

Secondly, there are child related factors such as gender, chronological age, intelligence and whether the child has additional comorbidities. The latter variable is of particular importance since it applies to 30-40% of the population of HI children and can greatly affect children's speech and language development (De Raeve, 2006; Nikolopoulos et al., 2008).

Environment related factors

Finally, environmental factors have to be considered. These are mostly related to the family and educational situation. With respect to the family situation, the most significant question is whether the child is raised by hearing or deaf parents. Approximately 90% of HI children are born to hearing parents (Kretschmer & Kretschmer, 2010), who tend to have a preference for oral communication. When only considering the linguistic aspects, the use of sign language is under debate. Whereas some studies state that children's speech and language development profits from bimodal communication (Mouvet et al., 2013), other studies indicate that "[i]f signs are the more salient aspect of communication, auditory and speech information will receive secondary attention. Thus, it might be that children who use total communication do not reach their potential in terms of speech development because of problems inherent in their method of communication" (Osberger et al., 1994: 178). The choice between oral, sign and total communication, i.e. a combination of oral and sign language, is also reflected in the educational track of the children.

Whereas mainstream schools are mainly focused on auditory-oral communication, special schools also provide sign support. Studies showed that children enrolled in mainstream schools reach better speech and language outcomes than children in special schools (Geers et al., 2003; Tobey et al., 2003). It should, however, be noted that children enrolled in special schools are also more prone to additional disabilities (De Raeve & Lichtert, 2012). Moreover, socio-economic status (SES) should be considered. Studies have shown that a higher income, higher maternal education and/or a generally higher SES are predictors of better speech and language outcomes for NH as well as CI children (Cupples et al., 2018; Vanormelingen, 2016).

Considering the large diversity of factors, it is practically impossible in an experimental context to take each of these factors into account and, at the same time, reach a decent number of participants. The present dissertation investigates the long-term speech outcome of children with CI and compares this to peers with NH and HA (chapters 4-6). In order to create two homogeneous, comparable HI groups, the CI and HA children met the following criteria. They received their device before their second birthday and their speech was assessed at primary school age, after seven years of device use. Additionally, the two HI groups were matched on their aided hearing threshold, i.e. the remaining hearing loss. Moreover, the children all had hearing parents. Consequently, the communication mode with the parents had a clear focus on oral communication with the use of signs as support. In order to ensure that the results could reasonably be ascribed to the children's hearing impairment, children with additional comorbidities were excluded from the research reported in this dissertation. Moreover, in chapter 3, two groups of children with CI who were implanted in different (calendar) years were compared. The remaining selection criteria were the same: the children were implanted before the age of two and, at the moment of testing, they were in the first years of primary school.

1.4 Intelligibility

One of the two main aims of this dissertation is to compare the speech intelligibility of NH and HI children. In this section, speech intelligibility will be defined (§1.4.1), the influencing factors will be discussed (§1.4.2) and the studies on the intelligibility of HI children will be reviewed (§1.4.3).

1.4.1 Defining intelligibility and its place in the speech chain

Intelligible speech – although only one of many processes in the speech chain – is crucial for a successful oral conversation. A model of the speech chain divides an oral conversation into several processes (Rietveld & van Heuven, 2016). Overall, the speech chain is divided into three major parts – production, transmission and perception – that are again subdivided into various processes. More specifically, the production of the speaker starts with the intention for a message. Next, this idea is mentally formulated. The final step of the production consists of actually pronouncing the planned string of words. Following speech production, the speech stream is transmitted to the listener. Here, the environment in which or the medium with which the utterance is sent can enhance or reduce the successfulness of an utterance, e.g., a noisy and crowded room vs. a silent soundproof booth. After the transmission, the first step of the speech perception of the listener is to acoustically hear the speech stream. After hearing the signal, the listener segments the speech stream into linguistic units, e.g., words, morphemes or phonemes (van Heuven & de Vries, 1983). This process of detecting and identifying linguistic units is referred to as speech intelligibility. The final step in speech perception is comprehension. Here, the listener analyses the information represented by the linguistic units he/she identified and brings to bear his/her linguistic and contextual knowledge in order to reconstruct the intended meaning of the message (Fontan et al., 2017; Kloots & Gillis, 2012; Miller, 2013). Finally, it should be noted that, even though the processes of the model are represented separately, they are intertwined and can overlap. For example, listeners do not wait until the end of an utterance before starting to decode.

The difference between intelligibility and comprehensibility is subtle and may even be a matter of considering the processes from a narrower or broader perspective. Hence, in the literature, there is some ambiguity in the definitions of speech intelligibility. For example, some definitions, such as "the degree to which the speaker's intended message is recovered by the listener" (Kent et al., 1989: 483), "the ability of listeners to understand the performance of speech accurately" (Ozbic & Kogovsek, 2010: 42) and "the ability to make oneself understood" (Flipsen & Colvard, 2006: 94), clearly contain terminology that refers to comprehensibility rather than speech intelligibility. Other definitions, such as "speech intelligibility is a measure of the extent to which listeners receive the verbal information that speakers intend to present" (Li et al., 2018: 136) and "number of whole words correctly recognized out of those that were actually produced by the child" (Freeman et al., 2017: 281), emphasize the process of intelligibility. In this dissertation, intelligibility will be defined as a process in oral communication in which the listener identifies linguistic units in the speech stream produced by the speaker (Freeman et al., 2017; van Heuven, 2008; Whitehill & Ciocca, 2000).

In summary, intelligibility is a process in the speech chain that is situated in the perception of the listener, yet is majorly influenced by the speech stream that is sent by the speaker. Or, as Kent et al. (1994: 81) stated: "intelligibility is a joint product of a speaker and a listener". Both parties contribute to whether or not an utterance is intelligible and successful. More specifically, unintelligibility can result from an inherently unintelligible speech utterance of the speaker, but also from disordered speech perception of the listener. This makes intelligibility a two-way research domain. First, intelligibility can refer to a measure of the speech perception of the patient. For example, the functioning of hearing aids can be assessed by measuring the proportion of sounds that is intelligible for the patient (Francart et al., 2017). In this setting, intelligibility measures the auditory skills of people with a hearing loss. Secondly, intelligibility is a measure of speech production skills. Here, the focus lies on the proportion of speech sounds that is intelligible to others, i.e. on how well the speaker was able to transmit them. The latter perspective is one of the main topics of this dissertation.

1.4.2 Importance of speech intelligibility and factors influencing speech intelligibility measurements

Intelligible speech is required for daily interactions, and studies have shown that intelligibility is linked with social well-being. For example, intelligibility and hearing peers' attitude towards HI children were shown to correlate: as intelligibility decreased, peers rated the personal qualities (cognitive and emotional-behavioural factors) of the speaker more negatively (Most et al., 1999). Low intelligibility also affected the speaker's psychosocial competences. Parental reports revealed that several measures, such as anxiety and social withdrawal, correlated with intelligibility (Freeman et al., 2017). Thus, low intelligibility can lead to social isolation and lower psychosocial outcomes.

Considering the social consequences, reaching intelligible speech is an important milestone in the development of children. For NH children, the benchmark of reaching intelligible speech is set at around four years of age (Chin & Tsai, 2001; Weiss, 1982). Children who have reached this age and whose transcribed connected speech is less than 66% intelligible are candidates for speech and language therapy (Gordon-Brannan & Hodson, 2000).


Although there is consensus on the importance of intelligible speech, the aspects that contribute to the intelligibility of an utterance are less clear. In the following paragraphs, four factors will be discussed separately. Obviously, intelligibility depends on the speaker (§1.4.2.1) and the listener (§1.4.2.2). In an experimental or diagnostic setting, two additional factors contribute: the type of utterance (§1.4.2.3) and the type of measurement (§1.4.2.4).

1.4.2.1 Role of the speaker

When considering intelligibility merely as a product of the speaker, intelligible speech requires a combination of "prerequisite speech perception abilities to learn and understand speech, the linguistic knowledge to plan and execute spoken utterances, and the motor control abilities to articulate meaningful sentences" (Montag et al., 2014: 2332). In other words, intelligible speech requires, among other things, accurate speech and language production.

With respect to speech production, intelligibility is oftentimes considered "the product of a series of interactive processes as phonation, articulation, resonance and prosody" (De Bodt et al., 2002: 284). Of these aspects, articulation is the most obvious and allegedly the most important contributor to intelligible speech (Baudonck et al., 2010b). It is, however, a misconception that deviances in articulation automatically lower intelligibility. In a study of Ertmer (2010) on the relation between intelligibility and word articulation, it was found that the articulation quality of words cannot reliably predict a child's connected speech intelligibility. This can be due to the fact that particular characteristics of speech such as articulation can be measured objectively, yet these measures do not necessarily correspond to how listeners perceive children's overall speech intelligibility (Uchanski & Geers, 2003).

Moreover, intelligible speech requires more than just adequate speech. As Ferguson (1981: 1) stated: "child's intelligibility depends upon his ability to transmit the language code; a task which includes knowledge and proficiency with language – the expression of ideas through words". In other words, linguistic aspects possibly affect speech intelligibility. For example, phonological distance measures, i.e. measures that compare the child's production with the target at the phoneme level, have been shown to correlate with speech intelligibility (Sanders & Chin, 2009). Morphological and syntactic features, such as the length of the utterance, the position of words and the syllabic structure, have also been suggested to affect perceived intelligibility (Chin & Tsai, 2001; Gordon-Brannan & Hodson, 2000; Weston & Shriberg, 1992). Again, there does not seem to be a direct relation between intelligibility and a specific linguistic aspect. Rather, all aspects affect intelligibility interactively (Van Lierde et al., 2005). Also, speech and language aspects are suggested to interact. More specifically, listeners seem to use compensating mechanisms: if the speech of an individual is particularly affected, listeners will rely more on linguistic components in order to decode the utterance of the speaker. Vice versa, listeners will rely more on clear speech if the speaker demonstrates linguistic errors (Osberger, 1992).
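A phoneme-level comparison of the kind referred to above can be illustrated with a much-simplified sketch: an edit distance between the target and the child's production, with phonemes represented as single characters. This is an illustrative stand-in, not the specific measure used by Sanders and Chin (2009).

```python
def edit_distance(target, produced):
    """Phoneme-level Levenshtein distance: the minimum number of phoneme
    substitutions, insertions and deletions turning `produced` into `target`."""
    m, n = len(target), len(produced)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # delete all remaining phonemes
    for j in range(n + 1):
        d[0][j] = j  # insert all remaining phonemes
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if target[i - 1] == produced[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

# Hypothetical example: target /kat/ produced as /tat/ -> distance 1
print(edit_distance(list("kat"), list("tat")))  # 1
```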


1.4.2.2 Role of the listener

As discussed in the paragraph on the speech chain, the question whether an utterance is considered (un)intelligible is determined by the speaker as well as the listener (Kent et al., 1994; Rietveld & van Heuven, 2016). In other words, when listeners are required to judge children's (disordered) speech, intelligibility is "a measure of how well a listener can 'tune in to' a deviant speech pattern" (Ferguson, 1981: 11). Individual characteristics of listeners thus have an influence on the perceived intelligibility. For example, in order to have a successful conversation, listeners have to know the same language as the speaker. Another example: a hearing impairment may hamper a swift conversation (Flipsen, 2008). Another influencing factor is the amount of listener experience. More specifically, studies often suggest that an increased amount of experience with a specific type of speech facilitates its intelligibility (Klimacka et al., 2001; McGarr, 1983; Monsen, 1978; Munson et al., 2012; Osberger, 1992). For example, listener groups such as primary school teachers or speech and language pathologists have had more experience with (HI) children's speech and are thus expected to rate the speech intelligibility of these children higher than inexperienced listeners.

There are two types of studies in which the variability in listeners is taken into account. On the one hand, there are studies in which the listeners are already experienced with a particular type of speech prior to the experiment (amongst others: Flipsen, 1995; McGarr, 1983; Munson et al., 2012). On the other hand, there are studies in which listeners are familiarised with a particular type of speech as part of the study (Beukelman & Yorkston, 1980; Borrie & Schafer, 2015; Ellis & Beltyukova, 2008; Ferguson, 1981; Tjaden & Liss, 1995). In this dissertation, the judgements of listeners who have experience with the speech of (HI) children will be compared to those of inexperienced listeners.

Studies comparing experienced and inexperienced listeners do not show unequivocal results. On the one hand, some studies found that the intelligibility judgements of experienced listeners were consistently higher than those of inexperienced listeners (McGarr, 1983; Monsen, 1978), that experienced listeners used context more efficiently (Osberger, 1992) or that they made more reliable and valid judgements (Klimacka et al., 2001; Munson et al., 2012). On the other hand, some studies did not find this difference between inexperienced and experienced listeners. They reported that both listener groups performed similarly (Flipsen, 1995; Gillis, 2013; Grandon, 2016; Mencke et al., 1983). Thus, in these studies, the experience that, for example, speech and language pathologists have gathered in their professional career did not lead to a higher proficiency in judging HI children's speech.

There are several possible explanations for these contradictory results. One possibility is related to the speech intelligibility of the children. Some studies suggest that the children's intelligibility might be so low that – even with considerable experience – decoding the speech is simply impossible (Mencke et al., 1983; Svirsky et al., 2007). The opposite scenario is also possible: the intelligibility is so high that experienced as well as inexperienced listeners perceive perfectly what the speaker intended to say.


Moreover, there are several reasons why experienced listeners might rate an utterance as highly intelligible. First, it can be expected that experienced listeners are indeed better at decoding children's speech and thus judge their speech intelligibility more correctly or more consistently. However, it can also be expected that experienced listeners overestimate children's speech because of their experience with a wide variety of speech. Since they hear different degrees of disordered speech on a daily basis, their perception of unintelligible speech might have shifted: whereas inexperienced listeners may find one type of speech fairly unintelligible, experienced listeners may appreciate it better because they – consciously or unconsciously – compare it to an even less intelligible type of speech. Thus, a high intelligibility score is due either to better speech decoding skills or to overestimation by the experienced listener (Beukelman & Yorkston, 1980; Munson et al., 2012).

1.4.2.3 Type of utterance

Imitated/read speech vs. spontaneous speech

Concerning the speech samples that are used for intelligibility assessments, previous studies have explored several options. First of all, one of the main distinctions that has to be made is whether the speech samples originate from imitated or read speech or from spontaneous speech. In imitated speech tasks, speakers are either prompted to imitate speech utterances that are demonstrated by the examiner or to read utterances aloud. Prior to the assessment, the researcher selects the speech samples that the speaker has to imitate or read. Next, listeners judge the speech samples by means of, for example, orthographic transcriptions. The fact that the researcher has written transcripts, i.e. a model, of the utterances that were imitated by the speakers has the important advantage that the number of correctly identified words in the transcriptions of the listeners can easily be calculated. Alternatively, the speech intelligibility assessment is performed by means of spontaneous speech samples. Spontaneous speech can be elicited by pictures, in informal conversations or by story (re)telling. From these speech recordings, speech samples are extracted and presented to the listeners. The main advantage is that spontaneous speech is more representative of day-to-day speech and thus has a higher ecological validity. The main disadvantage is that, in contrast to imitated speech, there is no model of the speech utterances. In other words: spontaneous speech utterances are unpredictable and uncontrollable to a large extent. The lack of a model also has implications for the scoring procedure, since the utterances of the speaker cannot be compared with a prewritten transcript.

Words vs. longer speech samples

The second factor concerning the type of utterance is related to the linguistic unit. The spectrum of linguistic units ranges from small units such as words up to longer stretches of speech. Each type of utterance implies a different elicitation task for the speaker. For example, at the word level, the most common tasks are a picture naming task or reading a list of words (Huttunen, 2008; Löhle et al., 1999; Vieu et al., 1998). At the sentence level, there are several preset imitation tasks that recur in several studies. The most frequently used sentence tasks are the Beginner's Intelligibility Test (BIT; Osberger et al., 1994), the Monsen Sentence Test (Monsen, 1978) and the McGarr Sentence Intelligibility Test (McGarr, 1981). A combination of both the BIT and the Monsen Sentence Test is often performed when the intelligibility of illiterate children (i.e. BIT) as well as older children (i.e. Monsen Sentence Test) is assessed in one study (Ertmer, 2010; Miyamoto et al., 1996; Svirsky et al., 1999; Svirsky et al., 2000a). In these tasks, the examiner reads the sentence aloud and the speaker is instructed to repeat the sentence or, if the children are old enough, they read the sentences themselves. In some cases, the sentences are illustrated by pictures or objects (Osberger et al., 1994). For longer stretches of speech, the speech samples are extracted from recordings of informal conversations. Thus, whereas the first two linguistic units, i.e. words and sentences, mostly originate from an artificial experimental setting, the longer sequences of speech are mostly extracted from spontaneous conversations. In the present dissertation, short sentences as well as longer stretches of speech will be investigated by means of speech samples extracted from spontaneous speech.

1.4.2.4 Type of measurement

In the previous paragraphs, it was already established that studies on intelligibility vary in numerous respects. As a result, there is a large range of different types of measurement. Depending on the speaker, the listener and the type of utterance, (1) the task of the listener and (2) the quantification of the intelligibility score can differ.

Concerning the listener's task, the most suitable approach mostly depends on the type of utterance. For imitated or read speech, transcriptions as well as rating scales are commonly used. More specifically, at the word or sentence level, listeners are mostly instructed to transcribe (orthographically or phonetically, depending on their expertise) the utterance of the child (Chin et al., 2012; Chin et al., 2003; Ertmer, 2007; Mencke et al., 1983; Monsen, 1978; Montag et al., 2014; Osberger et al., 1994; Svirsky et al., 2000a; Tobey et al., 2004). Longer stretches of imitated speech, for example a read text, can in principle also be transcribed, but this is an extremely time-consuming task for the listeners. Therefore, in this context, the listeners are most likely asked to judge the read text on rating scales. However, it should be noted that read texts are more frequent in the assessment of adults' speech than children's speech. Therefore, they are not further discussed. Spontaneous speech is almost exclusively judged with rating scales (AlSanosi & Hassan, 2014; Bakhshaee et al., 2007; De Raeve, 2010; Ellis & Pakulski, 2003; Toe & Paatsch, 2013; Van Lierde et al., 2005), because (1) such studies mostly investigate long stretches of spontaneous speech that would be unfeasible to transcribe and (2) because transcribing spontaneous speech has some important limitations that also affect the calculation of the intelligibility score.

Generally, the researcher’s approach to calculating the intelligibility score is fairly straightforward. For transcriptions (of imitated or read speech), the intelligibility score is mostly the percentage of correctly transcribed items, averaged over the listeners. For rating scales, the listeners’ ratings are averaged. For transcriptions of spontaneous speech, however, the calculation of a percentage correct is problematic because there is no model of the utterance: the researcher cannot compare the listener’s transcription to the “original” utterance that the speaker had in mind. Nevertheless, some studies have attempted the transcription of spontaneous speech. In the studies of Flipsen and Colvard (2006) and Lagerberg et al. (2014), listeners were instructed to transcribe the spontaneous speech samples orthographically and to mark unintelligible syllables, for example with “0”. For the calculation of the intelligibility score, Flipsen and Colvard (2006) tested different approaches to estimate the number of unintelligible words based on the number of unintelligible syllables. For example, one measure was based on the assumption that (English) child speech contains on average 1.25 syllables per word.

The number of unintelligible words is then the number of unintelligible syllables divided by 1.25. In the study of Lagerberg et al. (2014), the conversion of syllables into words was not performed. Instead, the intelligibility score was determined by dividing the number of intelligible syllables by the total number of syllables. However, both studies approach intelligibility in a rather coarse manner and rely heavily on the judgements of the individual listeners. Thus, there is a need for an intelligibility measure that is practicable and more explicit than estimations of the number of (un)intelligible syllables. In this dissertation, an alternative approach to assessing the intelligibility of spontaneous speech is explored (chapter 2).
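To make these calculations concrete, the sketch below implements the measures described above on invented toy data: the percentage correct averaged over listeners (imitated or read speech), the proportion of intelligible syllables (cf. Lagerberg et al., 2014) and the estimated number of unintelligible words (cf. Flipsen & Colvard, 2006). The sample transcriptions and function names are hypothetical illustrations of the formulas, not the original authors’ implementations.

# Illustrative sketch (Python) of the intelligibility calculations
# described above; all sample data are invented for demonstration.

SYLLABLES_PER_WORD = 1.25  # assumed average for English child speech

def percent_correct(targets, listener_transcriptions):
    """Percentage of correctly transcribed target words, averaged over
    listeners, as used for imitated or read speech."""
    scores = []
    for transcription in listener_transcriptions:
        hits = sum(t == r for t, r in zip(targets, transcription))
        scores.append(100 * hits / len(targets))
    return sum(scores) / len(scores)

def syllable_intelligibility(syllables):
    """Proportion of intelligible syllables; listeners mark
    unintelligible syllables with "0"."""
    return sum(s != "0" for s in syllables) / len(syllables)

def unintelligible_words(syllables):
    """Estimated number of unintelligible words: unintelligible
    syllables divided by 1.25 syllables per word."""
    return sum(s == "0" for s in syllables) / SYLLABLES_PER_WORD

# Imitated speech: two listeners transcribe the same three target words.
targets = ["ball", "dog", "house"]
listeners = [["ball", "dog", "mouse"], ["ball", "fog", "house"]]
print(percent_correct(targets, listeners))    # approx. 66.7

# Spontaneous speech: eight syllables, two of which were unintelligible.
utterance = ["the", "dog", "0", "ran", "a", "way", "0", "fast"]
print(syllable_intelligibility(utterance))    # 0.75
print(unintelligible_words(utterance))        # 1.6

The sketch makes the limitation explicit: for spontaneous speech, both measures hinge entirely on each individual listener’s decision to mark a syllable as unintelligible, which is precisely the coarseness referred to above.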

1.4.3 Studies on the intelligibility of hearing-impaired children

Beginning of cochlear implantation: comparisons CI-HA

In the domain of HI children’s speech, there is a long tradition of investigating speech intelligibility. In the 1970s and 1980s, several studies examined the speech of deaf or partially-hearing children and adolescents through transcription tasks and included the judgements of both experienced and inexperienced listeners. In terms of scores, there seemed to be a consensus that the speech of deaf individuals is around 20% intelligible to inexperienced listeners (Markides, 1970; Osberger et al., 1993; Smith, 1975; Svirsky et al., 2000a). Interestingly, all studies reported higher intelligibility scores for the experienced listeners (Ferguson, 1981; Markides, 1970; McGarr, 1983; Monsen, 1978; but see Mencke et al., 1983).

In the 1990s, the positive effect of cochlear implantation on speech perception was established and studies started to address the speech intelligibility of children with CI. Initially, researchers were especially interested in how children with CI performed as opposed to those with a traditional acoustic HA. For example, the study of Miyamoto et al. (1996) found that CI users reached speech intelligibility scores of over 40% after 3.5 years of device experience. These scores exceeded those of HA users with a hearing threshold between 101 and 110 dB HL, but were lower than the scores obtained by HA users with a lower hearing threshold. Similarly, Osberger et al. (1994) reported a mean score of 48% for CI users that used oral communication as their primary communication mode. Thus, in the early studies on the speech intelligibility of children with CI, the children did not yet reach comparable levels of speech intelligibility to children with traditional HA. However, these studies were characterised by large ranges in chronological age and hearing thresholds of the tested subjects.

Moreover, the lack of additional (hearing-related) information within and between the studies complicates the interpretation of and comparisons between those early studies. Gradually, factors such as the age of implantation were taken into account (Löhle et al., 1999; Miyamoto et al., 1999). More specifically, it was found that HA children with hearing levels of 90-100 dB HL and the group of CI children that was implanted earliest, i.e. between the ages of 2 and 4, both obtained intelligibility scores of 60-90% (Löhle et al., 1999). Compared to the study of Miyamoto et al. (1996), children with CI thus seemed to have improved to the point that they were on a par with HA users. This trend continued and children with CI increasingly surpassed children with HA (Baudonck et al., 2010b; Gillis, 2017; Lejeune & Demanez, 2006). For example, in the study of Van Lierde et al. (2005), listeners rated the intelligibility of children with CI as only slightly impaired, whereas that of the children with HA was judged to be moderately impaired. Thus, in the studies that compared the speech intelligibility of CI and HA children, CI users initially performed poorly, yet developments such as earlier implantation meant that children with CI often outperformed peers with HA, leading to “better outcomes than have historically been expected for children with severe and profound hearing impairments” (Flipsen & Colvard, 2006: 103).

Current state of the art: comparisons CI-HA-NH

Because of the promising results of cochlear implantation, “language acquisition and development on a par with that of children with normal hearing is no longer considered unrealistic” (Chin et al., 2003: 441). Therefore, rather than comparing with children with HA, the speech of children with CI was increasingly often compared with that of NH children.

One of the first studies comparing the two groups consisted of a transcription task of imitated short sentences (Chin & Tsai, 2001). In this longitudinal study of 3-to-6-year-old children with CI, the mean intelligibility score was significantly lower than that of NH children at each year interval. However, intelligibility increased with each year and some children reached scores of over 80% (Chin & Tsai, 2001). Therefore, children with CI are considered “gap closers” when their development of speech intelligibility is compared longitudinally to that of NH children (Nicholas & Geers, 2007; Yoshinaga-Itano et al., 2010).

However, the group of children with CI is characterised by a large amount of variability. This variability can be ascribed to the heterogeneity of CI users and thus, more specifically, to characteristics related to the individual child, their hearing or their environment (Boons et al., 2013).

Amongst others, studies on intelligibility have taken into account the mode of communication, e.g. oral or total communication (Osberger et al., 1994; Tobey et al., 2004; Vieu et al., 1998), the age of implantation (AlSanosi & Hassan, 2014; Habib et al., 2010; Tye-Murray et al., 1995) and the length of device use (Montag et al., 2014; Peng et al., 2004; Uziel et al., 2007).

Focus: factors age of implantation and length of device use

The question whether age of implantation or length of device use is most crucial for children with CI has been an ongoing discussion for quite some time. Concerning the age of implantation, there is convincing evidence of a so-called sensitive period in which the plasticity of the auditory system is at its highest (Kral & Sharma, 2012). Therefore, implantation within this time frame, i.e. preferably before the age of two, is recommended and leads to the best speech and language outcomes (Nicholas & Geers, 2007; Svirsky et al., 2007). Similarly, in terms of the
