
Table of Contents

1. The internship provider – Speech Lab Groningen
2. Research activities and outcomes
2.1. Feedback perturbation project
2.1.1. Research question and hypotheses
2.1.2. Learning outcomes and activities
2.2. Thesis project – Speech production in congenitally blind speakers of Australian English
2.2.1. Project background, research question and hypotheses
2.2.2. Activities and learning outcomes
2.3. Speech Lab activities outside of main project
2.4. Anticausatives in Serbian project
3. Internship reflection


Internship report

Name of student: Milica Janic
Student number: s3859134
Name of EMCL+ supervisor: Dörte de Kok

Name of internship supervisor: Martijn Wieling

This report details the research activities, learning outcomes, and reflection on the student internship done in partial fulfilment of the requirements for the completion of the EMCL+ programme (European Master’s in Clinical Linguistics+). The internship was done at the Speech Lab Groningen during the last semester of the programme, beginning on March 1st, 2020 and ending in October 2020. The main internship supervisor was prof. dr. Martijn Wieling, with most of the daily supervision provided by PhD candidate Teja Rebernik. Prof. dr. Dörte de Kok served as the internal EMCL+ internship supervisor. Due to the extraordinary circumstances caused by the worldwide COVID-19 pandemic, the majority of internship activities were conducted online.

1. The internship provider – Speech Lab Groningen

Speech Lab Groningen, operating within the University of Groningen, is led by prof. dr. Martijn Wieling. The lab specialises in the investigation of speech production using state-of-the-art acoustic and articulatory methods, including electromagnetic articulography and ultrasound tongue imaging. Its members include Bachelor’s and Master’s students as well as PhD candidates and research assistants working on the lab’s several currently active projects. Disordered speech and dialectal variation within the Netherlands are the major areas of focus for the lab’s research, including work on speech in Parkinson’s disease and Duchenne muscular dystrophy, as well as L1 and L2 speech.

Speech Lab Groningen was chosen as the internship provider for several reasons. Since the lab focuses on both disordered speech and phonetics research, it was an ideal research environment to deepen and develop the knowledge acquired during the EMCL clinical linguistics programme. I have been very interested in the possibilities of investigating language disorders through the lens of phonetics since the beginning of EMCL+ when this discipline was the focus of the semester spent at the University of Eastern Finland. As the rest of the programme emphasises other approaches and disciplines, a chance to participate in speech perception and production research seemed like a good learning opportunity. The possibility of learning about advanced research tools such as electromagnetic articulography and the opportunity to work within a real lab setting in collaboration with a team of scientists also contributed to this choice. Since the lab includes students at all stages as well as experienced researchers, it was a suitable environment for an inexperienced intern to receive thorough instruction and supervision while being involved with real projects.

2. Research activities and outcomes

The internship consisted of work on three different projects at various stages. The first project, concerning the influence of individual differences in auditory perception on the extent of adaptation to feedback perturbation, was intended to be the main project used for the thesis, but was soon replaced with a new project because COVID-19 regulations placed limits on in-person data collection. The main internship project was concerned with speech production in congenitally blind speakers of Australian English, and the data analysed during the project was also used in the writing of the master’s thesis. Finally, a small contribution was made to a project about anticausative verbs in Serbian/Croatian/Bosnian outside of the Speech Lab, in collaboration with prof. dr. Srdjan Popov and PhD candidate Nermina Cordalija. In addition to the main projects, the internship included in-person and later online biweekly lab meetings, as well as weekly progress meetings with Teja Rebernik. I also presented my thesis/internship project to the Neurolinguistics group at the University of Groningen (online). I wrote two research proposals, one of which evolved into my master’s thesis.

2.1. Feedback perturbation project

Initially, the agreed-upon internship project focused on feedback perturbation in healthy young speakers of Dutch and was part of a PhD project on speech planning and monitoring in Parkinson’s disease led by Teja Rebernik. The plan was to use the data coming out of this project to compare the young healthy population to Parkinson’s patients, as well as to pilot the experiment for that study. Due to the COVID-19 pandemic, only the preliminary stage of this project was realised: I researched the topic, developed a research question and hypotheses to be tested, and wrote a research proposal on that basis. I took no further part in the project beyond this step.

2.1.1. Research question and hypotheses

Speech motor control is a complex process which consists of two mechanisms: speech planning and speech monitoring (Guenther et al., 2006). According to the DIVA (Directions Into Velocities of Articulators) model of speech production (Guenther et al., 2006; Tourville & Guenther, 2011; Guenther & Vladusich, 2012), the motor maps which encode the previously learned motor programs linking speech sounds and articulatory movements comprise the feedforward control mechanism, while the online detection and correction of differences between the planned auditory signal and the perceived auditory feedback make up the feedback control mechanism. These two systems work simultaneously, allowing fluent speakers to quickly adapt their speech when there is a mismatch between the expected and perceived auditory signal (Guenther et al., 2006). The two systems have been studied separately using random and consistent auditory feedback perturbations (e.g. Houde & Jordan, 1998; Purcell & Munhall, 2006; Liu et al., 2012). Random perturbations should affect the feedback mechanism, as online monitoring of the speech signal is required, whereas consistent perturbations should affect the feedforward control system, since speakers are required to update their motor speech maps.
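As a rough illustration of how the feedback mechanism can drive an update of the feedforward command under a consistent perturbation, consider the following toy simulation. This is my own simplification for illustration, not the DIVA model itself; all numbers (target F1, shift size, learning rate) are invented:

```python
# Toy sketch of feedforward adaptation to a consistent feedback
# perturbation (an illustration of the idea, not the DIVA model).
# The speaker intends an F1 of 500 Hz; the auditory feedback is
# shifted +100 Hz; the feedforward command is updated from the
# perceived error on each trial.

def simulate(intended=500.0, shift=100.0, rate=0.3, trials=30):
    command = intended                     # current feedforward command (Hz)
    for _ in range(trials):
        perceived = command + shift        # perturbed auditory feedback
        error = intended - perceived       # mismatch detected by monitoring
        command += rate * error            # update the feedforward map
    return command

final = simulate()
print(round(final))   # prints 400: the command compensates for the +100 Hz shift
```

The fixed point of the update is the intended value minus the shift, which mirrors the compensatory (opposite-direction) productions reported in perturbation studies.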

Although healthy participants usually have no issue adapting to feedback perturbations, considerable variability in individual adaptation has been reported (e.g. Cai et al., 2010; Villacorta et al., 2007). The DIVA model (Guenther et al., 2006) predicts that fluent speakers with relatively good auditory acuity will more quickly and accurately detect the changes that perturbations introduce into the auditory signal. Villacorta, Perkell & Guenther (2007) tested consistent perturbations of vowel formants (F1) and found a weak correlation between the participants’ auditory acuity and the amount they adapted to the perturbation. Martin and colleagues (2018), who looked at both F1 and F2 formant perturbation, conclude that general auditory acuity is a very good predictor of adaptation to perturbed auditory feedback. However, other studies found no such correlation (Feng et al., 2011; Cai et al., 2012). It is thus not clear to what extent perception abilities affect adaptation to feedback perturbations in healthy young people.


I formulated the following research question: To what extent do speakers adapt to feedback perturbations depending on how well they perceive small differences in the speech signal, and does this differ between random and consistent feedback perturbations? The planned method was to use both random and consistent auditory feedback perturbations in healthy young participants, while correlating their results with their performance on a perception task. Based on the DIVA model and the results of previous studies (Villacorta et al., 2007; Martin et al., 2018), I hypothesised that a correlation would be found between individual auditory acuity and the magnitude of adaptation to feedback perturbations, with participants with better perception ability showing greater adaptation. I also hypothesised that this relationship would be stronger for random perturbations than for consistent ones.
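The planned correlational analysis can be sketched as follows. This is a minimal Python illustration with made-up numbers (the real analysis would have been carried out in R):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Hypothetical per-participant data: perception-task scores vs
# adaptation magnitude (Hz) -- invented values for illustration.
acuity = [0.55, 0.62, 0.70, 0.78, 0.90]
adaptation = [18.0, 25.0, 24.0, 35.0, 41.0]
print(round(pearson_r(acuity, adaptation), 2))
```

A positive r here would correspond to the hypothesis that better auditory acuity goes with greater adaptation.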

2.1.2. Learning outcomes and activities

Work on this project involved the thorough reading and summarisation of the relevant literature in the field of speech production, auditory feedback, and feedback perturbation, as well as the independent formulation of a research question, hypotheses, and a planned methodology. As I was unfamiliar with this topic prior to the internship, this helped prepare me for the literature review to be done for the thesis and acquainted me with the leading theories and methods used within the subfield of phonetics. Despite the fact that the project was ultimately abandoned, I also gained valuable experience writing a research proposal.

2.2. Thesis project – Speech production in congenitally blind speakers of Australian English

The central part of the internship was participation in Speech Lab Groningen’s project on speech production in congenitally blind speakers of Australian English. I joined the project after the data had already been collected by Pauline Veenstra, the research assistant responsible for it. The data was collected in collaboration with Macquarie University in Australia, and participants were recruited with the approval of the Macquarie University Human Sciences Human Research Ethics Committee (Ref. nr. 5201800341). I was responsible for analysing a part of the data and formulating appropriate research questions and hypotheses, which I later incorporated into my master’s thesis.

2.2.1. Project background, research question and hypotheses

Visual cues provided by the visible articulators such as the lips and jaw are not redundant in the process of speech perception. There is an interactive relationship between audio and visual cues as demonstrated by the McGurk effect (McGurk & MacDonald, 1976), where presentation of incongruent audio and visual stimuli results in perception of a phoneme that is a combination of the two modalities. Presentation of stimuli in the audiovisual modality also results in better identification scores than either modality alone in tasks with acoustic noise (Robert-Ribes et al., 1998). However, the fact that blind speakers are able to produce sounds correctly indicates that visual input is not vital in speech production. The link between perception and production has been demonstrated in several studies, meaning that the role of visual input in perception may translate to production as well. Perkell and colleagues (2004) showed that discrimination accuracy between vowels in perception is related to production of greater acoustic contrasts in the same vowels. In cochlear implant users who have diminished auditory perception, vowel spacing in production is smaller than in hearing speakers (e.g. Ménard et al. 2007). Infants are sensitive to both audio and visual cues in both perception (Kuhl & Meltzoff, 1982) and production (Legerstee, 1990). Deprivation of visual input in infants and children has been found to result in phonological disorders and delays (Mills, 1983; Elstner, 1983), suggesting that visual cues are important in speech acquisition.


Previous studies on speakers of French have found that congenitally blind speakers produce smaller contrast distances between vowel pairs and have smaller average vowel spacing (AVS) than sighted speakers (e.g. Ménard et al., 2009; Ménard et al., 2016). A study on Dutch found the opposite pattern of results (Veenstra et al., 2018), indicating that the effect may be language-specific, as different languages have different articulatory settings (Honikman, 1964). However, yet another study on French speakers found no difference between the acoustic realization of vowels in blind and sighted participants (Turgeon et al., 2020). The contradictory results may be explained by differences in methodology: the studies differ in vowel context, formant frequency value scales, the number of formants extracted for analysis, the chosen vowel contrasts, and the method for calculating vowel spacing and contrast distances. Another explanation could be the small number of subjects in each study, as well as individual differences between the blind speakers.

The thesis which was written on the basis of this project aims to answer two main research questions:

1) Is there a difference in the acoustic realization of monophthongs of Australian English between sighted and congenitally blind speakers?

2) Does the analysis strategy for formant extraction (manual vs automatic) affect the results when comparing vowel production between sighted and blind speakers of Australian English?

We hypothesized that we would find a difference between blind and sighted speakers, with blind speakers producing vowels spaced closer together in the vowel space than sighted speakers, in line with previous studies on French (e.g. Ménard et al., 2009; Ménard et al., 2013) as well as preliminary results of a study done on the same data (in publication). This would translate to a smaller VSA, smaller AVS, smaller contrast distances between key vowel pairs, and more within-category dispersion. We also expected that this difference would be more pronounced in manually extracted data compared to automatically extracted data, since automatic data is less precise and more error-prone, and may thus obscure small differences between the groups.

2.2.2. Activities and learning outcomes

The tasks I was assigned within this project included the following: literature search and review, independent formulation of research questions and hypotheses based on pre-existing data, formulation of a suitable methodology for the data analyses, analysis of a selected subset of the pre-collected data (manual and automatic extraction of vowel formant frequencies using Praat (Boersma & Weenink, 2019)), calculation of relevant acoustic measures on the basis of the extracted vowel formant frequencies using the statistical programming language R (R Core Team, 2013) and the RStudio environment (RStudio Team, 2020), and collaboration with the other researchers involved in the project.

The output produced from these activities consisted of a dataset containing all extracted vowel formants as well as datasets with the performed calculations, a written thesis proposal, and finally a master’s thesis. The activities performed are explained in more detail below.

2.2.2.1. Praat data analysis

For the purposes of answering the research questions described in the previous section, I used Praat (Boersma & Weenink, 2019) to perform manual and automatic formant analysis on a subset of the pre-collected data. As part of the prior project, the acoustic recordings had already been manually checked and pre-processed in Praat, and the target vowels from the elicited tokens had been manually segmented and labelled by a research assistant, resulting in Praat TextGrids with marked phonemic boundaries. Before proceeding to extract formant values, I checked the segmentation and adjusted boundaries where necessary (this happened rarely). The data consisted of 1311 recorded target words produced by ten blind and ten sighted Australian English speakers in carrier sentences. Each sentence contained three target words with the same target vowel, and each speaker produced two sentences per vowel, which made for 66 tokens per speaker, 6 for each of the 11 vowels. The carrier sentences were elicited in the form “V-word as in hVd as in V”, where V-word is a word beginning with the target vowel, hVd is the target vowel inserted into an h_d consonantal frame to form a word or pseudoword, and V is the vowel produced in isolation.
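The per-speaker token counts above follow directly from the design, which can be spelled out as:

```python
# Design arithmetic for the elicitation: each speaker produced
# 2 carrier sentences per vowel, each containing 3 target words,
# for each of the 11 target vowels.
vowels = 11
sentences_per_vowel = 2
words_per_sentence = 3

tokens_per_vowel = sentences_per_vowel * words_per_sentence   # 6
tokens_per_speaker = vowels * tokens_per_vowel                # 66
print(tokens_per_vowel, tokens_per_speaker)
```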

Manual formant value extraction was done on each token separately. First, the sound file and the corresponding TextGrid were loaded into Praat. For each recording, the automatically determined formant tracks (generated by Praat’s LPC algorithm and accessed through the menu option “Show formants”) were used as a basis for extracting the formants. Praat’s standard settings were used for each file: maximum frequency 5500 Hz, 5 formants, window length 0.025 s, and dynamic range 30 dB. A visual examination of the spectrogram with superimposed formant tracks served to determine whether the formant points were placed correctly, matching the visually distinct concentrations of acoustic energy in the darker bars of the spectrogram. If the tracking was correct, an approximate midpoint was determined visually to serve as the extraction point. If another part of the vowel showed better stability, determined by the F1, F2, and F3 formant bands being visually distinct and clearly present, it was chosen over the visual midpoint, as long as it was not the very beginning or end of the vowel (to avoid coarticulation effects). For each vowel, the three formant frequencies were extracted one by one using the keyboard shortcuts F1, F2, and F3. If it was clear that the automatically generated formant tracks were inconsistent with what was visible on the spectrogram, the formant settings were adjusted for each problematic token individually until a satisfactory-looking formant track was produced before extracting the formants. In most cases, this involved reducing the number of formants to be tracked to three or four, as well as correspondingly reducing the maximum frequency at which to look for formants, depending on the speaker, the target vowel, and the context.
The maximum frequency at which to look for formants was initially set by looking at the approximate value of F3 on the spectrogram and adjusting from there based on gender (lower for male speakers) and the F0 of the speaker in question. A trial-and-error approach was then used to find the settings which best represent the formant values as seen on the spectrogram. An example of a correction can be seen in Figure 1, where F3 was mistracked as being much lower than it should be, in the same energy band as F2. The formant track after settings adjustment can be seen in Figure 2. The duration of each vowel was also manually recorded.


Figure 1. Example of a mistracked token of /æ/; formant tracks shown in red

Figure 2. The vowel token from Figure 1 after adjusting the settings to a 4200 Hz maximum and 3 formants; formant tracks shown in green

Automatic extraction was done using a Praat formant value collection script taken from Lennes (2003) and edited to extract more than one point per vowel. It specified that formants F1 to F5 should be captured at 11 time points per vowel, starting at 25% of the vowel duration and proceeding in steps of 5% until 75%. The window between 25% and 75% was chosen to avoid coarticulation effects. Standard settings (5500 Hz maximum, 5 formants, window length 0.025 s, dynamic range 30 dB) were used for female speakers, the same as the baseline used for manual extraction. For male speakers, due to their lower F0 values, a maximum of 5000 Hz was used instead.

As I was a very basic Praat user before the internship, I was first taught how to properly segment speech and how to extract formants manually. I learned the importance and meaning of the different formant settings and how to adjust them. This allowed me to expand on the knowledge I gained from courses held at the University of Eastern Finland, such as the Speech Synthesis course and Methods in Speech Corpora, and to develop the skills necessary to independently analyse speech in Praat. I also learnt to examine spectrograms, waveforms, and spectral slices and to identify formant tracks by visual inspection, through analysing a large number of tokens over several months. In addition, I deepened my knowledge of Praat scripting and gained practice in using existing scripts and adjusting them to the needs of my research. I now feel confident using Praat independently in future projects.
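The sampling scheme of the automatic extraction described above can be sketched as follows. This is a minimal Python illustration, not the actual Praat script, and the vowel boundary times are hypothetical:

```python
# Sketch of the automatic-extraction sampling scheme: formants are
# sampled at 11 relative time points per vowel, from 25% to 75% of
# the vowel's duration in steps of 5%.

def sample_times(start, end, n_points=11):
    """Return absolute times at 25%..75% of the vowel duration."""
    duration = end - start
    # relative positions 0.25, 0.30, ..., 0.75
    positions = [0.25 + 0.05 * i for i in range(n_points)]
    return [start + p * duration for p in positions]

# Hypothetical vowel boundaries (seconds) for illustration:
times = sample_times(1.00, 1.40)
print(len(times))                               # 11 sampling points
print(round(times[0], 3), round(times[-1], 3))  # 1.1 1.3
```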

2.2.2.2. Data analyses – acoustic measures

In order to test my hypotheses, I chose several acoustic measures of vowel space size to calculate for blind and sighted speakers. Vowel space area (VSA) was calculated per speaker per context, in order to quantify the overall difference in vowel spacing between blind and sighted speakers, using the phonR package (McCloy, 2016). The corner vowels were defined as /iː/, /æ/, /ɐː/, /oː/, and /ʊ/ by visual inspection of the vowel plots for each speaker and speaker group. VSAs were also calculated with four corner vowels (/iː/, /æ/, /ɐː/, /oː/) in order to compare the two VSA configurations. The calculations were done separately for manual and automatic data (midpoint values). Euclidean distances (EDs) between vowel pairs differing in place of articulation only (/ʉː/ vs /oː/ and /æ/ vs /ɐ/), and in place of articulation and rounding (/iː/ vs /ʉː/, /ɪ/ vs /ʊ/, /ɔ/ vs /e/, and /eː/ vs /ɜː/), were calculated first in the F1×F2 plane (two dimensions) between the mean manual values of each vowel in a pair, per context and for every speaker separately, using the following formula:

ED = √((F1a − F1b)² + (F2a − F2b)²)

where a and b denote the two vowels in the pair. Following this, the same calculation was applied to distances in three-dimensional space by adding an (F3a − F3b)² term to the formula. Average vowel spacing (AVS) is a measure mostly used with clinical populations; it is calculated as the average of the EDs between all possible vowel pairs in the vowel space, for each speaker per context (Lane et al., 2001). Finally, I calculated intra-category dispersion as the ED between each vowel token of a category and the mean of that category, per speaker (Ménard et al., 2009).
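The three distance-based measures described in this section can be sketched as follows. This is a minimal Python illustration with invented formant values; the actual calculations were done in R:

```python
import math
from itertools import combinations

def euclidean_distance(a, b):
    """ED between two vowels given as (F1, F2[, F3]) tuples in Hz."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def average_vowel_spacing(vowel_means):
    """AVS: mean ED over all vowel pairs (Lane et al., 2001)."""
    pairs = list(combinations(vowel_means.values(), 2))
    return sum(euclidean_distance(a, b) for a, b in pairs) / len(pairs)

def dispersion(tokens):
    """Intra-category dispersion: mean ED of each token to the category mean."""
    n = len(tokens)
    mean = tuple(sum(t[i] for t in tokens) / n for i in range(len(tokens[0])))
    return sum(euclidean_distance(t, mean) for t in tokens) / n

# Hypothetical (F1, F2) means for three vowels of one speaker:
means = {"i:": (300.0, 2300.0), "a:": (750.0, 1300.0), "o:": (450.0, 900.0)}
print(round(euclidean_distance(means["i:"], means["o:"]), 1))
print(round(average_vowel_spacing(means), 1))
```

Adding an F3 value to each tuple gives the three-dimensional version of the measures without any change to the code.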

In the process of data analysis, I had to learn about the benefits and disadvantages of each possible vowel space size measure, learn to calculate each of them, and learn to write the appropriate code in the R programming language (R Core Team, 2013) in order to apply the calculations to my data. Although I attended two introductory statistics courses during the EMCL+ programme which also used R, I was not very confident in its use and lacked the skills to write extensive code of the kind needed for this project. Over the course of working on the data, I developed these skills considerably and became a more confident R user, thanks to the guidance of Teja Rebernik and Martijn Wieling. This also helped me with the statistical analyses for my thesis. During the internship I was also granted access to a very comprehensive introductory statistics course for PhD students taught by prof. dr. Martijn Wieling, which enabled me to develop my limited knowledge of linear mixed-effects models in particular, which I then employed for my statistical analyses.

2.3. Speech Lab activities outside of main project

Besides the main project I was involved in, the internship also included other activities as a member of the Speech Lab. I participated in biweekly lab meetings (later moved online due to the corona crisis), where I experienced how a typical research group functions and was familiarised with many current topics within speech research through the work of other members. Each meeting started with an updates session in which everyone recounted the work they had done since the last meeting and raised any problems or concerns; the other members would then help think through possible solutions or give helpful comments. I found this a very accommodating environment in which to freely ask questions and participate in discussion. During the internship, the Lab was in the process of equipping a Speech Lab Van to serve as a mobile experiment lab. This could enable researchers to reach more participants than usual, especially those with movement issues (as in Parkinson’s disease) or those who live outside of Groningen (dialectal research). It was quite informative and fun to witness this process from concept to final execution, with all the steps involved in such a project, from design matters to administrative issues. The van was discussed in several lab meetings: we talked about which features it needed to have, how it should look, and how it should work. Outside of lab meetings, I also had weekly meetings with Teja Rebernik, who helped me with any and all questions, issues, and tasks related to both my thesis and my internship. Before the corona crisis, in one of these meetings I was briefly shown how to properly attach electrodes for electromagnetic articulography (EMA) and how to prepare the subject, and I was present for part of the data collection performed on one subject. This was a unique opportunity to experience the use of EMA first-hand, as it is a state-of-the-art method for collecting data on the articulatory movements of the tongue, lips, and jaw. In another meeting, I was shown, along with other students from the lab, a basic tutorial on how to use MATLAB to analyse data produced by EMA. I had never used MATLAB prior to this, so I found this very interesting and useful for the future.

Additionally, I had meetings with my supervisor prof. dr. Martijn Wieling on several occasions to discuss the project I was working on. I also got a chance to meet with prof. dr. Michael Proctor to discuss my project and clarify questions about Australian English and possible analyses suitable for the features of that variety of English to study blind speech. The presentation I held for the University of Groningen Neurolinguistics group on my thesis project and progress was very valuable as well, since I got helpful comments and experienced what it was like to present my work to other researchers.

2.4. Anticausatives in Serbian project

Outside of the Speech Lab, I also took a small part during my internship in a project about anticausatives in Serbian/Croatian/Bosnian, led by prof. dr. Srdjan Popov and Nermina Cordalija, together with other EMCL+ students (Danja Lješević and Arina Veseli). We agreed to assist in stimuli creation and data collection. The stimuli we helped create were sentences with 10 Serbian verbs in four conditions (transitive SVO, transitive OVS, anticausative SV, anticausative VS). The project was based on the following research questions:

1. Does the scrambled word order in OVS sentences in Bosnian/Croatian/Serbian/Montenegrin induce greater processing cost at the verb and postverbal regions?

2. Is there a greater processing load in postverbal regions in case of SV sentences with anticausative verbs due to reactivation of the subject at the gap position?

3. Internship reflection

I was very satisfied with the internship, as I feel that the knowledge and skills I developed and/or deepened were very much in line with EMCL+, following up nicely on the courses I attended during the programme. Courses such as Methods in Speech Corpora and Speech Synthesis acquainted me with Praat, allowing me to develop this knowledge further during the internship. As phonetics is not as well represented as other fields of linguistics in the programme, the internship provided ample opportunity to build on an area in which I had little experience. I had the chance to experience a speech lab environment and to see how a research group functions. Rather than only learning about the project I was assigned, through the weekly lab meetings I learnt about the various topics researched by other lab members and saw how problems and setbacks are dealt with in a real-world setting. Being part of a larger research project was another aspect of the internship that I appreciated, as I learnt how to coordinate with other researchers working on the same project and how large projects are managed.

The competencies and acquired skills included the following:

1) Acquisition of new knowledge in the field of phonetics and acoustic analysis, as well as on the topic of congenital blindness and visual information in speech

2) Technical skills

2.1) Use of Praat (Boersma & Weenink, 2019) for speech analysis – segmentation, vowel formant extraction, visual inspection of spectrograms, etc.

2.2) Use of the RStudio environment (RStudio Team, 2020) for statistical analysis and acoustic data measurements

2.3) Basic use of the MATLAB program

2.4) Proper techniques and procedures for applying tongue electrodes for electromagnetic articulography (EMA) and preparing a subject for the experimental session

3) Communication skills

3.1) Participation in a research group on an academic project, with biweekly lab meetings and weekly progress meetings with the daily supervisor (Teja Rebernik)

3.2) Presentation of the research project to the members of the Neurolinguistics group at the University of Groningen

4. References

Boersma, P., & Weenink, D. (2019). Praat: doing phonetics by computer [Computer program]. Version 6.1.08, retrieved 5 December 2019 from http://www.praat.org

Cai, S., Beal, D. S., Ghosh, S. S., Tiede, M. K., & Guenther, F. H. (2012). Weak responses to auditory feedback perturbation during articulation in persons who stutter: Evidence for abnormal auditory-motor transformation. PLoS ONE, 7, e41830. doi:10.1371/journal.pone.0041830

Cai, S., Ghosh, S. S., Guenther, F. H., & Perkell, J. (2010). Adaptive auditory feedback control of the production of formant trajectories in the Mandarin triphthong /iau/ and its pattern of generalization. The Journal of the Acoustical Society of America, 128, 2033–2048. doi:10.1121/1.3479539


Feng, Y., Gracco, V. L., & Max, L. (2011). Integration of auditory and somatosensory error signals in the neural control of speech movements. Journal of Neurophysiology, 106, 667–679.

Guenther, F. H., & Vladusich, T. (2012). A neural theory of speech acquisition and production. Journal of Neurolinguistics, 25, 408–422.

Guenther, F. H., Ghosh, S. S., & Tourville, J. A. (2006). Neural modeling and imaging of the cortical interactions underlying syllable production. Brain and Language, 96(3), 280–301.

Honikman, B. (1964). Articulatory settings. In D. Abercrombie, D. B. Fry, P. A. D. MacCarthy, N. C. Scott, & J. L. M. Trim (Eds.), In honour of Daniel Jones: Papers contributed on the occasion of his eightieth birthday 12 September 1961 (pp. 73–84). London: Longman.

Houde, J. F., & Jordan, M. I. (1998). Sensorimotor adaptation in speech production. Science, 279, 1213–1216.

Legerstee, M. (1990). Infants use multimodal information to imitate speech sounds. Infant Behavior and Development, 13(3), 343–354. doi:10.1016/0163-6383(90)90039-B

Liu, H., Wang, E. Q., et al. (2012). Vocal responses to perturbations in voice auditory feedback in individuals with Parkinson’s disease. PLoS ONE, 7(3).

Martin, C. D., Niziolek, C. A., Duñabeitia, J. A., Perez, A., Hernandez, D., Carreiras, M., & Houde, J. F. (2018). Online Adaptation to Altered Auditory Feedback Is Predicted by Auditory Acuity and Not by Domain-General Executive Control Resources. Frontiers in Human Neuroscience, 12. doi: 10.3389/fnhum.2018.00091

McGurk, H., & MacDonald, J. (1976). Hearing lips and seeing voices. Nature, 264(5588), 746-748. doi:10.1038/264746a0

Ménard, L., Polak, M., Denny, M., Burton, E., Lane, H., Matthies, M. L., . . . Vick, J. (2007). Interactions of speaking condition and auditory feedback on vowel production in postlingually deaf adults with cochlear implants. The Journal of the Acoustical Society of America, 121(6), 3790–3801. doi:10.1121/1.2710963

Ménard, L., Toupin, C., Baum, S. R., Drouin, S., Aubin, J., & Tiede, M. (2013). Acoustic and articulatory analysis of French vowels produced by congenitally blind adults and sighted adults. The Journal of the Acoustical Society of America, 134(4), 2975–2987.

Ménard, L., Trudeau-Fisette, P., Cote, D., & Turgeon, C. (2016). Speaking clearly for the blind: Acoustic and articulatory correlates of speaking conditions in sighted and congenitally blind speakers. PLoS One, 11(9), e0160088. doi:10.1371/journal.pone.0160088

Mills, A. E. (1983). The development of phonology in the blind child. In A. E. Mills (Ed.), Language acquisition in the blind child: Normal and deficient (pp. 18–41). London & Canberra: Croom Helm.

Perkell, J. S., Guenther, F. H., Lane, H., Matthies, M. L., Stockmann, E., Tiede, M., & Zandipour, M. (2004). The distinctness of speakers’ productions of vowel contrasts is related to their discrimination of the contrasts. The Journal of the Acoustical Society of America, 116(4), 2338–2344. doi:10.1121/1.1787524

Purcell, D. W., & Munhall, K. G. (2006). Adaptive control of vowel formant frequency: Evidence from real-time formant manipulation. The Journal of the Acoustical Society of America, 120, 966–977.

Robert-Ribes, J., Schwartz, J.-L., Lallouache, T., & Escudier, P. (1998). Complementarity and synergy in bimodal speech: Auditory, visual and audiovisual identification of French oral vowels in noise. The Journal of the Acoustical Society of America, 103(6), 3677–3689.

Tourville, J. A., & Guenther, F. H. (2011). The DIVA model: A neural theory of speech acquisition and production. Language and Cognitive Processes, 26(7), 952–981.


Turgeon, C., Trudeau-Fisette, P., Lepore, F., Lippé, S., & Ménard, L. (2020). Impact of visual and auditory deprivation on speech perception and production in adults. Clinical Linguistics & Phonetics, 1–27. doi:10.1080/02699206.2020.1719207

Veenstra, P., Everhardt, M. K., & Wieling, M. (2018). Vision deprived language acquisition: Vowel production and ASR efficacy. Poster presented at the 16th Conference on Laboratory Phonology (LabPhon), Lisbon, Portugal.

Villacorta, V. M., Perkell, J. S., & Guenther, F. H. (2007). Sensorimotor adaptation to feedback perturbations of vowel acoustics and its relation to perception. The Journal of the Acoustical Society of America, 122, 2306–2319. doi:10.1121/1.2773966
