
3. Study 1: A longitudinal study into the acquisition of spatial devices in two SL2-learners of NGT

3.4 Method

The study reported here draws on data collected in a longitudinal cohort study carried out between 2009 and 2015 by the Deaf Studies Research Group (DSRG) of Utrecht University of Applied Sciences (UUAS, Hogeschool Utrecht). Randomly selected students (n = 43) of four successive cohorts, who participated voluntarily, were followed during their four-year undergraduate program to become either an interpreter or a teacher of NGT at UUAS. In order to compare the SL2-learner data to the production of L1-signers, we analyzed the production data of L1-signers who performed the same task.

3.4.1 Participants

For this particular study into the use of space, we transcribed and coded the data of two hearing female SL2-participants, referred to here with the pseudonyms Anna and Charlotte. Both SL2-participants were 19 years old when they enrolled in the program. The two SL2-participants had neither deaf acquaintances nor prior knowledge of NGT at the onset of their study and this research. Both participants had learned three foreign (spoken) languages in secondary school and had Dutch as their L1.

The three L1-participants (referred to as Nina, Peter, and Tess) performed the same task. Their mean age was 31 years. Two have deaf parents, and one has hearing parents. All have been using sign language since early childhood.

3.4.2 Procedures

The SL2-participants were interviewed by a skilled (deaf or hearing, see Appendix 3B) signer every ten weeks. These six- to ten-minute interviews covered everyday topics that were of interest to the participant, such as family, work, and hobbies. The interviews were not scripted, nor were subthemes determined in advance. When necessary, the interviewer encouraged the participant to expand their descriptions by asking follow-up questions. We obtained permission from the three L1-participants to analyze an existing recording of them performing the same task. The recorded interviews were transcribed and annotated.

The transcription was done in ELAN5 (Crasborn & Sloetjes, 2008) by two annotators and trained student assistants. Utterances of both interviewer and participant were marked on separate tiers and provided with a (free) translation. Subsequently, each sign produced by the participant was represented by a Dutch gloss on two tiers representing the dominant hand and the non-dominant hand. The eight transcriptions made by the student assistants were checked and corrected by annotator 1 (first author). The other recordings were fully transcribed by annotator 1 (n = 8) and annotator 2 (n = 13).

Both annotators were hearing, fluent SL2-signers who had received linguistic training. To ensure consistency of transcription, they used a codebook created for this purpose and organized meetings to discuss uncertainties. If needed, a linguist and/or a deaf informant were consulted.

After each meeting and subsequent consultation, the codebook was adjusted and refined. Inter-rater reliability was calculated for the glosses of three interviews (10 percent of the set), resulting in satisfactory inter-rater reliability figures of 86 percent, 93 percent, and 86 percent.
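The inter-rater reliability figures reported above can be understood as simple percent agreement over aligned gloss pairs. A minimal sketch of such a computation is given below; the function name and the example glosses are hypothetical and assume the two annotators' gloss tiers have already been aligned sign-by-sign, which is a simplification of the actual ELAN workflow.

```python
# Minimal sketch: percent agreement between two annotators' glosses,
# assuming the gloss tiers are already aligned sign-by-sign.
def percent_agreement(glosses_a, glosses_b):
    """Share (in percent) of aligned gloss pairs on which both annotators agree."""
    if len(glosses_a) != len(glosses_b):
        raise ValueError("gloss tiers must be aligned to the same length")
    matches = sum(a == b for a, b in zip(glosses_a, glosses_b))
    return 100 * matches / len(glosses_a)

# Hypothetical example: two annotators gloss the same five signs,
# disagreeing on one of them.
annotator_1 = ["INDEX", "WORK", "NICE", "HOUSE", "GO"]
annotator_2 = ["INDEX", "WORK", "GOOD", "HOUSE", "GO"]
print(percent_agreement(annotator_1, annotator_2))  # 80.0
```

Percent agreement is the simplest agreement measure; it does not correct for chance agreement, which chance-corrected statistics such as Cohen's kappa would.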

Once the utterances had been glossed, they were identified as analyzable or not. Incomplete utterances were excluded, as well as clarification requests, sign negotiation, frozen routines, and minors. Routines are signs or utterances that are memorized or fossilized (Lillo-Martin, Quadros, Berk & Hopewell-Albert, 2015), and minors are short utterances of which “no productive morphosyntactic structure can be presumed” (Van den Bogaerde, 2000, p. 51). For this study, we considered as minors short back-channeling responses or evaluations of what was uttered, produced in isolation, such as NICE, EXCITING, GOOD. The last category to be excluded was exact imitations of an utterance of the interviewer. A schematic representation of the process of exclusion can be found in Appendix 3A.
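The exclusion step described above can be thought of as a simple filter over labeled utterances. The sketch below is purely illustrative: the category labels and the dictionary layout are hypothetical stand-ins for whatever annotation scheme was actually used in ELAN.

```python
# Illustrative sketch of the exclusion step, assuming each utterance
# carries (hypothetical) category labels assigned during glossing.
EXCLUDED_CATEGORIES = {
    "incomplete", "clarification_request", "sign_negotiation",
    "routine", "minor", "imitation",
}

def is_analyzable(utterance):
    """An utterance is analyzable if none of its labels is an exclusion category."""
    return not (set(utterance["labels"]) & EXCLUDED_CATEGORIES)

# Hypothetical data: a full utterance and an isolated evaluation ("minor").
utterances = [
    {"gloss": "INDEX WORK NICE", "labels": []},
    {"gloss": "NICE", "labels": ["minor"]},
]
analyzable = [u for u in utterances if is_analyzable(u)]
print(len(analyzable))  # 1
```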

An overview of the recordings, their length, the number of utterances and the number of included and excluded items, the number of analyzable signs, and the total amount of time each participant was signing is provided in Appendix 3B.

5 ELAN is developed by the Max Planck Institute for Psycholinguistics, The Language Archive, Nijmegen, The Netherlands (https://tla.mpi.nl/tools/tla-tools/elan/).

3.4.3 Coding

The analyzable utterances were coded by annotator 1 for the occurrence of devices to create and utilize spatial loci. Each instance of a sign being produced at a non-neutral location, as well as signs being directed towards non-neutral locations, was assigned a code corresponding to one of the categories displayed in Figure 3.2. An additional code was added if the sign was repeated for clarification. These repetitions were removed from the dataset at a later stage in order to prevent overestimation of participants’ performance. Moreover, extra codes were added to indicate whether a sign was modified according to the viewpoint of a character or from an observer perspective. Additional tiers were created to categorize the nature of pointing signs (e.g., establishment, reference, repetition) and to provide extra information about the verbs produced.

To avoid overestimation of use of space, we were conservative in coding instances of ambiguous spatial modification. An ambiguous form resembles the citation form, which makes it hard to identify whether the sign is spatially modified. When a sign resembled the citation form, but the locations of the established referent(s) were congruent with the location(s) attached to the verb, point, or nominal, the signs were coded as modified, with an additional code ‘congruent’, following Cormier et al. (2015).

During the coding, a logbook was kept describing coding decisions, uncertainties, and typical (learner) behavior. Uncertainties were discussed with two sign language linguists, and codes were adjusted if needed. After finishing the process, all transcriptions and codes were re-examined.

Furthermore, a subset of the data (10 percent) was re-coded to check coding consistency, resulting in intra-rater reliability scores of 83 percent (interview 303-2B) and 84 percent (interview 307-1D).

In the next section, we present the main findings, starting with a quantitative analysis, followed by a qualitative analysis.