Looking for complexity: an analysis of NGT nonsense signs in the Dependency Model

Academic year: 2021

Share "Looking for complexity: an analysis of NGT nonsense signs in the Dependency Model"

Copied!
78
0
0

Bezig met laden.... (Bekijk nu de volledige tekst)

Hele tekst

30-7-2018

Looking for complexity
An analysis of NGT nonsense signs in the Dependency Model

UNIVERSITY OF AMSTERDAM
MA General Linguistics
Supervisor: Dr. Roland Pfau
Lianne Vink


Samenvatting

[Dutch summary] Researchers often use non-word repetition tasks to describe language acquisition and to diagnose children with a language disorder. Such tasks have also been developed for British Sign Language and Sign Language of the Netherlands (NGT) in the form of a 'non-sign repetition task' (NSRT). In this task, signers see a sign that is phonotactically possible in their language but does not exist, and are then asked to repeat it. A key component of a non-word/non-sign repetition task is the manipulation of the complexity of the non-words/non-signs. Researchers have mostly manipulated the phonological parameters handshape and movement for non-signs, but the complexity of a (non-)sign has never been related to a specific phonological model. In the first part of this study, a set of 40 non-signs from an NSRT for NGT was analysed within the Dependency Model, and the complexity of their (phonological) representation was determined. The analysis shows that the non-signs can be categorized into five different complexity levels, based on the number of phonological features that are needed to describe a sign. In the second part of the study, the new complexity scores were related to the imitations of eight Deaf signers who took part in the NSRT, to investigate whether the complexity of the non-signs can predict the number of mistakes made by the signers. The results indeed show that the odds that a signer correctly imitates a 'simple' sign are higher than the odds that a signer correctly imitates a 'complex' sign. In addition, the phonological knowledge of signers plays an important role in the NSRT, as the phonological system sometimes intrudes into the signers' imitations, resulting in a 'default' articulation of a sign.
The results show that the Dependency Model can be used to reveal the complexity of a non-sign. However, complexity can be calculated in different ways, and so the search for complexity is still ongoing.


Abstract

Non-word repetition tasks have gained wide acceptance among researchers for describing language acquisition and diagnosing children with a language disorder. Such tasks have also been developed for British Sign Language and Sign Language of the Netherlands (NGT) in the form of a non-sign repetition task (NSRT). In this task, signers see a sign that is phonotactically possible in their language but does not exist, and are asked to repeat it immediately. A crucial part of non-word/non-sign repetition tasks is the manipulation of the non-words/non-signs for complexity. For non-signs, researchers have mainly manipulated the phonological parameters handshape and movement, but (non-)sign complexity has never been related to a specific phonological model. Therefore, in the first part of this study, a set of 40 non-signs from the NSRT for NGT is analysed in the Dependency Model to assess the complexity of their (phonological) representation. The analysis shows that the non-signs can be categorized into five different levels of complexity, based on the number of phonological feature specifications that are needed to represent a sign. Then, in the second part, these new complexity scores were related to the imitations of eight Deaf signers who participated in the NSRT, to investigate whether the complexity of the non-signs predicts the number of mistakes made by the signers. Indeed, it was found that the odds that a signer imitates 'simple' signs correctly are higher than the odds that the signer imitates 'complex' signs correctly. Moreover, the phonological knowledge of the signers plays an important role in the NSRT, as it sometimes penetrates their imitations, resulting in default articulations of phonological features. The results show that the Dependency Model can be used to reveal the complexity of non-signs. However, as sign complexity can be assessed in different ways, the search for (sign) complexity is still ongoing.


Preface

I owe a great debt to many people for their incredible help and contributions to this thesis; without them this research would not have been possible. This thesis is the final part of my time studying at the University of Amsterdam (UvA), which I have enjoyed very much. I want to thank all the professors, classmates, friends, PhD students and others for helping me enjoy my time at the UvA.

A number of people contributed directly to this thesis, and I would like to thank them here. First, I would like to thank Dr. Roland Pfau, my thesis supervisor. I am very grateful for his supervision and his detailed feedback, as well as his guidance whenever I had questions; it really helped me move forward and improve my thesis. I would also like to thank my classmate Katie Florian for her support during this period of thesis writing. Of course, I would also like to thank Dr. Vadim Kimmelman for his critical and valuable feedback on the pre-final version of my thesis.

I would like to give special thanks to Dr. Els van der Kooij and her colleagues Ellen Ormel and Dr. Inge Zwitserlood. The data analysis would not have been possible without their help and (critical) feedback. I am very grateful for all the time and effort they put into helping me with the analysis for my thesis.

Last, but not least, I would like to thank Ulrika Klomp for her indispensable contributions to this thesis. Without her, I would never have had the chance to work on this interesting topic. I am very grateful for her permission to use her data set. I would also like to thank her for her valuable thoughts and insights on important parts of this thesis. Of course, I am also grateful to the participants of the NSRT for their permission to share their language data for this study.

Finally, I would like to give my thanks to my family and friends, who have always supported me in the past few years and have given me the opportunity to study (sign) linguistics.


Index

1. Introduction
2. Phonology of sign languages
   2.1. Handshape
   2.2. Location
   2.3. Movement
   2.4. Non-dominant hand
   2.5. Interim summary: sign language phonology
3. Non-sign repetition tasks
4. Methodology
   4.1. The Dependency Model
      4.1.1. The theoretical assumptions of the Dependency Model
      4.1.2. Nodes and features
      4.1.3. Phonetic implementation
   4.2. Methodology part I: non-sign analysis
      4.2.1. Materials
      4.2.2. Analysis procedure
      4.2.3. Charting complexity
   4.3. Methodology part II: scoring the imitations of the NSRT
      4.3.1. Participants
      4.3.2. Data collection procedure
      4.3.3. Scoring method
5. Results part I: non-signs and their complexity
   5.1. Examples of non-signs and their complexity
   5.2. Frequency of features
   5.3. Comparison of complexity scores
   5.4. Implications for the NSRT
6. Results part II: sign complexity and NSRT scores
   6.1. Error rate and sign complexity (Dependency Model)
   6.2. Error rate and sign complexity (Klomp 2015)
   6.3. Correlation between error rate and feature specification
   6.4. The influence of phonology on the NSRT
7. Conclusion
8. References
Appendices
   Appendix A – All the phonological representations of the non-signs
   Appendix B – Descriptions of 'mistakes' of the signers


1. Introduction

Imagine you are talking to a friend and s/he suddenly asks you what you think about the new 'naki' /neki/1. Although this word is phonotactically possible in English – i.e. it conforms to the rules that govern how speech sounds can be arranged in English (cf. Coady & Evans 2008) – it is not an existing word of the language. Since the word does not occur in the language and therefore has no (known) meaning to you, you must rely on phonological and acoustic information to repeat it back correctly (e.g. to ask your friend what it means).

Over the last two decades, researchers in linguistics have frequently asked listeners to repeat nonsense words. In these non-word repetition tasks (NWRTs), listeners hear a word that is phonotactically possible in their language but does not exist, e.g. spundle, and are asked to repeat it immediately. This task taps into many linguistic processes, such as (speech) perception, phonological encoding (segmenting the signal into units), phonological memory, phonological assembly (assembling the units for articulation), and articulation (Coady & Evans 2008). To repeat a non-word correctly, a listener must encode the signal into phonological (speech) units, create an adequate representation of these units, and memorise it long enough to report it back. While many of the linguistic processes involved in the task generally come easily to speakers of a language, they have been shown to be problematic for children with a developmental language disorder (DLD)2. These children score significantly less accurately on NWRTs (see Coady & Evans 2008 for a review), and because the task taps into so many underlying linguistic skills, it is a powerful diagnostic tool for identifying language disorders.

To investigate whether this tool can also be used to identify language disorders in sign language, Marshall and colleagues (2006) developed a non-sign repetition task for British Sign Language (BSL) which they piloted on a group of deaf children. In a later study, Mann et al. (2010) used a NSRT to investigate what the effect of the phonetic complexity of a non-sign would be on its perception and articulation within different age groups of deaf and hearing children. In both studies, phonetic/phonological complexity of a non-sign was manipulated by generating two levels of complexity (simple or complex) for the phonological parameters handshape and movement, resulting in four levels of complexity.

Recently, Klomp (2015) developed a NSRT with 40 nonsense signs3 for Sign Language of the Netherlands (Nederlandse Gebarentaal; NGT). The NSRT was administered to 10 hearing CODAs (Children of Deaf Adults), who were bilingual in Dutch and NGT. One of the purposes of that study was to investigate the effect of the complexity of the nonsense signs on the scores of the NSRT. Non-sign complexity was manipulated by generating two levels of handshape complexity (simple or complex) and three levels of movement complexity (path, internal or double movement), resulting in six levels of complexity.

Importantly, in all the NSRT studies mentioned above, the non-signs were manipulated for handshape and movement. These parameters were coded as simple or complex based on previous phonological research. However, the complexity of the non-signs has never been related to a specific (phonological) model. Therefore, the purpose of this study is to analyse the 40 non-signs of Klomp (2015) by describing them within the Dependency Model of van der Kooij (2002), a phonological model that is based on and has been extensively used for NGT (the sign language in focus in this study). Secondly, I want to investigate whether the non-signs need to be recategorized for complexity based on the new analysis. For this purpose, I (re)analysed the imitations of (a part of) Klomp's Deaf participants and related the new non-sign complexity scores from the Dependency Model to the participants' scores on the non-sign repetition task. My general research question is: Does phonological complexity (as analysed in the Dependency Model) affect the imitation of nonsense signs in Sign Language of the Netherlands?

1 The word is derived from the quasi-universal non-word repetition task of Chiat (2015).

2 A term frequently used in the literature is 'specific language impairment' (SLI). Recently, however, a panel of experts reached consensus on using the term 'developmental language disorder' (DLD) for children with a language impairment. Therefore, in this study, DLD refers to the same disorder as SLI (see Bishop 2017 for a discussion of the terminology for children with a language disorder).
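The complexity measure at the heart of this study can be illustrated with a small sketch: complexity is taken to be the number of feature specifications in a sign's representation, and signs are then binned into levels. The feature names and level boundaries below are invented purely for illustration; the actual feature inventory and cut-offs follow from the Dependency Model analysis in section 4.

```python
# Illustrative sketch only: the feature names and the level boundaries
# are hypothetical, not the actual Dependency Model inventory.

def complexity_score(representation: dict) -> int:
    """Count the phonological features that are actually specified
    (i.e. not left to a default/predictable value, marked None here)."""
    return sum(1 for value in representation.values() if value is not None)

def complexity_level(score: int) -> int:
    """Map a raw feature count onto one of five levels (hypothetical cut-offs)."""
    boundaries = [3, 5, 7, 9]  # upper bounds of levels 1-4
    for level, bound in enumerate(boundaries, start=1):
        if score <= bound:
            return level
    return 5

non_sign = {
    "selected fingers": "index",
    "finger position": "curved",
    "location": "head",
    "setting change": None,       # unspecified -> default articulation
    "path movement": "straight",
    "orientation": None,
}
score = complexity_score(non_sign)
print(score, complexity_level(score))  # 4 specified features -> level 2
```

The key design choice mirrored here is that unspecified features cost nothing: only overt specifications add to a sign's complexity.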

The outline of this study is as follows: in the next section, I will start with an overview of sign language phonology. In this section, the main parameters of a sign language – handshape, location and movement – will be discussed, as well as the role of the non-dominant hand in two-handed signs. In section 3, I will discuss non-sign repetition tasks in more detail. Then, in section 4, I will provide an overview of the Dependency Model that was used for the analysis in this study and describe the materials and methodology of this study. In section 5, the first part of the results will be presented and discussed. The second part of this study will be discussed in section 6. Section 7 concludes this study.


2. Phonology of sign languages

At first glance, spoken languages and sign languages appear to be very different at the level of phonology. While speech is shaped by (movements of) the articulators inside a tube extending between the lips and the vocal cords, the sign signal is shaped by the hands, face and body (Sandler 2012). The sign signal can thus be perceived by all (sighted) people, whereas you would need an MRI to see the articulators of a spoken language (Sandler 2017). Figure 1 shows the different articulators participating in the production of spoken and sign language.


Figure 1. Articulators of spoken language (a) and sign language (b); italicized items are static (Sandler 2017: 46).

Another striking difference between speech and sign is how they are perceived: the movements of the sign articulators are perceived by the eyes, while the movements of the speech articulators are perceived by the ears. In other words, there is a modality difference: the communication channel of spoken and sign languages is different, and it is often assumed that this ultimately leads to structural differences between the two language modalities (see Crasborn 2012).4 In Figure 1b, for example, we see that the two hands of the signer have different handshapes and orientations; they constitute two different signs in Israeli Sign Language (ISL): WHITE on the dominant (in this case, right) hand and THERE on the non-dominant (left) hand. Furthermore, the raised eyebrows and the head position signal that these signs are part of a dependent clause. Finally, the squinted eyes indicate that the information is shared between the signer and his interlocutors. In other words, the sign language signal constitutes a compositional structure that is conveyed by the hands, face and head simultaneously. In contrast, a vocal tract can only articulate one sound and one pitch at a given moment in the speech signal, and as such the structure of speech is (typically) conveyed sequentially by the speech articulators (Sandler 2017).

For a long time, people thought that the signs of a sign language were just holistic pictorial gestures, comparable to pantomime. However, in 1960, William Stokoe published a monograph that would disprove this conception forever. In his seminal work, Stokoe showed that the signs of American Sign Language (ASL) are built up of a finite number of discrete, contrastive and meaningless units (Stokoe 1960). These units can be combined and recombined to create new signs. Stokoe identified three (major) units (also called parameters): handshape, location and movement. These parameters are considered phonological, because each parameter can individually create a minimal pair – signs that are distinguished from each other by substituting a specification of one of these parameters with another specification of the same parameter (Sandler 2017: 47). Three examples from NGT are shown in Figure 2. The signs in 2b and 2c each differ in only one parameter from the sign in 2a. The signs HOLIDAY and ALSO have the same handshape, orientation and movement, but differ in their location (the cheek and the chest, respectively). In contrast, the sign TO-LIVE differs only in its handshape ( versus ) from the sign HOLIDAY, but shares its movement and location.


4 Nevertheless, it is important to realize that the modality difference is not truly 'black-and-white'. Spoken languages can be perceived both auditorily and visually. Hearing people often use manual gestures to complement their speech, and these gestures can have many different functions. Furthermore, visual cues from the movements of the mouth can influence the perception of consonants (the McGurk effect; see Crasborn 2012).


(c)

Figure 2. Minimal pairs in NGT: the signs HOLIDAY (a) and ALSO (b), which differ only in location; and TO-LIVE (c), which differs only in handshape from HOLIDAY (van der Kooij 2002: 21-22).

Research into sign language phonology has demonstrated that the parameters handshape, location and movement each have their own internal structure. In sections 2.1 to 2.3, each of the parameters will be looked at in more detail. Lastly, sign languages make use of an extra articulator: the non-dominant hand. This non-dominant hand can function as a phoneme, a morpheme, or even a sign and occur simultaneously with the information that is signed with the dominant hand (see Figure 1 where the sign WHITE is made with the dominant hand, and the sign THERE with the non-dominant hand; Sandler 2012). The role of the non-dominant hand will be discussed in section 2.4.

2.1. Handshape

The hand can assume various forms during articulation of a sign or a gesture. To form a particular handshape, one or more fingers are “selected” and these fingers can either be in an extended, closed, curved or bent position (see Figure 3; Sandler 2012). Since handshape can function as a contrast for different signs (see Figure 2a and 2c), it was treated as a single unit in the earliest analyses (e.g. Stokoe 1960). In these models, the whole hand is represented with one symbol. However, later research showed that the “handshape unit” consists of various features (or properties) that may each contribute independently to the handshape contrasts. Two important classes of features are: selected fingers and finger position (Sandler & Lillo-Martin 2006).5 Selected fingers refer to “those fingers that can change their aperture [=position] (open-close) during the articulation of a sign and appear to be foregrounded” (Brentari 2011: 199).

5 In the literature different terms for ‘finger position’ have been used, for example ‘finger configuration’ (van der Hulst & van der Kooij in press) and “joints” (Brentari 2011).


Figure 3. All fingers selected (thumb ignored) in extended, closed, curved and bent position (Sandler 2012: 165).

The selected fingers are subject to phonological constraints.6 For example, it was found that the group of selected fingers stays the same throughout the articulation of the sign (Brentari 2011). This means that the sign in Figure 4 is ‘illegal’, since the number of selected fingers changes from one (the index finger) to three (the index finger, ring finger and pinkie).

Figure 4. A nonsense sign from the study of Brentari et al. (forthcoming). The handshape on the signer’s right hand changes from the index finger to one where the middle finger touches the thumb (Brentari 2011: 197).

On the other hand, the position of the unselected fingers is often predictable from the position of the selected fingers: if the selected fingers are open, curved, or bent – i.e. anything except closed – then the unselected fingers are closed. Conversely, if the selected fingers are closed, the unselected fingers are open (Sandler & Lillo-Martin 2006: 163). For instance, both and have one selected finger: the index finger7. The two handshapes differ in the position of the selected and unselected fingers. In the former, the selected finger has an extended finger position, and the unselected fingers are closed. In the latter, the selected finger is in a closed position, and the unselected fingers are extended. Note, however, that this ‘unselected fingers redundancy rule’ is not absolute, since the handshape is also common in many sign languages. In this handshape, both the selected and unselected fingers are closed.
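The redundancy rule just described lends itself to a procedural statement. The sketch below is an illustration only: the position labels are assumptions, and, as noted above, the rule has exceptions (such as the fist handshape, where all fingers are closed).

```python
def default_unselected_position(selected_position: str) -> str:
    """Unselected-fingers redundancy rule (after Sandler & Lillo-Martin 2006):
    if the selected fingers are anything except closed, the unselected
    fingers are closed; if the selected fingers are closed, the unselected
    fingers are open. The rule is not absolute (e.g. the fist handshape)."""
    if selected_position in ("extended", "open", "curved", "bent"):
        return "closed"
    if selected_position == "closed":
        return "open"
    raise ValueError(f"unknown finger position: {selected_position}")

print(default_unselected_position("extended"))  # closed
print(default_unselected_position("closed"))    # open
```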

Handshapes can also be distinguished from each other by the finger position (of the selected fingers). In Figure 5a, we see the ASL sign CANDY, which is articulated with an extended index finger. In contrast, the ASL sign APPLE is articulated with a bent index finger. All other features (number of selected fingers, location, movement) are identical in both signs. Figure 6 shows some examples of finger positions that have been found to be contrastive in ASL.

6 See also Sandler (2012) for an overview of common phonological constraints on a sign that have been found across sign languages.

Figure 5. The ASL signs CANDY (a) and APPLE (b). The selected fingers, location and movement of these two signs are identical; they differ only in finger position. CANDY is articulated with an extended finger, APPLE with a bent finger (Brentari 2011: 201).

Figure 6. Contrastive finger positions in ASL with all fingers selected (Brentari 2011: 201).

In addition, the fingers can change their position during the articulation of a sign. This is also known as a hand-internal (or local) movement, or an 'aperture change' (van der Kooij 2002; Brentari 2011). The number of possible aperture changes is constrained by the hand sequencing constraint, which states that if there are two finger positions in a sign, one must be 'open' and the other 'closed' (Sandler & Lillo-Martin 2006: 154; see also Brentari 2011: 202). This means an aperture change such as -> is permitted, but an aperture change such as -> is not.
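As a minimal illustration, the hand sequencing constraint can be stated as a simple membership check; the position labels below are assumptions, not the notation of the cited models.

```python
def aperture_change_is_legal(first_position: str, second_position: str) -> bool:
    """Hand sequencing constraint (Sandler & Lillo-Martin 2006: 154):
    if a sign contains two finger positions (an aperture change),
    one must be 'open' and the other 'closed'."""
    return {first_position, second_position} == {"open", "closed"}

print(aperture_change_is_legal("closed", "open"))   # a legal opening change
print(aperture_change_is_legal("open", "curved"))   # ruled out by the constraint
```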

In current models of sign language phonology, handshape is usually described within the theory of Feature Geometry (Clements 1985). In this theory, the features of a given phonological category (in this case handshape) are hierarchically organized according to the physiology of the articulators. Finger position, for example, has been found to be dependent on the selected fingers, both physiologically and phonologically. The selected fingers refer to one or more fingers as a whole, whereas the finger position refers to the finger joints (which can be closed, extended, curved, or bent; see also Figure 3). Physiologically, the joints are part of the finger, and therefore dependent on it. Phonologically, first, the selected fingers must all have the same finger position during the articulation of a sign (see Sandler 2012). Secondly, the finger position can change during the articulation of the sign (e.g. from closed to open in the NGT sign TO SEND; -> ), but the selected fingers must remain constant (see above). Thirdly, if the selected fingers assimilate to a preceding or following sign, the finger position assimilates along with them. An example is the ASL compound FAINT, displayed in Figure 7. The sign is composed of the signs THINK and DROP. In the compound form, the selected fingers of the sign DROP, as well as their position, spread to the first part of the sign. Crucially, a compound where only the selected fingers but not their position assimilate is not attested.

Figure 7. Independent sign members of the compound (MIND + DROP) and the compound (FAINT) with selected fingers and finger position assimilation (images based on Sandler & Lillo-Martin 2006: 155).

In Figure 7, we see that the orientation of the hands assimilates in the compound as well. The observation that assimilation of orientation is not independent of the assimilation of the selected fingers and their position has been taken as evidence that orientation is subordinate to handshape (Sandler & Lillo-Martin 2006).8

Although Stokoe (1960) did not consider orientation to be a (major) separate parameter of signs, later research showed that orientation can indeed be contrastive in some signs. Two examples are the ASL signs CHILD and THING, which differ only in the orientation of the palm. Orientation was first described in terms of 'absolute orientation', which refers to the direction of the palm and/or the fingers (in space). However, later research by Crasborn and van der Kooij (1997) showed that a description in terms of absolute orientation cannot capture all handshape variation, and they therefore proposed to describe the orientation of signs in 'relative' terms. In this case, the orientation refers to "the part of the hand that points in the direction of the end of the movement (the final setting) or towards the specified location" (van der Hulst & van der Kooij in press: 11; see section 4).

8 However, it is possible for orientation to assimilate alone in compounds, without the fingers and their position (see the example OVERSLEEP in Sandler & Lillo-Martin 2006: 156). For now, it is important to note that if the selected fingers and their position assimilate, the orientation must assimilate as well.


Within sign language phonology, handshapes are often characterized as being more or less 'marked'. Markedness refers to a cluster of properties that together distinguish unmarked handshapes from marked ones. A list of properties of 'unmarked' handshapes is given in Table 1 below:

Table 1. Examples and properties of unmarked handshapes (based on Sandler & Lillo-Martin 2006: 161-162).

Common properties of unmarked handshapes
1. Unmarked handshapes are 'maximally distinct'.
2. Unmarked handshapes are the easiest to articulate motorically.
3. Unmarked handshapes are the most frequent handshapes, both cross-linguistically and within a sign language (e.g. ASL).
4. Unmarked handshapes are acquired first by children.
5. When the shape of the non-dominant hand is different from that of the dominant hand in two-handed signs, its shape is (typically) limited to one of the unmarked handshapes (see also section 2.4).
6. Children and aphasic patients often substitute an unmarked handshape for a more marked one.

2.2. Location

Location is another major phonological parameter of sign language. It has a special status that is different from that of handshape and movement. Firstly, in sign recognition tasks, location is the first parameter to be identified. Secondly, priming tasks showed that there was an inhibitory effect between primes and targets that shared the same location. This has been interpreted as further evidence that location helps the ‘viewer’ in early sign recognition (Orfanidou et al. 2009). Lastly, in a sign-spotting task by Orfanidou and her colleagues, it was found that the parameter location was the least affected in sign misperceptions (see Orfanidou et al. 2009 for more details). Therefore, they concluded that location is “the least ambiguous parameter during online sign recognition” (Orfanidou et al. 2009: 311).

The parameter location can also be divided into different subclasses, just as handshape has internal structure. The usual distinction is between a major body area, or place, and setting features within this area (Sandler & Lillo-Martin 2006; Sandler 2012). Signs only move within one major body area per morpheme. These locations are the head, the trunk, the non-dominant hand, and the arm. As each sign contains a movement (see also below), a beginning and end location within these major areas can be specified. These locations are called settings. While the place of articulation – e.g. the head or the trunk – must remain constant within a sign, the setting values may change (see section 2.1 for a similar distinction between selected fingers and finger position; Sandler & Lillo-Martin 2006). In Figure 8a, for instance, the place is the trunk, and the setting of the sign changes from the contralateral side of the trunk (the opposite side of the body relative to the articulating hand) to the ipsilateral side (the same side of the body as the articulating hand; van der Kooij 2002: 163). Similarly, in Figure 8b, the place of articulation is the head, and the sign moves from a setting high on the head to a setting low on the head.


Figure 8. The NGT signs for LATE (a) and SERIOUS (b). The sign in (a) is specified for the location [trunk] and the settings [contra] to [ipsi]; the sign in (b) is specified for the location [head] and the settings [high] to [low] (van der Kooij 2002: 250-251).

To sum up, the articulation of the sign remains within the same location, but the specific settings within this location may change. The specification of two different settings results in a path movement (see section 2.3).
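The place/setting division just summarized can be sketched as a small well-formedness check. The labels ('trunk', 'contra', 'ipsi') follow Figure 8, but the function itself is a hypothetical illustration, not part of any of the cited models.

```python
def check_location(place_start: str, place_end: str,
                   setting_start: str, setting_end: str) -> str:
    """Place of articulation must remain constant within a sign;
    the settings within that place may change, and two different
    settings yield a path movement."""
    if place_start != place_end:
        return "ill-formed: place changes within the sign"
    if setting_start != setting_end:
        return f"well-formed: path movement from {setting_start} to {setting_end}"
    return "well-formed: no setting change"

# The NGT sign LATE (Figure 8a): place [trunk], settings [contra] -> [ipsi]
print(check_location("trunk", "trunk", "contra", "ipsi"))
```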

2.3. Movement

When you see a person communicating in a sign language, the movement of the hands is the most salient part. Movement is a fundamental parameter, because signs are not well-formed without it (Sandler & Lillo-Martin 2006; Sandler 2011). This is surprising, because it would in principle be possible to string movementless signs together into sentences, with only phonetic transitional movements between them. Moreover, movement can be the only distinctive feature between two different lexical signs – and thus create a minimal pair – as in the ISL example in Figure 9. Finally, movement plays an important role in the verb agreement system of sign languages and in temporal aspect inflections (see Sandler 2012 for more details).

There are two types of lexical movement that can characterize a sign: path movement and hand-internal (or local) movement. A path movement is “generated at the shoulder or elbow and results in moving the hand in a path through space” (Sandler 2011: 578). Examples of a path movement are shown in Figure 8 and 9, where the hand moves from one location (or more specifically setting; see section 2.2) to another. A path movement can have different shapes: the default is a straight movement, but signs can also have an arc-shape movement (see Figure 9b), a ‘Z’ shape (as in the NGT sign LIGHTNING) or a circle movement (as in the NGT sign


Figure 9. The ISL signs ESCAPE (a) and BETRAY (b), which differ only in movement. The sign ESCAPE has a straight movement, while the sign BETRAY has an arc movement (Sandler 2012: 164).

A hand-internal movement is “generated either by the wrist, resulting in orientation change, or by the fingers, resulting in a change in the shape of the hand” (Sandler 2011: 578). Two NGT examples of a hand-internal movement are illustrated in Figure 10. In the sign in 10a only the selected fingers change their position (i.e. an aperture change). In contrast, in the sign in 10b only the orientation of the hand changes. Just as with path movements, there are different types of hand-internal movement, such as opening, closing (see Figure 10a), curving, bending, rotating, and nodding (Sandler 2011).

(a) (b)

Figure 10. The NGT signs WET (a) and DIFFICULT (b). The sign in (a) has a closing aperture change; the sign in (b) has an orientation change from prone to supine (van der Kooij 2002: 219, 227).

A path and a hand-internal movement can occur independently, as in Figures 8 and 10, or they can be combined. If they occur together in one sign, they are articulated simultaneously, and their beginning and end points are aligned (van der Hulst & van der Kooij in press). To illustrate, in the NGT sign NICE, shown in Figure 11, the closing movement of the fingers is aligned with the downward path movement.

Figure 11. The NGT sign NICE, which contains both a path and a hand-internal movement that have their beginnings and endings aligned (van der Kooij 2002: 80).

Finally, some signs also have a secondary movement that involves rapid repetitions of an aperture or orientation change, or else a wiggling movement of the fingers. These secondary movements can also occur independently or together with a path movement.

2.4. Non-dominant hand

One unique property of sign language phonology is that there are two symmetrical articulators: whereas in spoken language, we only have one tongue, in sign language, we can use two hands. However, research has shown that the role of this second hand is restricted, and that it cannot convey independent messages. Phonetically, it is very difficult to motorically control the two limbs independently, and it would be impossible to perceive two completely different movements. Similarly, at a cognitive level, it would be difficult to perceive and articulate two independent thoughts at the same time (Crasborn 2011).

Signers usually have a preferred hand that they use in one-handed signs, but the distinction between either the left or the right hand as the preferred hand does not have any linguistic function. Furthermore, signers may reverse the relation between their preferred and non-preferred hand at will and at any time. This process has been called ‘Dominance Reversal’ (Crasborn 2011). The preferred hand is usually called the strong or dominant hand in signing, and this is the hand “that is most active, realizing the movement in two-handed signs where one hand moves with respect to the other, passive, hand” (Crasborn 2011: 225). Conversely, the other hand is referred to as the weak or non-dominant hand.

Two-handed signs can be divided into symmetric and asymmetric signs, or, alternatively, balanced and unbalanced signs. In balanced signs, both hands move, while in unbalanced signs one hand functions as a location for the other hand. Examples of these two types of two-handed signs in NGT are shown in Figure 12.

Figure 12. Balanced signs STAND, TRAFFIC and WEAKLING in NGT (a), where both hands move; unbalanced signs EVIDENCE, ILL and PHONOLOGY in NGT (b), where one hand acts as a location for the other hand and only one hand moves (Crasborn 2011: 227; see also Sandler 2012: 175).

Balanced signs are subject to the Symmetry Condition, whereas unbalanced signs are subject to the Dominance Condition (Battison 1978: 33–34; in Morgan & Mayberry 2012: 149):

• Symmetry Condition: if both hands of a sign move independently during its articulation, then both hands must be specified for the same location, the same handshape, and the same movement (whether performed simultaneously or alternatingly).

• Dominance Condition: if the hands of a two-handed sign do not share the same specification for handshape (i.e. they are different), then one hand must be passive while the active hand articulates the movement, and the specification of the passive handshape is restricted to one of a small set of unmarked handshapes (in ASL).

In section 2.1, however, we have seen that in sign language phonology, we refer to the selected fingers and their configuration rather than to the handshape as a whole, since the selected fingers and their configuration can each be marked or unmarked. Eccarius and Brentari (2007; in Crasborn 2011) found that two-handed signs can have a maximum of two marked phonological structures. They revised the original Dominance Condition to account for classifier constructions, a special part of sign language grammar in which the two hands can each function as a separate (meaningful) morpheme rather than as a phonological unit (Sandler 2012). The Symmetry and Dominance Conditions seem to be more relaxed in these constructions; however, there is still a limit to the amount of complexity that can be manifested by the two hands (Eccarius & Brentari 2007: 1187; in Crasborn 2011: 229):


• Revised Dominance Condition: (a) If both hands do not share the same specification for both selected fingers and joints (i.e. the handshapes are different), then (b) one hand must be passive while the active hand articulates the movement, and (c) the form as a whole (i.e. the selected fingers and joints for both hands) is limited to two marked phonological structures, only one of which can be on the passive hand.

Figure 13. Classifier construction in NGT for ‘Human stands on airplane’ (Crasborn 2011: 229).

The sign in Figure 13, for instance, violates the (unrevised) Dominance Condition as formulated above, as the non-dominant hand has a -handshape. In fact, the sign has a marked finger selection for both the dominant ( ) and the non-dominant hand ( ). The aperture, however, cannot be marked, because then there would be three marked phonological structures, which would violate the condition. Therefore, the selected fingers of the active hand are extended (=unmarked).
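The counting logic behind the Revised Dominance Condition can be made explicit in a short sketch. The encoding below (a hand as a pair of booleans for marked finger selection and marked joint/aperture configuration) is my own simplification for illustration, not part of any cited model:

```python
# Sketch of the Revised Dominance Condition (Eccarius & Brentari 2007) as a
# well-formedness check: at most two marked phonological structures in total,
# at most one of them on the passive hand.

def marked_count(hand):
    """Number of marked phonological structures on one hand."""
    return sum([hand["marked_fingers"], hand["marked_joints"]])

def satisfies_revised_dominance(active, passive):
    """True if an asymmetric two-handed form obeys the revised condition."""
    total = marked_count(active) + marked_count(passive)
    return total <= 2 and marked_count(passive) <= 1

# The classifier construction in Figure 13: marked finger selection on both
# hands, but an unmarked joint/aperture configuration on the active hand.
active = {"marked_fingers": True, "marked_joints": False}
passive = {"marked_fingers": True, "marked_joints": False}
print(satisfies_revised_dominance(active, passive))  # True: two marked structures

# A marked aperture on the active hand would yield three marked structures.
active_clawed = {"marked_fingers": True, "marked_joints": True}
print(satisfies_revised_dominance(active_clawed, passive))  # False
```

This mirrors the reasoning in the paragraph above: the aperture of the active hand must stay unmarked precisely because both hands already carry a marked finger selection.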

Interestingly, van der Kooij (2002) found that in unbalanced two-handed signs in NGT, the 'default' unmarked handshape of the non-dominant hand is the -hand. All other handshapes either have some morphemic status (as in Figure 13) or are iconically motivated, as in the sign TEA in NGT, where the handshape of the non-dominant hand refers to a cup (see also Crasborn 2011).

2.5. Interim summary: sign language phonology

In this section, we have seen that sign languages have phonology: "a level of structure that is not governed by meaning but by form" (Sandler 2017: 54). The phonological parameters of a sign language – handshape, location and movement – each have their own internal structure. So, "in both spoken and signed languages, a phonological level exists, characterized by contrastive features, hierarchically organized feature categories, syllables, and structural elements that are linear, all organized around form rather than meaning. These properties suggest a common cognitive system in some sense", as noted by Sandler (2017: 58). Importantly, however, some phonological properties that exist in both modalities are found in different proportions: whereas spoken languages organize their units mostly in a linear way, there is little linearity within a sign, as most units are realized simultaneously. Additionally, in both language modalities, there are arbitrary and iconic elements, but whereas spoken languages are mostly characterized by arbitrary elements, iconicity is quite pervasive at all levels of structure in sign languages. These differences can be explained by differences in auditory and visual perception, together with differences in articulators and memory capacity (Sandler 2017).

Finally, it is important to realize that, although I have mostly referred to NGT in illustrating the constraints and organisation of sign language phonology, the same characteristics seem to hold across sign languages in general (Sandler 2012). Sign languages are relatively young languages; no known sign language seems to be older than a few hundred years. Nevertheless, based on phonological research on newly emerging sign languages (see Sandler et al. 2011), we can expect that more language-specific phonological constraints and processes will develop as sign languages grow older.


3. Non-sign repetition tasks

This study focuses on a non-sign repetition task (NSRT). In this task, a signer sees a nonsense sign that is phonotactically possible in his/her language but does not exist, and is then asked to repeat it. These non-sign repetition tasks are based on the non-word repetition tasks (NWRTs) developed for spoken languages. NWRTs have gained wide acceptance in linguistic research and are frequently used to measure phonological working memory capacity, even though the task taps into many underlying language processes, such as speech perception, lexical and phonological knowledge, motor planning, and articulation (Coady & Evans 2008: 2). For this reason, the NWRT was found to be a powerful tool for exploring the language difficulties of children with a developmental language disorder (DLD), since many of the linguistic processes that are necessary for an accurate repetition of a nonsense word are problematic for these children. It is important to realize, however, that precisely because an NWRT taps into so many underlying skills, it says nothing about the nature of the underlying deficits (Coady & Evans 2008).

Non-words have, by definition, no lexical meaning (i.e. they are unfamiliar) and a frequency of zero. Therefore, the accuracy of the repetition cannot be influenced by factors such as word frequency, familiarity, and age of acquisition, factors that play an important role in the performance of children with DLD. Nonetheless, Snowling et al. (1991; in Coady & Evans 2008: 10) argue that "children will use any lexical knowledge to support non-word repetition, including knowledge of phonological, morphological and prosodic regularities". For example, various studies have found that both children with typical language development (TD children) and children with DLD are affected by word-likeness effects. That is, they repeat non-words more accurately if these non-words reflect properties of the lexicon (Coady & Evans 2008).

Although many different non-word repetition tasks have been developed for spoken languages (there has even been an attempt to develop a quasi-universal NWRT; see Chiat 2015), the development of such a task for sign languages is still in its infancy. Marshall and her colleagues (2006) were the first to take up this challenge. They noticed that the underlying cause of DLD is still debated, and they argued that investigating DLD in sign languages could shed new light on some of the theories. To this end, a non-sign repetition task for British Sign Language (BSL) was developed "in order to investigate phonological abilities in signing children and adults, and to determine whether such a task might provide a useful diagnostic test for SLI signing children" (Marshall et al. 2006: 350). Just as in NWRTs, the non-signs in the NSRT were manipulated for (phonetic) complexity. Importantly, in spoken language there are two ways of manipulating (the complexity of) the non-words (Mann et al. 2010: 66): manipulating the number of syllables, resulting in differences in the quantity of the phonological material; or manipulating the segmental content, syllabic complexity and/or metrical structure of the stimuli, resulting in differences in the nature of the phonological material. Both the quantity and the nature of the phonological material (or representation) seem to have an impact on participants' accuracy on the task: both children with typical and atypical language development score less accurately when the length and the complexity of the non-words increase (see Mann et al. 2010 for an overview of the studies). For sign languages, it is difficult to manipulate the length of a sign, since signs are overwhelmingly monosyllabic (Sandler 2008). Consequently, two other sign parameters have been manipulated for phonetic complexity: handshape and movement.

In the studies by Marshall et al. (2006) and Mann et al. (2010), both handshape and movement were manipulated to control for the complexity of the non-signs. The levels of phonetic complexity in a 2x2 design are shown in Table 2. At level 0, the sign has an unmarked handshape and one type of movement9 (classified as a 'simple movement'). At the next level, level 1a, the sign has an unmarked handshape and a movement cluster (a 'complex movement' that consists of a path movement combined with a hand-internal movement). The signs at level 1b have a marked handshape (any handshape other than the set of simple handshapes) and a simple movement. Lastly, the signs at level 2 have a marked handshape and a movement cluster.

Table 2. Levels of complexity of the non-signs in Marshall et al. (2006) and Mann et al. (2010) (based on Klomp 2015: 40).

                      Handshape
Movement         Simple      Complex
Single           Level 0     Level 1b
Cluster          Level 1a    Level 2

Mann et al. (2010) tested a total of 91 deaf children, who were divided into three age groups: 3-5 years old, 6-8 years old, and 9-11 years old. Furthermore, 46 hearing children with no experience of BSL were included to determine whether language experience and/or language-specific phonological knowledge influenced the repetition accuracy. In accordance with one of their hypotheses, the authors found that repetition accuracy was lowest for the most (phonetically) complex items, even though signs with a simple handshape and simple movement were not repeated more accurately than signs with either a complex handshape or a movement cluster. Still, children made more errors in signs with a complex handshape, and movements were deleted more often (in movement clusters) if the sign had a complex handshape (i.e. in signs of level 2). Secondly, for all age groups, it was found that signs with an internal movement were repeated less accurately than signs with a path movement. Thirdly, hearing children performed significantly worse on the NSRT than the deaf children, which indicates that phonological knowledge does play a role in the repetition of non-signs.

In a later study, Klomp (2015) developed an NSRT for NGT to investigate whether the effect of (phonetic) complexity could also be found in this sign language. Contrary to the studies of Marshall et al. (2006) and Mann et al. (2010), Klomp divided movement into a path movement, a (hand-)internal movement, and a cluster of both movement types. Based on the results of the study of Mann et al. (2010), a hand-internal movement was classified as more complex than a path movement. This results in two extra levels of complexity, as shown in Table 3. Ten CODAs (Children Of Deaf Adults), aged between 21 and 61, were recruited to participate in the NSRT.

9 Note that in the design of Marshall et al. (2006) and Mann et al. (2010) a path movement and a hand-internal movement are considered equally complex, in contrast to the study of Klomp (2015; see below).

Table 3. Levels of complexity of the non-signs in the study of Klomp (2015) (based on Klomp 2015: 47).

                         Handshape
Movement          Unmarked    Marked
Path              Level 1a    Level 2a
Internal          Level 1b    Level 2b
Path + Internal   Level 1c    Level 2c
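Klomp's six-level design amounts to a simple lookup from two stimulus properties to a level label. The sketch below follows Table 3; the encoding of a stimulus as a (handshape, movement) pair is my own, not Klomp's:

```python
# Klomp's (2015) complexity levels (Table 3): handshape markedness crossed
# with three movement types. The string encoding is hypothetical.

LEVELS = {
    ("unmarked", "path"):          "1a",
    ("unmarked", "internal"):      "1b",
    ("unmarked", "path+internal"): "1c",
    ("marked", "path"):            "2a",
    ("marked", "internal"):        "2b",
    ("marked", "path+internal"):   "2c",
}

def complexity_level(handshape, movement):
    """Return Klomp's level label for a non-sign stimulus."""
    return LEVELS[(handshape, movement)]

print(complexity_level("unmarked", "path"))         # 1a
print(complexity_level("marked", "path+internal"))  # 2c
```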

An initial analysis indicated no significant effect of handshape on the scores on the NSRT. In a second analysis, three items were excluded because they involved a handshape change from a marked to an unmarked handshape or vice versa. This time, a significant effect was found. Furthermore, signs with a movement cluster were repeated significantly less accurately than signs with either a hand-internal or a path movement. However, no significant difference was found between signs with a path movement and signs with a hand-internal movement. Klomp (2015) hypothesized that children are sensitive to this movement distinction, while adults are not. She concluded that, with a few adjustments and adaptations to make the test more suitable for children, the NSRT could be used in future research on DLD in deaf and signing children.

As a next step in that direction, the purpose of the present study is to (re)analyse the 40 nonsense signs of the NSRT of Klomp (2015) in a specific phonological model of NGT: the Dependency Model. Then, the relative complexity of the new phonological descriptions will be related to the scores of (a part of) Klomp's participants, to see whether the stimuli need to be adjusted, for example by adding new signs or recategorizing the current set.


4. Methodology

This study is divided into two parts. In the first part, I (re)analysed the 40 non-signs of the NSRT developed by Klomp (2015) by describing their phonological features. This description is based on a phonological model called the Dependency Model (van der Kooij 2002). This model forms a crucial part of the methodology of this study and will therefore be introduced and discussed in section 4.1. Then, in section 4.2, I will discuss my analysis procedure for the non-signs using the Dependency Model, as well as my method for assessing the complexity of these non-signs.

In her original study, Klomp (2015) tested 10 CODAs on the NSRT. Afterwards, she recruited new groups to participate in the same test. In the second part of this study, I reanalysed the imitations of eight Deaf participants from one of these new participant groups. The imitations were reanalysed because I used a slightly different scoring method than Klomp. The characteristics of the participant group as well as my scoring method will be discussed in section 4.3.

4.1. The Dependency Model

Various phonological models have been proposed to describe the different parameters of a sign and how they relate to each other. Each model has its own strengths and weaknesses, and so far no consensus has been reached about which model fits best for the phonological description of signs. For this study, I will focus on the Dependency Model developed by van der Kooij (2002), because this model is based on NGT and has previously been applied extensively to this language, which is also the sign language of focus in this study. This should not be taken to imply that other models could not be used for the same goal. Other models that have been developed in the literature are the Hand-Tier model by Sandler (1989; see also Sandler & Lillo-Martin 2006) and the Prosodic model by Brentari (1998). I will not discuss these models in further detail here; the interested reader is encouraged to consult the stated references for more detailed descriptions of the two models.

In section 4.1.1, I will start with an introduction of the Dependency Model and its theoretical assumptions based on van der Kooij (2002). Then, in section 4.1.2, I will discuss the features and structure of the model in more detail. Lastly, in section 4.1.3, I will explain the notion of phonetic implementation rules, which are a crucial part of the model.

4.1.1. The theoretical assumptions of the Dependency Model

Figure 14 shows the tree structure of the Dependency Model together with all the features that occur in the model. The (end) nodes are represented in black; the features are represented in light grey. The three nodes that are represented in italics (dynamic orientation, finger configuration and setting) are dynamic and represent the movement aspects of a sign (see also below; Demey & van der Kooij 2008; see also Channon & van der Hulst for a discussion of dynamic nodes).

Following the principles of Dependency Phonology (cf. Ewen 1995), the compositional structures of phonological representations are binary, and these binary structures are 'headed' (van der Kooij 2002; Demey & van der Kooij 2008). Heads express perceptually salient, invariant and stable information that cannot spread independently without their dependents. Finger Selection, for instance, is the head of Finger Configuration. This way, the structure of the model accounts for the Selected Fingers Constraint discussed in section 2.1, which states that the selected fingers cannot change during the articulation of a sign. The Finger Configuration is dependent on the Selected Fingers, both phonologically and physiologically (see section 2.1).

Both heads and dependents can have branching structures (see Figure 15). Branching in (invariant) head structures is interpreted as a merger of the two features, resulting in one phonetic event (e.g. [one:all] for Selected Fingers; see Figure 15a). In contrast, branching in dependent structures is interpreted sequentially, as a sequence of two phonetic states (e.g. [close] → [open] for Finger Configuration; see Figure 15b) (Demey & van der Kooij 2008).

Figure 15. Branching of a head (H) structure (a); and branching of a dependent (D) structure (b) (van der Kooij 2002: 40).

Finally, in this phonological model, the features are unary. Van der Kooij (2002: 43) mentions that “unary features can be regarded as extreme versions of underspecified binary features, since the presence or absence of a unary feature also provides a binary contrast […] An important difference between binary and unary features is the fact that one cannot refer to the set of signs that is defined by the absence of a unary feature”.
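The two interpretations of branching can be summarized in a small sketch. This is an illustration of the contrast just described, not an implementation of van der Kooij's (2002) model; the function names and string notation are my own:

```python
# Unary features are simply present or absent in a representation. Branching
# is interpreted differently depending on whether the node is a head or a
# dependent.

def interpret_head(features):
    """Branching in a head node merges the features into one phonetic event."""
    return ":".join(features) if len(features) > 1 else features[0]

def interpret_dependent(features):
    """Branching in a dependent node is a sequence of two phonetic states."""
    return " -> ".join(features)

# Selected Fingers (head): [one] and [all] merge into a single handshape.
print(interpret_head(["one", "all"]))          # 'one:all'
# Aperture (dependent): [close] then [open] yields a hand-internal movement.
print(interpret_dependent(["close", "open"]))  # 'close -> open'
```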

4.1.2. Nodes and features

In this section, I will briefly discuss all the relevant nodes and features in the model. The information in this section is based on van der Kooij (2002) and van der Hulst and van der Kooij (in press).

4.1.2.1. The Active Articulator

Handshape is one of the most complex parameters of a sign language (see section 2.1). In the Dependency Model, handshape is subsumed under the 'Active Articulator' node. Handshape is not viewed as a holistic unit but consists of a Finger Selection unit and a Finger Configuration unit. The selected fingers refer to those fingers "that are 'foregrounded' (selected), as opposed to the 'backgrounded' (non-selected) ones" (van der Hulst & van der Kooij in press: 5). The Selected Fingers node can contain two features, [all] and [one], and these features can occur by themselves or in combination. When the features are combined, they enter into a head-dependency relation (see Figure 15a), which results in four possible 'handshapes'. These features can also be combined with the feature [ulnar], which allows for three extra possible handshapes (see Table 4). The position of the thumb is not accounted for in the model, since it is often predictable and can thus be described with phonetic implementation rules (see section 4.1.3). However, when the thumb is the only selected finger, it must be specified with the feature [out] (e.g. in the -handshape).

Table 4. The features of the Selected Fingers node and their interpretations (van der Hulst & van der Kooij in press: 8).

Finger Selection   [one]     [one => all]          [all => one]            [all]
                             ('one over all')      ('all over one')
(default)          Index     Index and Middle      Index, Middle & Ring    all four
Side: [ulnar]      Pinky     Index and Pinky       Middle, Ring & Pinky

The features in the Finger Configuration node specify the position of the selected fingers. The head node Aperture refers to the most frequent finger configurations and contains the features [open] and [close], which specify the relation between the selected fingers and the thumb. Again, these features can occur by themselves or in combination, but contrary to the features in the (static) Finger Selection node, the aperture features are dynamic and are interpreted sequentially (see Figure 15b). A combination of aperture features results in a hand-internal movement. So, a hand-internal movement as in the NGT sign TO-SEND would be specified with the feature [all] in the Selected Fingers node and the sequence [close] → [open] in the Aperture node. The finger joints can also be flexed, as in the C-hand or the 'claw'-hand; for these handshapes, the feature [curve] is specified. Lastly, the feature [wide] is specified when the fingers are abducted in a sign (without an aperture change), for example when the fingers express individual entities (as in the number FIVE) or express the meaning of vastness (e.g. in the NGT sign ROOM; van der Kooij 2002: 152).

4.1.2.2. Orientation

Orientation can be described in absolute and in relative terms. According to Crasborn and van der Kooij (1997: 38), absolute orientation refers to "the orientation of the hand in space, or, more informally, the direction that the sides of the hand are pointing at". However, they argue that describing a sign in terms of absolute orientation cannot capture all the handshape variations that are found. Therefore, they propose to describe signs in terms of 'relative orientation', which is "the part of the hand that points in the direction of the end of the movement (the final setting) or towards the specified location" (van der Hulst & van der Kooij in press: 11; cf. section 2.1). Put differently, the relative orientation specifies "how the hand relates to the place of articulation" (Crasborn & van der Kooij 1997: 38). The feature in the Relative Orientation node refers to the part of the hand that relates to (i.e. points to) the location, for instance, the palm side ([palm]), dorsal side ([back]), thumb side ([radial]), pinky side ([ulnar]) or wrist side ([root]) of the hand, as well as the tips of the selected fingers ([tips]; see Figure 16).

Figure 16. The different sides of the hand (Crasborn 2012: 10).

In the Dependency Model, there is only one phonological orientation change: the rotation of the forearm. Other orientation changes, such as pivoting or nodding movements, are analysed as regular setting changes (see section 4.2.3). The Dynamic Orientation node is specified for the features [prone] and [supine], and these features typically occur in pairs. A clockwise rotation of the hand is specified by a sequence of [prone] to [supine]; a counterclockwise rotation is rendered by the sequence [supine] to [prone] (van der Kooij 2002). Figure 17 shows the NGT signs for TO-KNOW and TO-RECOGNIZE and their specifications in the Relative Orientation and Dynamic Orientation nodes.

Figure 17. The representations of the NGT signs TO-KNOW (a) and TO-RECOGNIZE (b) in the Relative Orientation and Dynamic Orientation nodes.

4.1.2.3. The Passive Articulator

The Passive Articulator node branches into the (head) node Location and the (dependent) node Setting. As already discussed in section 2.2, the Location node refers to the major place of articulation or area within which the hands move, and the Setting node refers to the specific beginning and end locations within this major place of articulation (see also van der Hulst & van der Kooij in press). The Location node dominates several features, which indicate the many locations where a signer can articulate a sign: the head, the trunk, the arm, and the non-dominant hand (see below). Within the head area, further location distinctions can be made, as is shown in Figure 18. These further distinctions are made with the features [head: high] and [head: mid] which can occur by themselves or in a dependency relation. Signs that are articulated in the neutral space in front of the signer are underspecified for location (see also section 4.2.3). Van der Kooij (2002) argues that this is the most unmarked location in terms of frequency, and it is more susceptible to spatial modifications of signs (e.g. in agreeing verbs).

Figure 18. Location distinctions on the head (van der Kooij 2002: 174).

In section 2.4, the Symmetry and Dominance Conditions for two-handed signs were discussed. As for the former, van der Kooij (2002: 44) mentions that "this restriction is reflected in the current model by not having a separate node for the weak hand in mono-morphemic signs, thus implying that the weak hand can only be present in exactly the same way as the strong hand is". As for the latter, it is mentioned that the fact that the non-dominant hand cannot have a distinct movement or location is "precluded by the fact that a separate weak hand node does not exist and that the weak hand functions as a location feature, thus not allowing the weak hand to have a location of its own" (ibid.). Furthermore, the fact that the handshape of the non-dominant hand is restricted if it functions as a location for the dominant hand is reflected by the fact that the location specification [hand] is by default interpreted as a -hand. In addition, the specific location on the hand is specified as [hand: broad] (= the palm side), [hand: narrow] (= the radial side), and [hand: broad: reversed] (= the dorsal side)10. If the handshape of the passive non-dominant hand is the same as that of the dominant hand, then the feature [symmetrical] for the Manner of Articulation is also added to the representation.

The features that are dominated by the Setting node are represented in Table 5. Setting features typically occur in pairs and concern the distance to a location (proximal-distal), the height within a location (high-low), and the lateral side of the location (ipsi-contra) (van der Hulst & van der Kooij 2006). A sequence of two setting features results in a path movement. Figure 19 shows how a sign can be represented within the Passive Articulator node.

Table 5. Setting features in the Dependency Model (based on van der Hulst & van der Kooij 2006: 274).

Ipsilateral      Setting on the same side of the body as the articulating hand
Contralateral    Setting on the opposite side of the body from the articulating hand
High             Above the middle of the specified location
Low              Below the middle of the specified location
Near             Within a few inches of the place
Far              A comfortable arm's length from the place
Proximal         For the locations [hand] and [arm]: near the fingertip edge
Distal           For the locations [hand] and [arm]: near the heel (or elbow) edge or base

Figure 19. The representation of the NGT sign TIRED within the Passive Articulator node of the Dependency Model (van der Hulst & van der Kooij 2006: 274).

4.1.2.4. Manner of Articulation

The parameter movement does not constitute a separate node in the Dependency Model. Instead, it is represented by branching (dependent) nodes, such as Setting, Dynamic Orientation and Finger Configuration. As discussed in the sections above, a sequence of two setting features renders a path movement; a sequence of two dynamic orientation features results in an orientation change; and a sequence of two aperture features is interpreted as a hand-internal movement. The default interpretation of a path movement is a straight movement (as in the sign TIRED in Figure 19), but a path movement can also be circular ([circle]) or bidirectional. Furthermore, all types of movement can be repeated multiple times during the articulation of a sign.

10 Van der Kooij (2002) proposes a tripartite distinction for locations on the hand: [hand: broad], [hand: narrow], and [hand: broad: reversed]. The feature [reverse] is considered an extra feature (similar to the dependency relation in the feature [one:all]). If a location is specified with the feature [reverse], it is counted as an extra feature on top of the other features (i.e. a location specification of [hand: broad: reversed] counts as two features).
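The way movement falls out of branching dependent nodes can be sketched as follows. The dictionary encoding of a sign is my own illustrative assumption, not part of the Dependency Model's formalism:

```python
# In the Dependency Model, movement is not a separate node: it is implied by
# dependent nodes that branch (i.e. that contain a sequence of two features).

def movements(sign):
    """List the movement components implied by branching dependent nodes."""
    found = []
    if len(sign.get("setting", [])) == 2:
        found.append("path movement")
    if len(sign.get("dynamic_orientation", [])) == 2:
        found.append("orientation change")
    if len(sign.get("aperture", [])) == 2:
        found.append("hand-internal movement")
    return found

# A sign like NGT NICE (Figure 11): an aligned path plus closing movement
# (the feature values here are hypothetical).
nice = {"setting": ["high", "low"], "aperture": ["open", "close"]}
print(movements(nice))  # ['path movement', 'hand-internal movement']
```

A sign with more than one entry in this list corresponds to a 'movement cluster' in the terminology of the NSRT studies discussed in section 3.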

4.1.3. Phonetic implementation

An important theoretical assumption of the model is that all predictable and redundant information about a sign is kept out of the phonological representation. To achieve this, implementation rules are needed that associate the underlying phonological representations with a certain phonetic realization (Demey & van der Kooij 2008). There are two types of phonetic implementation rules (PIRs). First, there are default or redundancy rules that connect the phonological features to a certain phonetic realization. Default rules fill in empty (underspecified) nodes in the model. For example, if the Location node remains underspecified, a default rule renders the feature [neutral space] as the location of the sign. Additionally, default rules can supply a default phonetic realization to features that are specified. For example, the default interpretation of the feature [one] is selection of the index finger (contrary to the pinky, for which the extra feature [ulnar] needs to be specified).
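The two default rules just mentioned can be sketched as a small function. The representation format (a flat dictionary of node values) is a hypothetical simplification; the rules themselves follow the text:

```python
# Sketch of two default phonetic implementation rules: an underspecified
# Location node is realized as neutral space, and [one] selects the index
# finger unless [ulnar] is also specified.

def apply_default_rules(sign):
    filled = dict(sign)
    # Default rule 1: fill an empty Location node with neutral space.
    if "location" not in filled:
        filled["location"] = "neutral space"
    # Default rule 2: realize [one] as the index finger; with [ulnar],
    # the pinky is selected instead.
    if filled.get("selected_fingers") == "one":
        filled["realized_finger"] = "pinky" if filled.get("ulnar") else "index"
    return filled

print(apply_default_rules({"selected_fingers": "one"}))
# {'selected_fingers': 'one', 'location': 'neutral space', 'realized_finger': 'index'}
```

Read top-down, this mirrors the scheme in Figure 21: the rules operate on an underspecified representation to yield a concrete phonetic realization.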

The second type of implementation rule is the 'allophonic' rule. These rules supply an allophonic realization of a feature based on the presence or absence of another phonological feature (Demey & van der Kooij 2008). For example, a bent-finger handshape can be considered an allophone of the flat handshape if the fingertips are oriented towards a part of the head or the body. In other words, if a location feature for the head or body and the relative orientation [tips] are part of the phonological representation, the feature [all] is realized with bent fingers. An example of this phonetic implementation rule is displayed in Figure 20. The sign MADAM from Flemish Sign Language (Vlaamse Gebarentaal; VGT) is specified for the relative orientation [radial], since this is the side of the hand that contacts the body. In this sign, the fingers take the default extended position (Figure 20a). In contrast, in the sign SIR, it is the tips of the fingers that contact the head, and this results in the bending of the fingers (Figure 20b).

Figure 21 summarizes the association of the phonological representation with the phonetic realization, mediated by the PIRs. If we read the scheme top-down, the PIRs operate on the underlying phonological representation to realize a certain phonetic articulation. Conversely, if we read the scheme bottom-up, it represents an abstraction from the phonetic realization to an underlying phonological representation.
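The conditional nature of an allophonic rule can likewise be sketched in code. The sketch below models the MADAM/SIR contrast described above; the dictionary representation and value names are illustrative assumptions, not the model's formal notation.

```python
# Illustrative sketch of an allophonic implementation rule: the selected
# fingers of a sign specified [all] are realized as bent when the
# fingertips are oriented towards the head or body, and as extended
# (the default) otherwise.

def realize_finger_position(representation):
    """Return the phonetic finger position for a sign specified [all]:
    'bent' if the tips contact the head/body, otherwise 'extended'."""
    at_body = representation.get("Location") in {"head", "body"}
    tips = representation.get("Relative orientation") == "[tips]"
    return "bent" if at_body and tips else "extended"

madam = {"Location": "body", "Relative orientation": "[radial]"}
sir = {"Location": "head", "Relative orientation": "[tips]"}
print(realize_finger_position(madam))  # extended (default realization)
print(realize_finger_position(sir))    # bent (allophonic realization)
```

The key point the sketch captures is that the allophonic realization is triggered by the *combination* of two phonological features (a head/body location plus [tips]), not by any feature on its own.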


Figure 20. The VGT sign for MADAM (a) with the relative orientation [radial], and the VGT sign for SIR (b) with the relative orientation [tips] (Demey & van der Kooij 2008: 1119).

Figure 21. The levels of the phonology-phonetics interface (Demey & van der Kooij 2008: 1122).


4.2. Methodology part I: non-sign analysis

In this section I will briefly discuss the first part of my study. In section 4.2.1 I will start with a short description of the materials that were used in this part of the study. These materials were developed by Klomp (2015). Next, in section 4.2.2, I will describe the analysis procedure for the non-signs, using the phonological features of the Dependency Model (see section 4.1). Lastly, in section 4.2.3, I will discuss the different levels of sign complexity in the Dependency Model, and how the complexity of the non-signs was assessed in this study.

4.2.1. Materials

For the first part of this study, I analysed the 40 nonsense signs of the NSRT of Klomp (2015). In her study, the 40 nonsense signs are divided into 36 target signs and 4 practice signs, drawn from an initial selection of 90 non-signs. A Deaf signer checked and confirmed that the signs were phonotactically well-formed in NGT. Half of the signs are two-handed. All the signs were recorded with a neutral facial expression and without a mouthing, because it is difficult for the signer who produced the stimuli to produce mouthings for nonsense signs (see Orfanidou et al. 2009). The stimuli were articulated by a native female Deaf signer, whose appearance remained the same across all video clips. The clips were standardized to an average duration of 2 seconds.

4.2.2. Analysis procedure

All 36 non-signs of the NSRT of Klomp (2015), as well as the four practice signs, were analysed within the Dependency Model, resulting in a total of 40 signs. For the analysis, a table as in Table 6 was created for every sign, which contains all the nodes of the model. If a certain feature needs to be specified for a sign, it is represented in the column to the right of the relevant node. In contrast, if a feature is not specified, this column is left empty. An example of a non-sign and its representation is shown in Figure 22 and Table 6.
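The per-sign tables lend themselves to a simple counting procedure: since complexity is assessed in terms of how many phonological features a sign's representation requires, a sign's score can be read off its table. The sketch below is an illustrative reconstruction of that idea; the node names and feature values are hypothetical examples, and the treatment of [reverse] as one extra specification follows the earlier remark on that feature.

```python
# Illustrative sketch: the complexity of a (non-)sign as the number of
# phonological features specified in its Dependency Model representation.
# Each table maps a node of the model to the list of features specified
# for it; an empty list means the node is underspecified (and will be
# filled in by default implementation rules at the phonetic level).

def complexity(table):
    """Count every specified feature; underspecified nodes contribute
    nothing to the score."""
    return sum(len(features) for features in table.values())

non_sign = {
    "Finger selection": ["[one]"],
    "Finger configuration": [],                 # underspecified: default
    "Location": ["[hand:broad]", "[reverse]"],  # [reverse] adds one feature
    "Movement": ["[circle]"],
}
print(complexity(non_sign))  # 4 specified features
```

Signs can then be grouped into complexity levels by their scores, which is the grouping into five levels reported for the 40 non-signs in the abstract of this study.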
