
Patterns of Handshape in the Acquisition of Classifier Constructions in Turkish Sign Language (TID)

Veerle Schoon
University of Amsterdam

Author note

Veerle M. Schoon, Amsterdam Centre for Language and Communication, University of Amsterdam

This paper was written as part of the completion of the Research Master's program in Linguistics at the University of Amsterdam. Supervised by Dr. Beyza Sümer, this research was conducted at the Multimodal Language and Cognition lab, Centre for Language Studies, Nijmegen, using data from the NWO VICI funded project of Prof. Asli Özyürek (project no: 277-70-013). Special thanks to Beyza and Asli for giving me the opportunity to work with this rich dataset.

Correspondence concerning this thesis should be addressed to Veerle Schoon, Ryelandstraat 21, 3573SZ, Utrecht


Abstract

In sign languages, spatial events are canonically encoded in an iconic way, through classifier constructions. An accumulating body of evidence suggests a facilitating role of iconicity in acquisition. Furthermore, when lexical signs have variants with different levels of iconicity, namely object-based and action-based signs, children prefer to use the action-based signs, which have higher levels of form-meaning mapping. Despite their iconic quality, classifier constructions are not acquired until late childhood. The current study sheds light on iconicity and complexity in the acquisition of classifier constructions by considering how a possible action bias in the preceding lexical handshape influences the handshape in classifier constructions. Picture descriptions (e.g. a pen next to a cup) by deaf children acquiring Turkish Sign Language (TID) were analyzed for handshape type in lexical signs (object or action) and classifiers (entity or handling). The results show that children use significantly more action-based handshapes in lexical signs and significantly more handling classifiers, even though entity handshapes are the appropriate choice in locative expressions. Moreover, lexical handshape predicted classifier handshape, showing that assimilation takes place from the lexical handshape to the classifier. I argue that action and handling handshapes are easier for children to abstract because they have a congruent link between form and children's action/motor experience. This iconic action bias and, through assimilation, handling bias interferes with children's selection of the appropriate classifier handshape for locative expression, namely an entity classifier.


Introduction

First language acquisition of a sign language takes a course similar to that of spoken language on many levels, with major milestones at the same stages (Chen Pichler, 2012). For instance, young deaf children learning sign language show manual babbling (Petitto & Marentette, 1991). Some research even reports that, in lexical development, signing children reach the milestone of first signs a little earlier than hearing children produce their first words (Bonvillian, Orlansky & Folven, 1990). These similarities in acquisition can be attributed to the language universals that all spoken and sign languages display, which show that sign languages are natural, fully fledged languages that are fully expressive and linguistically complex (see Sandler & Lillo-Martin, 2006, for a review). For instance, sign languages show duality of patterning, where meaningless units of handshape, place of articulation and movement combine to make meaningful signs (Stokoe, 1960).

However, sign languages may also differ from spoken languages on various levels of linguistic structure. These differences can be attributed to the visuo-spatial modality, which has an effect on the structures that sign languages may have (Pfau, Steinbach & Woll, 2012). In recent years, research on sign language acquisition has steered towards exploring modality effects on acquisition, i.e. differences in sign language grammar and structure, due to the language being in the visuo-spatial modality, that affect sign language development (Chen Pichler, 2012). One structure where sign language acquisition seems to take a different path from spoken language acquisition is the classifier construction, which is also highly iconic, i.e. shows many resemblances between the language form and its meaning. Sign languages are particularly enriched with iconic forms, as the visuo-spatial modality lends itself well to producing language forms that have a non-arbitrary link between visual form and meaning. Iconicity is also present in spoken languages, however, for instance in the form of onomatopoeia or sound symbolism, spoken language forms that imitate sounds from the environment (Toda, Fogel & Kawai, 1990; Yoshida, 2012; Laing, 2019). Moreover, an accumulating body of evidence suggests a facilitating role of iconicity in (sign) language development (e.g. Thompson et al., 2012).

The current paper focuses on classifier constructions in the first language acquisition of Turkish Sign Language (Türk İşaret Dili, henceforth TID). Classifier constructions in sign languages are highly iconic, yet morphologically complex. They are polymorphemic verbal complexes whose morphemic elements simultaneously express salient characteristics of referents' location, motion or action via the manual articulators (i.e. the hands) (Supalla, 1982; Zwitserlood, 2012). Even though classifier constructions are highly iconic, several studies point out that they are not mastered until late childhood (e.g. Kantor, 1980; Supalla, 1982; Slobin et al., 2003; Tang et al., 2007).

This relatively late acquisition has been suggested to have several causes. Firstly, children must be able to abstract the correct handshape for the semantic category of the referent. In American Sign Language (ASL), for instance, pens require a different handshape than books do, because they fall into different semantic categories in terms of their shape: pens require an extended index finger, while books require a flat handshape (Kegl, 2003). The categories may also be based on semantic properties such as animacy (Kantor, 1980). Next, children must be able to coordinate the hands with different handshapes simultaneously, to express how referents relate to each other in space, or move in certain directions (Slobin et al., 2003). Lastly, in sign language, locative expressions (e.g. a box on a table) require different handshape types, referred to as entity handshapes, than expressions that have a (human) agent acting upon referents (e.g. a man carrying a box), which require handling handshapes. Differentiating handshape according to event type appears difficult for children (Schick, 1990; Brentari et al., 2013).

The current study concentrates on classifier handshape only, and considers factors that may influence the use of different handshape types by children. Lexical signs may have multiple variants in a sign language, i.e. two different lexical signs for the same object that are used interchangeably by adult signers. For example, the sign for TOOTHBRUSH in Turkish Sign Language (TID) in Figure 1 has two variants with different handshapes: one where the hand represents the toothbrush itself, and one where the hand represents using or holding the toothbrush. When lexical signs have variants, children prefer to use variants where the handshape is action-based (i.e. the hand represents acting upon the referent) over handshapes that are perceptual-based (i.e. the hand represents perceptual features of the object). This is because the form of action handshapes, where the hand in the sign represents the hand using the object, is easier to abstract and process. As children are in the process of learning a sign language, mapping language forms to their action and motor experience, they thus display an action bias (Ortega, Sumer & Ozyurek, 2016).

It is currently unknown whether action bias surfaces in other parts of sign language grammar; its relation to classifiers in particular has not been studied to date, as classifiers are rather complex structures that are acquired relatively late. The current study therefore investigates the influence of lexical handshapes, which may be action- or object-based, on the handshapes used in classifier constructions, which may be entity or handling.

1 Background

1.1 Characteristics of Classifier Constructions

Classifiers, also referred to as 'depicting handshapes' (Zwitserlood, 2012:158), are morphemic elements, in the form of handshapes, that are observed in almost all sign languages (Zwitserlood, 2012).1 Moreover, these classifier handshapes are used in polymorphemic classifier constructions, which are considered to make sign languages typologically distinct from other languages. This typological distinction has been made because sign language is expressed with multiple articulators: the two hands, the face and the body. This allows morphemic elements such as classifiers to be expressed in classifier constructions in a simultaneous manner rather than a sequential one (Engberg-Pedersen, 1993). Classifiers are expressed by particular configurations, or handshapes, of the manual articulator (i.e. the hands), and represent entities by denoting salient characteristics of those entities. Classifier constructions are the verbal complexes in which classifiers function as morphological elements. They are composed as follows: first, the signer introduces the referents under discussion with lexical signs (stills a. and b. in Figure 2). Second, the signer uses classifiers to represent the referents' location or motion, or to represent an agent (previously introduced with a lexical sign) holding or handling an object. An example of a classifier construction is given in Figure 2, where the signer first introduces the referents (the cup and the lollipop) and then localizes them in the sign space to indicate the spatial relation of the two objects. Locative expression is one of the functions of classifier constructions, and as can be seen in the example, the referents and their spatial relation can be expressed in a simultaneous way. Note that for locative expressions there is a fixed order in presenting the referents. First, the Ground object is introduced with a lexical sign; this is the bigger, more backgrounded object. Next, the Figure object is introduced, which is the smaller, more foregrounded object. The spatial relation is always expressed with the Figure in relation to the Ground (Talmy, 2003).

1 One exception of a sign language without classifiers is Adamorobe Sign Language from Ghana, as reported by Nyst (2007), where the canonical way to depict motion of referents is through a general direction sign and a lexical sign.

Figure 2. An example of a classifier construction with an entity classifier in Turkish Sign Language (TID). First, the referents are introduced with lexical signs: the signer first signs the lexical sign for the Ground object, in this case the cup (a. CUP, 'there is a cup'), and next the Figure object, the lollipop (b. LOLLIPOP, 'there is a lollipop'). Next, classifiers are used to indicate how the two objects are located in relation to each other (c. RH: CL:cylinder-shaped entity, LH: CL:thin, long entity, 'the lollipop is next to the cup').

Classifiers may be used to express a multitude of meanings in classifier constructions. However, there is an ongoing debate in the literature on the categorization of different classifiers, in terms of their function and structure. Classifiers have been categorized in a variety of ways by a variety of sign language researchers (e.g. Supalla, 1982; see Zwitserlood, 2012 for a review). This is partly because different sign languages have different classifier inventories, i.e. they each employ different semantic categories to classify different referents. For instance, ASL employs a specific handshape for vehicles, with the thumb, index finger and middle finger extended, while other sign languages may not have a handshape for this specific semantic category (Kantor, 1980). However, not much cross-linguistic work has been carried out so far. The current paper therefore assumes that sign languages are sufficiently similar in structure for classifiers to be treated as structured similarly across them.


Moreover, even though there is varying nomenclature and overlap between categories of classifiers in the existing literature, two main semantic, but mostly structural, categories seem to surface from studies on a variety of sign languages (Zwitserlood, 2012). The first is referred to as the '(Whole) Entity Classifier' and consists of classifiers whose handshape denotes a particular semantic or shape feature of the referent. The hand is configured in a manner that bears some perceptual resemblance to the referent, e.g. an outstretched index finger to represent the long, thin shape of a pen. The second category consists of so-called 'Handling Classifiers', which are used to represent entities that are being held, moved or acted upon, usually (but not exclusively) by a human agent. The handshape in handling classifiers is configured as if one were holding or moving the referent (Zwitserlood, 2012).3 Please refer back to Figure 2 for an example of a classifier construction with an entity classifier, and see Figure 3 for a classifier construction with a handling classifier. In Figure 3, the signer again first introduces the referents under discussion, the man (the agent) and the box. Next she uses a handling classifier to indicate that the man is carrying the box. Handling classifiers require the handshape to correspond to how the object is held or acted upon; in other words, a really small object requires a different handshape, and would perhaps only require one hand, while a really big object, such as the box in Figure 3, requires yet another handshape.

3 The literature also distinguishes other, more debated categories, such as size and shape specifiers, or body classifiers (e.g. Supalla, 1982; 1986), but the current paper focuses on entity and handling classifiers only.


The distinction between handling and entity classifiers is based not only on semantics, but also on how the two function in the grammar, in terms of the verbs that they combine with in classifier constructions. Classifiers most often occur in combination with verbs that denote a referent's motion through space, its location in space, or the handling of referents. Here the structural distinction becomes visible: entity classifiers are used in constructions that express location events (sometimes in combination with a motion from one location to another), while handling classifiers are used in constructions that express a (human) agent holding or moving an object; these require different verb types (Benedicto & Brentari, 2004; Zwitserlood, 2012).

Figure 3. An example of a classifier construction with a handling classifier in Turkish Sign Language (TID). First the signer introduces the referents, the man (agent; a. MAN) and the box (b. BOX). Next, the signer uses a handling classifier to indicate the man carrying the box (c. CL: handle entity). Note that the handshape corresponds to the shape of the box; if the box were really small, this would have required a handshape with the thumb and index finger closer together. (Sumer, 2015:16)


As elaborated above, classifiers are morphologically complex; learning the structural and semantic difference between the two types of classifiers might therefore be what causes the delay in the acquisition of classifier constructions. However, as classifiers are complex on levels beyond the distinction between entity and handling classifiers, the cause of the delay should be further explored through previous studies on classifier acquisition. The next section outlines previous studies on the acquisition of classifiers in more detail, sheds light on the reasons for the delay in acquisition, and explains why the acquisition of classifier constructions should be revisited.

1.2 Acquisition of Classifier Constructions

The complex polymorphemic structure and relatively high iconicity of classifier constructions have made their acquisition an important research interest for many years. Before the previous literature on classifier acquisition is discussed, it is important to elaborate on iconicity, as classifiers are highly iconic and iconicity plays a role in much of the sign language acquisition literature. Many previous studies have shown that iconicity, i.e. a non-arbitrary link between language form and meaning, facilitates acquisition. For instance, in spoken language, iconic co-speech gestures facilitate spoken word learning (Stefanini, 2009) and are used to increase action-meaning repertoires in verbs (Özçalişkan, Gentner, & Goldin-Meadow, 2014). Furthermore, onomatopoeia (i.e. sound symbolism, or linguistic forms that imitate sounds from the environment) makes up a considerable part of children's early vocabulary and is suggested to bring advantages in early perception, production and interaction (Laing, 2019). In sign language, iconic forms are abundant, and the relation between sign language acquisition and iconicity has therefore been an important topic in the literature. For example, Thompson and colleagues (2012) show that iconic forms are easier to learn and process, and furthermore are comprehended and produced in the sign language vocabulary at earlier stages of acquisition. Note that children's general cognitive development also plays a role in the recognition of iconic signs; iconicity only gives them an advantage once they are able to abstract the meaning from the iconic form (Tolar et al., 2007). Nevertheless, iconicity facilitates referential mapping, i.e. creating referential links between form and meaning. Iconicity also facilitates language processing: language processing is embedded in the motor and sensory systems of the brain, and as iconicity is a congruent link between language forms and their meaning, it eases processing (Perniss & Vigliocco, 2014).

However, despite the high iconicity of classifier constructions, and the fact that iconicity has been shown to facilitate processing and acquisition, evidence has shown that classifier constructions are not acquired until late childhood. Earlier studies on classifier acquisition focused on the semantic aspects of classifiers, namely the handshape selection for the semantic categories that classifiers create (e.g. in ASL, vehicles require different entity classifiers than people do). For instance, Kantor (1980) focused on production and comprehension of entity classifiers only, and found that children as young as 3 years old recognized the syntactic and discourse environments requiring classifier usage, and never used inappropriate handshapes for their referent domains. Still, late mastery of classifier constructions is reported, and it seems the difficulty lies more in the morphological complexity of classifier constructions (Schick, 1990). For example, Slobin and colleagues (2003) find in their naturalistic dataset that pre-school children acquiring ASL or Sign Language of the Netherlands (Nederlandse Gebarentaal, NGT) use both entity and handling handshapes at an early age (around 3 years), but "… even 12 year-olds do not use the entire set of options in an adult-like fashion" (Slobin et al., 2003:287). As they used naturalistic data, it appeared that children had difficulty especially in those contexts that required many different syntactic and morphological elements and longer stretches of signing (Slobin et al., 2003).

The types of morphemes that pose difficulty were further investigated by Schick (1990), using elicitation tasks to investigate handling and entity handshapes, as well as agentive and non-agentive contexts. She found that handling handshapes were produced with the least accuracy, and explained this result by the fact that entity handshapes do not require further modification of the handshape to correspond to the physical dimensions of each object, while handling handshapes do: a handling handshape for a pen is different from a handling handshape for a bottle, because their dimensions are different. In contrast to handshape production, it was easier to produce the loci, the locations in space corresponding to the objects, for verb agreement in an agentive context than for the expression of spatial relations (Schick, 1990).

Thus, the literature is inconclusive on what exactly causes the delay in the acquisition of classifiers, and on which of the handshape types are acquired earlier. The main gist seems to be that multiple factors are involved, such as coordinating both handshapes and their movement, and choosing the correct handshape for the event type (agent vs. non-agent). Moreover, as each study focuses on different elements of the classifier construction, narrowing down the exact cause of the delay remains difficult, and these studies therefore entertain opposing conclusions on the development of said elements (Kantor, 1980; Schick, 1990; Slobin et al., 2003).

However, the aforementioned studies all focused on the classifiers themselves, and not so much on the lexical signs preceding the classifier predicate, even though these are part of the construction and important to include when investigating the acquisition of classifiers. As Brentari and colleagues (2013) mention, in ASL handshape works differently in nouns (i.e. lexical signs) than in verbs (i.e. classifier predicates). In ASL and other sign languages, as explained in the previous section, the handshape of lexical signs does not change according to grammatical context (i.e. agentive or non-agentive context), while the handshape of classifier predicates does: an agentive context requires a handling handshape, while a non-agentive context requires an entity handshape. Distinguishing these handshapes thus requires an understanding of the word-class distinctions of the language, which could prove difficult for children, and could be a reason for the late acquisition of these two classifier types. Brentari and colleagues (2013) investigated exactly this: whether children were able to use handshape appropriately according to grammatical context, taking into account the lexical handshapes that are required to introduce referents in a classifier construction. They looked at elicited productions of nine children, and used stimuli that either had an object lexical handshape in ASL, such as BOOK, or an action lexical handshape, such as LOLLIPOP (see Figure 4 for a still of the citation form of BOOK and Figure 5 for a still of the citation form of LOLLIPOP in ASL). The study found that children had a tendency to overuse entity handshapes, and were therefore not adult-like in the agentive contexts (where a handling handshape was required), especially in those cases preceded by a lexical sign that had an object handshape (e.g. BOOK). They explained this result by saying that ASL has a bias towards object handshapes overall, and that a process of "borrowing" or assimilation4 of handshape may occur in the cases of lexical object handshapes and agent contexts. The explanation of assimilation does not suffice for all their results, however: it does not explain why children were able to switch from a lexical action handshape to an entity handshape in no-agent contexts.

4 Henceforth, I will refer to the process of the classifier (host) taking on the handshape of the lexical sign as assimilation. Phonological assimilation has also been defined as a phoneme changing to become similar to nearby phonemes; in sign language phonetics and phonology, however, such gradient change is defined as coarticulation, while assimilation is more categorical: one phonetic category or element of the phoneme (in this case the handshape) remains the same (Crasborn, 2012).

Figure 4. Citation form of the sign for BOOK in ASL. Notice that the hands represent the object itself (object handshape). From: Spreadthesign.com (2019)

Figure 5. Citation form of the sign for LOLLIPOP in ASL. Notice that the hand represents holding the lollipop (action handshape). From: Handspeak.com (2019)


As the results of Brentari and colleagues (2013) cannot be fully explained, it is necessary to study the matter further. Recent work has uncovered facts about children's use of handshapes in lexical signs that somewhat contrast with the work of Brentari et al. (2013). Signs may have lexical variants: adult signers of TID, for example, may sign TOOTHBRUSH with a perceptual handshape or an action handshape5 (please refer back to Figure 1 for an example); both variants are accepted and used interchangeably by adult signers. Moreover, the incidence of different sign variants for the same concept has become more evident with the emergence of corpus-based databases of sign languages across the world (Ortega, Sumer & Ozyurek, 2016). Considering the existence of object-based and action-based variants in TID, Ortega and colleagues (2016) looked at how children dealt with this variation in the input, and hypothesized that children would actually prefer to use action-based signs, considering that speaking children use action-based co-speech gestures accompanying words for different objects (Stefanini et al., 2009), and that learning is easier when forms overlap with earlier motor experience (Yu, Smith, & Pereira, 2008). Their hypothesis was confirmed: children seem to prefer action-based lexical handshapes over object-based lexical handshapes, at least in cases where both variants are accepted lexical signs among adult signers. Ortega et al. (2016) explain that this action bias may be related to children's motor experience, in line with the notion of iconicity as structure mapping, which states that the more overlap exists between a phonological and a conceptual representation, the more this eases processing and acquisition (Emmorey, 2014). This would make action-based signs easier to abstract and produce, as they are more closely related to how children may have interacted with the referent objects in real life; in the sign, the hand represents the hand acting upon the object, rather than the object itself. It is important to stress here that Ortega and colleagues (2016) consider different levels of iconicity: action-based variants are more iconic, as they represent a more congruent link between the linguistic form and the real world. The current paper also considers action-based lexical variants to be more iconic.

5 Brentari et al. (2013) use the term object-based for lexical signs where the hand represents the object itself. Ortega et al. (2016) use the term perceptual-based for lexical signs where the hand represents perceptual features of the object. I consider these definitions to be equivalent in the current study, and therefore use the terms object-based and perceptual-based interchangeably from here on.

If action-based signs are easier to acquire, this contradicts what Schick (1990) found: she concludes that children acquire entity handshapes before handling handshapes. Moreover, children's action bias as shown by Ortega et al. (2016) stands in contrast with what Brentari et al. (2013) found in classifiers, namely that children over-produced entity handshapes. With this discrepancy in mind, it is important to revisit the acquisition of classifier handshapes, considering that (a) some signs have lexical variants with different levels of iconicity and (b) children have a bias for action-based lexical signs, which may influence the handshape used in classifiers; a reinvestigation might shed light on action bias and iconicity in relation to other parts of children's sign language grammar. Classifier constructions lend themselves well to investigating iconicity, more specifically action bias, in relation to the complexity of certain sign structures.


2 Present Study

Here we investigate the acquisition of classifier constructions in Turkish Sign Language. Classifier constructions are polymorphemic verbal complexes that are reported to be acquired quite late (some studies report an age of mastery of up to 13 years; Slobin et al., 2003). In the acquisition of classifiers, it appears that children have difficulty choosing appropriate handshapes for different contexts (Kantor, 1980; Schick, 1990). However, these studies did not take into account the possible influence of the preceding lexical handshape, which can be either object-based or action-based. Brentari and colleagues (2013) were the first to investigate the influence of the preceding lexical handshape, and found that children over-used entity handshapes in agent contexts (i.e. not in an adult-like fashion). They attributed this to a process of "borrowing" or assimilation, where the handshape of the preceding lexical sign is borrowed into the classifier predicate. Moreover, it was recently shown that when a sign has two lexical variants (one action-based and one object-based), children prefer to use the action-based variant (Ortega et al., 2016), which, if this action bias is present in other parts of sign language grammar, would actually predict over-use of handling handshapes.

On a more methodological note, Brentari et al. (2013) showed that children appear to have difficulty in agent contexts, and explained that this may be because one can always describe the end-state of an agent context, which does not require the use of a handling classifier. To prevent such a misinterpretation of the (static) stimuli, the current study controls for event type, or context, by focusing only on classifiers in locative expressions. In that case, the only appropriate handshapes to use in the classifiers are entity handshapes. The present study also considers the influence of the preceding lexical handshape on the classifier handshape. Moreover, as children have an action bias in lexical signs, this may be the case for classifier handshapes as well, because of iconicity as structure mapping (Emmorey, 2014), which posits that a congruent link between form and experience increases the learnability and processing of those forms. However, as the complexity of classifier predicates is quite high, children may also assimilate the preceding lexical handshape into the classifier construction. The relationship between lexical handshape, when a sign has lexical variants, and classifier handshape has not been studied yet. Our research question can thus be formulated as follows:

How does handshape in lexical signs (object vs. action) modulate the acquisition of classifier handshapes (entity vs. handling) in sign language?

The outcome of our study could be predicted in three ways. First, it could be the case that children already master classifier handshape in locative expressions at this age (cf. Brentari et al., 2013). This would mean that children use entity handshapes regardless of the preceding lexical handshape. Second, it could be the case that children have an action bias in classifiers as well; they would then use more handling handshapes overall, and the preceding lexical handshape would not be a predictor of the classifier handshape. The third possible outcome is that children assimilate the lexical handshape into the classifier, i.e. lexical handshape (action or object) would be a predictor of classifier handshape (handling or entity).


3 Method

3.1 Summary of the Current Methodology

The current data come from a larger dataset intended to investigate spatial descriptions by child and adult deaf signers of TID, collected as part of an NWO VICI funded project (PI: Ozyurek, A.; project no: 277-70-013). In the task, participants were asked to describe a picture with two objects next to each other. For this analysis, we focused on the Figure (i.e. smaller, foregrounded) objects only, and selected six lexical items that had either an object-based lexical variant only (n=2: CORN and RAZOR), an action-based variant only (n=2: LOLLIPOP and SPOON), or both variants (n=2: TOOTHBRUSH and PEN); see Figure 6 for video stills of the citation forms of the signs for these items. The lexical variants for each item were confirmed by descriptive adult data on the same task and by the TID corpus dictionary (Makaroğlu & Dikyuva, 2017). The exact procedure and materials for this study are elaborated in the Procedure section below.


Figure 6. The lexical signs for the objects selected for analysis in this study: a. CORN, b. LH: RAZOR, c. LOLLIPOP, d. RH: SPOON, e. PEN (object-based variant) and LH: PEN (action-based variant). a) and b) are object-based signs, c) and d) are action-based, and e) has both variants. The variants for TOOTHBRUSH were exemplified earlier in Figure 1.


3.2 Participants

Fourteen Deaf, native-signing, school-age girls (n=7) and boys (n=7) between the ages of 7 and 9 (mean age = 8.26 yrs, SD = 1.12 yrs), from Istanbul, Turkey, were recruited for this study. All participating children attended a Deaf school in Istanbul and had lived in Istanbul all their lives. The children's parents were also Deaf, and were native or late signers of TID. The next section provides some more background on TID.

3.3 Turkish Sign Language (TID)

The use of sign language in Turkey dates back at least to the Ottoman Empire (Miles, 2000). However, there is no evidence that the sign language used in the Ottoman court of the sultan is in any way related to the modern Turkish Sign Language (TID) that is the language of the Turkish Deaf community today (Miles, 2000). Even though the deaf community has grown to around 3 million deaf people (Ilkbasaran, 2013), TID has presently not been recognized as an official language in Turkey, and it is not (yet) used as a language of instruction in deaf schools. Education in deaf schools has mostly been through oral methods to date, with teachers sporadically using TID signs to support their teaching. Students have been observed to use TID amongst themselves, however (Sumer, 2015).


3.4 Procedure

The participants were instructed to describe the spatial relation between two objects in a picture shown on a laptop screen. The screen was divided into four sections, each displaying a picture with the same two objects (e.g. a lollipop and a glass) in contrasting spatial relationships (e.g. the lollipop next to, on, behind or in the glass; see Figure 7a for an example of a stimulus display), with an arrow indicating the target picture. The target pictures of this dataset included different types of spatial relations, but for this study we selected only the descriptions of objects next to each other. The children were divided into two groups, so each item was given as a stimulus seven times.


The task was self-paced: participants could press a key to display the pictures. They described the target picture, to which the arrow was pointing, to a deaf confederate who had a booklet with the same four pictures, and the confederate would point to the picture that the participant was describing. The choice of a confederate (rather than another participant or the parent of the child, for example) was made on purpose, to prevent any elicited feedback or priming from the interlocutor during the experiment. As the objects were the same in all four pictures, the communicative exchange focused on the right type of spatial relation, so as to elicit classifier constructions. The whole task was video-recorded from two angles, to ensure that every part of the signing was captured. The table was set low enough, below the waist, so that the upper body and hands were visible and the participants had enough space to sign (see Figure 7b for a schematic representation of the task set-up).

Figure 7. a. Example of a stimulus display, with the arrow pointing to the target picture: 'lollipop next to the cup'. b. Experiment set-up: the child describes the picture to which the arrow is pointing on the computer screen, to a confederate who points to the same picture in the booklet. The table was placed low enough (below the waist) so that the participant was free to move their arms and had enough space to sign.

3.5 Coding

Each description was transcribed in ELAN (Wittenburg et al., 2006) by a hearing Turkish research assistant with considerable experience with TID, and all transcriptions were checked by a Deaf TID user. The transcriptions were then categorized into lexical signs and their corresponding classifier predicates. A minimal criterion for the trials was that a participant should at least produce a lexical sign to introduce the Figure, and a localization in space as the classifier predicate; any trials that did not have both of these elements were excluded from further analysis (i.e. Ground omissions were allowed). For examples of constructions that were included and others that were excluded based on these criteria, please refer to Figure 8. For example, the boy in Figure 8a did not introduce the Figure object (i.e. the lollipop) with a lexical sign, nor even locate it in the sign space; Figure omissions were not permitted, so this trial was excluded. The girl in Figure 8b did not introduce the target Figure object (i.e. the razor) with a lexical sign, and thus this trial was excluded as well.6 In Figure 8c, the girl appropriately introduces the Figure object (i.e. the corn) with a lexical sign, and then goes on to localize it in the space in relation to the Ground object; this trial was included even though the girl did not introduce the Ground object with a lexical sign. Ground omissions were allowed because the current study focuses on Figure objects only. Moreover, Ground omissions have been shown to occur relatively often in data of children acquiring classifier constructions (e.g. Morgan, Herman, Barriere & Woll, 2008; Sumer, 2015b).

6 Attentive readers may have noticed that the girl in Figure 8b has her right hand near her cheek, which could indicate that she was signing RAZOR some frames before; the video, however, shows that the girl was scratching her nose.


Figure 8. Examples of excluded and included trials.
a. RH: CUP ('there is a cup'). Excluded: the child does not introduce the target Figure item (in this case the lollipop) or where it is located in relation to the cup, but only signs CUP and goes on to the next item.
b. LH: CL: cylinder entity ---hold---, RH: CL: entity ('there is a cup', 'there is a long thin entity next to the cup'). Excluded: the child does not introduce the Figure object (i.e. the razor) with a lexical sign.
c. LH: CORN (2h), CL: cylinder-shaped entity; RH: CORN (2h), CL: entity ---hold--- ('corn', 'the corn is here', 'a cylinder-shaped entity is next to the corn'). Included: even though the child does not introduce the Ground object with a lexical sign, and the order of signs is non-canonical, the current study focuses on the handshapes of the Figure object and of the classifier for the Figure (in this case the corn), which she introduces with a lexical sign (action-based) and localizes in relation to the Ground object with an entity classifier.


For each lexical sign, it was decided whether the sign was perceptual-based or action-based, coded in a binary way: if the handshape of the lexical sign represented the object itself (e.g. an outstretched index finger to indicate the long, thin shape of a pen), it was coded as object-based; if the handshape represented holding, handling or using the object, it was coded as action-based. A similar method and similar criteria were used to code the classifiers: if the classifier handshape represented the object itself, it was coded as an entity classifier; if it represented holding or handling the object under discussion, it was coded as a handling classifier. Any double encodings, i.e. the use of both types of lexical sign or classifier in the same construction, were given a special label, but no double encodings were found in the current dataset.
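To make the binary coding scheme concrete, here is a minimal sketch of what the coded data could look like in R; the column names, participant codes and example rows are hypothetical illustrations, not taken from the original coding files.

    # Hypothetical shape of the coded data after export from the ELAN
    # transcripts: one row per included trial.
    coded <- data.frame(
      participant = c("p01", "p01", "p02"),
      item        = c("PEN", "CORN", "TOOTHBRUSH"),
      lexical     = c("action", "object", "action"),    # object- vs action-based
      classifier  = c("handling", "entity", "handling") # entity vs handling
    )
    # Double encodings (both handshape types within one construction) would
    # receive a separate label, but none occurred in this dataset.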

4 Results

A total of 84 picture descriptions were analyzed, but only 33 remained usable for the statistical analysis; the rest had to be excluded because they did not meet the criteria mentioned in the Method section above. The proportions of action- and perceptual-based lexical handshapes were calculated by dividing the count of each category by the total number of classifier constructions (i.e. lexical sign for the Figure object + localization) produced. The same was done for the handling and entity classifier ratio. These ratios are illustrated in Figure 9, which shows that the ratio of object vs. action handshapes for lexical signs is very similar to the ratio of entity vs. handling classifiers. Note that the data for all items have been collapsed, because of the small number of items that could be included in the analysis. The similarity between the ratios of lexical sign handshapes and classifier handshapes could indicate support for our third hypothesis, namely that children assimilate the lexical handshape into the classifier construction. To test this hypothesis, the statistical program R (R Core Team, 2018) was used for generalized logistic regression modelling with the glm function (see also package lme4, version 1.1-15; Bates et al., 2017) to see whether lexical handshape predicted classifier handshape. Classifier handshape was the dependent variable, while lexical handshape was the fixed effect. Both were coded with orthogonal contrasts (action/handling = 0.5 and perceptual/entity = -0.5). Due to the many incomplete trials, the number of usable data points was too small (n=33) to include random effects in the model. However, lexical handshape did predict classifier handshape (p<0.005; odds ratio exp(b) = 168). Table 1 shows the summary of the model from R; note that the p value for the fixed effect of lexical type is significant.
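As a sketch of how such an analysis can be run in R (with simulated counts standing in for the 33 coded trials; the variable names and cell counts are illustrative, not the original analysis script), including the chi-squared sanity check reported below:

    # Simulated stand-in for the coded trials: mostly congruent
    # lexical/classifier handshape pairs, as in the observed data.
    dat <- data.frame(
      lexical    = rep(c("object", "object", "action", "action"),
                       times = c(9, 1, 2, 21)),
      classifier = rep(c("entity", "handling", "entity", "handling"),
                       times = c(9, 1, 2, 21))
    )

    # Proportions of each handshape type over all classifier constructions.
    prop.table(table(dat$lexical))
    prop.table(table(dat$classifier))

    # Orthogonal contrast coding as described in the text:
    # action/handling = 0.5, perceptual(object)/entity = -0.5.
    dat$lex_c <- ifelse(dat$lexical == "action", 0.5, -0.5)
    dat$handl <- as.integer(dat$classifier == "handling")

    # Logistic regression: does lexical handshape predict classifier
    # handshape? (glm is in base R; with more data points, random effects
    # for participants and items could be added via lme4::glmer.)
    m <- glm(handl ~ lex_c, data = dat, family = binomial)
    summary(m)
    exp(coef(m)["lex_c"])  # odds ratio for the lexical handshape effect

    # Sanity check: chi-squared test on the 2x2 contingency table.
    # (With cell counts this small, R warns the approximation may be poor.)
    chisq.test(table(dat$lexical, dat$classifier))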

Table 1
Coefficients of the generalized linear model from R

                             Estimate (b)   Std. Error   z value   p (>|z|)
(Intercept)                  0.6161         0.7390       0.834     0.404479
Lex_type (object vs. action) 5.1240         1.4780       3.467     0.000527 ***

As a sanity check, a chi-squared test comparing the ratios of lexical handshape type and classifier handshape type was also performed, and this confirmed the significance (p<0.005). The results reveal that a process of assimilation takes place: if children produced an object-based lexical sign, they were likely to use an entity classifier, while if they produced an action-based lexical sign, they were more likely to produce a handling classifier. Figure 10 shows that on only about 20% of trials (n=6) did children change the handshape, using a different handshape in the classifier than they did for the lexical sign (see the example in Figure 11). However, as in the example in Figure 11, children in these trials were still going from an action handshape to a handling classifier (n=4), or from an object handshape to an entity handshape (n=1). Although this is not assimilation, children still used more action and handling handshapes overall.

Figure 9. Proportions of perceptual/entity and action/handling handshapes, for lexical signs and for classifiers. [Bar chart; y-axis: proportions of handshape types (0-1); x-axis: lexical sign vs. classifier; legend: action-handling, perceptual-entity.]

Figure 11. An example of a handshape change from lexical sign to classifier predicate: RAZOR ('razor'), then CL: handle thin object ('the razor is here').

Figure 10. Proportions of trials in which children assimilated the handshape of the lexical sign into the classifier (perceptual-entity and action-handling), with a third bar showing the proportion of trials in which children changed the handshape from lexical sign to classifier (see Figure 11 for an example of a handshape change). [Bar chart; y-axis: proportion (0-1); categories: perceptual-entity, action-handling, change.]


5 Discussion

Classifier constructions are iconic, polymorphemic constructions that are abundantly present in sign languages, yet they appear challenging for children to acquire, with earlier research reporting full mastery only in late childhood (e.g. Slobin et al., 2003). This late acquisition has been attributed to many causes, such as difficulty choosing the appropriate handshape to classify the referents of an utterance according to the different semantic categories that classifiers represent (Kantor, 1980), or the difficulty of coordinating both hands, with different handshapes for multiple referents, at the same time (Slobin et al., 2003). The current paper focuses, however, on the difficulty that lies in the different handshape types, i.e. entity or handling, that are required according to the type of event being described: a locative event and an event with a (human) agent, respectively.

To elaborate: previous research has shown that children appear to over-use entity handshapes in agentive contexts, where a handling handshape is actually required (Brentari et al., 2013), and that they therefore acquire locative expressions with more ease than events with a human agent (Schick, 1990). However, in classifier constructions, the referents are introduced with lexical signs, which may have either object- or action-based handshapes, and even variants with both handshape types for the same referent. A study by Ortega et al. (2016) has shown that for lexical signs that have an object-based and an action-based variant, children actually prefer to use the variants with action handshapes. This action bias may stem from the relation with children's early motor experience. These early experiences ease the production of action handshapes, this language form being more closely related to acting upon the referents in real life, compared to entity handshapes, which require higher levels of abstraction. If this is the case, also considering that classifiers are highly iconic, children could show action bias in classifier constructions as well. Moreover, the preceding handshape of the lexical sign introducing the referents may influence the classifier handshape.

As action bias in relation to classifier constructions had not been studied to date, the current study investigated classifier constructions produced by children between the ages of 7 and 9 who were natively acquiring TID. The lexical signs preceding the classifiers were also investigated, to provide a complete picture of action bias in lexical signs in relation to possible action bias in classifiers. The study controlled for event type by looking only at locative expressions, which canonically require the use of an entity classifier.

First, the current findings for the lexical signs corroborate those of Ortega et al. (2016): children used many more action-based lexical handshapes for all items investigated. This bias for action-based signs can be explained in the same way that Ortega et al. (2016) have done, with the theory of iconicity as structure mapping as coined by Emmorey (2014). This theory holds that form-meaning mappings with a congruent link between phonological form and meaning are more easily abstracted and therefore learnt. Action-based signs are thus easier than object-based signs, because they create a congruent link between phonological form and children's action/motor experience (Emmorey, 2014).

Upon critical investigation of the current data, it even seems that the children used action handshapes for lexical signs of which the adult form only has a perceptual-based variant (i.e. RAZOR and CORN). This could indicate that the children were making mistakes in the lexical signs, which is quite unusual at this age. As sign languages, like spoken languages, constantly evolve and change (Schembri & Johnston, 2012), it might also be the case that there are more variants of certain signs in TID that have not been studied or described yet, but that are used by adult signers. This stresses the importance of studying adult data on the same task in more detail.

However, the preference for action-based handshapes is in line with studies on speaking children's co-speech gestures. When children between 2 and 5 years old are prompted to produce the word for an object, they relatively often accompany this word with an iconic co-speech gesture that represents acting upon the object. For instance, for the word comb, they would act out holding and using the comb while uttering the word. These representational (i.e. iconic) gestures indicate the activation of motor programs and actions associated with the objects in the prompts (Stefanini et al., 2009). Something similar happened with the signing children: they used more action-based handshapes because these have more direct structural mappings between form and motor experience. In the case of RAZOR and CORN, the activation of these motor programs seems to overrule the linguistic knowledge that should prompt the children to produce the appropriate lexical handshape.

Continuing with the results on the classifiers: children also used significantly more handling handshapes than entity handshapes, even though locative expressions in TID require the use of an entity classifier. I argue that, to a considerable extent, children display action bias in classifiers as well. Iconicity as structure mapping (Emmorey, 2014) can explain these results, as it did for the lexical signs: Emmorey (2014) posits that phonological forms that have more overlap with a conceptual representation are easier to process and learn. The complexity of classifier constructions, in the form of coordinating simultaneous elements, classifying the semantic categories, and choosing the appropriate handshape for the event type, appears too difficult for children, which is supported by previous research (e.g. Kantor, 1980; Schick, 1990; Slobin et al., 2003). Due to this complexity, children use handling handshapes, because these are easier to abstract than the appropriate entity handshapes.

It is important to note, and makes the finding surprising, that in the case of the current stimuli no action with the objects was involved in any way: they were static pictures without agents in them, yet the children in this study represented them with action and handling handshapes. This confirms that language forms that have a congruent link with meaning are embedded in our motor and sensory systems, and are therefore more easily processed and learnt (Perniss & Vigliocco, 2014).

To provide a deeper explanation, a parallel can be drawn between the current results and children's co-speech gesture patterns. Speaking children accompany their verbs with co-speech gestures depicting the action, and more importantly, the use of gestures increases the variety of verbs that the children can produce; motor experience thus seems linked to language learning in the domain of verbs as well (Özçalişkan, Gentner, & Goldin-Meadow, 2014). Classifier constructions, and more specifically their predicates, are verbal complexes. Hence, it is not surprising that it is easier for children to link iconic action and handling handshapes to these verbal complexes, just as they did for lexical signs.


As handling handshapes were so prevalent in the dataset, one might wonder whether handling handshapes could be appropriate for static events. For TID we cannot be sure; one would have to study adult data on the same task for comparison. Brentari et al. (2013) stress the importance of comparison to adult data in their discussion, because they found that adults did not consistently use handling handshapes in agent contexts (i.e. they over-used entity handshapes, like the children in the study did). However, the consensus in the literature seems to be that canonical locative expression in sign languages is done with entity handshapes, and canonical expression of an agent acting upon a referent is done with handling handshapes (Zwitserlood, 2012; Ozyurek, Zwitserlood & Perniss, 2010). Considering that the canonical manner of expression makes up the largest portion of the children's input, it seems that children are not able to abstract the appropriate entity handshape from this input, and are instead guided by their motor and action experiences. These motor experiences seem to overrule their linguistic knowledge. In the case of classifiers, action bias actually slows them in the process of acquiring the appropriate handshapes for locative expressions.

Moreover, upon further investigation of the results, it was found that the use of an action handshape in the lexical sign predicts the use of a handling handshape in the classifier. The children thus seem to use more handling handshapes in classifiers, and I argue that they have an action bias in classifiers as well. However, what seems to drive this bias is a process of assimilation or "borrowing", as coined by Brentari et al. (2013), where the handshape of the lexical sign is borrowed into the classifier; in other words, the handshape stays the same across the whole construction. It is important to note here that children also assimilated when they used an object handshape for the lexical sign, and thus mostly used an entity handshape in those cases. Even in the cases where the children changed the handshape (which were rather marginal in the dataset, n=6), they still used an action-type handshape with a handling classifier (n=4, as in the example in Figure 11 in the Results section), or an object handshape with an entity handshape (n=1). Handshape types thus seem to be congruent in almost all of our trials. This further confirms that action and handling handshapes prevail in these data, and that children seem to display action bias in multiple parts of sign language grammar.

Thus, from these findings, it appears that children between the ages of 7 and 9 do not yet fully master the distinction between the word classes of classifiers and lexical signs, and have difficulty understanding that locative expressions require entity classifier handshapes. However, it is also important to underline the action bias that was found, because this bias, albeit in combination with assimilation, seems to block children from using the appropriate handshapes in locative classifier constructions.

To summarize, the fact that children used handling classifiers even though entity classifiers were required reconfirms previous findings that children do not master classifier constructions until late childhood (e.g. Slobin et al., 2003; Schick, 1990). It furthermore corroborates findings that children have a strong action bias in multiple parts of grammar, namely lexical signs and classifiers. This action bias seems to stem from a link formed between children's motor experience and phonological form, which is present in action-based lexical signs as well as handling classifiers. Children also seem to struggle with the high morphological and structural complexity of classifier constructions. As action and handling handshapes are easier to process and learn, since they have a more congruent link between form and meaning (Emmorey, 2014), this causes the children to over-use action and handling handshapes. Moreover, as the action bias is so prevalent and assimilation from lexical sign to classifier occurs, this interferes with their use of the appropriate classifier handshape for locative expressions.

Further research is required to investigate in more detail where this pattern in children's sign language grammar has its source. It may be the case that children are influenced by child-directed input in which parents modulate their classifiers in an action-biased manner, as was found for lexical signs by Ortega et al. (2016). Moreover, I stress the importance of comparing the children's data to data from adult native signers on the same task, to rule out any possibility of variable input. Sign languages, like any other language, undergo variation and change (Schembri & Johnston, 2012); citation forms may therefore not be an accurate representation of the input children really receive.

Furthermore, our dataset only included items for which the handshape required little to no modulation to go from lexical handshape to classifier handshape; that is, it lent itself easily to assimilation, because all lexical handshapes were also possible classifier handshapes (albeit handling was not appropriate for our stimulus set). It would be interesting to investigate what children do for signs that do not allow direct assimilation of the lexical handshape to the classifier; in other words, to use stimulus objects whose lexical handshapes are not possible classifier handshapes for the semantic category of the object. Such a study could investigate in more detail whether phonological and articulatory complexity plays a role, or whether, in those cases, children understand that a classifier environment requires a special classifier handshape (cf. Slobin et al., 2003). Further investigation could also shed more light on whether children are able to use entity and handling handshapes appropriately across different event-type contexts, in the case of lexical items that do not lend themselves well to assimilation as a classifier handshape (cf. Brentari et al., 2013). Finally, it is important to address and prevent methodological issues with sign elicitation. The current paradigm appears to work well for locative expressions; however, to investigate agentive events where handling classifiers are required, a different paradigm, for instance with moving stimuli in the form of videos, may be needed to properly test the use of entity and handling classifiers in children.

Conclusion

To conclude, the current study provides new insights into sign language acquisition, more specifically the acquisition of classifier constructions and its relation to iconicity as well as complexity. It shows that children are guided by iconicity when producing spatial language and that, strikingly, in the case of classifier constructions, this does not help them acquire the appropriate forms for locative expressions. The strong action bias in lexical signs that seeps into classifier constructions through a process of assimilation can be explained by the congruent link between action-based handshapes and children's action and motor experience, which makes these forms easier to process and learn (Ortega et al., 2016; Emmorey, 2014). Unlike other studies that show that iconicity can be a facilitating factor for sign language acquisition (e.g. Thompson et al., 2012), the current study shows that the action bias from lexical signs assimilates into classifiers, which in turn seems to interfere with children's classifier productions. The present study opens up a new path for sign language acquisition research, in which iconicity may be included as a non-binary factor that influences acquisition in different ways and at different levels of sign language grammar.


References

Bates, D., Maechler, M., Bolker, B., & Walker, S. (2017). lme4: Linear mixed-effects models using 'Eigen' and S4 [Computer software], version 1.1-15.

Benedicto, E., & Brentari, D. (2004). Where did all the arguments go?: Argument-changing properties of classifiers in ASL. Natural Language & Linguistic Theory, 22(4), 743-810.

Bonvillian, J. D., Orlansky, M. D., & Folven, R. J. (1990). Early sign language acquisition: Implications for theories of language acquisition. In V. Volterra & C. Erting (Eds.), From gesture to language in hearing and deaf children (219-232). Berlin, Heidelberg: Springer.

Brentari, D., Coppola, M., Jung, A., & Goldin-Meadow, S. (2013). Acquiring word class distinctions in American Sign Language: Evidence from handshape. Language Learning and Development, 9(2), 130-150.

Chen Pichler, D. (2012). Acquisition. In R. Pfau, M. Steinbach & B. Woll (Eds.), Sign Language: An International Handbook (647-686). Mouton de Gruyter.

Crasborn, O. (2012). Phonetics. In R. Pfau, M. Steinbach & B. Woll (Eds.), Sign Language: An International Handbook (4-20). Mouton de Gruyter.

ELAN version 5.3 [computer software] (2019) Nijmegen: Max Planck Institute for Psycholinguistics, The Language Archive.

Emmorey, K. (2014). Iconicity as structure mapping. Philosophical Transactions of the Royal Society B, 369(1651). http://dx.doi.org/10.1098/rstb.2013.0301


Engberg-Pedersen, E. (1993). Space in Danish Sign Language: The semantics and morphosyntax of the use of space in a visual language. Hamburg: Signum Press.

İlkbaşaran, D. (2013). Communicative practices of deaf people in Turkey and the sociolinguistics of Turkish Sign Language. In E. Arik (Ed.), Current directions in Turkish Sign Language research (19-53). Newcastle upon Tyne, UK: Cambridge Scholars Publishing.

Imai, M., Kita, S., Nagumo, M., & Okada, H. (2008). Sound symbolism facilitates early verb learning. Cognition, 109, 54–65.

Kantor, R. (1980). The acquisition of classifiers in American Sign Language. Sign Language Studies, 28(1), 193-208.

Kegl, J. A. (2003). Pronominalization in American Sign Language. Sign Language & Linguistics, 6(2), 245-265.

Laing, C. (2019). A role for onomatopoeia in early language: Evidence from phonological development. Language and Cognition, 1-15. doi:10.1017/langcog.2018.23

Lapiak, J. (2019) ASL sign for lollipop. Handspeak [website]. http://www.handspeak.com. Retrieved: 20 June 2019

Lydell, T. (2019) Spread the sign [website]. https://www.spreadthesign.com/ Retrieved: 20 June 2019

Makaroğlu, B., & Dikyuva, H. (2017). The Contemporary Turkish Sign Language Dictionary [website]. Ankara: The Turkish Ministry of Family and Social Policies.

Miles, M. (2000). Signing in the Seraglio: mutes, dwarfs and gestures at the Ottoman Court 1500-1700. Disability & Society, 15(1), 115-134.

Morgan, G., Herman, R., Barriere, I., & Woll, B. (2008). The onset and mastery of spatial language in children acquiring British Sign Language. Cognitive Development, 23(1), 1-19. doi:10.1016/j.cogdev.2007.09.003

Nyst, V. A. S. (2007). A descriptive analysis of Adamorobe sign language (Ghana). (Doctoral dissertation) Retrieved from LOT Publications.

Ortega, G., Sümer, B., & Özyürek, A. (2016). Type of iconicity matters in the vocabulary development of signing children. Developmental Psychology, 53(1), 89.

Özçalişkan, Ş., Gentner, D., & Goldin-Meadow, S. (2014). Do iconic gestures pave the way for children's early verbs? Applied Psycholinguistics, 35(6), 1143-1162.

Özyürek, A., Zwitserlood, I., & Perniss, P. (2010). Locative expressions in signed languages: A view from Turkish Sign Language (TİD). Linguistics, 48(5), 1111–1145. https://doi.org/10.1515/ling.2010.036

Perniss, P. (2007). Space and iconicity in German Sign Language (DGS) (PhD dissertation). Nijmegen: MPI Series in Psycholinguistics.

Perniss, P., & Vigliocco, G. (2014). The bridge of iconicity: From a world of experience to the experience of language. Philosophical Transactions of the Royal Society B: Biological Sciences, 369(1651), 20130300.

Perniss, P., Lu, J. C., Morgan, G., & Vigliocco, G. (2018). Mapping language to the world: The role of iconicity in the sign language input. Developmental Science, 21(2).


Petitto, L. A., & Marentette, P. F. (1991). Babbling in the manual mode: Evidence for the ontogeny of language. Science, 251(5000), 1493-1496.

Pfau, R., Steinbach, M., & Woll, B. (Eds.) (2012). Sign language: An international handbook (Vol. 37). Walter de Gruyter.

R Core Team. (2018). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. Available online at: https://www.R-project.org/

Sandler, W., & Lillo-Martin, D. (2006). Sign language and linguistic universals. Cambridge, England: Cambridge University Press.

Schick, B. (1990). The effects of morphosyntactic structure on the acquisition of classifier predicates in ASL. Sign language research: Theoretical issues, 358-374.

Slobin, D. I., Hoiting, N., Kuntze, M., Lindert, R., Weinberg, A., Pyers, J., ... & Thumann, H. (2003). A cognitive/functional perspective on the acquisition of "classifiers". In K. Emmorey (Ed.), Perspectives on classifier constructions in sign languages (271-296). Psychology Press.

Stefanini, S., Bello, A., Caselli, M. C., Iverson, J. M., & Volterra, V. (2009). Co-speech gestures in a naming task: Developmental data. Language and Cognitive Processes, 24(2), 168–189. http://dx.doi.org/10.1080/01690960802187755

Sumer, B. (2015a). Scene setting and referent introduction in Turkish and Turkish Sign Language (Türk Isaret Dili, TID): What does modality tell us? MA

Sumer, B. (2015b). Acquisition of spatial language by signing and speaking children: A comparison of Turkish Sign Language (TID) and Turkish. PhD dissertation, Radboud University Nijmegen.

Supalla, T. (1982). Structure and acquisition of verbs of motion and location in American Sign Language (PhD dissertation). University of California, San Diego.

Talmy, L. (2003). The representation of spatial structure in spoken and signed language. In K. Emmorey (Ed.), Perspectives on classifier constructions in sign languages (169-195). Psychology Press.

Tang, G., Sze, F., & Lam, S. (2007). Acquisition of simultaneous constructions by deaf children of Hong Kong Sign Language. In M. Vermeerbergen, L. Leeson & O. Crasborn (Eds.), Simultaneity in signed languages: Form and function (283-316). Amsterdam: Benjamins.

Thompson, R. L., Vinson, D. P., Woll, B., & Vigliocco, G. (2012). The road to language learning is iconic: Evidence from British Sign Language. Psychological Science, 23(12), 1443-1448.

Tolar, D., Lederberg, A., Gokhale, S., & Tomasello, M. (2008). The development of the ability to recognize the meaning of iconic signs. The Journal of Deaf Studies and Deaf Education, 13(2), 225–240. https://doi.org/10.1093/deafed/enm045

Wittenburg, P., Brugman, H., Russel, A., Klassmann, A., & Sloetjes, H. (2006). ELAN: A professional framework for multimodality research. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006).

Zwitserlood, I. (2012). Classifiers. In R. Pfau, M. Steinbach & B. Woll (Eds.), Sign Language: An International Handbook. Mouton de Gruyter.

Appendix

Glossing conventions

LH:            left hand; anything after the colon was signed with the left hand
RH:            right hand; anything after the colon was signed with the right hand
(2h)           indicates a two-handed sign
APPLE          capital letters indicate a lexical sign
CL:entity      CL stands for classifier; anything after the colon indicates the type and semantic category that the classifier represents
---hold---     indicates the hand configuration from the previous still was held until the next frame
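For illustration, the following constructed example (hypothetical; not taken from the dataset) shows how a simultaneous two-handed locative description could be glossed under these conventions: the left hand signs CUP and then holds an entity classifier for it, while the right hand signs PEN and localizes it with an entity classifier. The horizontal alignment is schematic, indicating which parts were produced simultaneously.

LH: CUP CL:entity ---hold--------------
RH:               PEN CL:entity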
