
2. Theoretical background

2.4 The signing space: functions and spatial devices

2.4.6 The relationship between gesture and sign

Section 2.3.3.3 briefly introduced the topic of gestures, and discussed different types of gestures that have been distinguished in the literature, namely (i) emblems, (ii) pantomimic gestures, (iii) representational gestures, (iv) beat gestures, (v) deictic gestures, and (vi) cohesive gestures. Because gestures and signs share the same medium of expression, some types of gestures show similarities to either lexical signs or grammatical constructions that are characteristic of sign languages (Özyürek, 2012).

In Table 2.2 the types of gestures discussed in Section 2.3.3.3 are presented beside comparable sign language constructions.

Table 2.2. Similarities between (manual) gestures and sign language constructions.

Gesture type | Form/function of gesture | Comparable sign language forms or constructions

(ii) Pantomimic gestures | Bodily enactments to report an action | (b) use of Handle classifiers in spatial verbs and (some) agreement verbs

(iii) Representational gestures | (c) use of hands to indicate 3D dimensions [modeling] |

(iv) Beat gestures | Rhythmic movements accompanying speech |

(v) Deictic gestures | (b) pointing gestures to loci in gesture space (arbitrary loci or imagined referents)2 | (b) pointing signs to loci in signing space (arbitrary loci or imagined referents)

(vi) Cohesive gestures | Use of gesture space to maintain discourse cohesion: (a) ‘anchoring’ of a discourse-referent to an area in gesture space, reuse of this area to refer to this entity3; (b) recurrent use of some physical gesture | Use of signing space, coreference

Notes: 1 Examples of metaphoric classifiers can be found in Thumann (2013); 2 Examples of deictic pointing gestures can be found in Fenlon, Cooperrider, Keane, Brentari and Goldin-Meadow (2019, p. 12); 3 Examples of cohesive gestures can be found in So, Coppola, Licciardello and Goldin-Meadow (2005) for silent gestures, in Zwets (2014) for co-speech gestures directed at imagined referents, and in Parrill and Stec (2017) for abstract referents.

When mapping these gesture-sign correspondences onto the devices presented in Figure 2.10 (see Appendix 2A for a visual representation), it becomes apparent that SL2-learners have different types of gestures at their disposal that might scaffold their learning. Here, we highlight three ‘gesture-sign parallels’ that are of importance in the studies presented in this thesis:

(i) the use of a hand-as-object gesture/Whole Entity classifier (iii-d in Table 2.2), (ii) the use of a hand-as-hand gesture to mimic or represent an action/Handle classifier in agreement verbs and spatial verbs (ii-b/iii-a in Table 2.2), and (iii) the use of pointing signs (v-a/v-b in Table 2.2).

The first parallel of importance is the parallel between hand-as-object gestures and Whole Entity classifiers. Various authors have noted similarities between gestures used by non-signers to depict the motion and/or location of objects or characters, and Entity classifier predicates used by signers (Singleton, Morford & Goldin-Meadow, 1993; Schembri, Jones & Burnham, 2005; Brentari, Coppola, Mazzoni & Goldin-Meadow, 2012; Quinto-Pozos & Parrill, 2015; Janke & Marshall, 2017). These hand-as-object gestures and Entity classifier predicates have in common that the signer’s hand depicts the entire entity. The non-signer depicted in Figure 2.2 (Section 2.3.3.3), for example, uses a handshape with the palm oriented downwards to depict a rollercoaster cart. Quinto-Pozos and Parrill (2015) have shown that non-signers use such depictive gestures in situations similar to those in which signers employ Entity classifiers, that is, in scene descriptions from an observer perspective (see Section 2.4.2.1). However, non-signers have been shown to draw upon a set of handshapes that is much larger than the discrete set of classifier handshapes employed by signers, and their productions lack consistency (Singleton et al., 1993; Schembri et al., 2005; Brentari et al., 2012; Janke & Marshall, 2017), leading Janke and Marshall (2017, p. 10) to conclude that

(T)he task for learners of sign is not to learn how to represent objects using their hands, but rather to narrow down the set of handshapes that they have potentially available to them to the set of classifier handshapes that is grammatical in the sign language they are learning, and to select from that set accurately and consistently.

An interesting finding reported by Singleton, Goldin-Meadow and McNeill (1995) is that co-speech gesturers, in an elicitation task, did not show simultaneous use of two hand-as-object gestures to depict two objects in relation to each other. Instead, participants used one hand-as-object gesture or they used another strategy that did not involve hand-as-object gestures.

Interestingly, when they did use a hand-as-object gesture in these constructions, this was predominantly for moving objects. The absence of simultaneous two-handed hand-as-object gestures in co-speech gesture contrasted with the gesturing of participants who performed the same task without speech (‘silent gestures’). The latter participants did use both hands simultaneously to depict the spatial relationship between objects. This research shows that, although not present in co-speech gesture itself, the simultaneous use of both hands (as depicted earlier for a signed construction in Figure 2.5a) is available to novice signers.

A second parallel found in various studies is the one between the use of ‘hand-as-hand gestures’ by non-signers and Handle classifier predicates by signers, to denote how a character handles or manipulates an object (Brentari et al., 2012; Quinto-Pozos & Parrill, 2015). Again, a notable difference can be found in the handshape inventories displayed by the two groups. Brentari et al. (2012) noted that non-signers “display a fair amount of finger complexity” as compared to signers (p. 12). Non-signers replicate to a large extent the actual configuration of the hand, while signers have a much smaller, grammatically constrained set of Handling handshapes at their disposal.

The third gesture-sign parallel worth mentioning concerns the use of pointing gestures and pointing signs. Both pointing gestures and pointing signs are used to draw the addressee’s attention to something in the immediate environment (direct deixis). Moreover, both signers and gesturers have been found to point to loci in signing space/gesture space (Zwets, 2014; Fenlon et al., 2019). However, pointing gestures are not conventionalized, whereas pointing signs have entered the sign language grammar (e.g., they function as pronominals, locatives, determiners; but see the discussion below). In a recent study, Fenlon et al. (2019) demonstrated that pointing signs, as compared to pointing gestures, exhibit more consistency with respect to several formational features, such as handshape, duration, and use of the dominant vs. weak hand. In addition, pointing signs were more reduced in form and more integrated into the prosodic structure of the utterance than pointing gestures. Thus, although pointing signs and gestures are superficially similar, there are subtle differences in form and structural integration. Another difference, reported in Zwets’ (2014) study on NGT signers and Dutch gesturers, is that gesturers used pointing gestures towards ‘an empty location’ (i.e., gesture space) to refer to imagined referents (cf. ‘surrogates’, see Section 2.4.2.1), but not to establish a locus “that has no other purpose than making further reference possible” (p. 191), i.e., pointing signs towards arbitrary, abstract loci. Fenlon et al. (2019), however, report not having found this difference.35

35 Unfortunately, Fenlon et al. (2019) do not provide numbers on the distribution of pointing signs/gestures towards imagined referents versus pointing signs/gestures towards abstract loci in signing/gesture space.

These studies of non-signers are informative regarding the SL2 learning process, since they provide evidence that novice SL2-learners have a gestural repertoire at their disposal to start out with, which might scaffold the acquisition of particular constructions. As such, gestures may provide a substrate for SL2-learning (Marshall & Morgan, 2015; Janke & Marshall, 2017). At the same time, gestures can be a source of negative transfer, since learners might fail to recognize the “additional layer of linguistic convention” (Quinto-Pozos & Parrill, 2015, p. 30) present in sign language constructions, but not in their gestural counterparts. As mentioned previously (Section 2.3.3.2), Ortega and Morgan (2015a, 2015b) provided evidence for negative transfer at the phonological level, showing that novice learners and non-signers produced lexical signs that have an iconic gestural counterpart (e.g., WRITE) less accurately than signs that do not have a gestural look-alike. Presumably, the fact that participants had access to the meaning of the iconic signs made them less attentive to the exact phonological structure. Similar findings are reported by Chen Pichler (2011, p. 110).

Before proceeding to the next section, a brief comment is in order regarding the ongoing debate on the appropriate characterization, gestural versus grammatical, of the spatial devices discussed in this chapter. This discussion is best understood in the context of the history of sign language linguistics.36 Before the advent of the field of sign language studies in 1960 (described in Section 2.1), sign languages were regarded as primitive gestural systems. The main focus of initial research (1960–1980) was to disprove this myth by demonstrating that sign languages are structured similarly to spoken languages, and should thus be treated as full-fledged human languages.

Signed utterances were analyzed according to the prevalent (structuralist) grammatical framework for spoken languages (i.e., by describing them in terms of phonemes, morphemes and syntactic structure), arbitrariness was stressed, and the role of iconicity and gestures was largely ignored (Kendon, 2008; McBurney, 2012; Goldin-Meadow & Brentari, 2017). Gesture and sign were regarded as completely distinct categories, with one being linguistic (signs) and the other non-linguistic or paralinguistic (gestures).

36 For reviews of sign language linguistics in relation to gesture, see Kendon (2008), Goldin-Meadow and Brentari (2017) and Müller (2018).

Within the structuralist (generative) grammatical framework, the spatial structures discussed in this thesis are analyzed as combinations of morphemes:

- Classifier predicates are analyzed as consisting of a movement root with affixes signifying the movement, orientation and location of the entity depicted (e.g., Supalla, 1982; Zwitserlood, 2003);

- Agreement verbs are analyzed as a stem combining with affixes (location and orientation) to indicate the verb’s subject and object (e.g., Padden, 1988);

- A subset of pointing signs has a linguistic function and points to a locus in space (referential index, R-locus) that carries linguistic meaning (Lillo-Martin & Klima, 1990; Lillo-Martin & Meier, 2011).

However, in the course of time, some researchers have pointed out that a purely grammatical analysis is insufficient to explain the internal structure of these spatial constructions (e.g., Liddell & Metzger, 1998; Liddell, 2003a, 2003b; Dudis, 2004; Schembri et al., 2005; Johnston, 2013a, 2013b; Fenlon, Schembri & Cormier, 2018). According to these authors, the morphemic analysis is problematic, since it is impossible to provide a finite or listable set of ‘location morphemes’, given that there is an infinite number of possible loci in signing space. Likewise, other gradient aspects of these constructions, such as the orientation of the hand or movement properties, are non-listable. To account for this complexity, known as the ‘listability problem’ (Sandler & Lillo-Martin, 2006), these authors suggest that spatial constructions contain both categorical (linguistic) and gradient (gestural) components:

- ‘Classifier predicates’ (depicting verbs, depicting signs) are analyzed as containing a meaningful handshape (i.e., a morpheme) combined with gestural elements depicting the movement, orientation and location of an entity (Liddell, 2003b);

- ‘Agreement verbs’ (indicating verbs) are composed of a morphemic element (the handshape) combined with gestural elements (location and direction) (Liddell, 2003a);

- Pointing signs are similar to deictic gestures (Liddell, 2000b, 2003a; Johnston, 2013a, 2013b).

In the above list, the terms ‘classifier predicates’ and ‘agreement verbs’ are placed between single quotation marks, since alternative terms, given in brackets, are used for these phenomena, reflecting the different perspectives. In this thesis, we have chosen to refer to these phenomena with the terms (Entity) classifier predicates and agreement verbs. However, this does not imply that we take an explicit stance in this debate. For the purposes of this thesis, that is, providing a description of the SL2 acquisition of selected phenomena and investigating the effectiveness of teaching strategies regarding one of them, the gesture vs. grammar issue is irrelevant.

SL2-learners have to acquire these phenomena, and the rules governing them, regardless of their linguistic/gestural analysis.

2.4.7 Previous research regarding acquisition and emergence of spatial