
4.5 Discussion

In this study, we aimed at better understanding the developmental stages that L2-learners of NGT pass through in their acquisition of classifier constructions, and at providing insights into typical learner characteristics.

Below, we relate our results to those of other studies and highlight our novel findings.

4.5.1 Findings in relation to other studies into SL2 acquisition of classifier predicates

Recapitulating the findings from Section 4.4, we observed that after a year of instruction, all SL2-participants succeeded in producing two-handed classifier constructions to depict the targeted scenes. The majority of the SL2-participants (11 out of 14) applied a two-handed classifier construction in 80% or more of their responses. This outcome, in combination with the observation that the first classifier predicates already appeared after a short period of (untargeted) instruction, might lead to the conclusion that classifier predicates are not very difficult to acquire. These findings contrast with previous results reported by Ferrara and Nilsson (2017), Boers-Visker and Van den Bogaerde (2019, see Chapter 3), and Marshall and Morgan (2015), who claim that classifier predicates are difficult to acquire.

The different outcomes could be attributed to differences in task type. Marshall and Morgan (2015) investigated a different set of objects, and both Ferrara and Nilsson (2017) and Boers-Visker and Van den Bogaerde (2019) examined the use of classifier predicates in extended spatial descriptions, whereas our study used prompts that elicited short (mono-clausal) descriptions.

Marshall and Morgan (2015) reported that the selection of the appropriate handshape caused difficulties, whereas the learners in Ferrara and Nilsson’s (2017) study experienced difficulties in the production of orientation and location. Our study shows that the learners, when experiencing difficulties, struggle with both handshape and orientation. Movement, on the other hand, does not cause many problems.

Our data corroborate previous results obtained by Ferrara and Nilsson (2017) regarding difficulties the participants encountered in coordinating the hands to depict a scene. Our participants demonstrated similar difficulties, resulting in misplacement of classifier predicates or a need to switch hands during the depiction.

4.5.2 Findings in relation to L1 acquisition

Our data are in agreement with observations in the L1 literature regarding (i) handshape substitutions (Supalla, 1982; De Beuzeville, 2006), (ii) errors and difficulties regarding the expression of Figure and Ground, (iii) sequential realization of constructions that are expected to be expressed simultaneously (e.g., Supalla, 1982; Tang et al., 2007), (iv) failure to specify referents (Slobin et al., 2003; Tang et al., 2007), and (v) use of whole-body language (Tang et al., 2007; De Beuzeville, 2006).

Literature regarding L1 acquisition suggests that classifier predicates are hard to learn because of their complex structure. One could argue that a moving object is more difficult to depict than a static object, since it includes an additional meaning component (i.e., movement). Yet, we did not find evidence that scenes containing movement were harder to depict, or more error-prone, than static scenes. On the contrary, the first classifier predicates referencing moving objects were produced prior to or during the same session as classifier predicates referring to static objects, suggesting that it is not harder, but easier for SL2-learners to combine a classifier handshape with a movement root. We will return to this in the following discussion on the possible influence of gestures.

4.5.3 Findings in relation to literature on gestures

In Section 4.2.1.5, we discussed the resemblance between ‘hand-as-object gestures’ produced by sign-naïve individuals, and Entity classifier predicates.

The existence of these gestures suggests that novel learners use gestures as a ‘substrate’ to build their knowledge upon (Marshall & Morgan, 2015; Janke & Marshall, 2017). The early appearance of classifier predicates in our study indeed suggests that learners might have used their gestural knowledge to bootstrap their acquisition. One could thus argue that the early appearance in the data is an artefact of the coding process, that is, a result of miscoding gestures as classifier productions. This, however, seems implausible, given the results of the baseline session conducted prior to the start of the program. Recall that 11 of the 14 participants had no prior knowledge of NGT, and as such, their productions during the baseline test can be considered ‘silent gestures’. Yet, only four participants produced a classifier-like gesture to denote a car or bike (both moving) during this pre-test, while, after two weeks of instruction, more than twice as many participants produced an Entity classifier for the same objects. This indicates that almost all of the participants used their gestural knowledge, although some of them did so at a slightly earlier point. At this point, we can only speculate about why only these four participants applied their gestural knowledge before receiving any instruction. Our data suggest that the challenge for the learners lies in acquiring the rules and conventions that govern Entity classifiers, but not gestures. Janke and Marshall (2017) hypothesize that the challenge for learners is not the acquisition of classifiers as a phenomenon per se, but rather to “narrow down the set of handshapes that they have potentially available to them to the set of classifier handshapes that is grammatical in the sign language they are learning” (p. 10). The present study points in the same direction: the challenge seems to lie in the acquisition of the appropriate classifier handshapes and the ‘default orientations’ (e.g., the difference in the default orientation of the NGT classifier for a bike and a car), as well as in learning the conventions regarding Figure and Ground.

Another finding that supports the idea of ‘gesture as substrate’ or ‘transfer’ is the observation that Entity classifiers for moving objects appear in the data earlier than, or at the same time as, classifiers for static objects. In Section 4.2.1.5, we mentioned that Singleton et al. (1995) reported that co-speech gesturers produce classifier-like elements for moving objects, but not for static objects. This gestural behavior could account for the observation that our learners produce classifiers for moving vehicles early on, despite the fact that these classifiers consist of more meaningful components. If the presence of more components resulted in a more complex construction (as suggested in the L1 literature), one would expect these structures to appear later, or to cause more difficulties, contrary to what we observed.

The ‘positive transfer’ of gesture could explain the relative ease with which learners acquire a structure that is absent in their mother tongue. That is, despite the language distance between the students’ L1 and the TL, some structures, notably iconic sign language structures that have ‘gestural cousins’, are acquired relatively quickly and with less effort than one would expect.

4.5.4 Novel findings

Our study, being the first systematic and longitudinal investigation into the SL2 acquisition of classifier predicates and two-handed classifier constructions, adds substantially to our understanding of how these constructions are acquired by SL2-learners.

With regard to developmental stages, our investigation shows that classifier predicates representing vehicles (bicycles, cars) appeared early, followed by classifiers for standing persons. Classifiers representing sitting persons and animals appeared much later. Possible explanations for this observation are (i) the markedness of the handshape involved, and (ii) the fact that the hand/fingers represent only a part of the entity. As shown in Figure 4.5, the classifier handshapes for sitting persons and animals are marked, in contrast to the classifiers for vehicles and standing persons. Marked handshapes are more difficult to articulate (Boyes Braem, 1990), which leads to a prolonged acquisition period in L1 acquisition. This explanation is in line with findings reported by Schick (1990) regarding the L1 acquisition of classifier handshapes. As for the second explanation, the fact that classifier predicates denoting sitting persons and animals represent only a part of the entity (i.e., the legs), whereas the other classifiers represent the entity as a whole, might account for the difference in acquisition. The self-invented classifiers shown in Figure 4.14 suggest that learners have a natural tendency to represent an entity as a whole (e.g., by using a bent finger to denote the posture of a sitting person).

The observation that the depiction of a construction featuring a car and a bicycle on the horizontal plane precedes a stacked (vertical) combination is interesting and has, to our knowledge, not been described before. Apparently, a learner who has discovered that classifiers can be used to position objects in relation to each other in the horizontal plane does not automatically conclude that the same classifiers can also be placed on top of each other. This might suggest that, during the first stages of acquisition, learners do not (fully) decompose these constructions. An alternative account would be that stacked constructions are less frequent in the input the learners received. Yet, this explanation is not satisfactory, since one could argue that a learner, once he or she has discovered a rule, should be able to apply this rule to new constructions, even if he or she has not encountered these constructions in the input.

A third novel finding is the failure to use the -classifier as an alternative for the -classifier, resulting in physically difficult and off-target scene descriptions (see Section 4.4.3.1).