
Code-Mixing vs Mouthing: Similarities and Differences


Nabil Nabo

Student number: 1175560
University of Amsterdam
Master General Linguistics
Supervisor: Roland Pfau
August 2020


Abstract

Mouthing is the unvoiced articulation of words originating in a spoken language, produced in combination with signs or signed utterances. In the literature, the status of mouthings is debated. Some scholars attribute them to sign language, since they play a crucial prosodic role and contribute to meaning creation. Others regard them as a form of simultaneous code-mixing, since most mouthings stem from the spoken language and do not contribute to meaning creation. This literature study reviews the similarities and differences between code-switching, code-blending, and mouthings regarding their types and functions. We argue that we cannot put all mouthings in one basket. The obligatory ones can be attributed to sign language due to the essential role they play at both the semantic and the syntactic level. At the other end of the continuum, the redundant ones, which constitute the majority, can be considered a form of code-blending between a sign language and a modality that exploits the oral channel in language production and the visual channel in language perception.

Acknowledgments

I would first like to thank my thesis advisor dr. Roland Pfau, whose office was open whenever I had a question about my writing. He consistently steered me in the right direction whenever he thought I needed it. I would also like to acknowledge dr. Silke Hamann, the second reader of this thesis, who has been supportive during my study at the UvA.

I must express my very profound gratitude to my mother for providing me with unfailing support and continuous encouragement throughout my years of study. I am also gratefully indebted to my teacher, Ahmad Kayali, who always has faith in me.

Finally, I dedicate this thesis to my father who helped me in all things great and small. I hope he is in a better place now.


Table of contents

1. Introduction
   1.1. Present study
   1.2. Research questions
2. Code-mixing
   2.1. Code-switching in sign language
      2.1.1. Definition
      2.1.2. Types of code-switching
         2.1.2.1. Spoken-signed code-switching
         2.1.2.2. Signed-signed code-switching
         2.1.2.3. Reiterative Code-switching
      2.1.3. Functions of code-switching
         2.1.3.1. Preserve identity
         2.1.3.2. Express notions difficult to access in a certain language
      2.1.4. Switch point in signed-signed code-switching
   2.2. Code-blending
      2.2.1. Definition
      2.2.2. Types of code-blending
         2.2.2.1. Spoken-based code-blending
         2.2.2.2. Sign-based code-blending
         2.2.2.3. Mixed code-blending
         2.2.2.4. Full code-blending
      2.2.3. Functions of code-blending
         2.2.3.1. Express nuances
         2.2.3.2. Preserve identity
   2.3. Mouthings
      2.3.1. Introduction
      2.3.2. Types of mouthings
      2.3.3. Functions of mouthings
3. Discussion
   3.1. Similarities between code-mixing and mouthing
      3.1.1. Code-switching vs mouthing
      3.1.2. Code-blending vs mouthing
   3.2. Differences between code-mixing and mouthing
      3.2.1. Code-switching vs mouthing
      3.2.2. Code-blending vs mouthing
   3.3. Third modality
4. Conclusion
References


1. Introduction

At the beginning of this study, it is essential to reveal the motive that made me interested in code-blending. Socially, a bimodal bilingual (a child of deaf adults, abbreviated as CODA) has much in common with children of parents of different ethnicities (abbreviated as CODE). In either situation, the children are disregarded by both communities, either by being included while the other component of their identity is ignored, or by being excluded as they belong partially to the unfamiliar other. Both CODAs and CODEs identify themselves as belonging to two opposing worlds (Bishop 2018). From personal experience as a CODE, I was stigmatized by the surrounding community as I was neither fully Kurdish nor fully Arab. I assume that the situation of CODAs is similar: they belong entirely neither to the hearing world nor to the deaf world.

1.1. Present Study

Bilingual is a term used to refer to individuals who know and use two or more languages (in the latter case, they are also called multilinguals). This group of language users can be divided into two subgroups:

a) balanced bilinguals who are equally proficient in two (or more) languages;
b) persons who are native speakers of one language and speak another language (or other languages) sufficiently.

(Field 2011:16)

The first form is rare, according to Zentella (1997), who argues that bilingualism only implies the ability to speak two languages meaningfully. Accordingly, speaking two languages at a native level is not a prerequisite. Bilingualism is not limited to spoken languages; it is also a prevalent phenomenon among sign language users. Regarding sign languages, all signers are bilingual in a sign language and a spoken language, since they receive deaf education in which they acquire a sign language in parallel with a spoken language, usually in its written form (Swanwick 2010; Knight and Swanwick 2013, in Zeshan and Panda 2015: 90). At the same time, they differ from hearing uni- and bimodal bilinguals in the way they acquire their two languages. CODAs depend on two different sensory systems in the process of language acquisition: they employ the visual system to acquire the signed language and the acoustic system to acquire the spoken language. Zeshan and Panda (2018) qualify this picture, arguing that the term bimodal bilingualism also includes deaf signers who are proficient in at least one signed language and a spoken one.

It is essential to make a clear distinction between sign bilingualism and bimodal bilingualism. The term sign bilingualism is used in the context of "deaf education", where a sign language is acquired in parallel with a spoken language, usually in its written form; for example, a signer of ASL being exposed to spoken English. Deaf signers are considered bilinguals as they use at least one signed language and its corresponding spoken language in its voiceless or written form. Unimodal bilingualism refers to the co-use of two or more languages within the same modality; that is, the two (or more) languages a person uses are represented in the same modality. A speech-speech unimodal bilingual can only produce an utterance in a single language at a time, since they use a single output channel, and the same applies to a sign-sign unimodal bilingual. In contrast, bimodal bilingualism refers to hearing people who acquire a sign language from their parents (or elsewhere) and a spoken language from the surrounding environment. They co-use both languages, and this phenomenon is often referred to as code-blending (Zeshan and Panda 2015). As mentioned, all deaf people are bilinguals since they know at least a sign language and insert elements from a spoken language. Some linguists consider these elements an internal part of the sign language itself (mouthings); others argue that they are a form of code-switching or even code-blending. Zeshan and Panda (2018) define bimodal bilinguals as those who are able to use both a spoken and a signed language; this includes all signers and hearing speakers with deaf parents, since the latter acquire a sign language at home and the spoken language of the surrounding community.

At the outset of this section, it is worth noting that the term 'CODA' refers to a child of deaf adults regardless of whether that child is hearing or deaf (Baker and van den Bogaerde 2005). In this study, however, it will refer only to the hearing children of deaf adults. Furthermore, I will consider code-mixing the umbrella term, covering both code-switching and code-blending.

This is a comparative literature study that aims to investigate two distinct phenomena, namely code-mixing (code-switching and code-blending) vs. mouthings (chapter 2). In the same chapter, the types and functions of each phenomenon are introduced. In chapter 3, the similarities and differences between code-mixing and mouthings are studied by reviewing the literature that has tackled these issues. The two phenomena appear to be similar to, and at the same time different from, each other. They share many functions, but there is a fine line that distinguishes code-blending, code-switching, and mouthing. In this study, these three phenomena will be discussed explicitly, focusing on how they are similar to and different from each other; the focus will thus be on the types and functions of each of them. Code-switching is known to occur between languages in the same modality, yet it can also happen across modalities. In other words, signers code-mix between a sign language and a spoken one. It is possible that a signer stops signing and starts mouthing; however, this phenomenon is rare.

The switch point will be studied since it is one of the crucial differences that distinguish code-switching from code-blending. In the former, there are two competing lexical items, and the individual makes a decision and spells out only the winning candidate. In code-blending, no such decision is reached and, as a result, both candidates are articulated. Subsequently, I will check whether there is a dominant language in code-blending, as there is in code-switching.

Regarding mouthings, I will investigate whether they are an internal part of sign language or a form of code-mixing. Given that the European signed languages have mouthings, in contrast to ASL, which uses fingerspelling instead, we can ask whether European CODAs code-blend more than American ones. And why can we, or can we not, consider the simultaneous act of mouthing and signing as code-blending?

1.2. Research questions

This study aims to answer the following (main) research question:

• Can we consider the simultaneous act of mouthing and signing as code-blending, or is it an intrinsic part of sign language?

Sub-questions to be studied are:

• What are the similarities and differences between mouthings and code-mixing?
• As in code-switching, is there a switch point in code-blending?


2. Code-mixing

Code-mixing is, as its name suggests, the mixing of two or more languages, either sequentially or simultaneously. Muysken (2000) differentiates between three processes that occur in intra-sentential code-mixing.

• Insertion of a lexical item within a matrix language frame, as in the following example, where Spanish is the matrix language and an English prepositional phrase (PP) is embedded.

(1) Yo anduve in a state of shock por dos dias.
    'I walked in a state of shock for two days.'

(Spanish/English; Pfaff 1979:296 in Muysken 2000:5)

The code-mixing process in this situation is similar to borrowing: "the insertion of an alien lexical or phrasal category into a given structure" (Muysken 2000:3). Muysken (1995) states that borrowing occurs below or at the lexical level (sub-lexical) while code-switching is above the lexical level (supra-lexical). "Code-switches will tend to occur at points in discourse where the juxtaposition of L1 and L2 elements does not violate a syntactic rule of either language, i.e. at points around which the surface structures of the two languages map onto each other" (Poplack 1980:586 in Müller and Cantone 2009:201).

• Alternation of structures from different languages. Muysken terms this process switching, while the other two are referred to as mixing. In alternation, there is code-switching (abbreviated as CS) at both the grammatical and the lexical level, as shown in example (2), which shows CS from Dutch into Arabic.

(2) maar 't hoeft niet li-'anna ida seft ana . . .
    but it need not for when I-see I
    'but it need not be, for when I see, I . . .'

(Moroccan Arabic/Dutch; Nortier 1990:126 in Muysken 2000:5)

• "Congruent lexicalization of material from different lexical inventories into a shared grammatical structure" (Muysken 2000:3). Example (3) shows that when a lexical item that is identical or similar in both languages occurs in a specific language, it triggers the codes of the other language. Dutch waar is very similar to English where, and the grammatical structure is the same in English and Dutch, so it is an ideal switch point.

(3) Weet jij [whaar] Jenny is?

‘Do you know where Jenny is?’ (Dutch: waar Jenny is)

(English/Dutch; Crama and van Gelderen 1984 in Muysken 2000:5)

Muysken (2000) postulates that the difference between these three processes is gradual, as shown in Figure 1. When long elements are inserted in a sentence, the grammar of the second language is activated. Since there are similarities between languages at higher-level structures, a gradual transition between alternation and congruent lexicalization can occur. For example, immigrants insert lexical items from the new language into their dominant language, and this can develop into congruent lexicalization and alternation later on.

Figure 1. Schematic representation of the three main styles of code-mixing and transitions between them (Muysken 2000:9)

In this chapter, the focus is laid upon mixing as an umbrella term for both switching and blending. The chapter is divided into three main parts: code-switching (2.1), code-blending (2.2), and mouthings (2.3). Code-switching is defined and introduced in (2.1.1). Then, we differentiate between three types: spoken-signed (2.1.2.1), signed-signed (2.1.2.2), and reiterative switching (2.1.2.3). The motives behind code-switching are preserving identity (2.1.3.1) and expressing notions difficult to access in a certain language (2.1.3.2). Then, we investigate the switch point in Section 2.1.4. The second part tackles code-blending, which is defined and introduced in Section 2.2.1. After that, we present the four types of code-blending: spoken-based code-blending (2.2.2.1), sign-based code-blending (2.2.2.2), mixed code-blending (2.2.2.3), and full code-blending (2.2.2.4). Next, we present two functions of code-blending (2.2.3), namely to express nuances (2.2.3.1) and to preserve identity (2.2.3.2). In the last part of this chapter (2.3), we study mouthings: we define and introduce the notion (2.3.1), present the types (2.3.2), and present the functions (2.3.3). In the next section, code-switching is discussed in detail.

2.1. Code-switching

2.1.1. Definition

One of the phenomena that occur among bilinguals is code-switching. Poplack defines code-switching as "the juxtaposition of sentences or sentence fragments, each of which is internally consistent with the morphological and syntactic (and optionally, phonological) rules of the language of its provenance" (Poplack 1980:591 in Muysken 2000:11). Myers-Scotton (1993) gives a slightly different definition, arguing that "Code-switching is the selection by bilinguals or multilinguals of forms from an embedded language (or languages) in utterances of a matrix language during the same conversation" (Myers-Scotton 1993a:4 in Muysken 2000:15). It is, according to Bullock and Toribio (2009), the bilingual's ability to alternate between two (or more) languages within the same conversation without much effort. So, it is proficiency that influences the language user's ability to code-switch. Hence, the ideal bilingual is someone who "switches from one language according to appropriate changes in the speech situation (interlocutor, topics, etc.) but not in an unchanged speech situation and certainly not within a single sentence" (Weinreich 1953:73 in Muysken 2000:1). Intra-sentential CS could thus be attributed to a lack of proficiency. Poplack (1980), on the other hand, argues that speakers who tend to code-switch are usually proficient in both languages they use (Muysken 2000:2). This means that when they switch, they respect the semantic and syntactic rules that govern each language. Romaine (1995) states that CS occurs when "the items in question form part of the same speech act. They are tied together prosodically as well as by semantic and syntactic relations equivalent to those that join passages in a single speech act" (as cited in Hauser 2000:44).

To study CS, it is crucial to differentiate between three main approaches. The first is the structural approach, which focuses on what code-switching can tell us about language structure at several levels (lexicon, morphology, phonology, syntax, and the like). The second, as Bullock and Toribio state, is the psychological approach, which studies CS "to understand the cognitive mechanisms that underlie bilingual production, perception, and acquisition"; and the third is the sociolinguistic approach, which deals with "the social factors that promote or inhibit CS and views it as affording insights into social constructs such as power and prestige" (Bullock and Toribio 2009:14).

Müller and Cantone (2009) argue that the structural aspects of CS help to investigate whether structural and pragmatic constraints govern this speech style. They make a distinction between inter-sentential and intra-sentential CS. The former occurs when a bilingual switches languages between utterances, for instance while answering a person who speaks another language. Example (4) is taken from Müller and Cantone (2005): a bilingual German-Italian child (2.5 years old) responds in Italian while speaking to a German interlocutor.

(4) Aurelio: ieio battone (= bottone)
             ieio (= Aurelio) button

    Adult:   was möchtest du habn?
             what want you have?
             'What do you want?'

    Aurelio: battone ieio (o) voio
             button ieio it want

    Adult:   was möchtest du?
             what want you?
             'What do you want?'

    Aurelio: il battone
             'The button.'


The intra-sentential CS (Muysken calls it “code-mixing”) is the main focus of syntacticians in their study of structural constraints on CS. It is “the juxtaposition of elements from two (or more) languages within one sentence” (Müller and Cantone 2009: 200). As illustrated in examples (5) and (6), a bilingual French-German child and a bilingual Spanish-English child switch into German and English, respectively, within the same utterance.

(5) French–German
    ça c'est pas warm
    this it is not warm
    'This one is not warm.'

(6) Spanish–English
    un sheep
    'a sheep'

(Müller and Cantone 2009:200)

2.1.2. Types of code-switching

CS is a common phenomenon that occurs between languages irrespective of the modality in which languages are produced and perceived. CS also occurs when signers of two (or more) sign languages communicate with each other. Some bilinguals code-switch between a sign language and a cued language (a spoken language in its visual modality). Moreover, CODAs code-switch sequentially as well (although this occurs rarely). In the next sections, these types will be tackled in more detail.

2.1.2.1. Spoken-signed code-switching

Code-switching is a modality-independent phenomenon and occurs across modalities. Signers may switch between a sign language and a spoken language in a visual form, such as cued speech. Leybaert and LaSasso (2010) define cued speech as the visual form of a given language (cued English, cued Dutch, etc.), profiting from lexical signs in combination with mouth movements to convey a message and avoid ambiguity. Hauser (2000) tested a ten-year-old American-Korean bilingual deaf girl who is fluent in ASL and cued English to check whether CS occurs between these two languages. He videotaped NQ (the subject) for about four hours and used cued English at the end of the conversation. He found that bilinguals code-switch when they forget a particular word in their matrix language, so it can be attributed to a lack of proficiency. In example (7), NQ is not sure whether she signed the word in the right way, so she code-switches into cued English for clarity. NQ does not know how to sign "tornado", so she signs "hurricane"; then she switches into cued English to clarify what she said.

(7) HURRICANE . . . CL:1 (tornado) . . . tornado SCARY

(Hauser 2000:66)

Bishop, Hicks, Bertone, and Sala (2007) tested 10 Italian CODAs (two of them had hearing mothers); most of them worked with the deaf community as interpreters. The authors videotaped them while they talked about their childhood, family, and relationships. These topics were chosen to stimulate the bimodal frame of mind. The authors documented 178 bimodal utterances and found that they could divide their data into three categories, namely code-blends, code-switches, and others. What I find interesting is that the proportion of code-switches is 36%. This proportion is much higher than what Emmorey et al. (2008) found.

Donati and Branchini (2012) state that code-switching occurs between different modalities when an individual stops using a language and starts using another language from a different modality. Their findings contradict those of Bishop, Hicks, Bertone, and Sala (2007): in their data, this is not so common among bimodals, yet it does occur. The examples below show cases from the data of Donati and Branchini (2012).

(8) It.  e poi l'ha preso
         and then it'have.3SG take.PTCP
    LIS: CUT-HEART TAKE-HEART
         '(He) has cut the heart and has taken it'

(9) It.  sei tu
         be-3SG you-SG
    LIS: SAY SAY YOU
         '(He) says: it's you'


In examples (8) and (9), the information resides in both modalities. The subject code-switches between Italian and LIS. Although the CODA produces the utterance in two different modalities, s/he code-mixes sequentially: s/he stops producing one language and starts producing the other, following the word order of each. These examples show that it is possible to activate both articulatory channels. It resembles what Muysken calls alternation between spoken languages. However, this sort of switching does not occur very frequently, even less frequently than code-switching between signed languages, which is discussed in the next section.

2.1.2.2. Signed-signed code-switching

Little literature is available on signed-signed code-switching, for several reasons. The most important factor is iconicity, which results in many shared lexical items across sign languages; this often makes it difficult to identify code-switches between sign languages. In 2002, Quinto-Pozos researched eight Mexican signers who lived in Texas, in the Southwest of the US, to see what strategies the signers use to communicate with each other. In the deaf community there, both American Sign Language (ASL) and Mexican Sign Language (LSM) are used. The subjects participated in one-on-one interviews and group discussions. Quinto-Pozos videotaped them in spontaneous conversation and documented the similar meaningful elements that occur in both sign languages. He noticed that some signers vary their productions of ASL and LSM according to the situation they are in (e.g., whether they are speaking to other bilinguals). He also noticed that, as in spoken languages, the subjects use reiterative CS to clarify an idea, accommodate another signer, or emphasize a point (this is discussed separately in the next section).

Another study was conducted by Zeshan and Panda (2015), who videotaped a casual conversation of four signers (two males and two females) who are fluent in Indian Sign Language (ISL) and Burundi Sign Language (BuSL). The subjects were from Burundi and were enrolled in a BA program for signers in New Delhi. The researchers aimed to investigate how both ISL and BuSL contribute to this conversation and what lexical and grammatical choices the signers make. They focused on negators and wh-questions to investigate the lexical choices and grammatical structures. They found that there are contributions from both ISL and BuSL to the conversation. For the negators, the signers tend to use signs that belong to both sign languages; for questions, signers used BuSL for specific question words and ISL for general question words.

Consider example (10), a CS from Zeshan and Panda (2015:104), where signers code-switch between Burundi Sign Language (BuSL) and Indian Sign Language (ISL). Note that I and B refer to ISL and BuSL vocabulary items, respectively, while S refers to the signs that have the same form in both languages:

(10) B:WHY S:IX1 S:MEET S:IX1 B:HONEST S:DON’T-KNOW

I:OWN3 I:BACKGROUND S:DON'T-KNOW [Clip 5, 02:17, WK]
'Because when I'd meet her, honestly, I don't know her background.'

Here the bilingual begins signing in BuSL, which is his dominant language (or, as Muysken calls it, the matrix language). The Matrix Language Frame model suggests that in bilingual speech there is a dominant language that provides the sentence with its grammatical frame; the minor language is the embedded one, which provides the sentence only with lexical elements (Muysken 2000).

2.1.2.3. Reiterative Code-switching

Reiterative code-switching implies repeating in another language a message that was already uttered in one language. For example, bilinguals, both speakers and signers, repeat what they say in another language either to avoid ambiguity or to emphasize. Bentahila (1983) argues that repeating utterances does not always serve clarity but rather emphasis. He gives many examples of reiterative CS, where subjects say something in one language and then repeat it in another. He contradicts Kachru (1977), who argues that a bilingual uses reiterative CS as a strategy to avoid misunderstanding. Bentahila states that the repeated utterances are usually not clearer than the first, as shown in (11). In this example, the informant repeats the phrase meaning 'I've asked him' in both Arabic and French to add emphasis to what has been said.

(11) l muhim swltu je lui ai demandé

‘The important thing is I’ve asked him, I’ve asked him.’


In the data presented in Quinto-Pozos (2009), several instances of reiterative CS occur. This CS serves several functions, such as emphasis, support, clarification, getting attention, accommodation, and reinforcement. Consider example (12), taken from Quinto-Pozos (2009:230).

(12) point-middle finger (for listing) TOMATO TOMATE

ADD-INGREDIENTS MIX gesture: “thumbs-up”

‘(…and then you take) tomatoes and you add them to the other ingredients and mix everything together. It’s great.'

Quinto-Pozos (2009) explains this example as follows. Here the interviewer looks at three other interlocutors. Among these people, one does not use ASL, so he code-switches into LSM for clarity and to make sure that he makes himself clear to all other participants. There is a pause between the two signs (TOMATO and TOMATE), which suggests that the CS serves an emphatic function. Another example, serving a different purpose, is given in (13) from Quinto-Pozos (2009:231).

(13) NO/NO ME/YO NO/NO++ ME/YO gesture: shake-finger DEAF/SORDO

__head shake for negation__

gesture: “wave hand to negate” ME FAMILY FAMILIA MY/MI gesture/emblem: “well”

‘As for me, my family is not Deaf. Oh well.’

In example (13), there is no pause between FAMILY and FAMILIA. Quinto-Pozos says that this CS serves the aim of drawing attention to this particular sign. Reiterative CS also occurs in Zeshan and Panda (2015:119). Consider the following examples:

(14) S:IX3-DIST S:SOLVE I:DIFFICULT+ B:DIFFICULT [Clip 3, 02:52, SN]
     'To resolve this with all of them was very difficult.'


Here DIFFICULT is articulated in ISL twice (as indicated by the ‘+’), then the signer code-switches to BuSL. In this way, DIFFICULT is emphasized.

(15) I:INDIA I:LIKE (HESITATION) B:LIKE S:IX1 I:NOT [Clip 3, 01:08, SN]
     'India, I like, ehm … I don't like.'

The doubling here (reiterative CS) expresses hesitation.

From the above examples, we find that it may be complicated to determine the cause of this kind of CS. The next section shows which functions CS serves.

2.1.3. Functions of code-switching

2.1.3.1. Preserve identity

As several researchers postulate, bilinguals code-switch only within a bilingual community where they share a multi-language identity with other speakers to indicate their membership in two cultures; one is dominant, and the other is minor. Grosjean (1982:309) argues that:

Bilinguals are in a total monolingual mode in that they are speaking (or writing) to monolinguals of one-or-the-other of the languages that they know. At the other end of the continuum, bilinguals find themselves in a bilingual language mode in that they are communicating with bilinguals who share their two languages and with whom they normally mix languages.

Bentahila (1983) asked a sample of bilinguals about their attitude towards this phenomenon and found results that contradict Grosjean: 72.22% disapproved of this language style. They attributed this speech style to a lack of identity, a lack of education, and the like. Contrarily, Zentella states that individuals who are proficient in more than one language do code-switch. She states that this code-switching could occur "for several possible reasons: (a) for emphasis, (b) to recognize Zentella's U.S.-born identity, or (c) to show off her knowledge of English - the 'prestige language'" (1997:88, as cited in Hauser 2000:46). Sometimes it "carries overt prestige" within the strata of a bilingual community (Bullock and Toribio 2009:10). However, CS can also be interpreted as a negative reflection of speakers' cognitive abilities or even as negative social behavior, especially in immigrants' new communities.


2.1.3.2. Express notions difficult to access in a certain language

Some linguists attribute CS to a lack of proficiency. Bentahila (1983) investigated this phenomenon in Morocco, where most people are fluent in both Arabic and French. He examined the switches that occurred in seven and a half hours of recordings of several balanced Arabic-French bilinguals who were between seventeen and forty years old. He noticed that speakers have the ability to produce the same terms in both Arabic and French, but prefer to switch to one of them when talking about specific topics. For instance, they tend to code-switch to French when they need to express technical terms, and code-switch to Arabic for swearing, insults, or even stereotyped phrases to avoid a pause. He also noticed that speakers sometimes have difficulty explaining something in a particular language before they code-switch, and therefore begin to produce the same idea in the other language. Example (16) shows that an Arabic-French bilingual speaker tends to code-switch to French while speaking about education, since French is widely used in educational institutions in Morocco.

(16) bħal daba f la faculte fhmti tadxul ts̆uf le doyen

‘For example, at the university, you understand, you go to see the dean.’

(Bentahila 1983:234)

So, speakers code-switch to French for technical terms, numbers, or concepts associated with Europe, whereas they switch to Arabic to insult, swear, or speak about religion. As explained above, the lexical items used to express a particular topic reflect the speaker's tendency to code-switch to one language rather than the other because its vocabulary is more accessible. Example (17) illustrates CS into Arabic to express an insult.

(17) elle n'a qu'a etudier un peu d'histoire nʡal bu:ha avant de me poser des questions
     'She'd better study a bit of history, damn her father, before asking me questions.'

(Bentahila 1983:235)

As shown, bilinguals code-switch either to preserve their identity or to express terms that are difficult to access in one of the languages they speak. We have to keep in mind that the switches occur at a certain point, which will be discussed in the next section.


2.1.4. Switch point in code-switching

Let's consider the following example, a CS between English and Dutch:

(18) Wan ik kom home from school.

‘When I come home from school.’

(English/Dutch; Clyne 1987:759 in Muysken 2000:11)

Here the Dutch kom and the English come sound similar, so when kom occurs, it triggers English not only at the lexical level but also at the grammatical level. The code-switching in this example thus happens at a point where a word shared between English and Dutch occurs. Muysken (2000) checked examples from Frisian-Dutch data and found a similar result: there is a tendency to code-switch when two words from Dutch and Frisian are similar.

Let's consider another example (19) from Zeshan and Panda (2015:122):

(19) B:PLAN S:FLY S:IX2 S:THINK S:THERE I:INDIA S:THINK B:WHAT (…) [Clip 3, 06:32, CN]
     'As you were planning to fly, what were you thinking about India?'

CN begins signing in BuSL, then articulates four signs that are shared in both sign languages (S), and after that code-switches into ISL, then again to S and BuSL. This is in line with Lehtinen’s statement:

For any intra-sentence code-switching to be possible at all, there must exist in the two languages some constructions which are in some sense similar, so that certain syntactic items from each language are equivalent to each other in specific ways. Further reflection, supported by an examination of the corpus, shows that the similarities must exist in what is known as the ‘surface grammar’ of sentences.

(Lehtinen 1966:153 in Muysken 2000:11)

It seems that code-switching occurs when the shared lexical item (S) triggers ISL and pushes it to the fore; the shared element facilitates CS. Clyne (1972) presents the notion of lexical triggering: he postulates that similarities between languages facilitate CS and trigger other elements from the other language. The next section deals with another form of code-mixing, namely code-blending.

2.2. Code-blending

2.2.1. Definition

As mentioned before, CODAs are hearing individuals who are skilled in a signed and a spoken language and can therefore produce utterances using both modalities simultaneously. They are bimodal bilinguals who can code-blend, as both sensory systems (acoustic and visual) are employed. Linguists are interested in the production of bimodal bilinguals since this phenomenon does not occur in the utterances of unimodal bilinguals: no one can speak English and Dutch simultaneously. Code-blending is the simultaneous use of the spoken and manual channels to produce a full message; it is the interference of one modality with another. In code-blending, CODAs co-use two languages of different modalities, i.e. a visual and an acoustic system. Two autonomous sensory systems are involved in the perception and production of bimodals: the spoken language is produced by the vocal tract, while the signed language depends mainly on the movement of the hands and body. Hereafter, we refer to them as the visual-gestural language and the acoustic-vocal one. In their code-blended production, bimodals do not have to stop one language to use the other, as is the case in the code-switching of unimodals; CODAs can use them in parallel. Code-blending is similar to speaking while gesturing, but the difference is that the former is goal-directed while the latter is purely spontaneous. CODAs may also engage in code-switching by alternating between the two modalities sequentially. Still, in code-blending there should be a dominant modality accompanied by components of the embedded one. The primary language (or matrix language, as Muysken calls it) is the one that provides the utterances with their syntactic structure, and processing it depends on several automatic features; yet inserting components from the embedded language happens spontaneously. The matrix language affects the production of the embedded one. Even when CODAs use a language in a purely spoken or signed context, they may show an effect of the other language.

In the next section, the types of code-blending are divided according to the base language the CODA uses.


2.2.2. Types of code-blending

Code-blending can be divided into four categories based on the matrix language, the language that provides the grammatical structure for the utterance.

2.2.2.1. Spoken-based code-blending

This sort of code-blending resembles most cases presented in previous studies. The acoustic-vocal modality provides the CODA’s production with the grammar. The utterance is produced entirely in spoken language, according to the spoken language rules, and the signs are simultaneously produced with some words. However, the signs do not add any semantic value to the utterance (Van den Bogaerde and Baker 2005). The following examples are from spoken Italian and LIS.

(20) It.  La strega dà la mela a Biancaneve
          the witch give.3SG the apple to Snow White
     LIS: CL-GIVE
          'The witch gives the apple to Snow White'

(21) It.  La regina dice
          the queen say.3SG
     LIS: SAY SAY
          'The queen says'

(Donati and Branchini 2012:104)

In the examples shown, the dominant modality is the spoken one, accompanied by elements from the signed modality, but the two do not have to be synchronized. The verb 'give' comes in parallel in both modalities in (20), whereas in (21) SAY is produced twice in LIS but is not synchronized with an equivalent verb from Italian.

These examples show that despite the presence of a dominant modality, the other one keeps its word order and grammar. The addition of the signed elements is redundant and does not add any semantic value to the utterance, thus contradicting the claim that this addition is due to a lack either in the user's production or in the signed language itself. The extra elements from the non-dominant language could serve an emphatic function (as Bentahila explained in his paper; see the section on CS between signed languages).

Other similar examples are taken from Bishop (2010:225).

(22) I'm pretty [sure] they were [hearing].
                SURE             HEARING

(23) I think it is [mostly English].
                   MOST  ENGLISH

In these examples, the base language is English and there are insertions of lexical items from ASL that semantically mirror elements from spoken English. It is worth noting that these added elements do not add any extra information to the meaning either. In the next section, another type of code-blending is tackled, which rarely occurs.

2.2.2.2. Sign-based code-blending

In this type of code-blending, the matrix language is signed; the whole proposition is expressed in sign, and there are insertions of spoken words that do not contribute any semantic value to the utterance. The visual modality thus provides the utterance with the grammar. The following example from Van den Bogaerde (2000) illustrates this type.

(24) BICYCLE RED OUTSIDE
                 out
     'The red bicycle is outside.' (Van den Bogaerde 2000:99)

Here the utterance is entirely produced in a signed language, namely Sign Language of the Netherlands (NGT), and the Dutch word is congruent with only one sign, OUTSIDE. Again, the proposition is structured according to the signed language's rules and grammar, and the spoken element does not add any semantic value to the utterance. Other examples from Bishop (2010:226), involving ASL and English, are presented in (25) and (26).

(25) [That] [hearing] [way]
      THAT   HEARING   WAY

(26) [Cute] [plus] [deaf] [poor] [lalalala]
      CUTE   PLUS   DEAF   POOR   PITY
     '(She's) cute, deaf, and poor. What a pity.'

In the above examples, ASL provides the proposition with its syntactic structure, while English is congruent and does not add any information to the meaning. Non-signers cannot understand these utterances, which are less transparent.

The following example is from Bishop, Hicks, Bertone, and Sala (2007:98). In this example, the Italian word order is rearranged to fit the grammatical structure of LIS.

(27) It.  [con papa] (lo) [accompagno] (per) [fare] (le) [mostre]. [Ama fare] (le) [mostre insieme]
     LIS: CON PAPÀ ACCOPAGNO FARE MOSTRE. AMA FARE MOSTRE INSIEME
     It.  (ai) [sordi] (per). [parlare], [sapere nuovo], [io insieme] (ai) [sordi] ci sto.
     LIS: SORDI PARLARE SAPERE NUOVO, IO INSIEME SORDI

     Translation:
     [with] [dad] [I go] [put on art exhibit]. [He loves] [to exhibit] [together] [deaf]
     WITH DAD GO SHOW ART. HE LOVE SHOW ART TOGETHER DEAF
     [talk] [know] [new] … [I] [together] [deaf], [I'm there].
     TALK KNOW NEW … PRO-1 TOGETHER DEAF AM

The subject here is talking about how she helps her dad display his paintings in different galleries. The sentence is in spoken Italian but follows the grammar of LIS, so the matrix language is LIS, according to Muysken's definition. Similar results were found by Emmorey, Borinstein, and Thompson (2005), who studied native users of English and ASL and found that the informants repeat spoken verbs according to the grammar of ASL. For instance, a subject reduplicates the verb CHASE in ASL to express that the bird was chased all around the room; they noticed that the spoken verb was uttered twice as well.

In the next type, lexical items from both modalities are necessary to produce a full proposition; it is not possible to leave them out.


2.2.2.3. Mixed code-blending

In this sort of code-blending, the proposition is built on elements from both modalities. Both signed and spoken languages are crucial to complete the meaning, as they complement each other. The two articulatory channels work in parallel to produce a full sentence. Of course, there are still some redundant elements which can be elided. They can either belong to the same word class or a different one as shown in the following example from Van den Bogaerde (2000:99).

(28) HORSE big

‘The horse is big’

In this example, the noun is uttered in NGT, and the adjective that modifies it is produced simultaneously in Dutch. Other examples from Italian and LIS are provided by Donati and Branchini (2012). In the examples in (29) and (30), it is not possible to leave any element out, since all of them contribute to the meaning of the utterance. In this sort of code-blending, the utterance is distributed over two sensory systems with some redundant duplications. The recipient has to pay close attention to grasp the full meaning, since the two modalities complement each other and the elements from both languages are crucial to complete the meaning.

(29) It.  dalla regina cattiva
          to.the queen wicked
     LIS: GO WICKED
          '(He) goes to the wicked queen'

(Donati and Branchini 2012:110)

(30) It.  Biancaneve è la più bella
          Snow White be.3SG the most beautiful
     LIS: IX BEAUTIFUL ALL
          'Snow White is the most beautiful of all'


In (29), the verb is produced in LIS and its locative argument in Italian. In (30), ALL is uttered only in LIS, to provide the interlocutor with the other term of the comparison.

In the type discussed in the following section, both modalities are produced fully.

2.2.2.4. Full code-blending

In this sort of blending, both modalities are produced fully and independently as if two monolinguals utter them. In other words, the utterances are well-formed in both modalities, but they need not be exactly the same in both spoken and signed language. Consider the following example from Bishop (2010:228), in which both the English and the ASL string are grammatically correct.

(31) How [old] are [you]?
          OLD       PRO

In the previous example, both the English and the ASL string are grammatically correct, and any monolingual user of either language can grasp the meaning by relying on their proficiency in their own modality.

Sometimes, one language can express a notion that does not exist in the other. For example, LIS has a locative sign which provides a meaning that is not identified in the other modality, i.e. Italian. Consider the following examples involving LIS from Donati and Branchini (2012:105).

(32) It.  I sette nani sono saliti
          The.PL seven dwarf.PL be.PRES.3PL climb.PTC
     LIS: SEVEN DWARVES CLIMB ON-SHOULDERS
          'The seven dwarves have climbed on the shoulders'

In this example, LIS specifies where the dwarves climbed to, which is not expressed in Italian: the Italian verb saliti ('climbed') is generic (general and vague), while in LIS the goal of the climbing has to be specified.


(33) It.  Calcio... mi piace l'Italia
          Soccer me.DAT like The Italy
     LIS: SOCCER LIKE INTER
          'As for soccer, I like Italy/Inter'

(Donati and Branchini 2012:105)

This example shows a clear mismatch between LIS and Italian. The team name produced in LIS is a hyponym of the more general term articulated in Italian: the signed element INTER serves to identify which Italian soccer team the speaker means.

As most types of code-blending (all except mixed code-blending) show, the embedded semantic elements are redundant. The next section tackles the functions of code-blended utterances and why CODAs produce them.

2.2.3. Functions of code-blending

2.2.3.1. Express nuances

CODAs make use of the grammars of both modalities, which play a major role in conveying nuances that cannot be expressed in either modality alone. For example, ASL provides the meaning with an additional grammatical feature that does not occur in English, such as the use of space; this gives the CODA the advantage of expressing precise notions that cannot be expressed by unimodal bilinguals (Bishop 2010). This is shown in the examples from Emmorey, Borinstein, and Thompson (2005), where signers code-blend to express notions that exist exclusively in one language and not in the other. For example, "some ASL verbs inflect for location (e.g., not just "jump" but "jump toward a location"), and classifier predicates often convey additional visual-spatial information not expressed by the English verb" (Emmorey, Borinstein, and Thompson 2003:667). The ASL and English forms were semantically equivalent, but each modality contributed different information. In (34), the verb jump is presented in both modalities: it is inflected in English to agree with the singular subject, whereas the sign JUMP indicates the location to which Sylvester jumps, i.e. forward.

(34) P1: "So Sylvester who's on the ledge [jumps into] the apartment."
                                           JUMP


(35) P7: "I [don't] [think] he would [really] [live]."
             NOT     THINK            REALLY   LIVE

(Emmorey, Borinstein, and Thompson 2003:666)

Even when CODAs speak to monolinguals, there are intrusions from ASL into English to express nuances that can only be expressed in one of the modalities. CODAs profit from the fact that both channels are activated in their minds. This interference suggests that co-speech gesture is strongly affected by sign language acquisition. Moreover, knowing a sign language enables the speaker to access a bigger lexicon; it provides speakers with vocabulary not available in the spoken language. Their production is unique due to the simultaneity of speech and sign.

2.2.3.2. Preserve identity

Although they can hear, CODAs consider themselves culturally more connected to the deaf community than to the hearing one, since in their childhood they did not recognize the difference between themselves and their deaf siblings until an older age. Besides, the deaf consider them deaf, since they assimilate into Deaf culture. The only difference is that they have the ability to hear and can consequently use either of the two languages; they are also able to switch between the languages sequentially or simultaneously.

Bishop and Hicks (2005) argue that the hearing culture is different from the deaf one; nonetheless, CODAs do not recognize that difference until a certain age. At a later age, CODAs identify themselves as having both a hearing and a deaf identity. Code-blending plays a crucial role in marking CODAs' identity.

It is worth noting that, because oral culture and values are privileged in many aspects of life, such as education (deaf children receive mainstream education that focuses on spoken language from an early age), the deaf often experience their norms, culture, and language as being ignored compared to the oral ones. This bias affects CODAs, as they are not perceived as bilinguals by the surrounding hearing community, which in turn results in mutual ignorance by both hearing and deaf people. Code-blending is a tool that CODAs use to identify themselves as belonging to both "opposing worlds", as Bishop and Hicks (2005) call them. They state that CODAs are often sent to a speech therapist before entering school because they acquired a sort of non-native pronunciation from their deaf parents. This stigma affects the production of CODAs to an extent; it also affects attitudes towards CODAs' speech. The informants in Preston (1994) were very critical of this sort of utterance. They think that CODA-talk is restricted to the family and that using it outside a CODA context is "tantamount to a betrayal of a family and cultural trust" (Preston 1994:223, as cited in Bishop and Hicks 2005).

In Emmorey, Borinstein, and Thompson (2003), several subjects told the researchers that they code-blend unconsciously even when they communicate with non-signers: they do not adapt their utterances to the hearing status of interlocutors who do not know ASL. According to them, the sign-affected speech is a form of bilingual communication which allows this interference. That contradicts what happens in code-switching, where both languages are active in the mind but only one candidate is spelled out, and where unimodal bilinguals adapt their language behavior to their interlocutors. I predict that the use of code-blending is a way of indicating a bimodal identity as opposed to a unimodal one. In the next section, another phenomenon of simultaneous production is investigated.

2.3. Mouthings

2.3.1. Introduction

Hands are the main communicative means in signed languages; still, non-manual markers such as the eyebrows, the torso, and the mouth may also contribute to identifying the meaning of an utterance. Non-manual markers such as head and body movements, facial expressions, and mouth patterns play a crucial role at the grammatical and prosodic level (Pfau and Quer 2010). For example, in ASL, a manual utterance can be negated by simultaneously combining it with a headshake. Mouth activities are among the most essential non-manual markers and can be divided into:

• Spoken components (which Schermer later on called mouthings): they are mainly derived from spoken languages.

• Oral components (which she later on called mouth gestures): the mouth performs movements that imitate the action itself, e.g. the mouth movements that accompany the signs APPLE, VOMIT, LAUGH.

(Sutton-Spence 2007:81)

Given that spoken languages are promoted in the media, in schools, and in people's lives, they have affected their corresponding signed languages to a great extent. Day (1995) postulates that English has more influence on signers with hearing parents than on signers with deaf parents. The more signers are exposed to a spoken language, the more proficient in that language they become. Hence, many researchers have claimed that all signers are multilingual, because they are competent in at least one signed language and its corresponding spoken one. Zeshan (2001) and Pfau and Quer (2010) agree that this proficiency comes as a result of the mainstream education which signers receive at school.

Since signed languages are visual systems, the mouthed words are uttered without vocalization. Individuals who can lipread may understand the spoken-like mouth movements, i.e. mouthings, but not the mouth gestures, which cannot be traced back to a spoken language. The former stem from spoken words that have been transformed into voiceless mouth movements without their prosodic features (Vogt-Svendsen 2001). In this part, the focus will be laid upon mouthings, which are a crucial part of the mouth actions.

Mouthing is a movement of the mouth that corresponds to a word of the surrounding spoken language. Pfau and Quer (2010) define it as the unvoiced articulation of words (or word parts, usually their first part) of the corresponding language; for example, a signer may produce the sign SCHOOL accompanied by the mouthed word 'school'. Mouthing plays an essential role in most European signed languages but not in ASL, where fingerspelling has a major role. According to Boyes Braem (2001), mouthings in Swiss German Sign Language (DSGS) are very common, given that its users do not fingerspell, the technique that incorporates words from the spoken language into the signed language, as is the case in ASL. Another reason is that DSGS is not standardized, so it is likely for a signer to meet another signer who uses a different sign dialect; in this case, mouthing may serve as an identifier of a sign which could have another manual form in the other dialect. The last reason is that the oral language is highly valued by both the deaf and the hearing community, so the mouthing of German lexical elements has become an important feature of DSGS. Mouthing also helps to identify the meaning of the manual part of the sign (Pfau and Quer 2010). While signing, using elements from a spoken language may help non-fluent signers to grasp the meaning. As illustrated in the earlier example, it is easier to understand the sign SCHOOL when it is accompanied by its counterpart from the spoken language. Boyes Braem (1985) argues that mouthings help "when there is some doubt about the meaning of a sign" (as cited in Zeshan 2001:268). Pfau and Quer propound that some mouthings are redundant and do not serve any meaning, such as the Dutch words bloem ('flower') and moeder ('mother') accompanying the corresponding signs. In these two examples, the mouthing and the manual sign have one meaning and belong to the same lexical category (Boyes Braem 2001). Yet other mouthings serve to disambiguate the meaning of general manual signs that have more than one connotation and avoid misinterpretation. It is not possible to specify the meaning of the NGT sign SMALL-OBJECT without an accompanying mouthing, which might be 'pea', 'pearl', or 'detail'.

It is important to state that this feature is what distinguishes the European signed languages from ASL. Padden (1980) claims that mouthings are not part of ASL, where fingerspelling plays a major role, unlike in the European signed languages, where fingerspelling plays little or no role. As a result, this phenomenon has attracted the attention of European researchers (as cited in Pfau and Quer 2010). Schermer (2001) indicates that one of the most considerable differences between ASL and NGT is the occurrence of NGT mouthings even among Deaf signers from Deaf families. She asserts that, in contrast to ASL, which depends on fingerspelling for identifying general signs, most European signed languages use mouthings as meaning specifiers. So, the focus on this phenomenon came primarily from European scholars, since it is one of the most salient manifestations of spoken language influence on signed languages.

2.3.2. Types of mouthings

Mouthings are mostly articulated simultaneously with a manual sign; nonetheless, they may also occur alone, without an accompanying sign, as found in the data of Schermer (2001), who distinguishes two types of mouthings: with or without a manual part. Bank (2014) coded the mouth actions accompanying the manual part of the sign into five categories: standard mouthing, mouthing variant, overlap from adjacent signs, mouth gesture, or no mouth action. In this study, mouth gestures and 'no mouth action' will be excluded since they are irrelevant here. We will adopt the following categories of mouthings:

• Mouthings without a manual part, or solo mouthings, as Bank, Crasborn, and van Hout (2016) call them. In their corpus study, they found that this sort of mouthing is common in NGT for short phrases like yes and no and is used while the signer's hands are engaged in doing something else. This sort of mouthing does not occur very frequently (11.75% of the cases Schermer found in her data collected in the 1980s). Schermer (1990) argues that this sort of mouthing is affected directly by a spoken language, i.e. Dutch. She reports that 60% of the mouthings without a manual part are function words, 11.75% are nouns, and 28.25% are uninflected verbs.

• Mouthings with a manual part: these occur either in a reduced or in a full form. They can be divided into three sub-types:

1. Standard mouthing: this sort of mouthing is the most frequent; the mouthings have the same meaning as the manual signs they accompany.

2. Mouthing variant: in this type, mouthings specify the manual sign, as illustrated in the following example from Bank (2014:25).

(36) Manual:  SIGN
     Mouth:   tolk
              'interpreter'
     Meaning: Sign language interpreter

In example (36), the lexical information resides in both mouth and sign. The mouthing tolk specifies the sign SIGN to form a complex meaning. Here mouthing is obligatory and has a specifying function.

3. Overlap: in this type, a manual sign may be accompanied by a mouthing that belongs to a neighboring sign. In example (37) from NGT, taken from Bank (2014:31), the mouthing leren (‘learn’) accompanies LEARN and overlaps with the following SIGN.

(37) Manual: SIGN             WANT    LEARN          SIGN // CHILD ...
     Mouth:  <mouth gesture>  wil___  leren_________       kind_ ...
                              ‘want’  ‘learn’              ‘child’

In the following, we will only be concerned with the latter type, i.e. mouthings with a manual part. The different functions of mouthings will be discussed, and illustrated with examples, in the next section.

2.3.3. Functions of mouthings

Ajello, Mazzoni, and Nicolai (2001) claim that mouthings are essential parts of signed languages since they facilitate communication. The mouthed lexical items that accompany the manual parts of the signs occur to “disambiguate minimal pairs or to specify or complement the meaning of the sign” (Schermer 2001:277). In analyzing her data, Schermer divides mouthings into two broad categories: i) mouthings that identify, specify, disambiguate, or complement the sign, and ii) mouthings that do not add any additional value to the sign.

Mouthings belonging to category (i) are obligatory since, without the mouthed part, the sign will be vague and unclear to the interlocutor. Four functions have to be distinguished.

• Disambiguate a minimal pair: numerous signs have more than one meaning; for example, in NGT, FRIEND, BROTHER, and SISTER have the same sign, so the mouthing is crucial to understand what the signer means. Some signs have a broad meaning and need to be specified, since the sign itself is unclear and vague, e.g. ‘bed’ and ‘egg’ in NGT (Schermer 2001:277).

• Identify the syntax of the manual part of the sign, as the following NGT-Dutch example illustrates.

(38) ZEGGEN + gezegd (SAY + said)

(Schermer 2001:278)

The mouthing here comes in its complete form and serves to mark tense: the manual part of the sign is uninflected, and the mouthing specifies the tense.

• Specify the meaning: the mouthed element is semantically related to the manual part of the sign, i.e. the mouthed part of the sign can denote a type of what the manual part denotes, or there is some other semantic relation between the two parts. In NGT, we find examples such as the following:

(39) DORP + stad (VILLAGE + town)
     LAND + weiland (LAND + meadow)
     GEITEN + gras (GOATS + grass)

(Schermer 2001:278)

As shown in the previous examples, the manual part and the mouthing are semantically related. The mouthing in the third example identifies which kind of goat is meant.


• Complement the manual part of the sign, as in the following NGT-Dutch examples:

(40) ETEN + grazen (EAT + grazing)
     WILLEN + ik wil (WANT + I want)
     ROEPEN + mama (CALL + mama)
     ROK + mooi (SKIRT + nice)

(Schermer 2001:278)

As illustrated in these examples, the manual part of the sign does not provide us with a full proposition, so the mouthing is crucial to complement it. What is eaten would not be understood if the mouthing were absent, and we cannot identify the agent of the verb willen (‘want’) without the mouthed element. Similarly, the recipient of the call is unknown until it is mouthed. In the last example, the mouthing functions as an adjective modifying the noun rok. Crasborn, van der Kooij, Waters, Woll, and Mesch (2008) postulate that a mouthing may function as a morpheme that is semantically incongruent with the manual part of the sign in order to express a complex meaning; for example, when the NGT sign ROEPEN (‘call’) occurs with the mouthing mama, the two form the complex meaning ‘mama roepen’ (‘call mama’). The presence of mouthed components thus has a significant role. In this regard, Schermer (1985) argues that “the existence of a pure sign language, without the occurrence of any speech, among deaf adults, is more or less a theoretical construct” (p. 288, as cited in Bank, Crasborn, and van Hout 2015:42).

Mouthings can also serve the lexical integration of proper names that do not have a sign because the lexical item is not an intrinsic part of the signers’ culture. An example from Ajello, Mazzoni, and Nicolai (2001:235): “settequaranta is the number of an income-tax return form, which is often referred to as a ‘seven forty’”.

Signers commonly adjust mouthings to the length of the signs they occur with, i.e., sign and mouthing are temporally aligned.

Crasborn, van der Kooij, Waters, Woll, and Mesch (2008) note that mouthings are not limited to a single sign; they may also spread onto neighboring signs and bind the elements in the clause together. They define mouth spreading as the extension of a mouth action beyond the corresponding sign onto the neighboring sign(s), either progressively or regressively. Figure 2 shows the (progressive) spreading of the mouthed element dokter from the sign it is associated with (DOCTOR) onto the predicate GO. We notice that the onset of the mouthing dokter accompanies DOCTOR, while the remaining part co-occurs with GO.

Manual:  DOCTOR_________GO_______________________________
Mouth:   d______o_______(k)t______e______(r)_____________
Meaning: ‘I go/went to the doctor’

Figure 2. Selection of video frames illustrating the simultaneous signing of DOCTOR GO with the single mouthing ‘dokter’ (Bank, Crasborn, and van Hout 2015:42).

A cross-linguistic study (Crasborn, van der Kooij, Waters, Woll, and Mesch 2008) examined the spreading of mouthings in three European signed languages, namely British Sign Language, Swedish Sign Language, and Sign Language of the Netherlands. Consider the following example from NGT.

(41) Mouth:  ‘dorp’          ‘jongen’               ‘woon’
     Manual: DORP INDEX      JONGEN PERSOON-CL      WONEN INDEX
             VILLAGE INDEX   BOY PERSON-CL          LIVE INDEX
     ‘There was a boy who lived in a village.’

(Crasborn, van der Kooij, Waters, Woll, and Mesch 2008:59)

The example contains three instances of progressive spreading from a lexical sign onto a functional sign. The mouthing ‘jongen’ (‘boy’), for instance, stretches from the noun BOY onto the following classifier (PERSON-CL), binding the constituents of the noun phrase together and forming the complex meaning ‘a boy’. Mostly, mouthings spread over only one additional sign, as shown in the example above, but sometimes they stretch over more than one sign. An example from BSL by Crasborn, van der Kooij, Waters, Woll, and Mesch (2008:62) shows the mouthing of the modal verb ‘will’ stretching forward from the function sign WILL onto the following two signs WIN AT-LAST.

(42) Mouth:  ‘will’_________________
     Manual: WILL  WIN  AT-LAST
     ‘Will win in the end.’


It has been suggested that the spreading of mouthings, as in (41) and (42), may indicate the existence of a prosodic domain, namely the prosodic word. Schermer argues that when mouthings are mandatory, they are considered part of the sign. But what about mouthings that are not essential, that is, those that do not provide any additional information?

Mouthings that do not add any additional value to the sign are not obligatory and can be left out, as in the following examples:

• Mouthings in their short form:

(43) VRAGEN + vraa (ASK)
     MOEDER + moe (MOTHER)
     KINDEREN + kinder (CHILDREN)
     BUITEN + bui (OUTSIDE)

• Mouthings in their complete form:

(44) GOEDKOOP + goedkoop (CHEAP + cheap)
     GEIT + geit (GOAT + goat)
     NU + nu (NOW + now)
     SNEEUWEN + sneeuwen (SNOW + snow)
     ETEN + eten (EAT + eat)

(Schermer 2001:278)

In these examples, the mouthings, which are partial in the first four cases and complete in the last five, do not provide any additional information. They are redundant, since the signs and the mouthings have the same lexical value. Here the signer’s use of mouthings results from the superior status that spoken languages are considered to have. Boyes Braem (2001) found that early learners use mouthings as code-switches to German for constructed speaking, or for narrative emphasis and perspective. She found that early learners are inclined to mouth for several discursive purposes, which she terms “constructed speaking”. This can be reported speech of a non-signing person, or of a deaf signer who uses mouthings to communicate with a hearing person. She postulates that those mouthings are a kind of tactic to involve a non-signer in the conversation.


3. Discussion

Given the advent of cochlear implant technology, one might hypothesize that the use of sign languages will decrease and that the role of spoken language will increase. At the same time, the impact of spoken language might also decrease, as sign languages extend their lexicons at the cost of disambiguating mouthings. To me, this implies that signs for certain concepts are still lacking and that sign languages are building up their lexicons. Signers mouth because they do not find a proper sign to represent their ideas; due to this lack of signs, they are forced to borrow a lexical item from the corresponding spoken language. Like bilinguals, all deaf signers are influenced to some degree by the spoken language of the surrounding community, like any minority that is familiar with the majority language. Sign language is acquired in informal ways (from peers) rather than from parents (Baker and Van den Bogaerde 2005); moreover, deaf education focuses on teaching spoken language. This results in a great number of spoken language lexical items in signed language.

In this chapter, the research questions will be answered. The first one is ‘What are the similarities and differences between code-mixing and mouthings?’, and the second is ‘Why can/can’t we consider mouthing a part of sign language?’. First, I will discuss the similarities and differences in sections 3.1 and 3.2 and give answers to the three sub-questions. At the outset of section 3.1, a brief introduction shows the similarities between code-switching and code-blending. In section 3.2, the differences between code-switching and code-blending will be introduced. In each of these sections, a sub-research question is answered. Then, in section 3.3, I will discuss arguments for why we can or cannot consider the simultaneous act of mouthing and signing as code-blending. This will give an answer to the main research question and suggest a third modality to which redundant mouthings could be attributed.

3.1. Similarities between code-mixing and mouthing

At the outset of this part, it is essential to remember that we use code-mixing as a cover term for code-switching and code-blending. As shown in section 2.2, in code-blending we have a dominant language which provides the utterance with its syntactic structure. In most cases, the spoken language is the matrix language; however, this may vary according to the context.
