
Incidental learning of typologically unknown languages through minimal exposure to film scenes: The effect of L2 subtitles

Julia Schwarze

Student number: s2529122

MASTER THESIS
FOR THE ATTAINMENT OF THE ACADEMIC DEGREE
Master of Arts
IN THE PROGRAM
MA Applied Linguistics TEFL / MA Taalwetenschappen TEFL
IN THE FACULTY OF
Arts / Faculteit der Letteren
AT THE
RIJKSUNIVERSITEIT GRONINGEN

1st Supervisor: Dr. W. Lowie


Contents

Abstract
1. Introduction
2. Theoretical background
   Input and speech segmentation
   The noticing hypothesis, incidental learning and the role of frequency
   Input processing
   Enhanced audio-visual input
   Subtitle processing
   Assessing incidental learning: Dimensions of vocabulary acquisition
3. The study
   Participants
   Method
4. Results
   Results of the passive recognition test
   ANOVA of acceptance by stimuli type
   Comparison of means between conditions
   Repeated measures analysis of frequency levels
   Test of association between conditions for TW accuracy by frequency
   Most frequently accepted stimuli
   Results of the active recall test
5. Discussion
6. Limitations and further research
7. Conclusion
8. Bibliography
9. Appendix
   I.


Abstract

1. Introduction

Strip away all that is usually associated with traditional second language learning. Imagine a situation of exposure to an unfamiliar language, without any knowledge of the target language, without meaning negotiation, without interaction, without instruction, with no regard for motivation, intention or individual differences, and keep only the prospect of acquisition. The contrast to traditional language classes raises interest in the potential for early stages of foreign language learning in an exposure situation like this. A learning context that disregards intention and prior knowledge has in recent years become of interest in language acquisition research. The concrete and indeed very commonplace environment which matches these criteria can be found in media streaming (a term referring to the transfer of data (such as audio or video material) in a continuous stream, especially for immediate processing or playback online). Foreign language movies and series, especially from America, enjoy great popularity all over the world. Many countries do not dub the foreign productions (dubbing denotes the audio transfer method by which the original soundtrack of a medium is replaced by a target language audio track), but keep the original soundtrack when broadcasting. Instead, native language subtitles are provided as a cheaper, quicker and more flexible alternative to dubbing. In Europe alone, 28 countries employ this technique. Figure 1 shows a map of Europe to illustrate the extent to which subtitling is used in national broadcasting across the continent. However, television is still restricted to broadcasting media from only a handful of foreign languages. In mainland Europe, for example, the non-dubbed programs originate mainly from other European countries, or include the highly popular English-language shows from America and England.

The diversity of viewers' native languages makes it near impossible to cater subtitles to them all. As a result of the lingual diversity of any given interest group, viewers have to rely on subtitles in a lingua franca, English, in which they may vary greatly in proficiency. One example includes popularly accessed media from the pool of Asian dramas, which are rarely available dubbed or broadcast outside of Asia. Under these circumstances watching with subtitles is no longer the most economic alternative, but the only alternative.

By choice, individuals spend hours watching a series for entertainment purposes. The amount of exposure time to a language of which viewers have little or no knowledge leads to increasing familiarity with the target language. The fertile ground for language learning is apparent and recognised within the interest group. In social networks, blog posts and advertisements by the companies that bring these shows to a wider audience, the incidental acquisition of vocabulary is often humorously topicalised. The caption 'One of the first words u learned was 'saranghae' in Korean' was published by VIKI.com and alludes to the frequency of the phrase 'I love you' in the highly popular romantic comedies the site makes available for a global audience. Even though, by comparison, 'I love you' is much less frequent than for example 'How are you?', it is often the most anticipated (and probably most re-watched) phrase and therefore has more perceptual salience than 'good morning'.

These observations are at the surface of more fundamental research on second language acquisition (SLA). Studies in SLA and language teaching have yielded positive evidence for the effect of subbed movies on lexical development (in Perego et al., 2010, p. 244f.). The research field of foreign language learning through subtitled videos conflates findings from various studies and diverse research in linguistics and adjoining disciplines: knowledge about speech segmentation, noticing as an essential precursor of language acquisition, factors influencing noticing, the effects of bimodal (audio-visual) input in SLA, and appropriate testing of acquisition after minimal exposure. However, the combined factors of this acquisition context present a relatively unmapped area, since they introduce a new aspect (namely the effect of subtitles) to the coalescence of the above-mentioned independent variables. At the centre of the current study is the processing of L2 subtitles without regard for L2 proficiency, and incidental word learning of an unfamiliar and unknown target language through minimal exposure to audio-visual input. Consider the following example, which could readily occur in online video streaming. A speaker of an Indo-European (more precisely Germanic) language watches a movie whose original audio track is in a Japonic language, of which the viewer has no prior knowledge, with subtitles in his L2 (English), the lingua franca of online communities, in which he may have any level of proficiency that suffices for him to follow the subtitles. Under these circumstances, would he be able to acquire certain lexical items incidentally? So far there have only been two investigative studies that examined incidental learning after minimal exposure: Gullberg, Roberts, Dimroth, Veroude and Indefrey (2010) and Gullberg, Roberts and Dimroth (2012). Their findings pioneer research in this area and pave the way for more inquisitive exploration.

By focussing on the effect of subtitles on incidental learning, the current study expands on the scope of Gullberg et al.'s (2010; 2012) studies. It adds to the pool of studies on incidental learning under the premise that no previous knowledge of the target language is available.

2. Theoretical background

In the absence of linguistic clues the auditory input remains noise. One can easily test this by listening to news in an unknown language, in the author's case, for example Greek. Even within the confines of a small topic like the weather forecast, one could listen for hours without ever establishing the form-meaning link associated with the concept 'rain'. If an additional perceptual channel was added, like visual, the endeavour may turn out more successful. That is, if one were able to somehow segment the word from the rest of the speech stream that sounds like, well, Greek.

Recent studies by Gullberg et al. (2010; 2012) tested precisely this human capability, speech segmentation after minimal exposure, in a group of native Dutch speakers. The participants watched a weather forecast in Mandarin Chinese, which no participant had any pre-existing knowledge of. After being exposed to the foreign speech and visual input for seven or 14 minutes respectively, participants were tested on word recognition. The results showed that even such complex input forms as tone languages can be segmented, or rather can trigger the beginnings of the segmentation, after as few as seven minutes. Returning to the example of segmenting the speech stream of Greek, it follows that in an audio-visual context a brief exposure can result in the initial step of establishing form-meaning relationships, by making the form accessible to the listener through successful segmentation. The extent and limitations of this process will be discussed through findings in previous research on input, speech segmentation, noticing and incidental learning. The results of these studies form the foundation of Gullberg et al.'s and the present study.

Input and speech segmentation


At the onset of learning, during the first moments of exposure to unfamiliar input when it is still noise to the listener and there is no pre-existing TL knowledge to draw on, the listener is faced with the very first task of language acquisition. Upon exposure to prosodic input, the listener has to segment the speech stream (Carroll, 2004), eventually map meaning onto an identified form chunk and in the long run generalise this meaning in order to deal with novel items. This is the very condensed and simplified form of the initial steps that form the beginnings of language learning according to Klein (1987, p. 59ff.). For the present study only the first two of these are relevant.

"Possibly the first step in learning a language is learning to segment the speech signal so that the continuous sound stream is perceived as a linear sequence of sound forms. To reach a phase where one can do this, L2 learners must acquire a complex coalition of cues to prosodic units, including prosodic words."

(Carroll, 2006, p. 22)

The author continues by stating that these cues vary for different languages, a proposition furthered by other research on adult learners' speech segmentation ability. Studies on phonological processing and word segmentation have yielded information about listeners' keen differentiation sensitivity to acoustic signals (Broersma, 2005; Cutler, 2001; Cutler, Mehler, Norris and Segui, 1986; Flege and Wang, 1990; Altenberg, 2005). Metric segmentation and phonotactic constraints are language specific and also depend on the pre-existing linguistic knowledge of the listener. Prosodic cues are also language specific, but require a certain proficiency level in the TL to be harnessed. For a detailed discussion of segmentation cues specific and intrinsic to Japanese see Cutler and Otake (1994), McQueen, Otake and Cutler (2001), Minematsu and Hirose (1995) and Otake, Hatano, Cutler, and Mehler (1993). In the current context, variation in segmentation based on individual differences is not an issue. Gullberg et al. (2010; 2012) have shown that the beginnings of this process are triggered after very brief exposure times.

Most existing research into incidental vocabulary learning presupposes some proficiency in the TL when acquisition of unknown words is tested. An example of this is Tekmen and Daloğlu's (2006) study testing L2 lexical acquisition of speakers with varying degrees of proficiency in the TL in their ability to learn through extensive reading (further studies of incidental vocabulary acquisition with previous TL proficiency are summarised by Dongzhi, 2009). In another context, Waring and Takaki (2003) tested word learning of ESL speakers by replacing words in a proficiency-adequate text with non-existing substitute words (for further discussion see also Coady, 1997). The limited background on speech segmentation under the premise of no pre-existing knowledge of the TL requires a closer look at additional factors contributing to recognition of words in the speech stream.

In the process of segmentation, noticing can be both a contributor to successful segmentation and the result thereof (see also Carroll, 2006). It is a focal concept in incidental learning research (Schmidt, 1990; 1995; 2001; 2012) and its role in language comprehension in general is crucial. The factors it depends on, such as salience and frequency, as well as insights into input and intake, will be discussed in turn in the next section.

The noticing hypothesis, incidental learning and the role of frequency

"The 'noticing hypothesis' states that what learners notice in input is what becomes intake for learning."

(Schmidt, 1995, p. 20)


negligible as a precursor of acquisition. However, even if attention is not supported by intention, attention can be driven by other factors, for example high-interest material (Elley, 1991), which by default increases attention and consequently increases the chances of acquisition. The current study is based on the context of online media streaming, a situation induced through high interest in international shows and one in which viewers are not discouraged from watching a video even though they have to rely on subtitles for understanding. It follows that the video is the sole controlled attention-encouraging factor in this study and thus may be assumed as a valid context for incidental learning.

Even though attention is an important factor for noticing, there is more to language acquisition than that. Schmidt (1990, p. 139) states "intake is that part of the input that the learner notices". For this it does not matter whether the learner was attending to form or inadvertently noticed a linguistic form. Noticing is necessary to convert input into intake. On a critical note, it should be highlighted that it is not possible to define exactly what the intake is. For the current study, however, this is not an issue. Under the premise that the speech stream will be segmented with the help of our cognitive resources, and that noticing both potentially facilitates and results from this process, either consciously or unconsciously but most likely both, certain units become continually available for processing in the initial stages of vocabulary acquisition. Therefore the question of what is noticed comes second to the fact that units are being noticed. In addition, and in reference to the situation examined in this study, when input arrives through multiple modalities (visual and audio), attention capacity is larger, hence increasing the chances of noticing. Noticing does not occur randomly; it is in turn the result of the input.

[what is noticed are] “elements of the surface structure of utterances in the input, instances of language, rather than any abstract rules or principles of which such instances may be exemplars”

(Schmidt, 2001, p.5)

Kim (1995) lists among the most commonly noticed elements "prosodic prominence, positional salience, familiar phonological make-up" (Kim, 1995, p. 71). Eventually, he also closes the argumentative cycle by postulating that it is important to bear in mind that "noticing is not always a function of the learner's intention" (Kim, 1995, p. 66). In addition to these results, Gullberg et al. (2010; 2012) also studied another feature, unique to multi-modal input, which can contribute to noticing. They studied the effect of gestural enhancement, a factor that will be addressed in more detail later.

Among the most prominent language-unspecific factors facilitating noticing is frequency, a topic of study in linguistics and SLA that is widely discussed in the literature (Ellis, N., 2002; Larsen-Freeman, 2002; Gass & Mackey, 2002). Gass and Selinker (2001, from Gass, 1988) underscored the complexity of the issue:

"Something which is very frequent in the input is likely to be noticed. On the other hand, particularly at more advanced stages of learning, stages at which expectations of language data are well-established, something that is unusual because of its infrequency may stand out for a learner."

(p. 402)

In the current case the second claim is of no regard; the effect of high frequency in early L2 acquisition, however, is one of the questions addressed by this thesis. Kim (1995) was more reluctant than Gass to affirm the unconditional adjuvant role of frequency for noticing in SLA. He provided evidence that frequency only has an effect on noticing at two extremes, high frequency and low frequency, which qualifies Gass and Selinker's position. Gullberg et al. (2010; 2012) agree with this proposal and as a result only included two frequency levels. High frequency and low frequency are relative terms, interpreted as different absolute numbers by different researchers. In order to reveal any possible frequency effect, this study employs a more detailed distinction by including four frequency levels. It is important to bear in mind that frequency, while a widely acknowledged factor contributing to noticing, is not an isolated variable. In movie dialogue in particular, frequency interacts with the factors named by Kim (1995) to form a dynamic system that influences noticing.

Input processing

Findings on fast mapping (the process by which a new lexical entry is created in the mental lexicon), a field that is well researched in L1 acquisition (Carey and Bartlett, 1978), have been transferred to the L2 acquisition process with inconclusive results (Carey, 2010; Rohde & Tiefenthal, 2000; Swingley, 2010). Even though the findings are mixed, it is evident that minimal exposure is sufficient for phonological representations to be created in the mental lexicon, though the extent to which lexical or other linguistic information is mapped onto them remains vague. While the debate about whether or not adults fast map is still on-going, these tentative results are supported by research from other linguistic disciplines.

The processes discussed in the previous paragraph may well be both conscious and unconscious. Phonological processing is a theoretically complex field supported by bottom-up research; interested readers are referred to Baddeley and Gathercole, 1998; Baddeley, 1981; Burgess and Hitch, 2006; Gathercole, Willis, Emslie, & Baddeley, 1992; Gathercole and Hitch, 1997; Jacquemot and Scott, 2006; and Majerus, Poncelet, Elsen and Van der Linden, 2006 for a more detailed discussion. For the present study it is important to bear in mind that phonological processing is a key step in word learning and becomes part of the word learning complex together with segmentation and noticing.

Enhanced audio-visual input

After discussing the processing steps that are involved in word learning from input exposure, it is time to return to Gullberg et al.'s (2010; 2012) pioneering studies and discuss them in more detail. The authors took the research on SLA input processing a step further. They included in their research what had been neglected in SLA research so far: segmentation of an unknown, typologically unrelated language after minimal exposure. Their study sheds light on the influence of frequency, gestural enhancement and syllable count on word recognition and form-picture mapping.

In their 2010 and 2012 studies, 41 Dutch native speakers were exposed to a controlled Mandarin Chinese weather forecast and grouped into two conditions with exposure times of 7 and 14 minutes respectively. The input had been scripted specifically for the purpose of the experiment, controlling target words (TW) for frequency (high = 8 repetitions and low = 2 repetitions), syllable count (mono- and disyllabic) and gestural enhancement (highlighted or not). Consequently, the rest of the script (made up of 'padding words') was equally controlled, to prevent interference with the TW criteria. The distractors for the immediate post-test were real words chosen under the condition of maximum difference. Results showed that adult learners can segment the speech stream successfully, and correctly recognise words in isolation that occurred as few as eight times during the exposure. Form-picture mapping shows the same frequency effect plus an effect of gestural highlighting, suggesting that form-meaning mapping at this stage requires accumulative cues for acquisition to progress.

In film dialogue, gestures are unlikely to be sufficient to accentuate the speech stream, due to the much broader thematic range of the language and the variation in use. In weather forecasts, gesturing can improve the form-meaning establishment, as it can draw attention to the concept associated with the highlighted form. In this case the gestures are stable, controlled (as in Gullberg et al.'s studies) and used by a single person. In films, gesturing, though a natural form of highlighting, may not be specific enough and may be too inconsistent to cover the range of topics that are part of the dialogue, and hence has a limited potential for facilitating form-meaning relationships. In the situation examined by the current study, subtitles are as much part of online television as weather icons are of the forecast. Hence, instead of focussing on gestures in movies, the effect of subtitles seems the more prominent effect to test.

The question addressed here is therefore whether subtitles affect noticing and form-meaning mapping. In a context similar to that of Gullberg et al.'s participants, are there effects of frequency and subtitles on utilising unfamiliar linguistic input for word learning? Baars (1983) proposes that events (stimuli) remain unconscious (unnoticed) if they are either un-interpretable in the current context or so stable as to be part of the context. Klein (1986, p. 59) mentions the role of 'parallel factors' that can allow input interpretation without understanding the utterance. In theory, this would mean that input in an unfamiliar, typologically unrelated language remains unnoticed and we can get by relying on situational context (gestures or translations in the form of subtitles). However, "segmentation starts in the absence of meaning" (Gullberg et al., 2012, p. 17), hence the start of the learning process is inevitable. We merely rely on varying additional factors like gesturing to facilitate noticing. In the current study the effect of subtitles as an accumulative cue is tested.

Subtitle processing

Considering subtitles as a means of furthering word learning presupposes the human ability to handle two visual stimuli during video watching (video and on-screen text). A look at the state of the art of subtitle processing will shed light on the cognitive effort associated with multi-modal input processing as well as provide insights into the efficiency of subtitles in lexical acquisition.

Given the transient nature of spoken input discussed above, it follows that listeners simply have less time than readers to focus on linguistic information, even more so when they are unfamiliar with the TL. Spoken language is also continuous (i.e. there are no obvious boundaries between words), likely requiring segmentation of the speech stream in order to recognise words in the input. As discussed above, this process is not innate to the phonological input but to the listener, and requires time in order to complete and be efficient in input processing. After minimal exposure to the foreign speech stream, this process is still in its early stages. Even for more proficient speakers this may still cause processing breakdowns, because they can miss subsequent or forget previous parts of the input as part of their reduced processing abilities in the TL (e.g. Goh, 2000). Such problems can negatively affect comprehension and hinder acquisition. This set of problems refers to TL audio input only. Enhancing the input through gestures, as tested by Gullberg et al. (2010; 2012), is one way of compensation.

In the context of online video entertainment, the audio is consolidated through a second input channel. The warranted question arises as to whether our eyes can follow dual visual input simultaneously without loss of cognitive efficiency. Perception and processing of additional input channels is discussed in the following section.

Subtitling is the language transfer practice in media used most widely in Europe (recall Figure 1). Taken together, 28 countries (26 countries plus two regions in two countries) use it: Belgium (Flemish-speaking), Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, Greece, Hungary, Iceland, Ireland, Latvia, Liechtenstein, Lithuania, Luxembourg, Malta, Netherlands, Norway, Poland, Portugal, Romania, Slovakia, Slovenia, Sweden, Switzerland (German-speaking), Turkey and United Kingdom (Safar et al., 2011, p. 7). That means a notable pool of people regularly encounters three-fold input in the form of foreign language audio, video and L1 subtitles when watching national television or going to the cinema. In the example underlying this study, this pool of people is expanded by the number of viewers handling online television in a foreign language with lingua franca subtitles.

Eye-tracking research suggests that viewers process subtitles largely automatically (d'Ydewalle & De Bruycker, 2007; Kruger et al., 2010). With a little practice, subtitle reading becomes automatised and processing of the information is effortless (for topics of average complexity like the movies or dramas used in the above studies). Eye-tracking experiments suggest that depending on the situation (bad subtitles, redundancy of information etc.) the viewer adapts his attentional strategies, causing changes in eye-movement patterns, fixation times etc. Results highlight our cognitive adaptability, and make it clear that no cognitive strain is placed on the viewer that could diminish his ability to process subtitles efficiently. To this end, there is no reason to suggest the viewer could be incapable of utilising the subtitles for word learning in general. Additionally, the ability to adapt to various viewing conditions so effortlessly allows generalisation of findings over populations and makes it possible to disregard individual differences, so long as there are no major limiting factors like reduced eyesight, memory impairment or attention deficits involved (Perego et al., 2010).

An exemplary study that yielded relevant results about the effect of subtitles on lexical acquisition of an unknown language is Bisson et al.'s (2012) study on subtitle-facilitated language learning of related languages. Even though the target language (Dutch) was closely related to participants' native language (English), their study parallels the current idea in so far as participants had no prior knowledge of the TL and viewing conditions were with and without L2 subtitles. In their study, British English participants watched a subtitled movie sequence in the TL. Bisson et al. (2012) found no effect of subtitles on vocabulary acquisition. However, the authors did not test recognition of phonological units in isolation, but tested recognition of matched and mismatched translations. In this respect the authors adhered to the traditional assessment of word learning through checking the establishment of form-meaning relationships. A more comprehensive approach to testing word learning and its onsets would take the complexity of acquisition into consideration by also testing participants' active recall.

Assessing incidental learning: Dimensions of vocabulary acquisition


Donkaewbua, 2008, p. 158).

Vocabulary acquisition, according to Van Zeeland and Schmitt (2013), is more complex than the dual yes/no choice of whether or not a form-meaning link has been established. Van Zeeland and Schmitt studied incidental word learning through listening to discover the degree of acquisition possible in this setting. They follow Nation (2001), who discusses several aspects of the depth of vocabulary knowledge. Van Zeeland and Schmitt consider the dimensions framework an essential part of assessing acquisition through listening. Their test focuses on form and grammatical characteristics as relevant fractals, and on the ability to access them during active recall. They suggest that in this acquisition complex, word recognition and passive recall often precede active meaning recall of fractal information. Schmidt (1995) highlighted the importance of differentiating passive recognition and active recall. It seems sensible to consider the dimensions of vocabulary learning separately.


Bridging scientific gaps

The mixed results of a frequency effect support findings discussed above. The inconclusive outcomes are unsatisfying and maintain the need for further investigation of frequency effects. In this study of incidental learning through minimal exposure, frequency will remain a factor of investigation. Following Van Zeeland and Schmitt (2013), four frequency levels are investigated, related to the number of iterations in a given video. In addition to contributing to research on the effect of frequency on word learning, the effect of L2 subtitles is considered.

In online international media, English subtitles make content accessible for a broad range of viewers. While simulating this exposure context in order to explore the language learning potential the following questions are addressed:

• Do L2 subtitles, regardless of the viewer's proficiency, have an effect on word recognition after minimal exposure?

• How does frequency affect passive recognition scores?


3. The study

Participants

Twenty-six native speakers of Dutch, enrolled in undergraduate programs at the Rijksuniversiteit Groningen, the Netherlands, participated in the current study. Speakers were between 18 and 30 years of age, with no pre-existing knowledge of the target language, Japanese. The advantage of testing Dutch speakers is that the method of language transfer in their country is subtitling, so their experience with subtitled movies and television increases their efficiency in handling subtitles in the course of this experiment. Participants were not selected for English (L2) proficiency. However, all participants had adequate L2 knowledge as part of the requirements for their studies. In order to acknowledge the reality of the simulated situation, no other factors were controlled for. The 6 males and 20 females were randomly assigned to one of two conditions: with L2 subtitles (sub) and without subtitles (raw).

Method

The study is centred around watching two scenes from the Japanese drama 'Koibumi Biyori' (engl.: A good day for a love letter). Each scene is an independent episode of 20 minutes in length. It is a school-set drama with two different self-contained stories. Two conditions were created for watching these scenes. While both groups heard the original Japanese audio track, only one group saw English subtitles. The participants of the two groups, 'subtitled' and 'raw', watched the scenes individually. The scenes were presented as part of an online survey in the participants' L2, English. The viewing was followed directly by two immediate post-tests. Japanese was chosen as target language due to its open vowel phonology. Target words were all free morphemes, except for one. The choice of free morphemes as target words was based on Slobin's (1985, p. 1164) report that bound morphemes are less perceptually salient and more difficult to notice.

The dialogue of both scenes was analysed to create the list of 30 target stimuli to be used in the recognition test. The target items, ranging in frequency from > 10 to < 3, were separated into four categories. In addition, 60 distractors were created, 30 of which were non-words matching Japanese phonological rules. For a full list of units presented to the participants, see Appendix 1.
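To make the frequency grouping concrete, the sketch below (in Python, with placeholder counts and an assumed handling of boundary cases; not the procedure actually used for this thesis) bins counted words into the four frequency categories described above.

    # Minimal sketch: bin candidate words into the four frequency categories used for
    # the target stimuli (> 10, 9-6, 5-3, < 3 occurrences). Counts are placeholders.
    word_counts = {"nani": 14, "kawaii": 12, "boku": 7, "ohayou": 4, "gomen": 3, "hai": 2}

    def frequency_category(count):
        """Map an occurrence count onto the four levels; boundary handling is assumed."""
        if count > 10:
            return "> 10"
        if count >= 6:
            return "9 - 6"
        if count >= 3:
            return "5 - 3"
        return "< 3"

    for word, count in word_counts.items():
        print(word, count, frequency_category(count))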

The aim of this study is to simulate incidental word learning in the context of unmediated exposure. This situation encompasses the potential for unintentional learning without a task, based solely on exposure and noticing. The role of tasks for noticing has been discussed by Thornbury (1997). He concludes that what is necessary for the task will be remembered, whether there is an intention to learn or not. Consequently, in order to avoid task interference in the current experiment, the task had to consist of nothing more than watching with interest. Hence, it was essential that the instructions were carefully chosen. The task instructions were adapted from Bisson, van Heuven, Conklin and Tunney (2012), who used the same phrasing for testing native and foreign language subtitle processing: "They were asked to watch the film as they would normally do at home and were told that there would be some questions to answer afterward about the film and about themselves. Participants were not explicitly asked to read the subtitles nor were they told to pay attention to the FL. Furthermore, they were not told that there would be a FL vocabulary test after the film." (Bisson et al., 2012, p. 6).

The first immediate post-test was an audio recognition test, in which students were presented with audio files of approximately 3 seconds in length and had to decide, in a multiple-choice format, whether or not they had encountered the word in the previous scenes. In order to avoid guessing, students were encouraged to pick the 'I don't know' option when unsure. No correction or feedback was provided regarding the participants' success in answering. The purpose of this test was to determine noticing of phonological units in the input. A comparison across the two conditions will reveal whether noticing is facilitated or inhibited by subtitles and whether there was an effect of item type on the accuracy scores.


4. Results

Results of the passive recognition test

In order to convert the answers into scores for statistical analysis, a score of 1 was awarded for each correct response and a score of 0 for every incorrect response. Responses in the 'I don't know' category were excluded from the tally. Table 1 shows an overview of the results.
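To illustrate the scoring scheme, the sketch below (Python, with a hypothetical answer layout; not the scripts used for this thesis) converts recognition-test responses into the 1/0 accuracy scores described above and drops 'I don't know' answers from the tally.

    # Minimal sketch: convert recognition-test answers into accuracy scores.
    responses = [
        {"item": "nani",    "is_target": True,  "answer": "yes"},            # correct acceptance
        {"item": "etokane", "is_target": False, "answer": "yes"},            # false acceptance
        {"item": "usagi",   "is_target": False, "answer": "I don't know"},   # excluded
    ]

    def score(response):
        """Return 1 for a correct response, 0 for an incorrect one, None for 'I don't know'."""
        if response["answer"] == "I don't know":
            return None
        accepted = response["answer"] == "yes"
        return 1 if accepted == response["is_target"] else 0

    scores = [s for s in map(score, responses) if s is not None]
    print(sum(scores), "correct out of", len(scores), "scored responses")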

ANOVA of acceptance by stimuli type

The first step of the analysis investigated whether, within each condition, responses to target items and non-target items differed. An analysis of variance was run for each viewing condition (raw / sub) to determine whether there was a difference in accuracy between item types (target words (TW), distractors (d), non-words (NW)). Accuracy scores deviated slightly from normal distribution. There was a significant effect of item type in the raw condition on accuracy scores, F(2, 36) = 4.169, p < 0.05. A post hoc analysis revealed that TW accuracy (M = 15.1, SD = 4.8) and non-word rejection (M = 9.5, SD = 5.5) differed significantly from each other (p < 0.05); none of the other contrasts varied significantly. Similar results were found for the subtitled condition: there was a significant effect of item type in the sub condition on accuracy scores, F(2, 36) = 5.558, p < 0.01. A post hoc analysis revealed that TW accuracy (M = 12.3, SD = 6) and non-word rejection (M = 4.9, SD = 5.7) differed significantly from each other (p < 0.01); none of the other contrasts varied significantly. Both exposure groups treated the target words differently from the distractors and non-words: as seen in Table 1, they are more accurate in identifying words they had encountered in the video. The raw group (M = 15.1) successfully identified more target items than the group watching with subtitles (M = 12.3). In order not to disregard the false rejection (and, in the case of non-target stimuli, false acceptance) scores, Table 2 shows the complementary data. There was no significant effect of item type in the raw condition on these complementary scores, F(2, 36) = 1.271, p > 0.05. Similar results were found for the subtitled condition: there was no significant effect of item type in the sub condition on these scores, F(2, 36) = 1.529, p > 0.05. The twelve mean scores of accuracy and rejection are depicted in Diagram 2.

Table 1
Mean accuracy scores and standard deviations of target and non-target stimuli. Accuracy scores of 1 were awarded to correct rejections of non-target words for each participant.

Group        target items   distractors   non-words
raw (n=13)   15.1 (4.8)     10.3 (5.7)    9.5 (5.5)
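As an illustration only, a one-way analysis of variance over the three item types within one viewing condition could be computed as in the sketch below (Python with scipy; the thirteen per-participant scores are placeholders, not the study's data, and the thesis' own analysis may have been run differently, e.g. in SPSS).

    # Minimal sketch: within-condition ANOVA of accuracy by item type (TW, distractor, non-word).
    from scipy import stats

    tw_acceptance = [15, 18, 12, 20, 14, 17, 10, 16, 19, 13, 11, 18, 12]   # placeholder scores
    d_rejection   = [10, 12,  8, 15,  9, 11,  7, 13, 14,  6,  9, 12, 10]   # placeholder scores
    nw_rejection  = [ 9, 11,  5, 14,  8, 10,  4, 12, 13,  6,  8, 11,  9]   # placeholder scores

    f_stat, p_value = stats.f_oneway(tw_acceptance, d_rejection, nw_rejection)  # 3 x 13 scores -> F(2, 36)
    print(f"F(2, 36) = {f_stat:.3f}, p = {p_value:.3f}")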

Comparison of means between conditions


Diagram 2: mean scores of acceptance and rejection of target and non-target stimuli by viewing condition.

There was a difference in non-word rejection scores between the two conditions. Distributions of the non-word rejection scores for the two groups were not similar, as assessed by visual inspection. Non-word rejection scores for the 'raw' condition (mean rank = 40.02) were statistically significantly higher than for the 'subbed' condition (mean rank = 20.89), U = 164.5, z = -4.307, p < 0.01. In summary, this means that only the rejection scores of non-target stimuli were statistically significantly different between the two viewing conditions.
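A between-condition comparison of this kind can be sketched as follows (Python with scipy; placeholder scores, not the study's data). scipy reports the U statistic and p-value; the z-value and mean ranks quoted above would come from the rank transformation itself or from a package such as SPSS.

    # Minimal sketch: Mann-Whitney U test of non-word rejection scores between conditions.
    from scipy import stats

    raw_nw_rejection = [14, 12, 15, 9, 13, 11, 16, 10, 12, 14, 13, 15, 11]   # placeholder scores
    sub_nw_rejection = [5, 7, 4, 8, 6, 3, 7, 5, 6, 4, 8, 5, 6]               # placeholder scores

    u_stat, p_value = stats.mannwhitneyu(raw_nw_rejection, sub_nw_rejection,
                                         alternative="two-sided")
    print(f"U = {u_stat}, p = {p_value:.4f}")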

Repeated measures analysis of frequency levels

In the third step of the analysis, the target words were examined in isolation in order to investigate the effects of viewing condition and frequency on the dependent variable accuracy score. The independent variables had two and four levels respectively: viewing condition (raw / subtitled) and frequency of target words (> 10 / 9-6 / 5-3 / < 3). The mean accuracy scores are shown in Table 3.

Raw condition frequency levels showed two outliers in the frequency category 6 to 9, as assessed by inspection of a boxplot for values greater than 1.5 box-lengths from the edge of the box. The range of that category was 5; the outliers were accuracy scores of 1 and 6. Since the category was normally distributed and the outliers were not extreme, they were preserved in the data. Mauchly's Test of Sphericity indicated that the assumption of sphericity had not been violated, χ²(5) = 7.252, p > 0.05. Accuracy scores were statistically significantly different at the different frequency levels, F(3, 36) = 9.930, p < .01, partial η² = 0.453. A post hoc analysis with a Bonferroni adjustment revealed that acceptance scores statistically significantly differed between the frequency extremes, more than 10 iterations (M = 3.9, SD = 1.5) and fewer than 3 repetitions (M = 2.5, SD = 1.6), p < 0.05, and between 5 to 3 iterations (M = 4.9, SD = 1.8) and fewer than 3 repetitions. No further differences were statistically significant. Subtitled condition frequency levels showed no outliers in the data, as assessed by inspection of a boxplot for values greater than 1.5 box-lengths from the edge of the box. Mauchly's Test of Sphericity indicated that the assumption of sphericity had not been violated, χ²(5) = 5.074, p > 0.05. Accuracy scores were statistically significantly different at the different frequency levels, F(3, 36) = 4.179, p < .05, partial η² = 0.258. A post hoc analysis with a Bonferroni adjustment revealed that acceptance scores statistically significantly differed between the frequency extremes, more than 10 iterations (M = 3.6, SD = 2) and fewer than 3 repetitions (M = 2.2, SD = 1.5), p < 0.05, but not between any of the other frequency categories. The striking score of the 5-3 category in both conditions, and in the raw condition in particular, will be more closely examined in the analysis of the accepted stimuli.
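The repeated-measures analysis over the four frequency levels can be sketched as below (Python with pandas and statsmodels; long-format placeholder data, not the study's scores; Mauchly's test and the Bonferroni post hocs would require an additional package or SPSS). The partial eta squared reported above follows directly from the F-value and its degrees of freedom.

    # Minimal sketch: repeated-measures ANOVA of accuracy over four frequency levels,
    # plus partial eta squared computed from F and its degrees of freedom.
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    levels = ["> 10", "9-6", "5-3", "< 3"]
    data = pd.DataFrame([                      # one placeholder score per participant per level
        {"participant": p, "frequency": lvl, "accuracy": (p + i) % 6}
        for p in range(13) for i, lvl in enumerate(levels)
    ])

    print(AnovaRM(data, depvar="accuracy", subject="participant", within=["frequency"]).fit())

    def partial_eta_squared(f_value, df_effect, df_error):
        """partial eta^2 = (F * df_effect) / (F * df_effect + df_error)."""
        return (f_value * df_effect) / (f_value * df_effect + df_error)

    # Reproduces the effect size reported for the raw condition: F(3, 36) = 9.930 -> ~0.453.
    print(round(partial_eta_squared(9.930, 3, 36), 3))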

Test of association between conditions for TW accuracy by frequency

Table 3
Mean TW accuracy scores and standard deviations for target stimuli per frequency level by participants.

Group        > 10        9-6         5-3         < 3
raw (n=13)   3.9 (1.5)   3.78 (1.3)  4.9 (1.8)   2.5 (1.6)

The experimental groups treat stimuli from different frequency categories differently, as shown in the previous section. The raw condition group has consistently higher accuracy scores in all frequency categories compared to the group watching with subtitles. The accuracy scores are visualised in Diagram 3.

Diagram 3: Mean accuracy scores by frequency levels.

A chi-square test for association was conducted between viewing condition and frequency levels for TW acceptance scores. All expected cell frequencies were greater than five. There was no statistically significant association between viewing condition and frequency levels for TW acceptance scores, χ²(3) = 0.880, p > 0.05. A second test for TW rejection scores showed that all expected cell frequencies were greater than five. There was no statistically significant association between viewing condition and frequency levels for TW rejection scores, χ²(3) = 2.958, p > 0.05. A chi-square test for association was conducted between viewing condition and non-TW false acceptance scores. All expected cell frequencies were greater than five. There was no statistically significant association between viewing condition and non-TW false acceptance scores, χ²(1) = 2.755, p > 0.05. A chi-square test for association was conducted between viewing condition and non-TW rejection scores. All expected cell frequencies were greater than five. There was no statistically significant association between viewing condition and non-TW rejection scores, χ²(1) = 2.524, p > 0.05.
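A chi-square test of association of the kind reported above can be sketched as follows (Python with scipy; the contingency counts are placeholders, not the study's data).

    # Minimal sketch: chi-square test of association between viewing condition and
    # frequency level for TW acceptance counts (2 x 4 table, placeholder counts).
    from scipy.stats import chi2_contingency

    #            > 10  9-6  5-3  < 3
    counts = [
        [51, 49, 63, 32],   # raw condition (placeholder acceptance counts)
        [46, 44, 55, 28],   # subtitled condition (placeholder acceptance counts)
    ]

    chi2, p_value, dof, expected = chi2_contingency(counts)
    print(f"chi-square({dof}) = {chi2:.3f}, p = {p_value:.3f}")
    print("all expected cell frequencies > 5:", (expected > 5).all())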

Most frequently accepted stimuli

In the context of these results, a presentation of the most frequently accepted stimuli across sets can reveal more qualitative data on similarities between viewing conditions. Tables 4 & 5 show the highest acceptance scores of both experimental conditions. The majority of participants in both groups tagged the non-word etokane as familiar. And while the high scores of the subtitled condition correlate with the number of iterations the participants were exposed to, the high scores of the raw condition are spread more variedly across frequency levels and stimulus sets. From the ranking in Table 4 it can be seen that the target word 'ai' of frequency group 5-3 accounts for a lot of the scores in this category. This, in addition to the two other stimuli from the frequency group 5-3, explains the striking accuracy scores seen in Table 3.

Results of the active recall test

The final part of the analysis concerned the evaluation of the active recall test. In the active recall task, the experimental group watching with subtitles gave more accurate answers. A list of all responses of both viewing conditions side by side can be found in the appendix. Responses were rectified and categorised as follows: translation, lexical information, episodic information, pragmatic information, syntactical information, and frequency information. Only correct responses were categorised.

Table 4
Highest acceptance scores per stimulus across sets for the viewing condition: raw (n=13)

accepted   stimulus    frequency
11         ai          3 - 5
11         etokane     non-word
10         kawaii      > 10
10         sono tame   distractor
9          nani        > 10
9          -kun        > 10
9          boku        6 - 9
9          yoroshiku   3 - 5
9          ohayou      3 - 5
9          iie         distractor
9          usagi       distractor

Table 5
Highest acceptance scores per stimulus across sets for the viewing condition: sub (n=13)

accepted   stimulus    frequency


Other, and consequently excluded, response types were: recognition information (I heard it, it was used in episode...), confusion with another stimulus (baka ⇔ boku), unspecific episodic information, and declarations of previous knowledge (This is the only word of Japanese I knew before going into the experiment). The mixed-up responses were relocated to their intended target. The mix-up occurred four times in the raw condition and once in the subtitled condition. Table 6 lists the rectified responses of both experimental groups. Table 7 tallies the correct answers across all response categories between both viewing conditions. When looking at the response types, it is evident that the subtitled group could most frequently recall translations. However, the raw group also showed instances of correct translation. Correctly translated TW in the raw condition comprised the stimuli 'baka', 'arigatou', 'gomen' and 'hai', excluding 'kawaii' because the participant knew the word prior to the experiment. The contexts in which these stimuli occurred in the video were different: 'baka' was highly frequent and highly salient in both videos. In the second video it occurred in a teasing fight in the very last scene. 'Arigatou' has a lot of contextual salience, just like 'gomen'. 'Hai' is a very small word, easy to get lost in continuous speech.

Table 6
Rectified and categorised responses of the active recall task for both viewing conditions.

freq > 10

nani
  raw - episodic: "If I remember it correctly, this is what the girl in the first episode said to her crush when she was upset with him in the classroom before she broke down in tears."
  raw - episodic: "Said in first episode in classroom where boy en girl were standing in front of each other at the end"
  sub - translation & pragmatic: "means 'what'. Indicates surprise. Used mostly as an exclamation."

kawaii
  raw - translation: "it means cute"
  sub - translation: "Meaning: cute, sweet."
  sub - translation: "Means 'cute'."

baka
  raw - lexical: "it's a swearword I guess"
  raw - episodic: "This is what the guy and the girl at the end of episode two exclaimed teasingly at each other."
  raw - translation: "it means idiot or something"
  raw - episodic & pragmatic: "I think this is the word that the guy and the girl yelled at each other happily at the end of the second episode. As if they're teasing each other."
  raw - episodic & pragmatic: "this was probably the word that the girl and boy used frequently at the beginning and at the end of the second clip. it looked like they were fake-arguing with one another, just messing around and having a good time."
  raw - episodic & pragmatic: "the boy and the girl pushing each other away (for fun)"
  sub - translation: "This word also means: stupid (?)."
  sub - translation & pragmatic: "Means 'idiot'. commonly used as insult between boys and girls alike apparently. Doesn't seem to carry much offense."
  sub - translation: "stupid"
  sub - translation & frequency: "Stupid! In the second episode the girl exclaims this several times"
  sub - translation: "Meaning: stupid, annoying."

-kun
  raw - (no response)
  sub - syntactic: "suffix of the blond boy's name, mentioned in the first video by the girl who received the letters."
  sub - syntactic: "Used to indicate peers, positioned behind the name."

freq 9 - 6

demo
  raw - (no response)
  sub - translation: "means 'but', used mostly at the start of a sentence. Indicates a contradiction."

ouji-sama
  raw - (no response)
  sub - episodic: "second film when she is asking herself who her prince is"

yokatta
  raw - (no response)
  sub - lexical: "Not sure what it means, seems to indicate emotion, possibly sadness or empathy."

matte!
  raw - (no response)
  sub - translation: "Stop"

freq 5 - 3

ohayou
  raw - lexical: "Hello"
  sub - lexical: "Means 'hello', used often by itself."
  sub - lexical: "Hello"

arigatou
  raw - translation: "it means sorry, or thank you. Am in doubt now"
  raw - translation: "This means thank you."
  raw - translation: "This means 'thanks'"
  sub - translation: "Meaning: thank you."
  sub - translation: "Means 'Thanks'"
  sub - translation: "Means 'thank you'."
  sub - translation: "thank you"
  sub - translation: "Thanks"

gomen
  raw - translation: "it means sorry"
  sub - (no response)

freq < 3

hai
  raw - translation: "it means yes"
  raw - translation & episodic: "This is what the girl in episode two exclaimed when the teacher called her name and she was staring at the guy in class. It probably means 'yes' as that is what most people would say in such a situation, regardless of the language. Or it means something like 'huh' but that sounds too impolite to say to a teacher."
  raw - episodic: "I remember that in one of the episodes the main character is supposed to answer a teacher's question, but doesn't pay attention. I think she said this word."
  sub - translation & episodic: "Meaning: yes. Used in second episode: when the teacher addresses the student, this is the student's reply."
  sub - translation: "I think this means 'yes'. The girl in the second episode said it when she was asked to read a page out loud by her teacher."
  sub - translation: "Means 'yes'."

In summary, it can be stated that native Dutch speakers with no pre-existing knowledge of Japanese were able to segment some of the TL audio input successfully, which is evidenced in their ability to recognise the target sound units in isolation. This ability was not affected by the presence or absence of L2 subtitles. Frequency effects occurred in both viewing conditions, however only between frequencies below three and above 10 iterations. The results from the immediate active recall task indicate that the subtitled group had drawn a greater variety of linguistic information from the minimal exposure; however, the raw group also exhibited instances of successful translation and inference.

Table 7

Rectified responses of the active recall test. Number of responses per answer type for both viewing conditions

Response type raw sub


5. Discussion

The effect of subtitles on passive recognition

The results of the within-condition comparison of item type recognition provide robust evidence that both viewing conditions equally enabled viewers to differentiate between target stimuli and non-words. The two groups also behave similarly in their within-condition rejection scores. In this respect, both groups have equal starting points for further analysis. In the between-groups comparison of the twelve mean scores of acceptance and rejection, the most intriguing result was the answer to the question of whether subtitles had an effect on word recognition. The statistical analysis revealed no difference between the groups in either accuracy score (TW acceptance and TW rejection). However, the raw group's significant advantage in accurate rejection of non-TW suggests that subtitles inhibit phonological perception in such a way that viewers are at a disadvantage in passive recognition. The raw group was more confident at correctly rejecting fake stimuli, while the groups' false acceptance scores did not differ (considering guessing was limited through the option of choosing 'I don't know'). It follows from this that, while viewers handled stimuli they had actually encountered similarly (neither acceptance nor rejection being more confident in either group), phonological knowledge in the raw group was such as to allow them to reject non-TW more confidently than the subbed group. This result supports tentative hypothesising about the advantageous conditions for phonological perception in the raw viewing condition. In conclusion, it can be surmised that segmentation was not facilitated by subtitles, meaning that in this precursor of learning no group had an advantage over the other when being tested on passive recognition. These results supplement Bisson et al.'s (2012) findings on subtitle effects.

The effect of frequency on passive recognition

Both experimental groups behaved homogeneously by showing similar effects of frequency. This in turn corroborates previous findings of the effect of extreme frequencies by Van Zeeland and Schmitt (2013) and Gullberg et al. (2010; 2012). The striking frequency effect on TW accuracy scores of the raw group disrupts this construct. It is very likely that frequency interacts with other linguistic features to produce these results. As was evident in the high scores, certain words accounted for this anomaly. Increased recognition of these words may not exclusively be due to their frequency. At the surface level, these words can be counted among neither the very salient nor the very transparent. Results from the active recall test do not yield any further clues at this point as to which factors might be involved here. In Gullberg et al.'s study frequency effects on form recognition occurred between 2 and 8 occurrences, and between 3 and 7 in Van Zeeland and Schmitt (2013). It follows that frequency effects occur independently of viewing condition, and are not inhibited or facilitated through L2 subtitles. This does not exclude the probability of both variables interacting and cumulatively affecting other aspects of vocabulary learning.

Active recall: discussion of quality

Considering that there was no difference between conditions on passive recognition, and more importantly no disadvantage of the subtitled group compared with the raw condition in TW recognition, the next question to address is whether the different levels of acquisition are affected by either experimental condition. The results of the active recall test provide cautious, tentative and limited insights into aspectual vocabulary learning through minimal exposure under the premise of no pre-existing knowledge.

Some responses in the active recall test included references to the popular Internet slang term 'kawaii' (engl. cute). Choosing an even more remote language for minimal exposure studies in the future could control for this aspect. Gullberg et al.'s Chinese weather forecast, originally deemed too complex for the current study due to the tonal nature of the language, now seems more suitable for the purpose of testing incidental learning after minimal exposure than Japanese. For the current study, the incidences of pre-existing knowledge were very few and did not constitute a limiting factor or drastically compromise the results. Responses that indicated previous knowledge were eliminated during rectification of the data. The remaining responses are still sufficient to draw first conclusions about the effect of subtitles on vocabulary learning.

Response types in active recall

Unsurprisingly perhaps, the group watching the film scenes with subtitles most readily used translations as a means of active recall. The interesting aspect can be found in the responses from the raw group. There were several instances of successful translation that highlight the ability to map meaning onto form through inference of meaning from the video alone. In this group's responses there are several clues as to how we make sense of unfamiliar input. Through some of the extensive responses it is possible to trace the thoughts that led to successful inferencing, and also to understand the necessary context required for effortless form-meaning mapping. A word needs a very salient appearance to be noticed and to trigger meaning inference. As can be seen from the responses in the raw group, participants were particularly receptive to contextual salience. This is evident in the example of 'hai' (engl. 'yes'). The stimulus elicited the following response: "This is what the girl in episode two exclaimed when the teacher called her name and she was staring at the guy in class. It probably means 'yes' as that is what most people would say in such a situation, regardless of the language. Or it means something like 'huh' but that sounds too impolite to say to a teacher." It follows that besides salience, generality, commonality and obviousness play a role when inferring meaning. A context that provides these prerequisites is well equipped to facilitate form-meaning mapping. This logical connection that triggers inferencing is, however, not inherent to the context. Another participant from the same group described the same scene but did not conclude what the word could mean.

A contrasting example is the stimulus 'ouji-sama' (engl. 'prince'): "Something to do with the letter" was the answer given by a participant of the raw viewing condition. The main character uses 'ouji-sama' a lot in this scene, because she wonders who her anonymous prince, the love letter's author, is. Eliciting the meaning 'prince' from this context is highly unlikely without subtitles. The factor of obviousness is not given, and the lack of commonality of the term 'prince' probably also plays a role in the unsuccessful inference. However, such deficient episodic encounters with a phonological unit might still lay the foundation that would allow the form-meaning connection to be established, given more exposure to more unequivocal input.

It is in line with Skehan (1989) to assume that, in the above scenario, a lack of cue-rich contexts in incidental learning may prevent progress in vocabulary learning beyond these contextually salient words, even over longer exposure times, without focus on form. However, this argument does not consider the other fractals of language acquisition drawn from minimal exposure that also constitute learning. While the raw group could not acquire as much linguistic information as the subtitled group, their response types being limited to translations, lexical and pragmatic information and episodic memories, they could still accumulate aspects of the meaning.

The subbed group, contrary to expectation, did not solely rely on translations as response types, though they were the dominant form of answer. The participants of that viewing condition also successfully identified the suffix '-kun'. A possible explanation is that they could check the audio track against the subtitles and noticed the correlation between the names (which they were able to identify as such) and the sound '-kun', while the raw group may not have picked up on that correlation. The results indicate that subtitles support the accumulation of linguistic cues that facilitate form-meaning mapping. Though the study cannot rule out that, given more time and more unambiguous input, the raw group could have arrived at the same level of acquisition.

Frequency effects on active recall

The effect of frequency on active recall, though not statistically assessed, shows a clear tendency for the subtitled group's advantage to be independent of frequency: the subtitle group provided stable response numbers across all frequency levels.
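Purely as an illustration of how such an assessment could be carried out (the counts below are invented, and the use of a chi-square test of independence is an assumption on my part, not the procedure of this study), a condition-by-frequency-band table of successful recalls could be tested in Python:

    # Illustrative sketch with invented counts (not the study's data):
    # does successful active recall depend on word frequency per viewing condition?
    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: viewing conditions (subtitled, raw); columns: frequency bands (high, mid, low).
    recall_counts = np.array([
        [14, 13, 12],  # subtitled: stable across frequency bands
        [11, 6, 3],    # raw: recall drops as frequency decreases
    ])

    chi2, p, dof, expected = chi2_contingency(recall_counts)
    print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")

With real response counts, a non-significant association would be consistent with the frequency-independent pattern described above, whereas a significant one would speak against it.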


6. Limitations and further research

The study has answered the research questions about the effects of subtitles on incidental learning of unknown and typologically unrelated languages, but at the same time it has raised further questions. Limitations of the current study lie in the nature of the study design, most prominently the size of the cohort and the lack of a control group. There was insufficient control for pre-existing TL knowledge, and the entertainment value of school-based love stories for university students is questionable, which may have negatively affected interest-driven attention allocation. Individual differences in phonological processing were also disregarded; future research could focus on this variation in word-learning potential, which may place learning from the experimental context at a disadvantage compared to traditional learning methods. The choice of the free writing task as a test of vocabulary acquisition is also open to challenge. While the active recall test incorporates Van Zeeland and Schmitt's (2013) claim about levels of acquisition, this may only hold for learners with previous knowledge of the TL, since beginners may rely first and foremost on chunk memorisation.


7. Conclusion


8. Bibliography

Altenberg, E. P. (2005). The perception of word boundaries in a second language. Second Language Research, 21(4), 325-358.

Baars, B. J. (1983). Conscious contents provide the nervous system with coherent, global information. In R. J. Davidson, G. E. Schwartz & D. Shapiro (Eds.), Consciousness and self-regulation III: Advances in research and theory (pp. 41-79). Springer US.

Baddeley, A. (1981). The concept of working memory: A view of its current state and probable future development. Cognition, 10(1–3), 17-23.

Baddeley, A. (2000). The episodic buffer: A new component of working memory? Trends in Cognitive Sciences, 4(11), 417-423.

Baddeley, A. (2003). Working memory and language: An overview. Journal of Communication Disorders, 36(3), 189-208.

Baddeley, A., Gathercole, S. E., & Papagno, C. (1998). The phonological loop as a language learning device. Psychological Review, 105(1), 158-173.

Baddeley, A., & Hitch, G. (1974). Working memory. Psychology of Learning and Motivation, 8, 47-89.

Baddeley, A., Papagno, C., & Vallar, G. (1988). When long-term learning depends on short-term storage. Journal of Memory and Language, 27(5), 586-595.

Bisson, M., Van Heuven, W. J. B., Conklin, K., & Tunney, R. J. (2012). Processing of native and foreign language subtitles in films: An eye tracking study. Applied Psycholinguistics, 35(2), 399-418.

Broersma, M. E. (2005). Phonetic and lexical processing in a second language. (Unpublished PhD). Radboud Universiteit, Nijmegen.

Brown, R., Waring, R., & Donkaewbua, S. (2008). Incidental vocabulary acquisition from reading, reading-while-listening, and listening to stories. Reading in a Foreign Language, 20(2), 136-163.

Burgess, N., & Hitch, G. J. (2006). A revised model of short-term memory and long-term learning of verbal sequences. Journal of Memory and Language, 55(4), 627-652.

Carey, S., & Bartlett, E. (1978). Acquiring a single new word. Papers and Reports on Child Language Development, 15, 17-29.


Carroll, S. E. (2004). Segmentation: Learning how to hear words in the L2 speech stream. Transactions of the Philological Society, 102(2), 227-254.

Carroll, S. E. (2006). Salience, awareness and SLA. In M. G. O'Brien, C. Shea & J. Archibald (Eds.), Proceedings of the 8th generative approaches to second language acquisition conference (pp. 17-24). Somerville, MA: Cascadilla Proceedings Project.

Coady, J. (1997). L2 vocabulary acquisition through extensive reading. In J. Coady, & T. Huckin (Eds.), Second language vocabulary acquisition: A rationale for pedagogy (pp. 225-237). Cambridge: Cambridge UP.

Cutler, A. (2001). Listening to a second language through the ears of a first. Interpreting, 5(1), 1-23.

Cutler, A., & Otake, T. (1994). Mora or phoneme? Further evidence for language-specific listening. Journal of Memory and Language, 33(6), 824-844.

Cutler, A., Mehler, J., Norris, D., & Segui, J. (1986). The syllable's differing role in the segmentation of French and English. Journal of Memory and Language, 25(4), 385-400.

Dongzhi, W. (2009). Assessment on research methods of incidental vocabulary acquisition. Teaching English in China CELEA, 23(5), 120-127.

d’Ydewalle, G., & De Bruycker, W. (2007). Eye movements of children and adults while reading television subtitles. European Psychologist, 12(3), 196-205.

Elley, W. B. (1991). Acquiring literacy in a second language: The effect of book-based programs. Language Learning, 41(3), 375-411.

Ellis, N. C. (2002). Frequency effects in language processing. Studies in Second Language Acquisition, 24(2), 143-188.

Ellis, N. C. (2009). Optimizing the input: Frequency and sampling in usage-based and form-focussed learning. In M. H. Long, & C. J. Doughty (Eds.), The handbook of language teaching. Oxford: Blackwell Publishing.

Ellis, N. C., & Larsen-Freeman, D. (2006). Language emergence: Implications for applied Linguistics—Introduction to the special issue. Applied Linguistics, 27(4), 558-589.

Flege, J., & Wang, C. (1990). Native-language phonotactic constraints affect how well Chinese subjects perceive the word-final English /t/-/d/ contrast. Journal of Phonetics, 17, 299-315.


SSLA, 24(2), 249-260.

Gass, S. (1988). Integrating research areas: A framework for second language studies. Applied Linguistics, 9(2), 198-217.

Gass, S. (1997). Modeling second language acquisition. Mahwah, NJ: Erlbaum.

Gass, S., & Selinker, L. (2001). Second language acquisition: An introductory course. Mahwah, NJ: Erlbaum.

Gathercole, S. E., Hitch, G. J., Service, E., & Martin, A. J. (1997). Phonological short-term memory and new word learning in children. Developmental Psychology, 33(6), 966-979.

Gathercole, S. E., Willis, C. S., Emslie, H., & Baddeley, A. D. (1992). Phonological memory and vocabulary development during the early school years: A longitudinal study. Developmental Psychology, 28(5), 887-898.

Goh, C. C. M. (2000). A cognitive perspective on language learners' listening comprehension problems. System, 28(1), 55-75.

Gullberg, M., Roberts, L., & Dimroth, C. (2012). What word-level knowledge can adult learners acquire after minimal exposure to a new language? IRAL, 50, 239-276.

Gullberg, M., Roberts, L., Dimroth, C., Veroude, K., & Indefrey, P. (2010). Adult language learning after minimal exposure to an unknown natural language. Language Learning, 60, 5-24.

Gupta, P., & MacWhinney, B. (1997). Vocabulary acquisition and verbal short-term memory: Computational and neural bases. Brain and Language, 59(2), 267-333.

Hong, N. T. P. (2013). A dynamic usage-based approach to second language teaching. (Unpublished PhD). Rijksuniversiteit Groningen, Groningen.

Jacquemot, C., & Scott, S. K. (2006). What is the relationship between phonological short-term memory and speech processing? Trends in Cognitive Sciences, 10(11), 480-486.

Juffs, A. (2004). Representation, processing and working memory in a second language. Transactions of the Philological Society, 102(2), 199-225.

Kim, H. (1995). Intake from the speech stream: Speech elements that L2 learners


Krashen, S. D. (1985). The input hypothesis: Issues and implications. London: Longman.

Larsen-Freeman, D. (2002). Making sense of frequency. Studies in Second Language Acquisition, 24(2), 275-285.

MacWhinney, B. (1999). The emergence of language. Mahwah, NJ: Erlbaum.

Majerus, S., Poncelet, M., Elsen, B., & Van der Linden, M. (2006). Exploring the relationship between new word learning and short-term memory for serial order recall, item recall, and item recognition. European Journal of Cognitive Psychology, 18(6), 848-873.

McQueen, J. M., Otake, T., & Cutler, A. (2001). Rhythmic cues and possible-word constraints in Japanese speech segmentation. Journal of Memory and Language, 45(1), 103-132.

Minematsu, N., & Hirose, K. (1995). Role of prosodic features in the human process of perceiving spoken words and sentences in Japanese. Journal of the Acoustical Society of Japan (E), 16(5), 311-320.

Nation, I. S. (2001). Learning vocabulary in another language. Cambridge: Cambridge UP.

Nystrom, L. E., Braver, T. S., Sabb, F. W., Delgado, M. R., Noll, D. C., & Cohen, J. D. (2000). Working memory for letters, shapes, and locations: FMRI evidence against stimulus-based regional organization in human prefrontal cortex. Neuroimage, 11(5), 424-446.

Otake, T., Hatano, G., Cutler, A., & Mehler, J. (1993). Mora or syllable? Speech segmentation in Japanese. Journal of Memory and Language, 32(2), 258-278.

Perego, E., Del Missier, F., Porta, M., & Mosconi, M. (2010). The cognitive effectiveness of subtitle processing. Media Psychology, 13, 243-272.

Robinson, P. (1995). Attention, memory, and the noticing hypothesis. Language Learning, 45(2), 283-331.

Rohde, A., & Tiefenthal, C. (2000). Fast mapping in early L2 lexical acquisition. Studia Linguistica, 54(2), 167-174.

Safar, H., Modot, A., Angrisani, S., Gambier, Y., Eugeni, C., Fontanel, H., . . . Verstreppen, X. (2011). Study on the use of subtitling: The potential of
