The effects of L2 proficiency,

familiarity of subtitles and subtitling

type on multimodal information

processing

Shen Jian

Student number: S3572633

MA in Multilingualism

Faculty of Arts

University of Groningen

Supervisors:

Dr. Aurélie Joubert

Second reader:

Dr. C.S. Gooskens

6 July 2020

Acknowledgments

I would like to express my great gratitude to my supervisor, Dr. A.D.E. Joubert, for her consistent guidance, helpful suggestions, and valuable critiques during this study, especially because the first half of 2020 was a difficult time due to the coronavirus. Though physical meetings were impossible during such a time, she still organized several online meetings to follow up on the progress of my thesis. I also appreciate the help of Hanneke Loerts, who provided me with valuable literature that gave me ideas for the experiment design.

Furthermore, I want to give special thanks to my classmate Haiying for her recommendations on conducting the experiment online. Thanks to her, I could run the experiment entirely online, since a face-to-face experiment was impossible in such a special situation. I also received help from my Spanish friend Alejandro Nadal, whom I would like to thank for translating the instructions and questions for my experiment.

Finally, I would like to thank my family and friends. Writing a thesis has been more difficult than ever due to the corona crisis, and I could not have finished my study without their support and encouragement.

Table of Contents

Acknowledgments
Abstract
1. Introduction
2. Background
2.1 Subtitles and SLA
2.1.1 Different types of subtitling help understanding and remembering
2.1.2 Subtitles help vocabulary learning
2.1.3 L2 proficiency level for subtitles and SLA
2.1.4 The types of subtitling and SLA
2.2 Multimodal information processing theories
2.2.1 Selection of attention theories
2.2.2 Dual coding theory
2.3 Subtitling processing
2.3.1 Automatic reading behavior
2.3.2 Subtitles and audio processing
2.4 Present study
3. Methodology
3.1 Platform
3.2 Participants
3.3 Materials
3.3.1 Background information
3.3.2 Self-report of proficiency
3.3.3 Familiarity of subtitles
3.3.4 Stimuli
3.3.5 True-false questions
3.4 Procedures
3.5 Design and analyses
4. Results
4.1 Total score of D-questions
4.2 The effect of the subtitling type
5. Discussion
6. Conclusion
References
Appendices
Appendix A: Information about the participants
Appendix B: Subtitles
English subtitles of the first video taken from Limitless
Spanish subtitles of the second video taken from Inception
Appendix C: True-false questions
True-false questions of the first video taken from Inception
True-false questions of the second video taken from Limitless
Background questions

Abstract

The present study investigates which factors affect the processing of multimodal information by native Spanish speakers with English as a second language when they watch videos taken from two subtitled English movies. The three factors of interest were L2 proficiency level (intermediate or advanced), familiarity with subtitles (high or low frequency of subtitle use), and type of subtitling (interlingual subtitling for the first video and intralingual subtitling for the second). After watching each video, participants answered a series of true-false questions based on information from the audio channel. The subtitles in both videos contained information that deviated from the audio (e.g., "two hours" in the audio versus "five hours" in the subtitles). The results showed that L2 proficiency level did not affect the processing of multimodal information, and neither did familiarity with subtitles nor the type of subtitling. However, there was a noticeable, though not statistically significant, interaction effect of L2 proficiency level and familiarity. Future research requires a larger number of participants, a more accurate test of L2 proficiency level, and a better way of determining familiarity.

Keywords: second language acquisition, interlingual subtitles, intralingual subtitles, L2 proficiency, multimodal information processing

1. Introduction

Nowadays, thanks to the advancement of technology, the spread and exchange of information have become faster and easier than ever. Videos such as TV programs and movies, once rare resources, have become accessible to almost everyone due to the rapid development of the Internet. However, such multimodal information still faces certain barriers. Language, for example, is one of the main challenges when information is distributed internationally. So far, the two most prevalent ways to "translate" a foreign-language movie are dubbing and subtitling (Koolstra, Peeters & Spinhof, 2002).

On the one hand, dubbing a movie replaces the soundtrack in the original language with a soundtrack in the target language. A dubbed movie can give the audience a better viewing experience, especially if the audience has no knowledge whatsoever of the original language. On the other hand, the major disadvantage of dubbing is that it costs more time and money. In Europe, only large countries like France, Germany, Italy, and Spain choose to dub foreign-language movies, because they have enough resources for dubbing as well as a large language community that provides a sizable audience for dubbed movies. There is also a relatively minor form of dubbing called "voice-over", popular in Russia and Poland, which adds a nonsynchronous voice to the movie without removing the original soundtrack (Schirmer Encyclopedia of Film, 2020).

Subtitling, on the other hand, presents the translated information as text without altering the original soundtrack. There are two types of subtitling. If the subtitles are provided in the native or main language of the audience, this is known as interlingual subtitling. If the subtitles are provided in the same language as the original soundtrack, this is intralingual subtitling, which is primarily aimed at deaf or hearing-impaired viewers. Although it is more common to have subtitles in one language, subtitling in both the original and another language simultaneously is possible, as in bilingual countries like Belgium, Finland and Israel (Schirmer Encyclopedia of Film, 2020).

There is no decisive empirical evidence for which of the two is the better way to translate information: subtitling and dubbing each have their own advantages depending on the viewers, the type of video, and the way in which the video is subtitled or dubbed (Koolstra, Peeters & Spinhof, 2002). However, when it comes to second language acquisition (SLA), it is commonly believed that subtitling is better than dubbing, because viewers process the soundtrack in the foreign language and the subtitles in their native language at the same time. Several studies on processing multimodal information in different languages state that it helps the acquisition of a foreign language (d'Ydewalle & Van de Poel, 1999; Koolstra & Beentjes, 1999; Van Lommel, Laenen & d'Ydewalle, 2006).

In the interlingual subtitling situation, the L2 is processed through the audio channel only, while in the intralingual situation the L2 is processed through both the soundtrack and the subtitles. In this way, intralingual subtitling helps the viewer with novel word learning under some conditions (Bird & Williams, 2002). Though intralingual subtitling thus seems more beneficial, its advantages rest on the premise that the viewers already have sufficient proficiency in the foreign language. Viewers with basic or low proficiency cannot successfully process information in an entirely L2 environment, so for them intralingual subtitling does not help SLA.

So far, many studies have confirmed the benefits of watching subtitled movies. Subtitled movies help viewers learn more vocabulary and grammar, especially young children (d'Ydewalle & Van de Poel, 1999; Koolstra & Beentjes, 1999; Van Lommel et al., 2006). However, few studies have investigated how exactly the different modalities, namely audio and textual information, affect viewers' processing of videos. A previous study showed that viewers processed subtitles significantly more when they were given intralingual subtitles than when they were given interlingual subtitles, regardless of whether their English proficiency level was intermediate or advanced (Van Sluijs, 2013). However, that study had only 22 participants, all of them Dutch and therefore familiar with subtitled movies. The present study will therefore investigate how people who differ in their familiarity with subtitled movies process the multimodal and multilingual information in foreign-language movies. It is also worth investigating whether L2 proficiency level and type of subtitling affect the processing of audio information and subtitles in the same way for people who are unfamiliar with subtitles as for people who are familiar with them.

Based on the main interests of the present study mentioned above, four research questions have been put forward. The first research question is: does L2 proficiency level influence the processing of audio information and subtitles? The hypothesis is that it does, in the sense that the higher the L2 proficiency level, the more audio information is acquired. The second research question is: does familiarity with subtitles influence the processing of audio information and subtitles? Here, no major influence of familiarity is expected. The third research question is: is there an interaction effect of L2 proficiency level and familiarity on the processing of audio information and subtitles? The hypothesis is that there is no interaction effect. The fourth research question is: does the type of subtitling affect the processing of audio information and subtitles? Based on the aforementioned findings by Van Sluijs (2013), the expectation is that intralingual subtitling affects processing positively, meaning that participants should acquire more audio information when they watch the video with intralingual subtitling.
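The third research question concerns an interaction effect between two two-level factors. In a 2x2 design such as this one (proficiency x familiarity), an interaction can be pictured as a "difference of differences" between the cell means of the outcome score. The sketch below illustrates the idea with made-up numbers only, not the thesis's actual data:

```python
# Illustration of an interaction effect in a 2x2 design (hypothetical
# numbers, not the thesis's data): the interaction is the amount by which
# the familiarity effect differs between the two proficiency levels.

# Made-up mean true-false scores per (proficiency, familiarity) cell.
cell_means = {
    ("intermediate", "low"):  6.0,
    ("intermediate", "high"): 6.5,
    ("advanced", "low"):      7.0,
    ("advanced", "high"):     9.0,
}

def interaction(means):
    """Difference of differences: how much the familiarity effect
    changes between proficiency levels. Zero means no interaction."""
    fam_effect_adv = means[("advanced", "high")] - means[("advanced", "low")]
    fam_effect_int = means[("intermediate", "high")] - means[("intermediate", "low")]
    return fam_effect_adv - fam_effect_int

print(interaction(cell_means))  # (9.0 - 7.0) - (6.5 - 6.0) = 1.5
```

A nonzero value like this would show up as crossing or diverging lines in an interaction plot; whether it is statistically significant is then a matter for the inferential test (e.g., a two-way ANOVA).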

The final part of the introduction outlines the structure of this thesis. The first part provides the theoretical background for the present study, including the current role of subtitles and their effect on second language acquisition, different theories of multimodal information processing, previous studies on subtitle processing, and the goal of this study. The second part describes the research design, including the platform, the participant pool, the procedures, the materials used for the experiment, and the statistical analysis. The third part consists of the analysis of the data and the results. The fourth part is the interpretation and discussion of the results. Finally, the conclusion of the study is presented.

2. Background

2.1 Subtitles and SLA

2.1.1 Different types of subtitling help understanding and remembering

Many studies have shown that subtitles help people better understand movies, regardless of the type of subtitling. For example, Markham (1989) found that subtitles greatly improved general understanding of the content for students with English as a second language. Furthermore, he discovered that subtitles offer more assistance when the image and sound in the video do not match (Markham, 1989). Another study suggested that watching subtitled English programs not only facilitates SLA but also familiarizes viewers with the accents of the English language. Fifteen European learners of English with intermediate or high proficiency levels watched nine hours of BBC television programs with English-language subtitles. The participants reported that they found the subtitles useful and beneficial to their language development: the subtitles provided a large amount of comprehensible input, and when the participants were exposed to a particular accent of English, the subtitles supported comprehension of the speech and helped them get used to that accent (Vanderplank, 1988).

Besides helping viewers understand a movie in a foreign language, subtitles also promote SLA because they help viewers remember the language used in the movie. For example, Dutch students with German as a second language were divided into two groups: a control group that watched a video without subtitles, and a second group that watched the video with subtitles. The control group could recall only 43% of the language in the video, while the subtitled group recalled 93% (Gielen, 1988). Such benefits for SLA appear in other languages too. Garza (1991) conducted experiments with 70 ESL students and 40 students with Russian as a second language. Half of the students from each group watched five videos without subtitles, while the other half watched them with subtitles in the corresponding second language. The results of the comprehension test showed that subtitles in the second language significantly improved listening comprehension. In the test where the students were asked to recall the information, the experimental group also performed better than the control group.

2.1.2. Subtitles help vocabulary learning

Subtitles have also proven helpful for incidental word learning. An early study divided 129 ESL students who spoke minority languages into four groups for learning science vocabulary and concepts in English. In the 12-week experiment, the students studied in four different ways: television programs with subtitles, television programs without subtitles, reading along while listening to the text, and traditional textbooks. The results of the vocabulary test showed that the first group outperformed the other groups in word knowledge and recall of information (Neuman & Koskinen, 1992).

The same effects appear when movies use interlingual subtitling. Koolstra and Beentjes (1999) randomly assigned 264 Dutch elementary school students to three groups: (a) watching an English television program with Dutch subtitles, (b) watching the same program without subtitles, and (c) watching a Dutch television program. The experiments revealed that students in the group that received subtitles showed better vocabulary acquisition and recognition of English words.

Furthermore, a more recent study suggested that double subtitles can enhance the incidental vocabulary learning of an unknown language. An eight-minute Russian cartoon was presented to native Dutch speakers in three subtitling conditions: (a) interlingual (Dutch) subtitles, (b) double subtitles (Russian and Dutch), and (c) no subtitles. The participants who received double subtitles significantly outscored the others on a written word recognition test. Moreover, those who watched the cartoon with subtitles (interlingual or double) were better at recalling the sequence of scenes in the cartoon (Lazareva & Loerts, 2017). These findings suggest that double subtitles are possibly more helpful for SLA, at least regarding vocabulary acquisition.

2.1.3 L2 proficiency level for subtitles and SLA

Most studies on the effect of subtitles on SLA are based on the premise that the participants have the same or a similar proficiency level. Several studies have investigated the effect of proficiency level on subtitle processing and SLA, but the findings are inconsistent. For instance, Price (1983) found that all ESL learners benefit greatly from subtitles regardless of their education level and linguistic background. In contrast, Vanderplank (1988) suggested that subtitles may not offer as much help to learners with a low proficiency level as to learners with an intermediate or higher level, because learners with higher proficiency receive more comprehensible input. Research by Danan (1992) partly agrees with Vanderplank's idea. English-speaking college students with beginning and intermediate proficiency in French watched a five-minute video with interlingual subtitling (French soundtrack and English subtitles), reversed subtitling (English soundtrack and French subtitles), or no subtitles at all. In two subsequent experiments, interlingual subtitling was replaced by intralingual subtitling (French audio with French subtitles). In the first part of the research, students in the reversed subtitling condition performed best, and the results of the subsequent experiments indicated that bimodal processing improves students' SLA. Overall, it is suggested that subtitles should be adjusted to students' proficiency levels in order to reach maximum benefit, and that bimodal input should be used for SLA.

2.1.4 The types of subtitling and SLA

As mentioned above, subtitles are of great help for SLA. However, which strategy is best among the different types of subtitling? Different studies provide different and even opposite suggestions. In the study by Danan (1992), the group that received reversed subtitling scored highest, but the group that received interlingual subtitling performed worst (even worse than the group that did not receive any subtitles). However, another study did not yield exactly the same results. Dutch students watched a video under different types of subtitling. The results confirmed that reversed subtitling offered the most help but, in disagreement with the former study, the interlingual subtitling group performed worse only than the reversed subtitling group and better than all other conditions (d'Ydewalle et al., 1996). A third experiment partly agrees with the findings of the two studies above, although the study carried out by Markham et al. (2001) did not include reversed subtitling. A video was presented to 169 American college students with Spanish as their second language under interlingual subtitling, intralingual subtitling, or no subtitles. The content comprehension test showed that the group with interlingual subtitling performed best. Taken together, these studies suggest that reversed subtitling (L1 soundtrack and L2 subtitles) is most helpful for SLA; when the soundtrack is in the L2, interlingual subtitling is better than intralingual subtitling or no subtitles.

Furthermore, different combinations of soundtrack and subtitles have different effects on SLA. Baltova (1999) divided 93 Canadian students whose second language was French into three groups, and each group watched the same video three times. The first group watched it with (a) L1 soundtrack and L2 subtitles, (b) L2 soundtrack and L2 subtitles, and (c) L2 soundtrack without subtitles. The second group watched it with L2 soundtrack and L2 subtitles twice, and with L2 soundtrack without subtitles once. The third group watched it with L2 soundtrack and no subtitles all three times. In the test of understanding and recalling the content, the first and second groups significantly outscored the third. However, in the test of acquiring and remembering vocabulary, the first group performed better than the other two groups, which performed similarly well.

2.2 Multimodal information processing theories

2.2.1. Selection of attention theories

Videos naturally contain at least two modalities: images and sound. A third modality is added to TV series and movies, especially when they are originally in a foreign language, namely subtitling. Therefore, the ability to process imagery, auditory, and textual information simultaneously has become important.

However, processing multimodal information has been considered a very difficult, nearly impossible task since the last century, because our cognitive capacity is believed to be limited. Therefore, as proposed by Broadbent (1958), stimuli are filtered or selected at the very beginning of processing. When a certain type of information is selected, it becomes attended information, and attention will not focus on other types of information (Treisman, 1964).

Broadbent (1958) created the first selective attention model. As shown in figure 1, a selective filter in the middle of the model chooses a certain type of input based on physical characteristics such as color, pitch, loudness, and direction. In this way, only attended information passes the filter to be processed at a higher level.

Figure 1: Broadbent's Filter Model (Farr, 2012)

To support his theory, Broadbent (1958) conducted an experiment in which participants heard a different list of digits in each ear. When asked to report the digits, participants were more likely to report all the digits from one ear and then all the digits from the other ear, instead of reporting the digits in the order in which they were presented. Furthermore, ear-by-ear reporting was more accurate than reporting in order of presentation. These results suggested that information is filtered according to the ear from which it was heard. Since the participants clearly focused on the information from one ear and then the other, Broadbent concluded that the selection of attended information is made at an early stage of processing multiple sources of information.

However, Broadbent's Filter Model has certain flaws. For example, a person can still hear someone calling his name even when all his attention is on a conversation with a friend. This is called the Cocktail Party Effect (Cherry, 1953), and it cannot be explained by Broadbent's Filter Model: it shows that some "unimportant" information is being processed at a certain level. Therefore, Anne Treisman, a graduate student of Broadbent's, improved the Filter Model. In the attenuation theory she proposed, unattended information is attenuated instead of being completely blocked, which still supports Broadbent's early-selection view. In the attenuation model (figure 2), the filter does not only let one type of information pass; it also lets unattended information pass in attenuated form. If the attenuated information exceeds a threshold, it gains conscious awareness. This threshold is determined by the word's meaning (Klein, 1996). Important words, such as one's name in the aforementioned Cocktail Party Effect, have a low threshold and gain awareness easily, whereas unimportant words have a high threshold that prevents them from doing so. In this way, the threshold filters words according to their meanings (Treisman, 1969).

Figure 2: The Attenuation Model (Farr, 2012)
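The attenuation mechanism can be pictured as a simple rule: attended input passes at full strength, unattended input is scaled down, and a word reaches awareness only if its (possibly attenuated) strength clears a word-specific threshold. The toy sketch below illustrates this with made-up attenuation and threshold values; it is an informal aid, not a model from the literature:

```python
# Toy sketch of Treisman's attenuation model (illustrative numbers only):
# unattended input is attenuated, and a word still reaches awareness if
# its attenuated strength exceeds that word's threshold. Important words
# (e.g. one's own name) have low thresholds, so they "break through".

ATTENUATION = 0.3      # hypothetical: unattended channel keeps 30% of signal
thresholds = {         # hypothetical per-word awareness thresholds
    "own_name": 0.1,   # important word: low threshold
    "weather":  0.8,   # unimportant word: high threshold
}

def reaches_awareness(word, strength, attended):
    """Attended input passes in full; unattended input is attenuated
    before being compared against the word's meaning-based threshold."""
    effective = strength if attended else strength * ATTENUATION
    return effective >= thresholds[word]

# A clearly spoken name breaks through even on the unattended channel,
# while small talk on the unattended channel does not.
print(reaches_awareness("own_name", 1.0, attended=False))  # True  (0.3 >= 0.1)
print(reaches_awareness("weather", 1.0, attended=False))   # False (0.3 <  0.8)
```

This captures the Cocktail Party Effect described above: the unattended channel is not blocked outright, but only meaning-laden words clear their threshold.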

However, since the Attenuation Model retains Broadbent's assumption that attended information is selected at the beginning of processing, attention will still not focus on unattended information even when that information passes the attenuating filter, which can be explained by the limited cognitive capacity. Therefore, both Broadbent's original selection model and Treisman's revised model suggest that, when processing multimodal and multilingual information such as a subtitled video, it is nearly impossible to process both the audio information and the subtitles at the same time. Viewers therefore have to focus on one input channel more than the other.

2.2.2 Dual coding theory

As the selection of attention theories explain, when people face both image and audio information, they mostly focus on the attended information; it is impossible to focus on both sources at the same time. Therefore, when a third source of information is added, namely the text in subtitled movies, it should only become more difficult for viewers to shift between and acquire information from three modalities. However, according to the Dual Coding Theory proposed by Paivio (1971), people process and store information in two different ways: verbal associations and visual imagery. Moreover, Paivio (1971) claimed that the two systems can function both individually and together. For example, a person remembers the concept "dog" both in verbal form, the word "dog", and in visual form, the image of a dog. When asked to recall the concept, the person can think of either the word or the image individually, or of both forms simultaneously. No matter in which form the concept is retrieved, the concept in the other system is not lost and can be used later ("Dual Coding Theory", n.d.). This example shows that visual and verbal information are processed in two different systems in the human mind. Each system has its own code for the processed information; separate codes are stored in each system, can be retrieved later, and either code can be used to recall the information. Hence, based on the Dual Coding theory, coding stimuli in two different ways helps learners remember the second language better than if the stimuli are coded in only one way (Paivio, 1983).

The Dual Coding theory has not gone unchallenged: it has been argued that a single, amodal propositional code is used to represent information in human minds (Pylyshyn, 1973). Nevertheless, empirical evidence shows that memory for verbal information is enhanced if visual information is also presented or if the person can connect images to the verbal information (Paivio & Csapo, 1973). Similarly, memory for visual information can often be enhanced if it is connected or related to certain verbal information, regardless of whether this connection is objectively true or imaginary (Anderson & Bower, 2014).

Furthermore, the Dual Coding theory has been applied in the field of multimodality. More and more teachers and students are familiar with classes that use multimedia. Multimedia presentations have proven especially efficient for students acquiring new knowledge because they make use of both verbal and visual working memory: students learn new material using both processing systems, so they are more likely to recall the information after class (Brunyé et al., 2007). Moreover, research shows that students are better at remembering words with concrete meanings than words with abstract meanings, because it is more difficult to connect abstract words with images, so the information cannot be stored in both systems. Thus, the Dual Coding theory is less helpful when applied to abstract vocabulary learning (Hargis & Gickling, 1978; Sadoski et al., 2004; Yui et al., 2017).

The Dual Coding theory discussed so far mainly concerns the monolingual situation. The Bilingual Dual Coding Model (figure 3) was therefore proposed by Paivio and Desrochers (1980) to apply the theory in a bilingual context. As shown in figure 3, there are two separate verbal systems to process the stimuli, one for each language, but only one image system. All three systems can produce outputs, suggesting that every system can process stimuli independently. At the same time, all systems are connected to each other, meaning that codes in one system can activate the corresponding concept in the other systems, so they are also capable of functioning together.

Figure 3: The Bilingual Dual Coding Model (Jared et al., 2013)

A recent study specifically tested how bilinguals conceptualize in their minds and connect concepts to words in the L1 and L2, based on the Bilingual Dual Coding Model's assumptions that the representation of a concept also includes an image representation, and that if two languages are acquired separately, different image representations will arise in the L1 and L2. In the experiment, Mandarin-English bilingual students were asked to name aloud culturally-biased and culturally-unbiased images in both L1 (Mandarin) and L2 (English). The culturally-biased images represented either Chinese or Western culture, while the culturally-unbiased images were common objects (such as tables) that exist in both cultures. The results showed that culturally-biased images were named significantly faster when the image and the language were congruent. These findings suggest that some image representations are more closely connected to one language than to the other, providing support for the Bilingual Dual Coding Model (Jared et al., 2013).

In summary, the early selection of attention theories suggest that watching a subtitled television program leads to a trade-off in the acquisition of information from different input channels, because the different information sources increase cognitive load and the demand on attention (Treisman, 1969; Yeung, Jin, & Sweller, 1998). In an empirical study by Diao et al. (2007), ESL students were instructed in three ways: (a) listening with only audio, (b) listening with a full written script, and (c) listening with subtitles. The students who received the script or subtitles performed worse in the listening comprehension test than the group that received only audio, because their attention was distracted by the textual information (Diao, Chandler, & Sweller, 2007). Subtitles influence not only the perception of audio information but also that of the images in the video. In an experiment by Chai and Erlam (2008), ten Chinese learners of English were presented with a short video sequence with subtitles, and another ten with the same sequence without subtitles. In interviews, most of the learners who received subtitles reported prioritizing reading the subtitles while watching the video, and some reported problems paying attention to both image and text at the same time.

However, the (Bilingual) Dual Coding Theory suggests that the separate verbal and nonverbal processing systems can function independently and collectively, and thus that they help the processing of multimodal information such as subtitled videos (Paivio & Lambert, 1981; Paivio, 1990; Jared et al., 2013). An experimental study tested the hypothesis that subtitles can help the understanding of film content without the significant trade-off between image and textual information assumed by the early selection of attention theory (Perego et al., 2010). In the experiment, 41 Italian students watched a subtitled video and filled out questionnaires on comprehension of the content as well as word recognition; they were also asked to do a scene recognition task. The results confirmed the hypothesis: participants achieved a good understanding of the content, performed well in both word and scene recognition, and no trade-off effect between images and subtitles was detected. Moreover, participants scored higher on scene recognition than on word recognition, which can be explained by the image superiority predicted by the Dual Coding Theory.


2.3 Subtitling processing

2.3.1 Automatic reading behavior

According to the Bilingual Dual Coding theory and the studies that followed it, the textual information in multimodal stimuli facilitates information processing; it does not hinder the processing of other forms of information, contrary to what the early selection of attention theory claims. However, the subtitles must be processed for the viewers to receive textual input for the sake of SLA. Therefore, many studies have investigated how people process subtitles when watching videos in foreign languages.

Eye-tracking methods have been used widely to investigate how people allocate their attention when watching a subtitled video. d’Ydewalle, Van Rensbergen, and Pollet (1987) tracked the eye movements of Dutch participants who were presented with subtitled videos whose subtitles differed in presentation time. The participants were divided into three groups: (a) participants who were proficient in the language of the soundtrack, (b) participants who were not proficient in that language, and (c) participants who did not receive the soundtrack. The finding was that participants spent more or less the same amount of time reading the subtitles regardless of their L2 proficiency level and the availability of audio information, which suggests that reading subtitles is automatic behavior when the subtitles are presented for a normal amount of time. Nevertheless, d’Ydewalle et al. (1987) argued that this could be explained by the fact that reading subtitles was habitual and effortless for Dutch people, whether or not they knew the language.

Hence, to investigate whether people who are not familiar with subtitles show the same automatic reading behavior, d’Ydewalle et al. (1991) carried out another eye-tracking experiment comparing how American and Dutch participants process subtitles. The American participants watched an American movie with English subtitles; the Dutch participants watched a Dutch movie with Dutch subtitles. The eye-tracking results showed that even though the American participants were, overall, less familiar with reading subtitles than the Dutch participants, they spent a considerable amount of time in the subtitled area, and the same held for the Dutch participants. It was therefore concluded that, irrespective of familiarity with subtitles, subtitles always trigger automatic reading behavior, even when they are “useless” because both soundtrack and subtitles are in the L1.

So far, it is clear that the availability of audio information, the L2 proficiency level, and familiarity with subtitles do not affect the automatic reading behavior, but does the type of subtitling influence how subtitles are processed? Another eye-tracking study asked English native speakers with no knowledge of Dutch to watch part of a movie under four conditions: (a) control condition: Dutch audio and no subtitles, (b) intralingual subtitling: Dutch audio and Dutch subtitles, (c) interlingual subtitling: Dutch audio and English subtitles, and (d) reversed subtitling: English audio and Dutch subtitles. The results revealed that participants read the subtitles irrespective of the subtitling condition. However, participants fixated more on the subtitled area when the film soundtrack was in an unknown foreign language (Bisson et al., 2012). Therefore, the type of subtitling does not influence the automatic reading behavior.

2.3.2 Subtitles and audio processing

The aforementioned studies indicate that proficiency level, familiarity with subtitles, and type of subtitling do not prevent people from reading the subtitles. Such findings raise the question of whether the automatic reading behavior somehow hinders the perception and acquisition of audio information. The studies by De Bot et al. (1986) and Sohl (1989) provide evidence that viewers do not rely solely on textual input but can also process audio input simultaneously.

The experiment conducted by De Bot et al. (1986) investigated whether viewers focused on L1 subtitles, L2 audio, or both when watching subtitled foreign-language programs. Participants were Dutch students with beginner or advanced English proficiency. They were shown short items of a news bulletin in English; in some items, the audio and textual information were in conflict. After each item, the participants answered a multiple-choice question, which could concern information where subtitles and speech corresponded, but could also concern the conflicting information. In the latter case, the four options included one based on the audio, one based on the subtitles, and two unrelated, wrong answers. Results showed that, overall, all participants used both audio and subtitle input simultaneously. However, English proficiency level did affect the focus on the different forms of input: students with an advanced proficiency level focused more on the audio than students with a beginner proficiency level. Therefore, the conclusion is that L2 learners process the audio information at the same time as they process subtitles, but the relative weight given to audio and subtitles depends on the L2 proficiency level.

The second study, by Sohl (1989), investigated how children and adults process L1 audio information and L2 textual information by comparing three combinations of soundtrack and subtitles: (a) soundtrack and subtitles, (b) only soundtrack, and (c) only subtitles. The participants had to react to a flashing light while they were watching the video, and the reaction time was taken as a measure of processing capacity: the longer the reaction time, the more attention the participants were devoting to processing the video. The results suggested that the age of the participants did not affect the way they processed the multimodal information, because there was no significant difference between the reaction times of children and adults. However, participants in the condition with both soundtrack and subtitles reacted to the flashing light significantly more slowly than in the other two conditions, meaning that they spent attention on processing both forms of input at the same time.

2.4 Present study

In the previous sections, several theoretical bases have been discussed, including the benefits of subtitles for SLA, the theories for processing multimodal information, and the theories for processing subtitles. Previous studies examined how familiarity with subtitling and L2 proficiency level affect the processing of audio and subtitles, but they did not consider the interaction effect of the two factors (d’Ydewalle et al., 1991; De Bot et al., 1986). Furthermore, the study concerning the effect of familiarity on processing subtitles compared participants with different first languages, which undermines the reliability of the finding that familiarity with subtitling does not influence the automatic reading behavior. Therefore, the present study investigates how L2 proficiency level and familiarity with subtitling may affect the processing of subtitles and audio. Additionally, the potential influence of the type of subtitling is also of interest. In the present study, the L1 was Spanish and the L2 was English. Familiarity with subtitling was quantified as the frequency of using English subtitles when watching an English movie. The types of subtitling used in the experiment were interlingual subtitling (English audio with Spanish subtitles) and intralingual subtitling (English audio with English subtitles).

The L2 proficiency level was investigated because evidence shows that participants who are more proficient in the L2 achieve better comprehension and learn more words from context (Neuman & Koskinen, 1992). Since the experiment used intralingual subtitling in one of the videos, the participants were required to have at least an intermediate proficiency level of English (B1 level or above according to the CEFR).

Furthermore, the research by Bisson et al. (2012) shows that participants read the subtitles automatically regardless of the type of subtitling, but that study did not compare the focus on the different input channels. Therefore, it is worth investigating the potential difference in focus on audio and text input when watching videos with different types of subtitling.

To determine the focus on audio or textual information, the design of the present study was based on the study by van Sluijs (2013): the participants watched two videos taken from two different English movies. The first video was presented with interlingual subtitling, namely audio in the L2 and subtitles in the L1. The second video was presented with intralingual subtitling, namely audio in the L2 and subtitles in the L2. After watching each video, the participants had to answer a series of true-false questions based on the audio information. In order to know which form of information the participants focused on and processed, the subtitles had been altered to create deviations from the actual audio information. For example, two hours in the soundtrack was changed to five hours in the subtitles. This makes it easy to tell from which input channel a participant processed the information: if the participant answered the question correctly, it meant that he or she had focused on the audio; if not, he or she had processed the information from the subtitles.

The first research question is: does the L2 proficiency level influence the processing of audio information and subtitles? The hypothesis is that participants with an intermediate proficiency level will focus more on the subtitles than participants with an advanced proficiency level, because it is assumed that the lower their proficiency, the more difficulty they have acquiring information from the audio channel. Participants with an advanced proficiency level, on the other hand, will not rely on the subtitles as much for comprehension of the video, because their knowledge of the L2 is sufficient to process the audio information successfully. These predictions are based on the findings of De Bot et al. (1986) that beginner L2 learners are more subtitle-oriented than learners with an advanced proficiency level.

The second research question is: does familiarity with subtitles influence the processing of the audio and subtitle channels? In the context of this experiment: does the frequency of using L2 subtitles when watching movies in the L2 affect the focus on audio versus subtitles? The expectation is that, overall, participants will focus more on the subtitles than on the audio regardless of their frequency of using subtitles. This assumption is based on the findings of d’Ydewalle et al. (1991) that participants who were not familiar with subtitles (Americans) and participants who were familiar with them (Dutch) spent more or less the same amount of attention on the subtitled area. Those two groups were chosen because it is well known that Dutch people watch foreign-language movies with subtitles and American people do not. Differentiating familiarity by nationality was efficient, but it had a flaw: the variables in that study included not only familiarity with subtitles but also the native languages of the participants. Hence, the present study fixed the native language to one (Spanish), and it is hoped that its results will confirm the previous findings.

The third research question is: is there an interaction effect of L2 proficiency level and familiarity with subtitles on the processing of audio information and subtitles? It is expected that participants with an advanced proficiency level and a high frequency of using subtitles will focus on the audio more than the other groups, while participants with the same proficiency level but a lower frequency will focus on the audio more or less equally, since the L2 proficiency level is expected to noticeably affect the focus on input channel while familiarity is not expected to cause a major difference. Likewise, participants with an intermediate proficiency level and a high frequency are expected to focus on the audio less than the advanced groups, and participants with an intermediate level and a low frequency roughly equally.

The last research question is: does the type of subtitling influence the focus on audio information and subtitles? The assumption is that participants tend to process more subtitles than audio under the interlingual subtitling condition, because L1 subtitles are commonly believed to be easier and less effortful to process than L2 subtitles. Several studies have reached conflicting conclusions on this question. One study, by d’Ydewalle and De Bruycker (2007), shows that people are more likely to focus on subtitles under the interlingual subtitling condition than under the reversed subtitling condition. Later, the study by Bisson et al. (2012) suggested that there is no difference in subtitle reading behavior between interlingual and intralingual subtitling, whereas the study by van Sluijs (2013) indicated that participants under the interlingual subtitling condition focused more on the subtitles than under the intralingual condition, irrespective of their L2 proficiency level. Since reversed subtitling was not used in the present study, the goal is to see whether the results correspond to the finding by Bisson et al. (2012) or to the finding by van Sluijs (2013) about the processing of audio and subtitles under interlingual and intralingual subtitling conditions.


3. Methodology

3.1 Platform

Because of the corona crisis, the present study was conducted completely online. The background questions, stimuli, and tasks were designed and run on Testable (www.testable.org), a platform for behavioral research and teaching that makes it easy to create, run, and share experiments and surveys.

One of the main reasons for using Testable is that it is very beginner-friendly. The researcher does not need any knowledge of programming to create an experiment on Testable; it is enough to drop the files of the conditions and stimuli under the corresponding categories. Another reason is that Testable supports all modalities: not only images and words, but also sounds and videos. Moreover, different response options are possible, such as text input, button clicks, and keypresses, and it is easy to manipulate the experimental parameters, including stimulus presentation time and response time (Testable, 2020).

3.2 Participants

All participants were recruited online from the subject pool of Testable. In total, 35 participants took part in the experiment: 11 Venezuelan, 5 Spanish, 7 Mexican, 6 Chilean, 4 Argentinean, 1 Italian, and 1 Colombian. Among them, 4 participants were female and 31 were male. The participants were all native Spanish speakers aged 20 to 30 who had English as a foreign language. Their education levels varied: 1 participant with less than high school, 7 with high school, 5 with college or technical school, 18 with a Bachelor’s degree, and 4 with a Master’s or doctorate degree.

Due to the coronavirus, the experiment was carried out completely on the Testable website, so all participants were recruited online. Having Spanish as their mother tongue was the only strict requirement for participation. The other variables, such as location and education level, were allowed to vary, partly because it was not feasible to control them on the website, and partly because these variables can account for potential differences in processing multimodal information.

3.3 Materials

3.3.1 Background information

The Testable platform allows the researcher to collect standard background information such as age, gender, nationality, education, and ethnicity, or to add further questions about the participants’ personal information. This part was administered before the start of the experiment. For the present study, the age of the participants was controlled: all participants were between 20 and 30 years old. In addition, gender and education were collected as background information.

Among the background questions, age and nationality were free-text fields, while the education level was selected from five options: less than high school, high school or equivalent, college or technical school, Bachelor’s degree, and Master’s/doctorate. See appendix A for the detailed background information of all participants.

3.3.2 Self-report of proficiency

Before the stimuli, the participants were asked to self-report their English proficiency. The first part was an overall rating of English proficiency based on the Common European Framework of Reference for Languages (CEFR). The CEFR divides language proficiency into six levels, from the most basic level, A1, to the most advanced, C2. People at the A1 or A2 level are defined as basic users of the language, B1 or B2 as independent users, and C1 or C2 as proficient users (Council of Europe, 2001). Since many participants were from Latin America, the CEFR self-assessment grid was provided for reference; it gives a clear “I can …” definition for each level across five aspects: listening, reading, spoken interaction, spoken production, and writing. In the second part, the participants rated their listening, speaking, reading, and writing ability on a one-to-ten scale, where one is the lowest and ten the highest score.

3.3.3 Familiarity of subtitles

In order to investigate the effect of familiarity with English subtitles on processing multimodal information, the participants were asked to rate their frequency of using subtitles on a scale from one to five, where one meant that the participant never watched English movies with English subtitles and five meant always. Additionally, they were asked about their viewing preference for movies originally in English, choosing from four options: dubbed version with or without Spanish subtitles, dubbed version with English subtitles, original version with Spanish subtitles, and original version with English subtitles.

3.3.4 Stimuli

Two movie clips were chosen as stimuli because in both, the characters talk calmly and clearly, at normal speed and loudness, in a relatively stationary scene. The participants could therefore hear the verbal information clearly without being distracted by moving images. More importantly, the irrelevance of the images to the conversations reduced the risk that participants might accidentally derive the audio information from the images (van Sluijs, 2013). The first stimulus was taken from the movie Inception, where the protagonists are negotiating with the boss. The second stimulus was taken from the movie Limitless, where the protagonist is being questioned by a police officer. Both are originally English-language movies and were downloaded with only the original soundtrack.

As for the subtitles, the first stimulus used interlingual subtitling, so it was subtitled in Spanish. Since Inception is a very well-known international movie, official subtitles are available in many languages, including Spanish. I retrieved the official Spanish subtitles from the subtitle website YIFY to avoid potential errors caused by mistranslation. The second stimulus was presented with English subtitles, so it was easier to find its official subtitles. Both subtitle files were downloaded in .srt format, and Notepad was used to alter the subtitles and align the timeline with the clips. See appendix B for the transcript of the subtitles, in which the altered parts are marked in red.

As the interest of the present study is whether participants focus more on the audio or the subtitles, the subtitles were altered to deviate from the audio (van Sluijs, 2013). For example, according to the audio the protagonist should have delivered the expansion plan to the employer two hours ago, but the number two was changed to five in the Spanish subtitles. The deviations could be lexical, grammatical, or semantic, and included changing affirmative sentences to negative ones and replacing lexical items with plausible substitutes. The deviations were deliberately constructed for the true-false questions so that the participants could not relate the questions to any particular scene in the video.
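The kind of alteration described above can be illustrated with a short sketch. The cue text, the replacement pair, and the helper function below are invented for illustration; the actual edits were made manually in Notepad on the .srt files.

```python
# Illustrative sketch of deviating one .srt subtitle cue so the text no
# longer matches the soundtrack. The cue content is invented; it mirrors
# the "two hours" -> "five hours" example from the text.

original_cue = """14
00:01:52,300 --> 00:01:55,100
Debían entregar el plan hace dos horas."""

def deviate_cue(cue: str, old: str, new: str) -> str:
    """Replace one lexical item in the subtitle text while leaving the
    cue index and the timecodes untouched."""
    index, timing, text = cue.split("\n", 2)
    return "\n".join([index, timing, text.replace(old, new)])

# "dos horas" (two hours) in the audio becomes "cinco horas" (five hours)
# in the subtitles.
deviated = deviate_cue(original_cue, "dos", "cinco")
```

An .srt cue always consists of an index line, a timecode line, and the text, so only the third part is touched; the timeline stays in sync with the clip.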

Apowersoft’s Video Converter Studio was used to combine the altered subtitles with the videos. With this software, the first clip was cut from Inception (17:55 to 21:32) and the second from Limitless (24:18 to 26:02). Because the amount of new information should not exceed the participants’ memory capacity, the two clips were kept short: one about three and a half minutes, the other under two minutes. After editing, the altered subtitles were added to the clips in Video Converter Studio.

3.3.5 True-false questions

There were six true-false questions for each video. Half of them concerned the deviated parts, to test whether participants focused on the audio or the subtitles (D-questions). The other half concerned the general information of what happened in the video, to investigate which type of subtitling gives a better understanding of the videos; these questions also served as a distraction to hide the real intention of the experiment from the participants (G-questions).

Since the goal of this experiment was to investigate how people process the information in the videos, all true-false questions were presented in both English and Spanish, with the English version above the Spanish version. The English questions were translated into Spanish by a native Spanish speaker who is pursuing a doctoral degree in English linguistics. See appendix C for all the true-false questions in both languages.

3.4 Procedures

Due to the unprecedented situation in which the study was carried out, it was impossible to have the participants do the experiment in a completely controlled environment. The whole experiment was conducted online: the participants received a link containing the entire experiment and completed it at home or elsewhere. Although it was not possible to restrict the location and physical conditions for all participants, the first page of the experiment advised them to work in a quiet environment and to use headphones for the best experimental performance.

After this general advice, the participants were asked to fill out the aforementioned background questions, and then to read the instructions before continuing. Like the true-false questions, the instructions were also translated into Spanish, because instructions entirely in English could confuse participants with a low English proficiency level about the procedure of the experiment. The English version of the instructions was presented before the Spanish version. See appendix D for the full instructions in both languages.

The experiment started with the video taken from Inception, which used Spanish subtitles. The video could be played only once. When it finished, the participants pressed NEXT to answer six true-false questions about the audio information of the first video. The response time for each question was limited to 15 seconds; when the time was up, the text of the question was concealed. After the six questions, the participants watched the second video, which was followed by another six true-false questions.

There was no time limit for the background questions or for reading the instructions. Watching the two videos took less than five and a half minutes, and the maximum time for all true-false questions was three minutes. Therefore, the entire experiment usually took less than ten minutes.

3.5 Design and analyses

This is a quantitative study consisting of two analyses. The first analysis investigated the effects of L2 proficiency level and frequency of using English subtitles on processing multimodal information; these were its two independent variables. Proficiency level was divided into two groups: intermediate (B1 and B2) and advanced (C1 and C2). The frequency of using English subtitles was labeled high for a score of 3 or higher and low for a score below 3. The dependent variable was the score on the D-questions, which represents how much of the audio information the participants acquired, measured on an interval scale: for each correct true-false answer the participant scored one point, otherwise zero. The higher the final score, the more the participant concentrated on the audio information.
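The grouping and scoring scheme of the first analysis can be sketched as follows. The function names are mine, but the cut-offs (B1/B2 vs. C1/C2; a frequency of 3 as the high/low boundary; one point per correct D-question) follow the design described above.

```python
def label_group(cefr_level: str, frequency: int) -> tuple:
    """Assign a participant to one of the four groups of the first
    analysis, using the cut-offs described in the text."""
    proficiency = "advanced" if cefr_level in ("C1", "C2") else "intermediate"
    familiarity = "high" if frequency >= 3 else "low"
    return proficiency, familiarity

def d_score(answers, correct_answers) -> int:
    """One point per correct answer on a deviated (D) question; the sum
    is the participant's interval-scale dependent variable."""
    return sum(1 for a, c in zip(answers, correct_answers) if a == c)
```

For example, a B2-level participant who reported a subtitle-use frequency of 4 would fall into the intermediatehigh group.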

For the first analysis, the normal distribution of the D-question scores per group was checked with a plotted histogram, the Skew.2SE and Kurt.2SE values, and the Shapiro-Wilk test of normality. Then the homogeneity of variance was checked with Levene’s test (Levene, 1960). Next, descriptive statistics were computed per independent variable, and box plots were made to visualize them. Significance was then tested with a two-way ANOVA, after which the partial eta-squared (ηp²) was calculated to check the effect size (Cohen, 1988).
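For reference, partial eta-squared can be recovered directly from an F statistic and its degrees of freedom. The helper below is a generic sketch of that standard identity, not code from the original analysis.

```python
def partial_eta_squared(f_value: float, df_effect: int, df_error: int) -> float:
    """Standard identity: eta_p^2 = F * df_effect / (F * df_effect + df_error)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)
```

For example, an interaction term with F(1, 31) = 2.808 yields ηp² ≈ 0.083.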

In the second analysis, the independent variable was the type of subtitling: the data were divided into an interlingual and an intralingual set, and the dependent variable was the D-question score per set. Since the same participants saw the same stimuli in the same order, the performance of the same group under the two conditions was compared, so a paired-samples t-test was used to check significance. Levene’s test was not performed, because homogeneity of variance need not be checked for a paired-samples t-test. Otherwise, the procedure mirrored the first analysis: first, normality was checked per set with the same methods; second, the data were described, along with a bar chart to visualize the difference between the two sets; lastly, Cohen’s d was calculated to check the effect size.
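As a sketch of the second analysis, the paired-samples t statistic and Cohen's d can both be computed from the per-participant difference scores (interlingual minus intralingual). The implementation below is a generic illustration, not the software actually used.

```python
import math

def paired_t_and_d(diffs):
    """Paired-samples t statistic and Cohen's d from difference scores:
    t = mean(d) / (sd(d) / sqrt(n)),  d = mean(d) / sd(d)."""
    n = len(diffs)
    mean = sum(diffs) / n
    # Sample standard deviation of the differences (n - 1 denominator).
    sd = math.sqrt(sum((x - mean) ** 2 for x in diffs) / (n - 1))
    return mean / (sd / math.sqrt(n)), mean / sd
```

The design choice here is that, for a within-subjects comparison, only the distribution of differences matters, which is also why homogeneity of variance is irrelevant.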


4. Results

4.1 Total score of D-questions

First of all, histograms with a distribution curve were plotted for each group. Except for the advanced-proficiency, high-frequency group (advancedhigh group), the other three groups, namely the intermediate-proficiency, high-frequency group (intermediatehigh), the intermediate-proficiency, low-frequency group (intermediatelow), and the advanced-proficiency, low-frequency group (advancedlow), showed bell-shaped distribution curves.

Figure 4: Histogram with distribution curve of the total score of D-questions per group

Then the Kurt.2SE and Skew.2SE values were calculated. All Kurt.2SE and Skew.2SE values were between -1 and 1 (intermediatehigh group: N=9, Skew.2SE=0.066, Kurt.2SE=-0.392; advancedhigh group: N=17, Skew.2SE=0.813; intermediatelow group: N=5, Skew.2SE=0, Kurt.2SE=-0.478; advancedlow group: N=4, Skew.2SE=0.060, Kurt.2SE=-0.423). At the same time, Shapiro-Wilk tests were performed (intermediatehigh: p=0.915; advancedhigh: p=0.04; intermediatelow: p=0.967; advancedlow: p=0.650). As shown in the histogram and the Shapiro-Wilk result, the advancedhigh group did not have a completely normal distribution. However, as ANOVA is robust to deviations from normality, the analysis could continue.
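The Skew.2SE statistic used here follows the convention of R's pastecs package: skewness divided by twice its standard error, where the standard error depends only on the sample size. The helper below is my own sketch of that computation, for illustration.

```python
import math

def skew_2se(skewness: float, n: int) -> float:
    """skew.2SE: skewness divided by twice its standard error, with
    SE_skew = sqrt(6n(n-1) / ((n-2)(n+1)(n+3))). |skew.2SE| < 1
    suggests no significant departure from normality (roughly p > .05)."""
    se = math.sqrt(6 * n * (n - 1) / ((n - 2) * (n + 1) * (n + 3)))
    return skewness / (2 * se)
```

This is why the same raw skewness counts for more in a larger group: the standard error shrinks as N grows.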

After checking normality, Levene’s test was conducted to check the homogeneity of variance (p=0.123). As the result was not significant, equality of variance can be assumed.

Then the differences in performance were examined by proficiency level and by frequency. The groups with an advanced proficiency level (N=21, M=3.16, SD=1.6) performed slightly better than the groups with an intermediate proficiency level (N=14, M=3, SD=1.66), and the groups with high frequency (N=26, M=3, SD=1.41) performed slightly worse than the groups with low frequency (N=9, M=3.44, SD=2.13).

Figure 5: boxplot showing the sum of the D-questions per group

A box plot was created to compare the groups visually (figure 5). Overall, the intermediatehigh group performed worst of the four groups (N=9, M=2.44, SD=1.51), while the intermediatelow group performed best (N=5, M=4, SD=1.58). The groups with an advanced proficiency level were in the middle, with the advancedhigh group performing slightly better (N=17, M=3.29, SD=1.31) than the advancedlow group (N=4, M=2.75, SD=2.75). Even though the advancedlow group had the smallest number of participants, it had the largest range of scores, as can be observed from the box plot.

The box plot also suggests that English proficiency level and frequency of using English subtitles have an interaction effect on the processing of multimodal information. Hence, a profile plot (figure 6) was made to visualize this possible interaction. The two lines in figure 6 are not parallel, which suggests that English proficiency level and frequency interact. As shown in the box plot and profile plot, participants with an advanced proficiency level performed better when they had a high frequency, whereas the situation was reversed for participants with an intermediate proficiency level, who performed better when they had a low frequency. Additionally, the line of the intermediate-proficiency group is steeper than that of the advanced group, which indicates that frequency has a bigger impact on the intermediate group than on the advanced group.


Finally, the two-way ANOVA shows that proficiency level does not have a significant effect, F(1,31)=0.102, p=0.752, and neither does the frequency of using English subtitles, F(1,31)=0.652, p=0.426. The differences between the groups are therefore not statistically significant. Though the p-value of the interaction effect between English proficiency level and frequency is much lower, it did not reach the significance level, F(1,31)=2.808, p=0.104. Thus, proficiency level interacts with frequency as suggested in figure 6, but the interaction effect is not significant.

As a non-significant statistical result is no guarantee that an effect does not exist, it is useful to also report effect sizes. The values of partial eta-squared show that all effects are small (respectively, proficiency level: ηp2=0.04; frequency: ηp2=0.020; interaction effect: ηp2=0.083). Since the effect sizes of proficiency level and frequency are small and their p-values are far from the significance level, it can be assumed that English proficiency level and frequency do not affect the processing of multimodal information. On the other hand, the p-value and partial eta-squared of the interaction effect leave it uncertain whether the interaction effect exists.
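Partial eta-squared can be recovered directly from an F statistic and its degrees of freedom, which makes the reported values easy to check. The sketch below applies the standard conversion to the frequency and interaction F values reported above (df_effect = 1, df_error = 31).

```python
def partial_eta_squared(f, df_effect, df_error):
    # Standard conversion: eta_p^2 = (F * df1) / (F * df1 + df2)
    return (f * df_effect) / (f * df_effect + df_error)

# F values taken from the ANOVA results reported above
eta_freq = partial_eta_squared(0.652, 1, 31)    # frequency main effect
eta_inter = partial_eta_squared(2.808, 1, 31)   # interaction effect
print(round(eta_freq, 3), round(eta_inter, 3))  # 0.021 0.083
```

Both values agree (to rounding) with the ηp2 figures reported in the text.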

4.2 The effect of the subtitling type

Firstly, the difference in D-question scores between the interlingual and the intralingual conditions can be considered to approximate a normal distribution, because the Shapiro-Wilk test is non-significant (W(35) = 0.9, p = 0.323). Furthermore, both the Kurt.2SE and Skew.2SE values lie between -1 and 1 (respectively, 0.239 and -0.295). This can also be checked in the histogram with the distribution curve (Figure 7).
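The normality checks above can be reproduced along the following lines. The data here are an illustrative stand-in for the 35 difference scores (the real data are not reproduced), and the 2SE formulas are the standard small-sample standard errors of skewness and kurtosis.

```python
import numpy as np
from scipy import stats

# Placeholder for the 35 interlingual-minus-intralingual difference scores
rng = np.random.default_rng(1)
diffs = rng.normal(0.0, 1.0, 35)

# Shapiro-Wilk: a non-significant p (> .05) means normality is not rejected
w, p = stats.shapiro(diffs)

# Skew.2SE / Kurt.2SE: the statistic divided by twice its standard error;
# values between -1 and 1 indicate no serious departure from normality
n = len(diffs)
se_skew = np.sqrt(6 * n * (n - 1) / ((n - 2) * (n + 1) * (n + 3)))
se_kurt = 2 * se_skew * np.sqrt((n**2 - 1) / ((n - 3) * (n + 5)))
skew_2se = stats.skew(diffs) / (2 * se_skew)
kurt_2se = stats.kurtosis(diffs) / (2 * se_kurt)
```

If either 2SE ratio fell outside [-1, 1], a non-parametric alternative to the t-test would be preferable.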


Figure 7: Histogram with distribution of differences between interlingual and intralingual groups

Generally, the participants performed similarly well under both subtitling conditions (respectively, interlingual: N=35, M=1.51, SD=0.85; intralingual: N=35, M=1.6, SD=1.01). The difference between the mean D-question scores of the two groups is visualized in a bar plot (figure 8).

Figure 8: Bar plot showing the difference of mean between two subtitling types

Following the description of the data is the significance test. The results of the paired-sample t-test show that there is no significant difference between the means of the two subtitling conditions (t(34)=-0.533, p=0.597, 95% CI [-0.412, 0.241]). Finally, the value of Cohen's d is -0.09, which means that the effect slightly decreases the mean score, but the difference is trivial (McLeod, 2019).
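The paired-sample t-test and Cohen's d reported above can be computed as sketched below. The scores are illustrative placeholders for the 35 paired D-question scores, not the thesis data.

```python
import numpy as np
from scipy import stats

# Illustrative paired D-question scores per participant (N=35)
rng = np.random.default_rng(2)
interlingual = rng.integers(0, 4, 35).astype(float)
intralingual = rng.integers(0, 4, 35).astype(float)

# Paired-sample t-test on the two conditions
t, p = stats.ttest_rel(interlingual, intralingual)

# Cohen's d for paired data: mean of the differences over their SD
differences = interlingual - intralingual
d = differences.mean() / differences.std(ddof=1)
```

Note that Cohen's d for paired designs uses the standard deviation of the difference scores, not the pooled SD of the two conditions.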


5. Discussion

The present study investigates differences in the processing of audio information and subtitles when watching subtitled foreign-language videos. Three factors were taken into consideration: L2 proficiency level, familiarity with subtitles, and type of subtitling (interlingual vs. intralingual). To determine which mode of information was actually processed and acquired, the participants were presented with two videos with L2 (English) audio: the first with L1 (Spanish) subtitles and the second with L2 (English) subtitles. In both videos, the subtitles contained information that deviated from the audio. After watching each video, the participants answered true-false questions designed to distinguish whether they had processed information from the audio channel or the subtitle channel.

The first research question was how L2 proficiency influences the processing of audio and subtitles. Before the experiment, it was assumed that participants with a lower proficiency level would acquire more information from the subtitles, and that participants with a higher proficiency level would be less misled by them. Hence, the advanced group should give a higher proportion of correct answers than the intermediate group. The results show that the advanced group indeed scored slightly higher than the intermediate group, but the difference was very small and not statistically significant. It can therefore be concluded that L2 proficiency level does not significantly affect the processing of audio and subtitle information.

There are two possible explanations for why there was no significant difference between the two groups. The first is that the recorded proficiency levels may be inaccurate, because proficiency was self-reported by each participant; no objective proficiency test was conducted before the experiment. The second is that the number of participants was small: since the effect of L2 proficiency level is small, a large number of participants is required to detect it statistically.
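The sample-size point can be made concrete with a power calculation. The sketch below converts the reported partial eta-squared for proficiency (.04) into Cohen's f and asks how many participants a two-group ANOVA would need to detect an effect of that size with 80% power at alpha = .05; the choice of power and alpha are conventional assumptions, not values from the thesis.

```python
import numpy as np
from statsmodels.stats.power import FTestAnovaPower

# Convert partial eta-squared to Cohen's f: f = sqrt(eta^2 / (1 - eta^2))
eta_p2 = 0.04  # reported partial eta-squared for proficiency level
f = np.sqrt(eta_p2 / (1 - eta_p2))

# Total sample size needed for 80% power with two groups
n_total = FTestAnovaPower().solve_power(effect_size=f, alpha=0.05,
                                        power=0.8, k_groups=2)
print(round(n_total))  # substantially larger than the 35 participants tested
```

This illustrates why a small, non-significant effect in a sample of 35 is inconclusive: the study was underpowered for effects of this magnitude.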


The second research question was how the familiarity of subtitles influences the processing of audio information and subtitles. It was hypothesized that there would be no major difference between participants who use subtitles more frequently and those who use them less frequently (d'Ydewalle et al., 1991). Interestingly, the results show that the groups with a lower frequency of using English subtitles scored slightly higher than the groups with a higher frequency. However, this difference was small and non-significant. It can therefore be concluded that the results agree with the finding of d'Ydewalle et al. (1991) and confirm the hypothesis that people process information from subtitles more or less the same regardless of their familiarity with subtitles. A possible explanation for why the low-frequency groups scored higher than the high-frequency groups is that there were more participants in the high-frequency groups. In the present study, familiarity was operationalized as the frequency of using subtitles, measured on a scale from 1 to 5, where 1 means that the participant never uses English subtitles and 5 means that he or she always uses them. Scale values 1-2 were then coded as low frequency and 3-5 as high frequency. It is possible that this coding system produced the result that the low-frequency groups focused more on the audio information than the high-frequency groups. A more fine-grained system for measuring familiarity with subtitles should be applied in future studies.

The third research question was whether there would be an interaction effect between L2 proficiency level and familiarity with subtitles. It was predicted that the groups with an advanced proficiency level would perform better than the groups with an intermediate level; in other words, the advanced groups would focus more on the audio than the intermediate groups. Meanwhile, within groups of the same proficiency level, the results were not expected to differ much based on familiarity with subtitles. However, the results did not completely match this prediction. According to the results, the intermediate group with low frequency performed best, outperforming both groups with an advanced proficiency level. The results of the remaining groups matched the prediction: both groups with an advanced proficiency level performed similarly well, and the advanced group with low frequency scored lowest. Therefore, it is likely that frequency and L2 proficiency level do have an interaction effect on the processing of multimodal information. When participants had an advanced proficiency level, scores were higher when the frequency of using English subtitles was higher; when participants had an intermediate proficiency level, scores were lower when the frequency was higher. However, although the difference caused by the interaction effect was noticeable, it was not statistically significant. It can therefore be concluded that the interaction effect does not significantly affect the processing of audio and subtitle information.

There are several possible explanations for this result. The first is the number of participants: not only was the total number of participants too small to detect the interaction effect, which is a very small effect according to the partial eta-squared value, but the groups were also unequal in size. There were 17 participants in the advanced-high-frequency group but only 4 in the advanced-low-frequency group; the groups should be of equal size for a more accurate and reliable result. The second explanation is the question format. With true-false questions, participants had a 50% chance of answering correctly even when they were not sure whether the answer was true or false. This guessing chance may be the main cause of the unexpected deviations in the results. The third problem may be the length of the videos. The first video lasted about three and a half minutes, which may have been too long for the participants to retain the information. In that case, incorrect answers would reflect forgetting rather than a failure to process the audio information.
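The guessing problem can be quantified with a simple binomial calculation. The number of questions per video used below (six) is a placeholder for illustration, not the actual number used in the experiment.

```python
from scipy import stats

# With true-false items, pure guessing succeeds with p = 0.5 per question.
# Assuming, for illustration, six questions per video, the probability of
# a guessing participant getting at least four correct is:
p_four_or_more = stats.binom.sf(3, 6, 0.5)  # P(X >= 4) for X ~ Binomial(6, 0.5)
print(p_four_or_more)  # 0.34375
```

In other words, roughly a third of purely guessing participants would look as if they had processed the information, which inflates noise in small samples. Multiple-choice items with more than two options would reduce this chance level.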

Finally, the potential influence of the type of subtitling was also of interest to the present study. Two hypotheses were made based on two previous studies. The first one was that the type of subtitling does not affect how participants process the audio and
