
Reading difficulties in adolescents

Karin Stegeman S1562274

Supervisor: dr. H.I. Hacquebord
Second reader: dr. W.M. Lowie
Date: February 2010


INDEX

INTRODUCTION
1. ADOLESCENTS AND READING
1.1 Reading as a leisure activity
1.2 Reading in school: receptive reading
2. FRAMEWORKS USED IN LANGUAGE EDUCATION
2.1 The Common European Framework of Reference for Language
2.2 Framework Dutch
2.3 Referential Framework for Language and Calculus
2.4 Definition of text levels
2.5 Concluding thoughts
3. METHOD
3.1 Text corpus
3.2 Participants
3.3 Materials
3.4 Procedure
3.5 Analysis
4. RESULTS
4.1 Difficulty rates and ranking
4.2 Difficulty rates and ranking versus Textscreen
4.3 Causes of difficulty
4.4 ADR in relation to causes of difficulty
4.5 Predicting the difficulty level
5. DISCUSSION
5.1 Difficulty rates and ranking
5.2 Difficulty rates and ranking versus Textscreen
5.3 Causes of difficulty
6. CONCLUSION
6.1 What makes reading difficult?
6.2 The role of linguistic measures
7. RECOMMENDATIONS
APPENDICES
APPENDIX 1: The Dutch education system
APPENDIX 2: Overview of General Linguistic Competence according to the CEFR
APPENDIX 3: "I can statements" from the CEFR
APPENDIX 4: Overview of text corpus and linguistic measures from each text
APPENDIX 5: Example of a text from the corpus


INTRODUCTION

The Programme for International Student Assessment (PISA) examines key competencies in reading, mathematics and science of 15-year-old pupils all over the world. The PISA-2009 results reveal that of the 65 participating countries, only 24 score above average on reading competency. In 18 of the participating countries, the reading level of most 15-year-olds does not exceed the baseline level needed to participate in everyday life (OECD, 2010). These results illustrate the difficulties many teenagers come across in reading.

Although the Netherlands exceeds the average score and ranked tenth on the PISA-2009 list, 14.3% of Dutch pupils are considered to be low literate (Gille, Loijens, Noijons & Zwitser, 2010). Many more studies examine declining reading levels in the Netherlands (Huysmans, de Haan & van den Broek, 2004; Land, 2006; Kraaykamp, 2002). "The Netherlands de-reads" has even become a common saying. Due to the growing popularity of television and the internet, people spend less time reading. Adolescents in particular seem to lose interest in reading: they would rather spend their spare time on social activities than on reading (Land, 2006), and library use decreases enormously among adolescents (Kraaykamp, 2002).

Not only do many adolescents dislike reading, many students also have difficulty reading. The latter could of course be an explanation for the former, but that is not the main question of this research. This research aims to describe the difficulty that students experience while reading.

Reading is a very important topic in Dutch education today. To improve students' reading levels, the committee Meijerink has developed the Referential Framework for Language and Calculus by order of the Dutch Ministry of Education. This framework is designed to raise the language level of Dutch students and to make teachers more aware of what students should be able to read and write at each level. The framework will become very important in Dutch education in the next few years: every teacher has to learn to work with it, and study book texts and language tests also have to be designed in accordance with it.


Frameworks signal issues of difficulty in texts, such as various linguistic measures. Diataal, a language testing program, also makes use of linguistic measures to define a text's difficulty.

This thesis will first describe the reading habits of adolescents according to previous studies. After that, I will address the three frameworks mentioned above and describe what adolescents should be able to read according to these frameworks. The frameworks also tell us something about the text features of different language levels, and thereby about the linguistic measures that are important in defining what makes a text difficult.

The study I have done will then be discussed. I had 72 adolescents read texts and I asked them to assign a difficulty rate to each text and to rank the texts according to difficulty. The students also had to indicate what made each text difficult.

The results of my study uncover a discrepancy between what actually makes a text difficult and students' beliefs about what makes a text difficult. Linguistic measures like Average Sentence Length, Average Word Length and Coverage seem to influence how difficult students believe a text is. The Coverage of a text and Coverage as a cause of difficulty correlate highly with one another. Average Word Length turns out to be the best predictor of the difficulty rate assigned by students.


1. ADOLESCENTS AND READING

Before even getting into adolescents and reading, I should define adolescence. Adolescence is the transition period from childhood to adulthood (Gray, 2002). It starts with physical changes in the child and it ends when the person is viewed by him or herself, and by others, as a full member of the adult community (Gray, 2002). As the physical changes in every child have a different starting point, no exact age can be given at which adolescence sets in. For girls, adolescence tends to set in between 11 and 13 years of age, and for boys it starts two years later, between 13 and 15 years of age. Legally, a person becomes an adult at the age of 18. However, adolescent development does not stop there; it can continue until 20-24 years of age (Nelck-da Silva Rosa, 2004).

Very important in adolescence is the number of changes a person undergoes. Not only the physical changes can be very influential: adolescents move on to secondary education, meet a lot of new people and gain more responsibilities. They have to do their homework, engage in social activities and, next to all of this, they often have jobs on the side (Nelck-da Silva Rosa, 2004). Adolescents break away from parental control (Gray, 2002) and try to manage their own time and activities. All in all, it is fair to say that adolescence is a very intense period, and this makes it all the more interesting to study.

1.1 Reading as a leisure activity

Research by Huysmans, de Haan & van den Broek (2004) from the Dutch Sociaal Cultureel Planbureau supports the saying quoted in the introduction: "The Netherlands de-reads". This decrease in reading is most likely related to the rising popularity of television and computers.

Land, Sanders & van den Bergh (2006) found that adolescents spend more spare time on social activities, the computer and watching television than on reading. Land, Sanders & van den Bergh had over 700 VMBO-students, 377 girls and 340 boys, fill in a survey on their reading habits and reading preferences. Their study reveals that most VMBO-students find reading boring. If adolescents read, they read magazines, a text for school or a text on the computer about their hobby. Little time is spent on reading a book; if students read a book, it is a book about their peers or an exciting book (for instance about war) (Land et al., 2006). Land explains that the little reading that adolescents do has to do with their reading goal and their motivation. A child who has just learned to read experiences more joy while reading and has no purpose other than enjoying himself. Tellegen and Catsburg (1987) showed that adolescents do not read for pleasure (intrinsic motivation); they read because they want to know something (instrumental motivation) or because they have to (extrinsic motivation).

A study by Stalpers (2005) also illustrated a decrease in adolescents' interest in reading. Stalpers studied the public library use of adolescents. He found that in the age group of 15 to 17, 21% did not renew their membership; in the age group of 18 to 20, this number was 44%. Stalpers focused on the relation between adolescents' reading frequency and attitudes towards reading and the renewal of their public library membership. The results of Stalpers' study uncover three important things about adolescents' motivation to read. First, adolescents read when they believe reading will evoke emotions and fantasies. Second, adolescents' attitudes towards reading are highly influenced by friends' and parents' beliefs about and experiences with reading. Adolescents read, for instance, so they can join in conversations and know what they are talking about. Furthermore, a positive reading climate at home can stimulate reading in adolescents. Finally, an adolescent's attitude towards reading is more positive when he believes himself to be a good reader. Adolescents who believe their reading proficiency is low tend not to like reading, whilst good readers like reading because they are good at it. This phenomenon is known as the Matthew effect (Stanovich, 1986) and it implies that 'the rich get richer and the poor get poorer': good readers get even better at reading and poor readers fall behind.


By example, parents show their own reading behavior to their children, and children might imitate this behavior. Children become acquainted with the joy of reading, and books are probably available at home at all times; or perhaps parents are members of a public library and children benefit from that. By instruction, parents stimulate their children's reading behavior by reading stories to them or giving books as presents. This way, a parent's appreciation of reading shows, and a child might adopt this enthusiasm.

Not only parents are important in advancing a child's reading behavior and in developing a positive attitude towards reading; school can, and should, also play an important role in this. Kraaykamp (2002) states that attention to culture and reading in school can advance a person's cultural interest later in life. Attention to various literary genres can advance a person's interest in reading and one's interest in more difficult literature. However, in secondary education, students are obliged to read from a 'reading list'. This reading list is designed to make sure students read a set amount of literature. This obligation to read might have a negative effect on adolescents' reading pleasure, and a decline in reading pleasure might result in a decline in reading overall (Kraaykamp, 2002), which in turn could lead to stagnation in reading development.

So far, we can conclude that studies have illustrated that adolescents do not read often. Very few adolescents read for pleasure in their spare time. If adolescents read for pleasure, they read about their hobbies or interests in a magazine or on the computer, probably not in novels. Adolescents read because they have to, for school or homework. Their attitude towards reading is negative, but it can easily be influenced by, for instance, their teachers, parents and peers.

1.2 Reading in school: receptive reading

In the PISA-2009 report (OECD, 2010), reading is defined as "understanding, using, reflecting on and engaging with written texts, in order to achieve one's goals, to develop one's knowledge and potential, and to participate in society". This definition demonstrates the different components that reading comprises.

The reader reads the text with a specific purpose or goal in mind (OECD, 2010). 'Reflecting on texts' means that a reader relates his own beliefs and experiences to what he reads: the reader assesses whether the text is suitable and whether it provides the information that he needs (OECD, 2010). The last component, 'engaging', refers to the motivation to read (OECD, 2010). Readers tend to be more engaged in a text if they want to know more about its topic; if they are obliged to read a text, they will probably be less engaged with it.

Not all readers possess all the skills needed to be a good reader. Some children have difficulty learning to read. Duke & Pressley (2005) state that in almost every classroom, half of the children read above their grade level and the other half read below their grade level. Children who have difficulty with reading are commonly called 'struggling readers'. Struggling readers have difficulty with word recognition and often need special attention and practice (Duke & Pressley, 2005). Duke & Pressley (2005) also bring up the fact that parents can play an important role in the reading development of their children: parents should read with or to the child, so that the child becomes familiarized with reading. Word recognition is not the only problem for struggling readers; Duke, Pressley & Hilden (2004) also emphasize that good readers have a representation of the topic of the text and are able to relate personal experiences to it, whereas struggling readers have difficulty with this. They also have difficulty with relations in texts. Whereas good readers form expectations of what will happen next and see these expectations confirmed or not, struggling readers hardly form ideas of what will happen and most probably will not recognize one event leading to another (Duke, Pressley & Hilden, 2004).


Although countries with high scores generally have relatively few children scoring at the low proficiency levels, 14.3% of Dutch children are considered to be low literate according to PISA (Gille, Loijens, Noijons & Zwitser, 2010). So even though the Netherlands obtained a good score, there seems to be room for improvement.

Most of the reading adolescents do is for their education. Research, however, has shown that 20% of all VMBO-students have difficulty with reading (Onderwijsinspectie, 2006). Hacquebord, Linthorst, Stellingwerf & De Zeeuw (2004) studied text understanding and word knowledge and found that 24% of the VMBO-students do not understand the texts from their study books. HAVO and VWO students also experience problems while reading (Kerkhof, 2008). Kerkhof tested the language levels of HAVO 4/5 students and VWO 5/6 students. The results show that only 51.8% of the HAVO students reached the targeted CEF-level of B2 and only 48.6% of the VWO students reached the targeted CEF-level of C1.¹ 32.1% of the HAVO students had a CEF-level of B1 or lower, and 51.4% of the VWO students reached a CEF-level of B2 or lower. These numbers show that almost half of the HAVO students do not reach the targeted language level by the end of high school, and that less than half of the VWO students do.

Land (2009) studied VMBO-students' preferences and opinions about texts from their study books. She found that 52% of the students find their study book texts boring. Only 13% of the students like their study book texts or find them interesting, and 38% think the texts are instructive. 30% of the students, however, find the texts in study books tiring and a waste of their time. The students also stated what would make a study book text a good text: VMBO-students want a study book text to be clear (74%), short (62%) and interesting (54%), and 51% said that the text has to be easy to read.

It is very clear that adolescents have difficulties with reading. Not only VMBO-students have difficulty reading; HAVO- and VWO-students do as well. Students do, however, have a clear idea of what makes a text a good text. This study attempts to describe the difficulties that adolescents come across while reading: what makes a text difficult to read? To get a clear idea of difficulty issues, we should look at frameworks that are designed for language education. These frameworks describe the language level of adolescents, what adolescents should be able to read, and the levels of texts.

¹ The CEF-levels will be further explained in the next chapter.


2. FRAMEWORKS USED IN LANGUAGE EDUCATION

Teachers make use of frameworks in first and second language education. The frameworks describe levels of language ability and, in this way, teachers and students are able to assess their level of performance on different language skills like reading, writing and speaking. In the Dutch education system there are several frameworks for language; I will address the three most important ones in this chapter. The first is the Common European Framework of Reference for Language (Council of Europe, 2001), which is used for second language education. The other two, Framework Dutch (Raamwerk Nederlands) (Bohnenn, Jansen, Kuijpers, Thijssen, Schot & Stockmann, 2007) and the Referential Framework for Language and Calculus (Referentiekader Taal en Rekenen) (Expertgroep Taal & Rekenen, 2009), are used in Dutch first language education. The frameworks signal issues of difficulty in texts. I will compare these issues of difficulty to the definitions of text levels on linguistic measures from Andringa & Hacquebord (2000).

2.1 The Common European Framework of Reference for Language

The Common European Framework of Reference for Language (further: CEFR) was developed by order of the Council of Europe and was published in 2001. "The Common European Framework provides a common basis for the elaboration of language syllabuses, curriculum guidelines, examinations, textbooks, etc. across Europe. It describes in a comprehensive way what language learners have to learn to do in order to use a language for communication and what knowledge and skills they have to develop so as to be able to act effectively." (Council of Europe, 2001).

This quote describes the purposes and the aims of the CEFR. The CEFR is designed for teachers, students, and writers and editors of second language learning textbooks. The framework describes what second language learners have to learn in order to be able to use a second language effectively. Teachers and other language professionals can better assess students' language skills and levels by using the CEFR, because it clearly describes the learners' competencies.


Figure 1: CEFR language proficiency levels (source: Martyniuk, 2006). The CEFR distinguishes three broad levels, each with two sublevels: A (Basic User), with A1 (Breakthrough) and A2 (Waystage); B (Independent User), with B1 (Threshold) and B2 (Vantage); and C (Proficient User), with C1 (Effective Operational Proficiency) and C2 (Mastery).

The basic user knows the most elementary expressions; he needs the interlocutor to adapt to his level and needs the interlocutor's assistance. The independent user is able to interact without much effort and is able to follow a normal speech tempo, although it is obvious that the language is not the speaker's native tongue. The proficient user has hardly any or no difficulty in using the target language and is almost comparable to a native speaker of that language (Martyniuk, 2006). As said, the three levels from the CEFR each have two subdivisions. An overview of the general linguistic competence one has at each level is presented in Appendix 2.

The language proficiency levels are specified in 'I can statements' (Martyniuk, 2006); the general descriptors of the proficiency levels have been developed into a self-assessment grid. The 'I can' descriptors focus on six specific language competencies: listening, reading, spoken interaction, written interaction, spoken production and written production. Since this thesis focuses on reading in adolescents, the 'I can statements' for reading are included in Appendix 3. The 'I can statements' give an overview of the (second) language reading competence levels of adolescents.

2.2 Framework Dutch

Using this Framework, teachers should be able to diagnose gaps in the language development of students or flaws in the teaching materials. The Framework can be used to assess a student's language level and to describe what is expected of students in tests or assessments. It can also give teachers and publishers guidance in developing new materials. The Framework Dutch follows the guidelines of the CEFR, but it is adapted to the Dutch language, and the level C2 is not included, because C2 is language competence at academic level and VMBO- and MBO-students do not study at academic level (CINOP, 2007). The Framework describes different aspects of language skill, such as reading, writing, listening and speaking. I will only focus on reading as described in the Framework Dutch, as that is the topic of this study.

Table 1: Important text features from the Framework Dutch

- Text length: A2: 1-2 pages; B1: 2-3 pages; B2 and C1: irrelevant.
- Average Sentence Length: A2: 10 words; B1: 15 words; B2 and C1: irrelevant.
- Cohesion: A2: simple structure; B1: clear structure and obvious connections, titles and headings are used; B2: difficult connections; C1: implicit connections.
- Information density: A2: highly redundant, much repetition; B1: not too much information at once, important information is likely to be repeated; B2: not necessarily redundant; C1: high information density.
- Word use and vocabulary: A2: high-frequency words; B1: frequent word combinations, some abstract words; B2: fewer high-frequency words; C1: low-frequency words.


2.3 Referential Framework for Language and Calculus

By order of the Dutch Ministry of Education, the committee Meijerink has set up a Referential Framework for Language and Calculus. The Referential Framework is designed to improve students' basic skills, language and calculus, and to smooth the transitions between school levels (Expertgroep Taal & Rekenen, 2009). Every educational institution has to familiarize itself with the Referential Framework, and teachers have to learn to work with it.

In this study, the focus will be on the Referential Framework for Language, in particular the reading component. The Referential Framework distinguishes four fundamental levels and four striving levels (see figure 2). The blue section, 1S and 2F, is the level a person needs in order to be able to participate in society. The green and red bars are the boundaries between levels. The higher the education, the higher the level of the texts will be. In the Netherlands, the VWO is the highest high school level; to pass the VWO finals, a person should be able to read texts at level 4F. At university and in other higher education after VWO, most texts will probably be at level 4S.

Not only teachers have to learn to work with the Referential Framework; educational tests and assessments also have to be at the right level for students. For instance, Diataal (Hacquebord, Andringa, Linthorst & Pulles, 2006), a reading assessment test, has to take into account the level of the texts used in its assessments.

The Referential Framework has no exact measures to assess reading or text levels. The Committee Meijerink has made descriptions for each level. Teachers will have to read texts and use the descriptions from Meijerink to assess the level of a text. Table 2 displays the descriptions of each reading or text level.

Table 2: Descriptions of levels for reading informative texts (Expertgroep Taal & Rekenen, 2009) (own translation).

General description (reading formal texts):
- 1F: Is able to read simple texts about everyday topics and topics that relate to everyday life.
- 2F: Is able to read texts about everyday topics, topics that relate to students' everyday life, and topics which are not that closely related to everyday life.
- 3F: Is able to independently read a great variety of texts on topics from the (vocational) education and on societal topics. Reads with attention to the whole as well as to details.
- 4F: Is able to read a great variety of texts about any subject from the (vocational) education and about societal topics, and is able to understand these in detail.

Text features:
- 1F: The texts have a simple structure; the information is ordered clearly. The texts have low information density; important information is marked or repeated. Not too much (new) information is introduced simultaneously. The texts mainly consist of frequently used (or for students everyday) words.
- 2F: The texts have a clear structure. Connections in the texts are clearly stated. The texts mainly have low information density and are not that long.
- 3F: The texts are relatively complex, but have a clear composition, for example through the use of headings. The information density can be high.
- 4F: The texts are complex and the structure is not always clear.

Tasks:
1. Reading informative texts:
- 1F: Is able to read simple informative texts, such as reference books, (simple) texts from the internet, and simple schematic overviews.
- 2F: Is able to read informative texts, like schoolbook texts, study-related texts (for language and business classes), standard forms, popular magazines, texts from the internet, notes, schematic information (in which different dimensions are combined), and the everyday news in newspapers.
- 3F: Is able to read informative texts, such as advisory material, brochures from authorities (more formal language), texts from (used) methods, but also newspaper reports, business correspondence, complex schedules and reports on one's own occupation.
- 4F: Is able to read informative texts with high information density, such as long and difficult reports and condensed articles.
2. Reading instructions:
- 1F: Is able to read simple instructive texts, such as (simple) itineraries and instructions for assignments (from a method).
- 2F: Is able to read instructive texts, such as recipes, common instructions and manuals, and the package leaflet accompanying medicines.
- 3F: Is able to read instructive texts, such as complicated instructions in manuals accompanying unknown devices and procedures.
3. Reading persuasive texts:
- 1F: Is able to read persuasive texts, such as texts in schoolbooks for language and business courses, but also advertisements and advertising brochures.
- 2F: Is able to read persuasive, often redundant, texts, such as ads, commercial advertisements, advertising brochures, but also brochures from authorities, or slightly opinionating articles from magazines.
- 3F: Is able to read persuasive texts, among which texts from schoolbooks and opinionating articles.
- 4F: Is able to read persuasive texts, among which texts with difficult

2.4 Definition of text levels

In order to develop a text comprehension test, Andringa & Hacquebord (2000) selected texts from schoolbooks for primary and secondary education. The texts were ordered according to source, i.e. the level of the books they came from, assuming that text difficulty corresponds to educational level (Haan, 2010). The texts were then analyzed linguistically, taking into account three linguistic measures: Average Word Length (AWL), the total number of letters divided by the number of words; Average Sentence Length (ASL), the total number of words divided by the number of sentences; and Coverage, the percentage of words that appear in the 'Basiswoordenlijst' (essential word list) composed by De Kleijn & Nieuwborg (1991). This analysis generated a definition of text levels on the three linguistic measures. The results from Andringa & Hacquebord are presented in Table 3. Groep 7 and Groep 8 are the highest levels of primary education; Bavo 1 corresponds to texts from VMBO 1, Bavo 2 to texts from HAVO 1 and Bavo 3 to texts from VWO 1.

Table 3: Definition of text levels on three linguistic measures: AWL, ASL and Coverage (Andringa & Hacquebord, 2000)

Text level   AWL         ASL           Coverage       Source level
1            < 4.7       < 10.0        > 88.5         Groep 7
2            4.7-4.8     10.0-11.5     86.0-88.5      Groep 8
3            4.9-5.0     11.6-13.0     83.5-85.9      Bavo 1
4            5.1-5.2     13.1-14.5     81.0-83.4      Bavo 2
5            > 5.2       > 14.6        < 81.0         Bavo 3


2.5 Concluding thoughts

The three frameworks mentioned in this chapter present different levels of reading competency. Of course, not all students will fully reach all stages, as the language and education levels of students differ. According to the frameworks, students with a low proficiency level will be able to read short, simple texts and to recognize familiar names, words and simple phrases. The topics of texts for low-proficiency readers are mostly about everyday life or everyday situations. As adolescents develop their reading skills and become more proficient readers, they will be able to understand texts that are not so closely related to their field of interest. The main ideas of a text will be understood, but it is not likely that students will understand texts in full detail. At the third stage, according to the Framework Dutch (Bohnenn et al., 2007) and the Referential Framework for Language and Calculus (Expertgroep Taal & Rekenen, 2009), the adolescent can read a great variety of texts about different topics. Texts outside their field of interest can be understood completely, but they might need a dictionary for specific terminology. At the most advanced stages, according to the frameworks, the adolescent can understand and interpret any kind of written language and is able to understand it in great detail.

The three frameworks also specify important text features for each level. Both the Framework Dutch and the Referential Framework for Language and Calculus give examples of text features that might influence the level of a text. The text features mentioned are: text length, average sentence length, cohesion or structure, information density and vocabulary. In all frameworks, at the lowest levels, texts are relatively short and make use of short, simple sentences. The texts have a simple or clear structure, a low information density and many high-frequency words. At higher stages, the text length and average sentence length are irrelevant, the connections in the text can be difficult or even implicit, and the texts are likely to have a high information density. The vocabulary consists of relatively many infrequent words.

The text features mentioned above are similar to the linguistic measures Diataal (Hacquebord, Andringa, Linthorst & Pulles, 2006) uses to select texts for its language tests. The texts are analyzed using a program called Textscreen, which looks at the number of sentences, the number of words, average sentence length, average word length, coverage and several other measures.


Research has revealed that especially VMBO-students have difficulty with reading (Onderwijsinspectie, 2006). However, HAVO and VWO students also experience difficulty with reading: Kerkhof (2008) found that many HAVO and VWO students read below the targeted CEF-levels. This leads me to my first research question:

What makes reading difficult for adolescents and are adolescents aware of what makes a text difficult to read?

The frameworks described in this chapter refer to linguistic measures that probably have an influence on text difficulty. The frameworks describe different stages in the reading development of children. At the first stages a child is able to read short texts with a simple structure and simple vocabulary; at the top stages, the child can read texts of any length, difficult words can be derived from the sentence context and the reader can take different viewpoints. Research by Andringa & Hacquebord has shown that Average Sentence Length (ASL), Average Word Length (AWL) and Coverage correlate with text difficulty; I will elaborate on this in the next section. My second research question is:

Do linguistic measures like ASL, AWL and Coverage have an effect on students' beliefs about how difficult a text is, and if so, to what extent can these linguistic measures predict the difficulty rate assigned by students?


3. METHOD

3.1 Text corpus

First of all, a corpus of texts was compiled. The corpus consists of forty texts. The texts were selected by using the Referential Framework from the committee Meijerink (2007), and some texts were provided by Diataal (Hacquebord, Andringa, Linthorst, & Pulles, 2006), a language competency level test. Each text had to be written for students and had to focus on a topic matching one school course, like mathematics, physics, Dutch or geography. For each school course, one or two texts were added to the corpus.

The texts were ordered on difficulty using the program Textscreen. This program is also used to select texts for Diataal. Textscreen takes into account different linguistic measures, like Average Word Length (AWL), Average Sentence Length (ASL) and Coverage, to define the difficulty level of a text. Average Word Length is the total number of characters divided by the total number of words. Average Sentence Length is the total number of words divided by the number of sentences; a sentence is defined as starting with a capital and ending with a full stop, and colons and semi-colons do not qualify as sentence endings, nor do headings. Coverage is the percentage of words that appear in the 'Basiswoordenlijst' (essential word list) composed by De Kleijn & Nieuwborg (1991). Each text was finally assigned an Average Difficulty Rate (ADR), based on the difficulty rate per linguistic measure, and the texts were ordered from easy to difficult according to their ADR. The first text had an ADR of 1.16 and the last text in the corpus had an ADR of 5.00. An overview of the texts in the corpus, their text features and their ADR is presented in Appendix 4.
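For illustration, the sketch below (Python) shows how the three measures could be computed for a plain-text file. It is my own simplified rendering, not Textscreen's implementation; the file names, tokenisation rules and sentence-splitting regex are assumptions.

```python
import re

def split_sentences(text):
    # Per the definition used here: a sentence ends with a full stop;
    # colons and semi-colons do not end a sentence. Headings are not filtered.
    return [s.strip() for s in re.split(r"\.\s+", text) if s.strip()]

def tokenize(text):
    # Rough tokenisation that keeps Dutch accented characters.
    return re.findall(r"[A-Za-zÀ-ÿ]+", text)

def text_measures(text, basic_words):
    words = tokenize(text)
    sentences = split_sentences(text)
    awl = sum(len(w) for w in words) / len(words)        # average word length
    asl = len(words) / len(sentences)                    # average sentence length
    coverage = 100 * sum(w.lower() in basic_words for w in words) / len(words)
    return awl, asl, coverage

# Hypothetical usage: 'basiswoordenlijst.txt' stands in for the De Kleijn &
# Nieuwborg (1991) essential word list, one word per line.
# basic = {line.strip().lower() for line in open("basiswoordenlijst.txt", encoding="utf-8")}
# print(text_measures(open("text01.txt", encoding="utf-8").read(), basic))
```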


As I could not let the students assess all forty texts, I had to make a selection from the corpus. Each lesson hour is fifty minutes, so within this timeframe the students had to read the texts, answer a couple of questions about each text and rank the texts. In consultation with their teacher, I decided to let every student read five texts. The texts for HAVO 2 were from the first text level, with an ADR of 1.16 to 3.08. The texts for HAVO 3 were from the second text level, with an ADR between 3.26 and 4.16. The last group, VWO 3, was presented with five texts with an ADR from 4.35 to 5.00.

Of course the students from HAVO 2 had an easier task in distinguishing difficulty levels, as the ADRs of their texts lay further apart than the ADRs of the texts that VWO 3 had to read. However, the students from VWO 3 are much more advanced readers and should be capable of performing this task.

To select five texts for each group, I looked at the differences between the ADRs and tried to keep these as constant as possible. The texts for HAVO 2 had an ADR from 1.16 to 3.08; the range between these ADRs is 1.92. As I needed five texts, I divided 1.92 by 5, which results in a step of 0.38 between texts. The same procedure was followed for the other two groups. Group 2 had an ADR range from 3.26 to 4.16, which is a gap of 0.90; divided by five, this gives a step of 0.18. Group 3 had an ADR range from 4.35 to 5.00, which is a gap of 0.65; divided by five, this gives a step of 0.13. As there was not always a text with an ADR exactly matching the step I wanted to take, I chose the nearest corresponding ADR. So in all groups, the differences in ADR between the texts are as close as possible to the targeted step. Table 4 displays an overview of the groups of texts and their ADRs. The ADRs are not presented in increasing order, as that would make the task too easy for the students; the table presents the order in which the students received the texts.
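The selection step can be pictured as follows: compute evenly spaced target ADRs across a group's range and take, for each target, the available text whose ADR lies nearest to it. The Python sketch below is one plausible reading of that procedure; the ADR values in the example are placeholders, not the Appendix 4 values.

```python
# One plausible rendering of the text-selection step; placeholder ADRs.

def select_texts(adrs, n=5):
    lo, hi = min(adrs), max(adrs)
    step = (hi - lo) / n                        # e.g. (3.08 - 1.16) / 5 = 0.38
    targets = [lo + i * step for i in range(n)]
    chosen, remaining = [], list(adrs)
    for t in targets:
        best = min(remaining, key=lambda a: abs(a - t))
        chosen.append(best)
        remaining.remove(best)                  # each text can be chosen once
    return sorted(chosen)

print(select_texts([1.16, 1.40, 1.58, 2.00, 2.34, 2.70, 3.08]))
```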

Table 4: Overview of groups of texts and their Average Difficulty Rates (ADR)

Group 1 (HAVO 2)      Group 2 (HAVO 3)      Group 3 (VWO 3)
Text   ADR            Text   ADR            Text   ADR
1      3.08           6      4.00           11     4.84
2      1.16           7      4.16           12     4.51
3      1.58           8      3.26           13     5.00
4      2.34           9      3.42           14     4.59


3.2 Participants

There were 72 participants in this study. All participants attended the Augustinus College in Groningen and were 13-16 years of age. The students came from different school levels: 24 students from HAVO 2, 23 students from HAVO 3 and 25 students from VWO 3. These school levels were selected because the students are halfway through high school, so they should be able to judge which texts are easy for them to read and which texts are a lot harder.

As there were only seven boys in HAVO 2, three boys from HAVO 3 and three boys from VWO 3 read the texts selected for HAVO 2. Table 5 gives an overview of the distribution of texts among the students.

Table 5: Distribution of texts

Text level   Boys   Girls   Total
1            13     17      30
2            10     10      20

3.3 Materials

The texts in the corpus were analyzed using the program Textscreen. The texts were selected by using the Referential Framework (2007), and some texts in the corpus are from Diataal. Each text was in its original form; nothing had been changed except for the font. The texts were printed on paper, together with a questionnaire which the students had to fill in. Appendix 5 shows an example of a text and Appendix 6 shows the questionnaire.

The linguistic measures I want to examine are based on previous research and the referential frameworks. A study by Andringa & Hacquebord (2000) revealed that especially word length and coverage are highly correlated with text difficulty. Andringa and Hacquebord asked teachers to assess thirty texts on difficulty level and found a correlation between difficulty and average word length of 0.80 (Pearson's correlation). The frameworks, however, state that at the higher levels this measure, as well as average sentence length, becomes irrelevant. I want to see whether students think the same; therefore I will look at word length.

Andringa & Hacquebord (2000) also found a positive correlation between sentence length and text difficulty, but this correlation was much lower (r = 0.42). The frameworks also state that the length of a text can be important; therefore, my next two measures are 'total length of text' and 'average sentence length'. Coverage and structure might also influence the difficulty of a text. Andringa & Hacquebord found a correlation of 0.79 (Pearson's correlation) between coverage and text difficulty. Therefore, I give the students the option to fill in 'amount of difficult words', 'no headings' and 'no pictures' as causes of text difficulty. The students could also fill in that the text was difficult because of the 'amount of words in a second language', because there was one text with many English words. Finally, there was an option 'other', which allowed the student to fill in another explanation of what made the text difficult. I expect to find similar correlations between the three linguistic measures ASL, AWL and Coverage and the text difficulty as assigned by students.

3.4 Procedure

The students were aware of the purpose of the study; this would enhance their willingness to participate. Each student read five texts and after each text they filled in a short questionnaire. The students had to write down the subject and the key point of the texts, to confirm they had understood what they had read. Students who did not understand the text were excluded from further analysis. The students also had to fill in how difficult the text was, on a scale from 1 to 7, and what made the text difficult, be it the total length of the text, the amount of difficult words, long sentences or anything else. After reading all texts and answering all questions, they had to give each text a number from 1 to 5, number 1 representing the easiest text and number 5 the most difficult text. As each group read five different texts, fifteen texts from the corpus were read and rated by the students.

3.5 Analysis


4. RESULTS

4.1 Difficulty rates and ranking

Figure 3 displays how difficult students rated each text, on a scale from 1 to 7. All texts were also given a rank from 1 to 5, number 1 being the easiest text to read and number 5 the most difficult one. Figure 4 displays the mean ranking of each text. Text 11 has the lowest mean rate (1.45) and the lowest mean rank (1.47). The text with the highest mean rate as well as the highest mean rank is text 7 (mean rate = 5.30, mean rank = 4.94). The figures show a resemblance in distribution, which suggests a strong correlation between the mean difficulty rate and the ranking by students. Text 10 is the only text that shows an obvious difference between difficulty rate and rank: its mean difficulty rate is 2.05 and its mean rank is 3.18. However, the correlation between the mean difficulty rate and the mean rank proves to be significant. Table 6 shows this correlation, which is rather high: r = 0.931 (p < 0.01).

Table 6: Correlation between mean difficulty rate and mean rank

Mean difficulty rate vs. mean rank: Pearson correlation = 0.931**, Sig. (2-tailed) = 0.000, N = 15

** Correlation is significant at the 0.01 level (2-tailed).
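As an illustration of how a correlation like the one in Table 6 can be computed, the short Python sketch below uses scipy. The numbers are partly hypothetical: only the values for texts 11, 7 and 10 are taken from the text above, the rest are made up for the example.

```python
from scipy.stats import pearsonr

# Mean difficulty rate (scale 1-7) and mean rank (1-5) per text; partly made-up data.
mean_rate = [1.45, 5.30, 2.05, 3.10, 2.60]
mean_rank = [1.47, 4.94, 3.18, 3.40, 2.50]

r, p = pearsonr(mean_rate, mean_rank)
print(f"r = {r:.3f}, p = {p:.3f}")
```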


The three groups, H2, H3 and A3, were analyzed separately. Group H2 rated and ranked texts 1-5, H3 rated and ranked texts 6-10 and A3 did texts 11-15. Below are the results of the ratings and rankings for each text. Ranks 1-5 were assigned to the texts based on the students' mean rankings. Figures 5-7 show that in almost all cases, the text with the lowest difficulty rate also had the lowest rank. Only two results stand out. First, texts 8, 9 and 10 have similar difficulty rates: text 9 has the lowest difficulty rate but is ranked 2, while texts 8 and 10 have almost the same difficulty rate but text 8 is ranked 1 and text 10 is ranked 3. Second, text 12 has a higher difficulty rate than text 14, yet text 12 is ranked 2 and text 14 is ranked 3.

Figure 5: Difficulty rates and ranks, group H2
Figure 6: Difficulty rates and ranks, group H3


4.2 Difficulty rates and ranking versus Textscreen

Table 7 shows that there is no significant correlation between the mean difficulty rate assigned by students and the difficulty rate as calculated by Textscreen (r = 0.083; p > 0.05). Figure 8 displays the mean difficulty rate per text as calculated by Textscreen and the mean difficulty rate as filled in by the students. The graph does not make the absence of a correlation obvious: as the Textscreen difficulty rate goes up, the mean student difficulty rate often goes up as well. However, when text 1, for example, is also compared with texts 3, 4 and 5, and so on, it becomes clear that there is a large contrast between the difficulty rate from Textscreen and the difficulty rate assigned by students.

Table 7: Correlation between mean difficulty rate (students) and difficulty rate (Textscreen)

Mean difficulty rate vs. difficulty rate Textscreen: Pearson correlation = 0.083, Sig. (2-tailed) = 0.767, N = 15


4.3 Causes of difficulty

Students filled in, per text, what made the text difficult. They could choose three options out of eight. The eight options were: total length of text, sentence length, word length, amount of difficult words, no headings, no pictures, amount of words in a second language, or other, where a student could name another reason than the seven already mentioned. This results in a graph for each text (Figures 9-23).

Figures 9-23: Causes of difficulty per text (one figure for each of texts 1-15).

Figures 9-23 show that students' opinions about what makes a text difficult vary enormously. The causes of difficulty differ per text. 'Sentence length', 'amount of difficult words' and 'no headings' are very often pointed out by the students as causes of difficulty. 'No headings' is the number one cause of difficulty for five texts. 'No pictures' and 'total length of text' are each the number one cause of difficulty for three texts. It is very clear that when a text is long, like texts 7, 13 and 15, students mention this as a reason for difficulty. 'Amount of difficult words' is the second cause of difficulty for six texts; 'no headings' and 'sentence length' are second for three texts each.

To clarify the relation between the causes of difficulty as pointed out by the students and the actual linguistic measures ASL, AWL and Coverage as defined by Textscreen, I performed a Pearson correlation test (see Table 8). The correlation between ASL as a cause of difficulty and the ASL as defined by Textscreen is not significant, r = 0.335 (p > 0.05). The correlation between AWL as a cause of difficulty and the AWL as defined by Textscreen is also not significant, r = 0.216 (p > 0.05). However, there is a strong negative correlation between Coverage as a cause of difficulty and Coverage as defined by Textscreen: when the Coverage of a text is high, in other words when many familiar words are used, students hardly point out 'amount of difficult words' as a cause of difficulty. This correlation is significant at the p < 0.01 level, r = -0.699. These results reveal that the causes of difficulty as assigned by students have hardly any relation to the actual linguistic measures as defined by Textscreen. In other words, students' beliefs about what makes a text difficult do not coincide with the linguistic measures.

Table 8: Correlations between ASL, AWL and Coverage as causes of difficulty (students) and as calculated by Textscreen (N = 15 for all cells)

                      ASL Students    AWL Students    Coverage Students
ASL Textscreen        r = 0.335       r = -0.310      r = 0.019
                      p = 0.223       p = 0.275       p = 0.947
AWL Textscreen        r = -0.108      r = 0.246       r = 0.378
                      p = 0.701       p = 0.439       p = 0.165
Coverage Textscreen   r = 0.285       r = -0.409      r = -0.699**
                      p = 0.303       p = 0.130       p = 0.004

** Correlation is significant at the 0.01 level (2-tailed).


4.4 ADR in relation to causes of difficulty

Table 9: Correlations between ADR and ASL, AWL and Coverage as causes of difficulty (students); N = 15

              ASL Students   AWL Students   Coverage Students
ADR Students  r = 0.072      r = 0.458      r = 0.422
              p = 0.799      p = 0.086      p = 0.117

The correlations between the ADR and the causes of difficulty as assigned by students on the three linguistic measures reveal no significant results (see Table 9). There are relatively strong correlations between the ADR and AWL as a cause of difficulty (r = 0.458; p > 0.05) and between the ADR and Coverage as a cause of difficulty (r = 0.422; p > 0.05). There is no correlation between the ADR and ASL as a cause of difficulty as pointed out by students (r = 0.072, p = 0.799).

The results of the correlations between the ADR as assigned by students and the ASL, AWL and Coverage as calculated by Textscreen do reveal a significant result (see Table 10). The ADR as assigned by students is positively correlated with the AWL as calculated by Textscreen (r = 0.566, p < 0.05). The correlation between the ADR and the ASL as calculated by Textscreen is 0.361 (p > 0.05). There is a relatively strong negative correlation between the ADR and the Coverage as calculated by Textscreen (r = -0.431; p > 0.05), which is to be expected: the more known words a text contains (high Coverage), the easier the text is for the students, and vice versa.

To sum up, these results show that as the AWL goes up, students' difficulty rates go up as well; this result is significant. As the ASL goes up, students' difficulty rates also go up, but this result is not significant. There is a negative correlation between Coverage and ADR, which means that if the number of known words (basic vocabulary) in a text goes up, the ADR assigned by students drops.

Table 10: Correlations between ADR (students) and ASL, AWL and Coverage as calculated by Textscreen. ** Correlation is significant at the 0.01 level (2-tailed).


4.5 Predicting the difficulty level

A regression analysis was executed to measure to what extent the linguistic measures can predict how difficult students rate a text. The results (Tables 11-13) reveal that AWL accounts for 32% of the variance in the ADR assigned by students. The three linguistic measures AWL, ASL and Coverage combined account for 32.8% of the variance, which means that ASL and Coverage add only 0.8% to the variance accounted for by AWL. Coverage by itself accounts for 18.6% of the variance in ADR, so AWL and ASL add 14.2% to the total variance of 32.8%. ASL explains the smallest percentage of variance in ADR, only 13%, which means that Coverage and AWL account for the other 19.8% of the variance. The adjusted R² says something about how well the model generalizes. The adjusted R² has a value of 0.144 whilst R² = 0.328. This rather large difference of 0.184 means that if the model were derived from the population rather than from a sample, it would account for 18.4% less variance in the results.

Table 11: Model Summary (a. Predictors: (Constant), AWL; b. Predictors: (Constant), AWL, ASL, Coverage)

Model   R        R Square   Adjusted R Square   Std. Error of the Estimate
1       0.566a   0.320      0.268               1.07982
2       0.572b   0.328      0.144               1.16737

Table 12: Model Summary (a. Predictors: (Constant), Coverage; b. Predictors: (Constant), Coverage, ASL, AWL)

Model   R        R Square   Adjusted R Square   Std. Error of the Estimate
1       0.431a   0.186      0.123               1.18148
2       0.572b   0.328      0.144               1.16737

Table 13: Model Summary (a. Predictors: (Constant), ASL; b. Predictors: (Constant), ASL, Coverage, AWL)

Model   R        R Square   Adjusted R Square   Std. Error of the Estimate
1       0.361a   0.130      0.063               1.22120
2       0.572b   0.328      0.144               1.16737

As AWL has the most influence on the ADR, I will only display further results with AWL as the first predictor. The ANOVA test (see Table 14) tells us that if only AWL were used to predict ADR, there would be a significant result (F = 6.119, p < .05). However, if ASL and Coverage are included in the model, the result is no longer significant (F = 1.786, p > .05). Changing the order of the linguistic measures showed no significant results either: the ANOVA test for Coverage showed F = 2.971 with a p-value of 0.108 and the ANOVA test for ASL showed F = 1.949 with a p-value of 0.186.

Table 14: ANOVA

Model 1 (Predictors: (Constant), AWL)
  Regression:  Sum of Squares = 7.135,  df = 1,  Mean Square = 7.135, F = 6.119, Sig. = 0.028
  Residual:    Sum of Squares = 15.158, df = 13, Mean Square = 1.166
  Total:       Sum of Squares = 22.293, df = 14

Model 2 (Predictors: (Constant), AWL, ASL, Coverage)
  Regression:  Sum of Squares = 7.303,  df = 3,  Mean Square = 2.434, F = 1.786, Sig. = 0.208
  Residual:    Sum of Squares = 14.990, df = 11, Mean Square = 1.363
  Total:       Sum of Squares = 22.293, df = 14
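The following Python sketch shows how a hierarchical regression of this kind could be reproduced with statsmodels instead of SPSS. The per-text values are placeholders (the actual corpus measures are listed in Appendix 4), so the output will not match the tables above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder data: one row per text, with the student ADR and Textscreen measures.
df = pd.DataFrame({
    "ADR":      [1.8, 2.4, 3.1, 3.9, 4.6, 5.3],   # hypothetical student ratings
    "AWL":      [4.6, 4.8, 4.9, 5.1, 5.3, 5.5],
    "ASL":      [9.5, 11.0, 12.5, 13.8, 15.2, 16.0],
    "Coverage": [90.0, 87.5, 85.0, 83.0, 81.5, 79.0],
})

model_1 = smf.ols("ADR ~ AWL", data=df).fit()                   # AWL only
model_2 = smf.ols("ADR ~ AWL + ASL + Coverage", data=df).fit()  # full model

# R², adjusted R² and the overall F-test p-value, as reported in Tables 11 and 14.
print(model_1.rsquared, model_1.rsquared_adj, model_1.f_pvalue)
print(model_2.rsquared, model_2.rsquared_adj, model_2.f_pvalue)
```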


5. DISCUSSION

5.1 Difficulty rates and ranking

I started off by comparing the difficulty rates assigned by students and their ranking of the texts. As expected, there is great similarity in the distribution of the difficulty rates and the ranking; the correlation between the two measures is extremely high (r = 0.931). There were only two surprising outcomes. Texts 8, 9 and 10 had similar difficulty rates, but their ranks did not follow the difficulty rates. As students rated these texts as similarly difficult, perhaps they could not make a good decision in the ranking. The ranking was given after reading all five texts, and they might not clearly have remembered which text was more difficult. However, these three texts have the lowest Average Difficulty Rates (ADR) as well as the lowest three rankings of the five texts they were given. The same might be the case for texts 12 and 14: the ADR of text 12 was higher than the ADR of text 14, yet text 14 had a higher ranking than text 12. The difference between the ADRs of texts 12 and 14 is very small, so perhaps students did not remember which one was more difficult.

5.2 Difficulty rates and ranking versus Textscreen


The lack of correlation may be explained by the fact that the difficulty rate in Textscreen is based on all three linguistic measures. If the texts had been ordered on AWL only, there probably would have been a positive correlation between the difficulty rate of Textscreen and the ADR assigned by students. The results are comparable to the study of Andringa & Hacquebord (2000), who found a very strong correlation (r = 0.80) between AWL and text difficulty. Andringa & Hacquebord (2000), however, also found a very strong correlation between Coverage and text difficulty. The correlation between Coverage and difficulty rate in this study was not significant and not as high, but it was still relatively strong (r = -0.431).

5.3 Causes of difficulty

Students most often pointed out 'no headings', 'amount of difficult words', 'no pictures' and 'sentence length' as causes of difficulty. It is quite remarkable that 'word length' is hardly mentioned as a cause of difficulty, since the regression analysis showed that AWL is the best predictor of the ADR as assigned by students. So students do not believe AWL defines the difficulty rate of a text, while the regression analysis shows that AWL can predict the ADR assigned by students.

Students do have clear ideas about what makes a text difficult. If, according to them, the given options did not describe what made the text difficult, the students would come up with their own causes under the option 'other'. The graphs clearly show that students made use of this option. Most of the time these answers were related to the topic: students did not like the topic and therefore had difficulty focusing on the text. Apparently, topic is an important factor in reading difficulty, in addition to options like text length, sentence length and amount of difficult words.

'No headings' and 'no pictures' were often pointed out as causes of difficulty. In some cases I could relate to this, as many texts had specific topics, for instance the solar system, the light bulb and the dust mite. Such texts lend themselves to being enriched with pictures. In other cases I could not imagine what picture would fit the text, as the topics were less concrete, like 'black money' and 'dialects'.


6. CONCLUSION

6.1 What makes reading difficult?

The results of my study reveal a contradiction between what students believe makes a text difficult and what actually does. According to the frameworks described in this study, the length of a text, the length of sentences, lack of structure and the amount of difficult words are factors that influence the difficulty level of a text. Diataal, the reading proficiency level test, makes use of Textscreen to define a text's difficulty level. In Textscreen, the factors that define the difficulty level are Average Sentence Length (ASL), Average Word Length (AWL) and Coverage. Andringa & Hacquebord (2000) found that AWL and Coverage correlate best with text difficulty. The results of my study correspond with these findings: AWL and Coverage have the highest correlations with the difficulty rates assigned by students, and the correlation between AWL and difficulty rate was even significant. A correlation does not mean that a high AWL always leads to a high difficulty rate; it merely shows that there is a relation, causal or not. The regression analysis, however, did reveal that AWL best predicts the difficulty rate assigned by students, and this result is significant.

However, students do not seem to believe that AWL makes a text difficult. Students often pointed out 'no headings', 'no pictures', 'amount of difficult words' and 'sentence length' as causes of difficulty. 'No headings' relates to the structure of the text. Headings can clarify the text, and a heading can also influence a reader's expectations. As Duke, Pressley and Hilden (2004) mentioned, good readers form expectations based on what they are reading. As headings often are in bold print, they stand out and give the reader an idea of what the text will be about.


Pictures might trigger the student's interest in the topic and, that way, a student can be motivated to read the text.

'Amount of difficult words in a text' relates to the linguistic measure Coverage. This study points out that there is a correlation between Coverage and the difficulty rate assigned by students. The students themselves recognize this: 'amount of difficult words' was often chosen as a cause of difficulty.

Students also pointed out 'long sentences' as a cause of difficulty, which relates to the Average Sentence Length (ASL). There was a relatively weak correlation between ASL and the difficulty rate as assigned by students. 'Sentence length' was often pointed out together with 'total length of the text'; apparently students perceive long texts as having more long sentences. The fact that 'total length of text' is also often pointed out as a cause of text difficulty does not correspond with what the frameworks tell us. The frameworks state that at level B2 or 3F the length of the text is irrelevant. The students who read long texts should have reached these levels at the time of testing, but apparently they do not agree that the length of the text is irrelevant to its difficulty level. On the one hand, the frameworks could be wrong; on the other hand, the students might not have reached the targeted level yet. As Duke and Pressley (2005) stated, in a classroom half of the children read above their grade level and the other half below. Perhaps the children who have not yet reached the targeted level account for the high percentage of students who filled in 'total length of text' as a cause of difficulty.

To answer the first research question: reading can be difficult because of the length of sentences, the length of words, the amount of difficult words, a lack of structure and a lack of pictures. Adolescents claim that structure, the amount of difficult words and long sentences are the most important factors. This study has revealed that AWL best predicts how difficult students believe a text is. As the students hardly ever pointed out 'word length' as a cause of difficulty, we can assume that they are not aware of what it is that makes a text difficult to read.

6.2 The role of linguistic measures


As mentioned above, Andringa & Hacquebord (2000) found correlations between AWL and text difficulty and between Coverage and text difficulty. The correlations I found were not as strong as theirs, but the correlation between AWL and the text difficulty as assigned by students was still significant.

The texts the students read were selected on ASL, AWL and Coverage. I wanted to know whether students would point out ASL as a cause of difficulty whenever the ASL of a text was high. Unfortunately, the correlations I found between the actual ASL and AWL measures, as calculated by Textscreen, and the ASL and AWL pointed out by students as causes of difficulty were rather weak. However, the negative correlation I found between the actual Coverage and Coverage as a cause of difficulty was significant. So whenever the Coverage of a text was high, i.e. it contained many familiar words, the students did not point it out as a cause of difficulty. If the Coverage was low, i.e. the students knew few of the words, they did point out 'amount of difficult words' as a cause of difficulty.

The results of the regression analysis tell us that the linguistic measures ASL, AWL and Coverage together cannot predict the difficulty rate assigned by students. AWL on its own, however, can. Even though students did not often mention word length as a cause of difficulty, the analysis reveals that AWL is the best predictor of the difficulty rate assigned by students. This result from the regression analysis was significant.
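The kind of regression analysis referred to here could be set up as in the following sketch. It is only an illustration of the approach, not the analysis actually run for this study: the per-text values are invented, and the statsmodels OLS setup merely stands in for whichever statistical package was used.

```python
# Illustrative sketch: regressing the mean difficulty rate per text on the three
# Textscreen measures, and on AWL alone. All numbers below are hypothetical.
import numpy as np
import statsmodels.api as sm

difficulty_rate = np.array([2.1, 2.4, 3.0, 3.3, 3.8, 2.8])  # mean rate per text
measures = np.array([
    # ASL,  AWL, Coverage
    [12.0, 4.8, 0.92],
    [14.5, 5.1, 0.90],
    [15.0, 5.6, 0.88],
    [18.2, 5.9, 0.85],
    [19.0, 6.2, 0.83],
    [13.5, 5.4, 0.89],
])

# Multiple regression with all three predictors ...
full_model = sm.OLS(difficulty_rate, sm.add_constant(measures)).fit()
# ... and a simple regression with AWL alone, which in this study turned out to
# be the only significant predictor of the difficulty rate assigned by students.
awl_only = sm.OLS(difficulty_rate, sm.add_constant(measures[:, 1])).fit()

print(full_model.summary())
print(awl_only.summary())
```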


7. RECOMMENDATIONS

My study has revealed a discrepancy between what makes texts difficult and students' beliefs about what makes a text difficult. I am not a teacher, so this paragraph merely proposes some ideas on how to address this discrepancy; it is not meant to impose measures on teachers. I think students have to become more confident readers. While I was carrying out the research, I could hear students sigh whenever they came across one of the long texts. Many teachers will probably recognize this. Students easily get distracted while reading long texts, and they often point out that long texts are boring. The frameworks, however, state that the length of a text becomes irrelevant at the B2, or 3F, level. Perhaps not all students I tested had reached this level. Still, I believe that adolescents should be familiarized with long texts. When you compare the length of the texts in a study book with the length of the texts at the State Exam, you will find another big discrepancy: the texts in study books are often fragments of what is used during the State Exam. If students read more long texts, this might reduce their anxiety about them.

Furthermore, I think students need to read more. In my background section I elaborated on the decline in the number of students who read in their spare time. To read more is to gain more knowledge and to grow one's vocabulary. My study shows that students often point out 'amount of difficult words' as a cause of text difficulty. If they read more, their vocabulary will grow and Coverage will no longer be a disturbing factor in reading comprehension.


BIBLIOGRAPHY

Andringa, S., & Hacquebord, H. (2000). De moeilijkheidsgraad van schoolboekteksten als grondslag voor het vaststellen van tekstbegripvaardigheid. Toegepaste Taalwetenschap in Artikelen, 83-94.

Bohnenn, E., Jansen, F., Kuijpers, C., Thijssen, R., Schot, I., & Stockmann, W. (2007). Raamwerk Nederlands, Nederlands in (v)mbo-opleiding, beroep en maatschappij. 's-Hertogenbosch: Cinop.

Council of Europe. (2001). Common European Framework of Reference for Languages: learning, teaching, assessment. Strasbourg: Council of Europe.

Duke, N. K., & Pressley, M. (2005, December). How can I help my struggling readers. Instructor.

Duke, N. K., Pressley, M., & Hilden, K. (2004). Difficulties with Reading Comprehension. Handbook of Language and Literacy.

Dynot. (2010). Retrieved October 26, 2010, from http://www.dynot.net/index.php?option=com_content&task=view&id=47&Itemid=82

Expertgroep Taal & Rekenen. (2009). Referentiekader taal en rekenen. Enschede: SLO.

Gille, E., Loijens, C., Noijons, J., & Zwitser, R. (2010). Resultaten PISA-2009. Praktische kennis en vaardigheden van 15-jarigen. Arnhem: Cito.

Gray, P. (2002). Psychology. New York: Worth Publishers.

Hacquebord, H., Andringa, S., Linthorst, T., & Pulles, M. (2006). Diataal, taaltoetspakket voor het vaststellen en diagnosticeren van de lees- en luistervaardigheid en woordkennis van leerlingen van 9 tot 15 jaar. Groningen: Etoc, Rijksuniversiteit Groningen.

Hacquebord, H., Linthorst, R., Stellingwerf, B., & De Zeeuw, M. (2004). Voortgezet taalvaardig. Een onderzoek naar tekstbegrip en woordkennis en naar de taalproblemen en taalbehoeften van brugklasleerlingen in het voortgezet onderwijs in het schooljaar 2002-2003. Groningen: Etoc.

Huysmans, F., de Haan, J., & van den Broek, A. (2004). Achter de schermen: een kwart eeuw lezen, luisteren, kijken en internetten. Den Haag: Sociaal en Cultureel Planbureau.

Inspectie van het onderwijs. (2006). De staat van het onderwijs. Onderwijsverslag 2004/2005. Utrecht: Inspectie van het onderwijs.

Kerkhof, R. (2008, August 29). De leesvaardigheid van HAVO 4/5 en VWO 5/6 leerlingen: onderzoek naar het leesvaardigheidniveau van HAVO 4/5 en VWO 5/6 leerlingen en hoe dit zich verhoudt tot het leesvaardigheidniveau van beginnende MBO-ers. Retrieved September 21, 2010, from Universiteit Twente: http://essay.utwente.nl/58187/

Kleijn, P. de, & Nieuwborg, E. (1991). Basiswoordenboek Nederlands. Groningen: Wolters-Noordhoff.

Kraaykamp, G. (2002). Leesbevordering door ouders, bibliotheek en school. Effecten en ontwikkeling. In A. Raukema, D. Schram, & C. Stalpers, Lezen en leesgedrag van adolescenten en jongvolwassenen (pp. 209-231). Delft: Eburon.

Land, J. (2009). Zwakke lezers, sterke teksten? Effecten van tekst- en lezerskenmerken op het tekstbegrip en de tekstwaardering van vmbo-leerlingen. Delft: Eburon.

Land, J., Sanders, T., & van den Bergh, H. (2006). Wat maakt een studietekst geschikt voor vmbo-leerlingen? Een experimenteel onderzoek naar de invloed van tekst- en lezerskenmerken op begrip en waardering. Amsterdam: Stichting Lezen.

Martyniuk, W. (2006). Languages of Schooling: towards a Framework for Europe. European Frameworks of Reference for Language Competences. Strasbourg: Council of Europe.

Meeus, W. (2002). Ontwikkeling en leesgedrag in de adolescentie. In A. Raukema, D. Schram, & C. Stalpers, Lezen en leesgedrag van adolescenten en jongvolwassenen (pp. 27-36). Delft: Eburon.

Nelck-da Silva, F. F. (2004). Non scholae sed vitae legimus: de rol van reflectie in ego-ontwikkeling en leesattitudeontwikkeling bij adolescenten. Dissertaties, Rijksuniversiteit Groningen. Retrieved October 28, 2010, from http://irs.ub.rug.nl/ppn/265226244

OECD. (2010). PISA 2009 Results: Executive Summary.

Stalpers, C. P. (2005). Gevormd door leeservaringen: De relatie tussen leesattitude, het lezen van fictie en het voornemen van adolescenten om lid te blijven van de openbare bibliotheek. Doctoral dissertation. Utrecht: Universiteit Utrecht.

Stanovich, K. E. (1986). Matthew Effects in Reading: Some Consequences of Individual Differences in the Acquisition of Literacy. Reading Research Quarterly, 21, 360-407.

Tellegen, S., & Catsburg, I. (1987). Waarom zou je lezen? Het oordeel van leerlingen: anders dan men wel eens dacht. Groningen: Wolters-Noordhoff.

Universiteit Utrecht. (2010). Retrieved October 26, 2010, from http://www.uu.nl/university/international-
