
Bosma, T.

Citation

Bosma, T. (2011, June 22). Dynamic testing in practice : shall I give you a hint?. Retrieved from https://hdl.handle.net/1887/17721

Version: Not Applicable (or Unknown)

License: Licence agreement concerning inclusion of doctoral thesis in the Institutional Repository of the University of Leiden

Downloaded from: https://hdl.handle.net/1887/17721

Note: To cite this publication please use the final published version (if applicable).


Chapter 7

Teachers’ preferences for educational planning: dynamic testing, teaching experience and teachers’ sense of efficacy.

The contents of this chapter are in submission:

Tirza Bosma, Marco G. P. Hessels and Wilma C. M. Resing


Abstract

This study surveyed a sample of elementary teachers with respect to their preferences for information for educational planning, in particular information captured with dynamic testing procedures. The influence of teachers’ experience and sense of efficacy on these preferences was also investigated. Results indicated that teachers preferred dynamically gathered information regarding children’s learning processes, alongside standard information such as a diagnosis. Appreciation of dynamic testing information appeared to be relatively higher for teachers with longer teaching experience, but was not related to teachers’ sense of efficacy. Findings are discussed with regard to their implications for both diagnostic and teaching practices.

Acknowledgements:

We thank L. Meeuwsen and K. Touw for helping collect and code data.


Introduction

Adapting instruction to the needs of individual students is an important theme in education. Teachers are increasingly expected to differentiate instruction to address the needs of individual students, to monitor students’ progress carefully and to write educational plans for students with learning difficulties or special needs (e.g., Pameijer, 2006; Pelco, Ward, Coleman & Young, 2009). Adaptation of instruction and school curricula also affects the work of school psychologists (Brown-Chidsey, 2005).

Traditionally, psychologists conducted assessments for the purpose of classification, diagnosis, clarification of learning problems or eligibility for special education; today the emphasis is increasingly on prescriptive assessment, that is, on providing recommendations for interventions and individual educational plans (Kanne, Randolph & Farmer, 2008; Pameijer, 2006; Resing, Ruijssenaars & Bosma, 2002).

However, the long-noticed gap between the information provided by psychologists and the information teachers consider relevant for instructional planning (Thurlow & Ysseldyke, 1982), as well as the gap between reported recommendations and their implementation by educational services, still exists (Kanne et al., 2008). The application of a standard battery of instruments (i.e., intelligence tests) by school psychologists is believed to be one of the factors that maintain this gap. These tests were designed to describe strengths and weaknesses and to detect deficiencies in learners, but hardly provide information for educational intervention (Elliott, 2003; Pameijer, 2006; Resing, 2000). Outcomes of certain dynamic tests, on the other hand, in which the nature and amount of assistance and instruction a learner needs to solve cognitive tasks are explored, are presumed to provide such information (Elliott, Grigorenko & Resing, 2010; Haywood & Lidz, 2007).

The purpose of this study was to examine whether teachers consider the information provided by dynamic testing as relevant for instructional planning for individual children, and whether teachers’ opinions are related to their age, teaching experience, and sense of efficacy.

Much research has been dedicated to the effects and applicability of dynamic testing in educational settings (e.g., Beckmann, 2001; Hessels, Berger & Bosson, 2008; Hessels-Schlatter, 2002a; Lidz, 2002; Resing, Tunteler, de Jong & Bosma, 2009; Tzuriel, 2000a). The results of these studies provided insight into children’s potential for learning, their need for instruction and their response to feedback. In particular, the intervention or training phase that is part of the testing procedure provides opportunities to acquire rich information regarding learning processes, the learning strategies children use and the instructional strategies children could profit from (Elliott, Grigorenko & Resing, 2010; Tzuriel, 2000a). Outcomes of dynamic testing could therefore be used to guide classroom recommendations (Bosma & Resing, 2010; Delclos, Burns & Vye, 1993; Haywood & Lidz, 2007).

Some studies have also addressed the applicability of the information in psychological reports. Pelco et al. (2009) showed that teachers had difficulties planning interventions using the information found in psychological reports: about half of the teachers in their study were unable to describe an intervention in a short brainstorm session immediately after reading the reports. Kanne et al. (2008) focused on the applicability of neuropsychological reports on children with autism for writing individual educational plans and concluded that the regular reports were not suitable for this purpose. These authors suggested providing an additional bridging document to help translate the neuropsychological assessment information to educational practice.

Hulburt (1995) investigated preschool teachers’ preferences for reports used in planning instruction. Teachers were asked to rate statements about three types of assessment reports: standard assessment, curriculum-based assessment and dynamic assessment. Hulburt found that information regarding learning processes and teaching strategies (both examples of information provided in dynamic testing reports) was requested most. Curriculum-based information, in particular curriculum skills and progress monitoring, was also considered very useful, as was information regarding the diagnosis, a typical example of standard reports.


Freeman and Miller (2001) also focused on what information would be useful for writing educational plans. They investigated the value of dynamic testing versus curriculum-based assessment and standard assessment among special education coordinators. Ratings of excerpts of the three types of assessment reports revealed that coordinators were not very familiar with dynamic testing information and reports, but rated such information as more valuable and useful than information based on standard testing. Especially the dynamic measures of change and the descriptions of thinking skills were valued. Curriculum-related information, on the other hand, was rated as most familiar and useful by the coordinators.

In a study with elementary school teachers who rated reports and recommendations based on dynamic or standard testing for children actually in their class, the potential value of dynamic testing for guiding educational planning was confirmed (Bosma & Resing, 2010). The majority of the teachers considered recommendations in dynamic testing reports meaningful for their practice, but teachers who were presented with standard reports and recommendations also responded positively. Ratings varied considerably among teachers reading dynamic testing reports. Excerpts of dynamic testing reports resembling those of Freeman and Miller (2001) were also considered to contribute to the elaboration of individual educational plans, but here again the variation in teachers’ responses was high. Teachers’ ratings appeared to be related to experience and age: the value of dynamic testing information (such as strategies used and improvement after prompts) for planning interventions was rated significantly higher by younger and less experienced teachers. Variation in report preference was also found by Hulburt (1995), who suggested that personal learning and teaching styles may have influenced report preferences.

The latter may be related to teachers’ sense of efficacy, i.e., teachers’ judgments about their ability to promote student learning, which has been found to affect teachers’ practices, their behavior in the classroom, their instructional attitudes and their decision making (Wolters & Daugherty, 2007; Woolfolk-Hoy & Spero, 2005). Teachers with a higher sense of efficacy reported investing more effort in the planning and organization of their lessons (Allinder, 1994; Tschannen-Moran & Woolfolk-Hoy, 2001) and showed greater persistence when confronted with challenges (Guskey, 1988).

Teachers with a strong sense of efficacy also tend to be more open to new ideas, are willing to implement new practices to better meet the needs of their students (Guskey, 1988) and demonstrate more positive attitudes toward inclusion (Weisel & Dror, 2006).

Teachers’ sense of efficacy tends to be affected by experience (Wolters & Daugherty, 2007; Woolfolk Hoy & Spero, 2005). Novice teachers have a lower sense of efficacy than more experienced teachers, especially with regard to efficacy for instruction and for classroom management (Wolters & Daugherty, 2005). However, a recent study by Klassen and Chiu (2010) showed that teachers’ sense of efficacy may follow a nonlinear pattern, increasing from early career, peaking around 23 years of experience and decreasing toward late career. The latter may perhaps be seen as corroborating the age effect found by Bosma and Resing (2010) with regard to dynamic testing.

In the present study, we investigated the importance of dynamic testing information for guiding individual educational plans by addressing the following four questions. Firstly, do teachers have a preference for information based on dynamic testing, curriculum-based testing or standard testing? Secondly, which aspects of dynamic testing information do teachers consider most useful for writing individual plans? Thirdly, are teachers’ responses influenced by variables such as experience, age or training? And fourthly, are teachers’ responses related to teachers’ sense of efficacy, i.e., sense of efficacy for instruction, classroom management and student engagement?

Method

Participants

Directors of elementary schools in The Netherlands were randomly selected and contacted by email. They were asked to invite their teachers to participate in the survey; teachers could participate by simply clicking on a link in the email. The survey was filled in by 221 teachers. Because of missing data on key variables, the final sample consisted of 188 teachers.

Most participants (87%) were female and 13% were male, which corresponds with the proportions of male and female teachers in primary education. The mean age of the teachers was 41 years, with a range from 21 to 63 years. The majority of the respondents (96.3%) worked in regular schools; 3.7% worked in schools for children with special needs. Teachers’ experience varied from 0 to 40 years.

Measures

All data were gathered via the internet using a self-report questionnaire. The first section addressed general demographic information, including items regarding age, training and educational degree, as well as the number of years of experience. Teachers’ experience fell into one of five categories: 0-5, 5-10, 10-20, 20-30 or 30-40 years.

Additional items focused on teachers’ school settings, including the type of school they worked in, the number of years they had been working at this school, the location of the school (urban, suburban or countryside), the grade they were currently teaching and the grades they had taught previously, whether they fulfilled roles other than teaching within the school, whether they read specialist journals, whether they were taking professional development courses, the contacts they had with school psychologists about individual children, how often they read diagnostic reports, and their familiarity with dynamic testing and learning potential.

In the second part of the survey teachers completed the 24 items of the Teachers’ Sense of Efficacy Scale (Tschannen-Moran & Woolfolk Hoy, 2001). Participants responded to these items on a 9-point Likert-type scale with anchors at 1 (nothing), 3 (very little), 5 (some degree), 7 (quite a bit) and 9 (a great deal). This instrument was designed to assess three related aspects of teachers’ sense of efficacy and asks teachers to reflect on their overall beliefs (not on a particular class or group of students).

The factor Self-efficacy for instruction consisted of 8 items (α = .91) measuring teachers’ confidence in their ability to use effective instructional strategies to meet the needs of all students. The Self-efficacy for classroom management factor consisted of 8 items (α = .90) measuring teachers’ confidence in managing student conduct and classroom behavior effectively. The factor Self-efficacy for student engagement (8 items; α = .87) measured teachers’ confidence in their ability to engage and motivate students in learning activities.
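The internal consistencies reported above are Cronbach’s alphas. As a minimal sketch of how such a coefficient is computed from an (n respondents × k items) score matrix (the function name and toy data are illustrative, not part of the study):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the scale totals
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Toy check: three perfectly correlated items yield alpha = 1.0
scores = np.column_stack([[1, 2, 3, 4]] * 3)
alpha = cronbach_alpha(scores)
```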

The third part of the survey was based on the Information for Instructional Planning Inquiry (Hulburt, 1995), developed to examine what information preschool teachers consider important for constructing an individual educational plan. Items focused on information obtained by standard assessment, curriculum-based assessment and dynamic assessment. Two original items were divided into separate items and the focus of two items was changed to dynamic assessment. The 11 items use a 4-point scale (‘not needed’, ‘unimportant’, ‘important’ and ‘very important’); an additional box for ‘don’t know’ was available.

The last section of the survey consisted of items from a survey conducted by Freeman and Miller (2001). We selected the five items that were exemplary for dynamic assessment. For each example teachers judged the usefulness of the given information for instructional/educational planning on a 4-point scale with anchors (1) ‘little’, (2) ‘somewhat’, (3) ‘much’ and (4) ‘very much’. Items concerned information on changes after training, a description of deficient thinking skills, the amount of instruction needed, specific instructions children profited from and strategies the child used.

All items were translated into Dutch by the first author with the help of a graduate student. Eight elementary school teachers commented on a pilot version of the questionnaire with regard to their understanding of the information; their comments were incorporated into the final version.

Data analysis

On the items of Hulburt’s questionnaire regarding information for educational planning an exploratory factor analysis was conducted. Frequency analyses, correlation analyses (Kendall’s tau-c for categorical variables) and multivariate analyses were conducted to explore the influence of teacher-background variables. The 24 items of the self-efficacy questionnaire were examined in a confirmatory factor analysis (CFA) using maximum likelihood estimation with EQS 6.1 for Windows (Bentler & Wu, 2005).
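Rank correlations of this kind can be reproduced with standard software. A hedged sketch using SciPy (the ordinal data below are invented for illustration only; the chapter used tau-c because the cross-tabulated variables have unequal numbers of categories):

```python
from scipy import stats

# Invented ordinal data: experience category (1-5) and a 4-point rating
experience = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
rating     = [2, 3, 2, 3, 3, 4, 3, 4, 4, 4]

# scipy.stats.kendalltau computes tau-b by default; variant="c"
# (available in SciPy >= 1.7) gives the tau-c reported in the chapter
tau_c, p = stats.kendalltau(experience, rating, variant="c")
```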

As recommended by Kline (2005), multiple criteria were used to test the model’s quality. Since the χ² is strongly influenced by sample size, the Comparative Fit Index (CFI) and the Root Mean-Square Error of Approximation (RMSEA) are presented. For a reasonably good fit the CFI should exceed .90; RMSEA values under .05 are indicative of a good fit and values as high as .08 indicate reasonable fit (Kline, 2005).
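These cut-offs can be summarized in a small helper. The thresholds follow Kline (2005) as cited above; the function itself is only an illustrative sketch, not part of the study’s analysis code:

```python
def fit_quality(cfi: float, rmsea: float) -> str:
    """Screen CFA fit indices against the rules of thumb cited in the text:
    CFI should exceed .90; RMSEA < .05 indicates good fit, up to .08 reasonable."""
    if cfi < 0.90:
        return "poor"
    if rmsea < 0.05:
        return "good"
    if rmsea <= 0.08:
        return "reasonable"
    return "poor"
```

For example, `fit_quality(0.95, 0.038)` returns `"good"`, while `fit_quality(0.82, 0.07)` returns `"poor"`.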

Results

The number of respondents appeared to be approximately normally distributed over the five levels of experience (skewness = .053; kurtosis = -1.16). Table 7.1 presents relevant information about the participants, categorized according to their years of experience. Teachers had followed different educational training to acquire their teaching certificate: some respondents started teachers’ college after obtaining a junior college degree (10.1%), others started directly after (honors) high school (65.9%). A small number of teachers had obtained an additional special education degree (11.7%).

Teaching experience was, as expected, strongly related to age (rs = .87, p ≤ .001): the older the respondents, the more experienced they were. Teachers with the least experience had a mean age of 27 years, whereas teachers with the most experience had a mean age of 56 years. Further, level of experience showed a moderate correlation with the number of years at the present school (τ = .51, p ≤ .001).

More than half of the teachers had an additional position besides teaching: 23% were also remedial teacher or special service coordinator and 22% were coordinator of other tasks, e.g., responsible for information technology (ICT) within the school. Having an additional position was weakly related to the number of courses taken (τ = .18, p ≤ .05), as well as to the number of psychological reports read (τ = .34, p ≤ .001).


Table 7.1. Relevant participant data

                                          Teachers’ experience (years)
                                    1-5      5-10     10-20    20-30    30-45
                                   n = 38   n = 39   n = 44   n = 41   n = 26
Age
  Mean                             27.63    36.18    41.57    49.98    56.77
  SD                                4.72     7.17     7.01     3.86     2.55
  Range                            21-44    26-53    31-57    43-59    53-63
Education career (%)
  Junior college                    28.9     10.3      4.5      0.0      7.7
  High school                       39.5     35.9     54.5     41.5     50.0
  Honors high school                56.3     35.9     22.7      7.3     15.4
  Special education                  2.6     10.3     13.6     19.5     11.5
  Other                              4.3      7.7      4.5     31.7     15.4
Additional position at school (%)
  Yes                               34.2     64.1     43.2     61.0     53.8
  No                                65.8     35.9     56.8     39.0     46.2
Contact with school psychologist (%)
  0-3 times a year                  50.0     43.6     40.9     36.6     65.4
  >3 times a year                   50.0     56.4     59.1     63.4     34.6
Psychological reports read on average a year (%)
  0-1                               50.0     15.4     25.0     24.4     23.0
  2-3                               26.4     46.1     52.3     39.0     53.8
  >3                                23.7     38.5     22.7     36.6     23.1
Familiar with dynamic testing (%)
  Yes                               23.7     20.5     15.9     17.1     23.1
  No                                76.3     79.5     84.1     82.9     76.9
Reading journals frequently (%)
  Yes                               55.3     64.1     75.0     73.2     80.8
  No                                44.7     35.9     25.0     26.8     19.2
Courses taken (%)
  None                              21.1      5.1     11.4      9.8     11.5
  1                                 42.1     56.4     50.0     51.2     46.2
  2                                 26.3     28.2     27.3     31.7     26.9
  >3                                10.5     10.3     11.4      7.3     15.4


Reading psychological reports was moderately related to contacts with school psychologists (τ = .40, p ≤ .001) and weakly related to the number of courses taken (τ = .21, p ≤ .001). Taking additional courses was further related to contacts with school psychologists (τ = .14, p ≤ .001) and to familiarity with dynamic testing (τ = .20, p ≤ .01).

A fifth of the teachers reported being familiar with dynamic testing; some of them explained that they had learned about dynamic testing through participation in a research project or through reading about it.

To answer our first question, regarding the information teachers preferred for planning interventions, we inspected the responses to the 11 survey items regarding planning interventions; one item was disregarded because of a likely misconception of its content. Table 7.2 shows the mean responses to each of the items. Exploratory factor analysis indicated that three factors could be distinguished, together accounting for 65% of the variance. The factors related to dynamic information about a child’s learning process (α = .84; M = 3.57; SD = .45), standard information about diagnosis and achievements (α = .67; M = 3.54; SD = .51), and curriculum-based information about peer achievement (α = .69; M = 2.77; SD = .75).

The data in Table 7.2 are organized on the basis of these factors. The mean scores on the first two factors were about equal (t(187) = .72, n.s.), indicating that teachers apparently value dynamic information regarding learning processes as highly as traditional standard information (such as a child’s diagnosis) for writing educational plans. Both scores were above the theoretical average of 3. Information regarding peer achievement, on the other hand, appeared to be of lesser value for educational planning according to the teachers, compared to both the dynamic information (t(187) = 15.16, p ≤ .001) and the standard information (t(187) = 13.56, p ≤ .001).
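Because every teacher contributes a score on each factor, these are paired comparisons. A sketch with simulated data (the scores below are generated, not the study’s raw data; only the means and SDs echo those reported for the factors):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 188  # number of teachers in the final sample

# Simulated factor scores per teacher on the 1-4 rating scale
process = rng.normal(3.57, 0.45, n)  # 'child's learning process'
peers = rng.normal(2.77, 0.75, n)    # 'achievement compared to peers'

# Paired t-test: both scores come from the same respondents
t, p = stats.ttest_rel(process, peers)
```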


Table 7.2. Descriptive statistics and factor-analytical results for items regarding information teachers considered important in formulating an individual educational plan

Factors/items (n = 188)                                                  Mean    SD
Child’s learning process (α = .84)                                       3.57    .45
  Description of child’s learning (learning process, ability to
  acquire problem-solving strategies)                                    3.70    .46
  Information about entry points: how the child profits from
  instruction                                                            3.64    .58
  Information about the type of learning strategies the child
  needs to learn                                                         3.65    .55
  Information about the child’s ability to learn under favorable
  conditions                                                             3.37    .68
  Information about the child’s ability to learn new skills at school    3.52    .60
Child’s diagnosis and achievement (α = .67)                              3.54    .51
  The diagnosis of the child’s disability                                3.65    .56
  Besides diagnosis, a description of the child’s limitations            3.57    .63
  Information regarding the child’s progress based on data from a
  monitoring and evaluation system                                       3.40    .75
Achievement compared to peers (α = .69)                                  2.77    .75
  Description of the child’s achievement compared to peers               2.83    .84
  Description of the child’s achievement compared to peers with
  comparable problems                                                    2.73    .87

To answer our second research question, which aspects of dynamic testing information would be considered useful for educational planning, we examined the responses to the five examples of dynamic testing information. The means and standard deviations are presented in Table 7.3. The above-average mean for each example indicates that inclusion of such information was valued considerably for individual educational planning.

Inspection of the distribution of the responses to each example revealed which aspects of dynamic testing information would, according to teachers, improve the writing of educational plans. The details of the distribution are presented in Table 7.3. The first item, ‘information about progress a child makes in response to instruction’, was considered good to very good for formulating educational plans by a small majority of teachers (55%). Inclusion of the second item, ‘information regarding the deficits of a child’s thinking skills’, was considered by 72% of the teachers a good to very good improvement of the report. The third item, ‘the amount of instruction a child needs’, was considered useful to very useful additional information by a large majority (88%) of teachers. Inclusion of the fourth item, ‘information regarding teaching strategies a child profits from’, was valued by 84% of the teachers as useful to very useful for formulating educational plans. The last item, ‘information about a child’s problem solving strategies’, was considered useful to very useful for writing educational plans by 70% of the teachers. The responses particularly showed the usefulness of information regarding a child’s thinking skills, needed instruction and teaching strategies for writing educational plans.

Table 7.3. Dynamic testing examples: means, standard deviations and distribution of ratings regarding usefulness for educational planning

                                                 Useful for educational planning (%)
                               M      SD      Little useful      Useful-very useful
Progress after instruction    2.56    .88         44.7                 54.8
Deficits in thinking skills   2.90    .77         27.6                 72.3
Amount of instruction         3.14    .61         11.7                 88.3
Teaching strategies           3.17    .69         14.9                 84.5
Problem solving strategies    2.81    .81         30.4                 69.5

To answer the first part of our third question, whether teachers’ responses were influenced by demographic variables, we first present teachers’ preferences for information in relation to teachers’ background variables. Since age and teaching experience were very strongly correlated (rs = .87, p ≤ .001), we decided to focus our analyses only on teachers’ years of experience.


Results of a one-way analysis of variance with the three information-type factors (child’s learning process, diagnosis, and peer achievement) as dependent variables and teaching experience as independent factor demonstrated a significant effect of teaching experience on the first factor, ‘child’s learning processes’, F(4, 187) = 2.88, p ≤ .05, partial η² = .06. We found no significant difference on the factor ‘diagnosis and achievement’ (F(4, 187) = 1.24, p = .29) nor on the factor ‘peer achievement’ (F(4, 187) = 1.62, p = .17).
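A one-way ANOVA of this kind can be sketched with SciPy. Group sizes follow Table 7.1, but the scores themselves are simulated for illustration and do not reproduce the reported F values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated 'child's learning process' factor scores for the five
# experience groups; sizes from Table 7.1, means illustrative only
specs = [(3.40, 38), (3.50, 39), (3.55, 44), (3.69, 41), (3.60, 26)]
groups = [rng.normal(mean, 0.5, size) for mean, size in specs]

# One-way ANOVA: does the mean rating differ across experience groups?
F, p = stats.f_oneway(*groups)
```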

Results of paired comparisons regarding teaching experience revealed that teachers with 1 to 5 years of experience gave significantly lower ratings (p ≤ .05) on the factor ‘child’s learning processes’ (M = 3.40; SD = .66) than teachers with 20-30 years of experience (M = 3.69; SD = .32). These results show that less experienced teachers considered information about learning processes less important for formulating individual educational plans than their more experienced colleagues did. Further, we found weak correlations between the factor ‘child’s learning processes’ and whether or not the teacher held additional positions in the school (τ = .27, p ≤ .001), and with teachers’ experience in reading psychological reports (τ = .12, p ≤ .05). Job satisfaction also correlated weakly with the factor ‘child’s learning processes’ (τ = .16, p ≤ .01) and the factor ‘peer achievement’ (τ = .16, p ≤ .01). The level of teacher training did not correlate with any of the three factors.

To answer the second part of our third question, we examined the relationships between the responses to the dynamic testing examples and background variables. We found significant (p ≤ .05) correlations between teaching experience and the dynamic assessment examples ‘amount of instruction’ (τ = .15) and ‘teaching strategies’ (τ = .13). Teachers’ education was only significantly related to the item ‘teaching strategies’ (τ = .12, p ≤ .05). We then examined whether the mean responses increased with level of experience. Since the data were not normally distributed, Jonckheere’s tests were applied. These revealed a significant (one-tailed) trend for four of the five dynamic testing items: as experience increased, the ratings were higher: ‘thinking skills’, J = 7691.0, z = 1.72, p ≤ .05; ‘amount of instruction’, J = 8033.5, z = 2.79, p ≤ .01; ‘teaching strategies’, J = 7776.0, z = 2.19, p ≤ .05; and ‘problem solving strategies’, J = 7658.5, z = 1.63, p ≤ .05. For the item ‘progress after instruction’ the result fell just short of significance: J = 7540.0, z = 1.50, p ≤ .06. In Figure 7.1 the teacher ratings on each of the dynamic testing items are shown by experience level.
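SciPy has no built-in Jonckheere-Terpstra test, so a minimal sketch of the statistic and its normal approximation follows (ties count 0.5 and the variance omits the tie correction, so z will differ slightly from the reported values when ties are heavy, as they are with 4-point ratings):

```python
import numpy as np

def jonckheere(groups):
    """Jonckheere-Terpstra test for an ordered alternative.
    groups: sequence of 1-D arrays ordered by hypothesized increasing level.
    Returns (J, z); J counts pairs across ordered groups with the later
    group's value higher (ties count 0.5)."""
    J = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            a = np.asarray(groups[i], float)[:, None]
            b = np.asarray(groups[j], float)[None, :]
            J += (a < b).sum() + 0.5 * (a == b).sum()
    n = np.array([len(g) for g in groups])
    N = n.sum()
    # Mean and variance of J under the null (no tie correction)
    mean = (N**2 - (n**2).sum()) / 4.0
    var = (N**2 * (2 * N + 3) - (n**2 * (2 * n + 3)).sum()) / 72.0
    return J, (J - mean) / np.sqrt(var)

# Toy check with a clearly increasing trend across three ordered groups
J, z = jonckheere([[1, 2], [3, 4], [5, 6]])
```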

[Figure 7.1 appears here: a line chart of mean teacher ratings (2 to 4) on the five dynamic testing items (progress, thinking skills, needed instruction, teaching strategies, problem solving strategies) across the five experience levels (1-5, 5-10, 10-20, 20-30, 30-40 years).]

Figure 7.1. Ratings of dynamic testing items by level of experience

Several other teacher-background variables showed significant correlations (p ≤ .05) with the responses to the dynamic testing examples. The item ‘teaching strategies’ was related to holding additional positions (τ = .15) and to following additional courses (τ = .13). The item ‘problem solving strategies’ was negatively related to familiarity with dynamic testing (τ = -.13), but positively to teachers’ experience in reading psychological reports (τ = .17) and to contacts with a school psychologist (τ = .16). The item ‘thinking skills’ was also related to contacts with the school psychologist (τ = .13). Job satisfaction showed a low correlation with all five items (τ varied from .10 to .17).


To examine the role of teachers’ self-efficacy, our fourth research question, we first analyzed the responses to the 24 self-efficacy items. A confirmatory factor analysis was performed to examine the fit of the data to the original three-factor structure identified by Tschannen-Moran and Woolfolk Hoy (2001). Results showed a poor fit: χ²(249) = 551.23, p < .001; CFI = .82, GFI = .81 and RMSEA = .07. We adapted the model by eliminating three items which loaded on two or more factors, following the guidelines by Byrnes (2004). We also moved one item from factor 3 to factor 2, as its content in the Dutch translation was more in line with the other items in this factor, and allowed three pairs of error terms to correlate because of assumed overlap in item content. This adapted model (robust estimation) showed a more reasonable fit: S-B χ²(183) = 237.75, p = .004; CFI = .95 and RMSEA = .038. The scales represented ‘self-efficacy for instruction’ (8 items; α = .77; M = 6.87; SD = .81), ‘self-efficacy for management’ (6 items; α = .85; M = 7.09; SD = .98) and ‘self-efficacy for student engagement’ (7 items; α = .75; M = 6.91; SD = .85). Standardized factor loadings were generally large and statistically significant; they are presented in Table 7.4.

The three factors showed moderate correlations amongst each other, varying from r = .50 to r = .66 (all p ≤ .001). To investigate whether the self-efficacy scales differed across teachers’ experience levels, we conducted a multivariate analysis of variance with the three self-efficacy scales as dependent variables and teacher experience level (1-5) as factor. Results showed no significant differences between levels of experience (Wilks’ λ = .93, F(12, 439.49) = 1.03, p = .42).

Correlation analyses revealed low but significant correlations between several teacher-background variables and teachers’ self-efficacy. The presence of a position besides teaching was positively related to the self-efficacy factors instruction and student engagement (rs = .23 and rs = .18, p ≤ .05, respectively). Frequent contact with a school psychologist was related to self-efficacy for instruction as well as for management (rs = .15 and rs = .19, p ≤ .05, respectively). The number of additional courses taken correlated significantly with self-efficacy for instruction (rs = .17, p ≤ .05) and management (rs = .24, p ≤ .01). Familiarity with dynamic testing was negatively correlated with the self-efficacy factors for instruction (rs = -.16, p ≤ .05) and student engagement (rs = -.15, p ≤ .05).


Job satisfaction correlated significantly with each factor: rs = .33, rs = .34 and rs = .23 (p ≤ .001) for the self-efficacy for instruction, management and student engagement factors, respectively. Teachers’ education did not correlate with any of the self-efficacy factors.

No significant relations were found between the responses on the self-efficacy scales and the rated preference of information for educational planning, nor with the ratings of the five specific dynamic testing items.

Table 7.4. Factor loadings for teaching efficacy items based on CFA

                                                   Factor 1       Factor 2       Factor 3
                                                   Self-efficacy  Self-efficacy  Self-efficacy for
Item                                               for instruction for management student engagement
23 Implement alternative strategies                   .69
24 Provide challenges for talented students           .61
17 Adjust lessons to individual students              .59
20 Alternative explanations for confused students     .56
11 Ask good questions                                 .53
18 Use variety of assessment                          .52
10 Assess student comprehension                       .51
 7 Respond to difficult questions                     .47
19 Keep students from ruining lesson                                 .78
21 Respond to defiant students                                       .78
15 Calm a noisy student                                              .74
 3 Control disruptive behavior                                       .71
13 Have classroom rules followed                                     .59
 1a Get through to difficult students                                .57
14 Improve own understanding of failing student                                     .73
 9 Help students value their learning                                               .63
 6 Improve students’ belief in achievement                                          .56
 4 Motivate students for school                                                     .54
 2 Help students think critically                                                   .53
12 Foster student creativity                                                        .53
22 Assist families in helping children                                              .47

a Item originally developed for Factor 3.


Discussion

In the current study we explored which information teachers valued as useful for individual educational planning. Teachers in the current sample generally rated information regarding the child's learning processes, diagnosis, and achievement as highly valuable for individual educational planning, whereas they rated information regarding achievement compared to peers as less important.

It is interesting to note that teachers valued information regarding the child’s learning process and learning ability as very useful for individual educational planning.

Information regarding the strategies a child applies and profits from, and his or her potential for learning, is generally not provided in standard psychological reports. Such information may be obtained by using dynamic testing procedures. Specific examples of dynamic testing information were also included in the survey, and the majority of the teachers responded that inclusion of this information would make reports useful to very useful for educational planning. In particular, the items regarding the amount of instruction a child needs and the teaching strategies a child profits from were rated by more than eighty percent of teachers as useful to very useful. Similar positive findings regarding the usefulness of dynamic assessment information were obtained for preschool teachers (Hulburt, 1995) and special education coordinators (Freeman & Miller, 2001), and are generally expected by researchers in the field of dynamic assessment (e.g., Haywood & Lidz, 2007). Since teachers judge such information as highly useful and important, a more frequent use of dynamic tests and assessment procedures appears justified.

It must be noted that teachers also attached importance to information generally obtained by standard psychological (intellectual) assessment or standard school-achievement tests. Although advocates of dynamic assessment have criticized the usefulness of standard reports for planning educational interventions, teachers valued the standard information regarding diagnosis, or a child's limitations, as highly as the dynamic information. This may be due to the fact that teachers are very familiar with this type of information, as dynamic testing procedures are hardly used in the Netherlands, and may also explain the absence of clear preferences for either standard or dynamic reports in various studies (Bosma & Resing, 2008; 2010; Delclos et al., 1993).

It appears that, from a teacher's point of view, a combination of both types of information would provide optimal starting points for individual educational plans.

Teaching experience was related to the evaluation of the information useful for educational planning, in particular the factor 'child's learning process'. Teachers early in their career (less than 5 years of experience) gave significantly lower ratings than teachers in their mid career, but we did not find significant differences with teachers at the end of their career. Regarding the responses to the dynamic testing examples, a trend related to teaching experience was found for four of the five items: less experienced teachers appeared to respond less positively than more experienced teachers, although their ratings were at least average. Teachers in their mid and later career apparently noted the potential use of the novel dynamic testing information. Although in a study regarding the usefulness of dynamic testing information in reports and recommendations (Bosma & Resing, 2010) younger and less experienced teachers (less than 12 years of experience) tended to respond more positively, those results may have been influenced by the small sample size, the rather broadly defined categories of age and experience, and the type of study. In the current findings, the relation with teaching experience may be partially explained by the fact that relatively inexperienced teachers may have had limited experience with developing individual educational plans, as well as with students with special needs (e.g., Freeman & Miller, 2001). However, the effect is relatively small, due to the small variation in the ratings and possible ceiling effects regarding the information preference items.

Future studies should examine this relation in more detail before causal conclusions can be drawn.

Confirmatory factor analysis of the Teachers' Sense of Efficacy Scale (Tschannen-Moran & Woolfolk-Hoy, 2001) showed that a three-factor structure was adequate, although it needed some small adaptations compared to the original structure. Reliabilities of the scales were very acceptable and average factor scores corresponded to those found in earlier studies (Klassen et al., 2009; Tschannen-Moran & Woolfolk-Hoy, 2001; Wolters & Daugherty, 2007). We therefore concluded that this scale can be used for measuring sense of efficacy in the Dutch teacher population. Although Klassen and Chiu (2010) and Wolters and Daugherty (2007) found that teachers' sense of self-efficacy was related to teaching experience, we did not find an experience effect for sense of self-efficacy. This was not expected and might be due to the relatively low variation in the scores. Teachers' sense of efficacy has been found to relate to their instructional practices and decision making (Wolters & Daugherty, 2007; Woolfolk-Hoy & Spero, 2005), as well as to investments in planning and organizing their lessons (Allinder, 1994).

However, the responses regarding the type of information teachers valued as useful for educational planning and the responses regarding the applicability of the examples of dynamic testing information did not relate to any of the sense of efficacy factors. Our survey questions focused on the information teachers found important for educational planning, and probably only indirectly on their own daily teaching practice or their work with children requiring extra help or adaptations. The responses may therefore not have been associated with feelings of competence or confidence in their teaching practice.

Two variables that did influence responses were having an additional position besides teaching and job satisfaction. Both variables were weakly, though positively, related to teachers' sense of efficacy for instruction, management, and student engagement, as well as to the evaluation of information regarding the child's learning processes and to the rating of the dynamic testing example 'teaching strategies a child profits from'. Teachers with an additional position tended to be more positive regarding the usefulness of novel information for educational planning. This might show the importance of the actual experiences teachers have during their additional work outside the classroom, through which they obtain a broader framework regarding assessment, special needs, and interventions. The positive influence of additional tasks and/or higher job satisfaction was expected, in particular regarding teachers' self-efficacy (e.g., Klassen et al., 2009).


Of course, the results of the current study should be interpreted in the light of its limitations. First, teachers interested in the topic of assessment and recommendations might have been overrepresented in the sample. Further, the current study examined teachers' preferences and opinions regarding individual educational planning; no conclusions can be drawn regarding the information teachers actually use and need for individual educational planning.

To summarize, the present study investigated the importance of assessment information for teachers in formulating individual educational plans. The study expanded the research on dynamic assessment by exploring the role of teachers' background and teachers' sense of efficacy. Teachers valued both the commonly provided standard information and the novel dynamic testing information as highly useful for educational planning. When we interpret the results from a needs-based assessment (Pameijer, 2006) or assessment-for-intervention (Brown-Chidsey, 2005a) approach, they imply that dynamic tests merit a substantial place in psycho-educational assessment procedures. Practical issues regarding dynamic tests, for example with regard to reference norms, should be of interest to school psychologists, so that they can add dynamic tests to their assessment procedures and provide teachers with good starting points for individual educational planning.
