

School Effectiveness and School Improvement

An International Journal of Research, Policy and Practice

ISSN: 0924-3453 (Print) 1744-5124 (Online) Journal homepage: http://www.tandfonline.com/loi/nses20

Factors promoting and hindering data-based decision making in schools

Kim Schildkamp, Cindy Poortman, Hans Luyten & Johanna Ebbeler

To cite this article: Kim Schildkamp, Cindy Poortman, Hans Luyten & Johanna Ebbeler (2017) Factors promoting and hindering data-based decision making in schools, School Effectiveness and School Improvement, 28:2, 242-258, DOI: 10.1080/09243453.2016.1256901

To link to this article: http://dx.doi.org/10.1080/09243453.2016.1256901

© 2016 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group.

Published online: 16 Nov 2016.


Factors promoting and hindering data-based decision making in schools

Kim Schildkamp, Cindy Poortman, Hans Luyten and Johanna Ebbeler

Faculty of Behavioural, Management and Social Sciences, University of Twente, Enschede, The Netherlands

ABSTRACT

Although data-based decision making can lead to improved student achievement, data are often not used effectively in schools. This paper therefore focuses on conditions for effective data use. We studied the extent to which school organizational characteristics, data characteristics, user characteristics, and collaboration influenced data use for (1) accountability, (2) school development, and (3) instruction. The results of our hierarchical linear modeling (HLM) analysis from this large-scale quantitative study (N = 1073) show that, on average, teachers appear to score relatively high on data use for accountability and school development. Regarding instruction, however, several data sources are used only on a yearly basis. Among the factors investigated, school organizational characteristics and collaboration have the greatest influence on teachers' data use in schools.

ARTICLE HISTORY: Received 6 November 2015; Accepted 27 October 2016

KEYWORDS: Data-based decision making; school improvement; accountability; school organization

Introduction

In countries around the world, more and more attention is being paid to data-based decision making in schools (or data use, for short). The primary reason for this is that we expect teachers to make high-quality decisions, which requires their decisions to be based not only on experience and intuition but also on data. Several studies have shown that data use can lead to school improvement in terms of increased student achievement (Campbell & Levin, 2009; Carlson, Borman, & Robinson, 2011; Lai, McNaughton, Timperley, & Hsiao, 2009; McNaughton, Lai, & Hsiao, 2012). Teachers can use data, such as assessment data and classroom observation data, to determine the learning needs of their students, and to adapt their instruction according to these learning needs. This can lead to improved student learning.

In this study, data is defined broadly as information that is systematically collected and organized to represent some aspect of schools (Schildkamp & Lai, 2013). Data do not only include assessment data and other forms of student achievement data, but also any other form of structurally collected qualitative or quantitative data on the functioning of the school, including input data (e.g., student background data), process data (e.g., classroom observations and teacher interviews), context data (e.g., information about the building), and output data (e.g., student achievement data, student satisfaction questionnaire data). These data can be used to inform decision making in schools, which is called data-based decision making (Bernhardt, 2013; Boudett, City, & Murnane, 2005; Ikemoto & Marsh, 2007; Schildkamp & Lai, 2013; Wayman, Jimerson, & Cho, 2012).

There is increasing policy pressure on schools to use data. Schools are expected to use data to improve their quality, and governments are increasingly holding schools accountable for this.

CONTACT: Kim Schildkamp, k.schildkamp@utwente.nl


This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License (http:// creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited, and is not altered, transformed, or built upon in any way.


Policy pressure, for example, from policies such as "No Child Left Behind" in the US, can lead to schools using data only for accountability purposes (Wayman, Spikes, & Volonnino, 2013). However, in order for data-based decision making to lead to school improvement in terms of increased student achievement, it is crucial that data are also used for school development and instructional purposes. Therefore, we need to study the extent to which school staff use data for accountability, school development, and instructional purposes.

In order to be able to support schools in the use of data and to provide them with effective professional development, we also need to gain more knowledge on the different factors that can promote or hinder effective data use. This has been studied previously (see, e.g., Datnow, Park, & Kennedy-Lewis, 2013), but mostly through small-scale qualitative studies in the US. There is a lack of large-scale quantitative studies. Although in-depth qualitative studies can provide us with a lot of rich information on how certain factors influence the use of data, they do not provide us with information on the size of their impact, or about which factors matter most. Furthermore, studies looking into the factors promoting or hindering data use have not distinguished between the use of data for different purposes. Moreover, knowledge about data use in schools in European countries is still relatively scarce. In this relatively large-scale empirical study, therefore, we aim to provide more insight into data use for school development and instruction, as well as accountability. We also studied which factors influence the use of data for these three different purposes. The main research questions of this study are:

(1) To what extent do teachers perceive that data within their school are used for accountability and school development purposes?

(2) To what extent do teachers indicate they use data for instructional purposes?

(3) What factors influence the use of data in schools?

Theoretical framework

Data use for accountability, school development, and instruction

Data-based decisions can be made with respect to school development purposes, instructional purposes, and accountability purposes (Breiter & Light, 2006; Coburn & Talbert, 2006; Diamond & Spillane, 2004; Schildkamp & Kuiper, 2010; Schildkamp, Lai, & Earl, 2013; Wayman & Stringfield, 2006; Wohlstetter, Datnow, & Park, 2008; Young, 2006). Teachers can use data for accountability purposes, for example, for reporting to parents and the inspectorate of education. They can use data such as assessment results and results of internal evaluations in external reports, such as reports for the inspectorate, to show the inspectorate how they are doing (Coburn & Talbert, 2006; Diamond & Spillane, 2004; Schildkamp & Kuiper, 2010; Schildkamp et al., 2013; Wohlstetter et al., 2008; Young, 2006). Although accountability in education is meant to contribute to school improvement, tensions and conflicts are likely to arise between these purposes (Hargreaves & Brown, 2013). For accountability policies to be effective, the data need to be used for determining whether schools are meeting the standards, and for changing practices and monitoring their effectiveness (Ingram, Louis, & Schroeder, 2004). In that case, data are also being used for school development and for instruction.

Data use for school development refers to schools using data to improve the school: For example, student satisfaction questionnaires and test results can be used to evaluate education, and based on these data school leaders and teachers can determine the extent to which the school is achieving its goals. Furthermore, when schools use data for school development, the allocation of teaching time is based on students' identified learning needs. Yearly goals for school improvement are based on student learning results (e.g., from tests and exams, whether they pass to the next year in the program), and other indicators, such as the quality of teaching (as measured by the Dutch school inspection and by schools themselves, e.g., by means of classroom observations), and


these are used to identify gaps in the curriculum, which leads to decisions with regard to the type of professional development that is needed. In these schools, data use is a tool for determining effective teaching methods (Breiter & Light, 2006; Coburn & Talbert, 2006; Schildkamp & Kuiper, 2010; Schildkamp et al., 2013; Wayman & Stringfield, 2006; Wohlstetter et al., 2008; Young, 2006).

Finally, data can be used for instruction. The quality of teachers' instruction is an important influence on student achievement (Hattie, 2009), and data use for instruction can lead to improved student achievement (Campbell & Levin, 2009; Carlson et al., 2011; Lai et al., 2009; McNaughton et al., 2012). Teachers can use data, such as homework, student interviews, and classroom observations by colleagues/school leaders in many different ways to improve their instruction; they can use data to: set learning goals for students, determine which topics and skills students do and do not grasp, determine students' progress, tailor instruction to individual student needs, set the pace of lessons, give students feedback on their learning process, form smaller groups of students for targeted instruction, identify instructional content to use in class, study why students make certain mistakes, and adapt instruction based on the needs of the gifted and struggling students (Breiter & Light, 2006; Coburn & Talbert, 2006; Schildkamp & Kuiper, 2010; Schildkamp et al., 2013; Wayman & Stringfield, 2006; Wohlstetter et al., 2008; Young, 2006).

Factors influencing the use of data

Different studies of data use (e.g., Coburn & Turner, 2011; Datnow et al., 2013; Schildkamp & Kuiper, 2010; Schildkamp & Lai, 2013) have shown that data use is influenced by a number of organizational characteristics, user characteristics, and data characteristics (see Figure 1 for a synthesis). First, several school organizational characteristics can influence the use of data in schools. Data use in schools can be enhanced if there is a shared vision, which includes a common understanding about the nature of good teaching, student learning, and effective ways to evaluate student learning. There should be norms for data use, meaning that data use should be a priority in the school and that the school needs a structured method for analysis and interpretation of data on which to base actions (Datnow, Park, & Wohlstetter, 2007; Earl & Katz, 2006; Halverson, 2010; Kerr, Marsh, Ikemoto, Darilek, & Barney, 2006; Sharkey & Murnane, 2006; Wayman, Cho, & Johnston, 2007; Wayman & Stringfield, 2006; Wohlstetter et al., 2008; Young, 2006).

Moreover, studies show that it is important that school leaders encourage data use as a tool to support effective teaching, that school leaders demonstrate effective data use (e.g., act as role models), create opportunities for teachers to use data (e.g., provide teachers with time), openly discuss data with their teachers, and focus on developing teachers' data use skills (Knapp, Copland, & Swinnerton, 2007; Knapp, Swinnerton, Copland, & Monpas-Huber, 2006; Wohlstetter et al., 2008; Young, 2006).


Furthermore, data use requires support. Teachers need support in the effective use of data to improve their practice, which means that a data expert should be available to answer questions teachers have about using data, and support should be provided in the form of time set aside for data use (Marsh, Sloan McCombs, & Martorell, 2010; Marsh, 2012).

Effective data use also requires collaboration. Teachers should be able to share and discuss their students' results and their own functioning with students, parents, and teachers. Teachers should collaborate on how to use data to improve their school and their students' learning (Copland, 2003; Datnow et al., 2013; Horn & Little, 2010; Means, Padilla, & Gallagher, 2010; Nelson & Slavit, 2007; Park & Datnow, 2009; Spillane, 2012; Wohlstetter et al., 2008).

Studies have also shown that data use depends on certain user characteristics of teachers. In order to use data, teachers need to have the knowledge and skills needed to analyze and interpret different forms of data, they need to understand the quality criteria for data use and data use concepts (e.g., reliability of data, correlation), and they need to be able to diagnose student learning needs and adjust instruction accordingly (e.g., data literacy) (Copland, 2003; Earl & Katz, 2006; Kerr et al., 2006; Little, 2012; Mandinach, 2012; Marsh, Pane, & Hamilton, 2006; Means et al., 2010; Nelson & Slavit, 2007; Park & Datnow, 2009; Sharkey & Murnane, 2006; Supovitz & Klein, 2003; Wohlstetter et al., 2008; Young, 2006).

Furthermore, teachers' dispositions towards data use can influence their use of data. Teachers who believe in the use of data and who believe that it is important to use data in determining individual student needs, that data are important for improving instruction, and that students can benefit when teachers use data, are more likely to actually incorporate data in their daily decision making (Mingchu, 2008).

Finally, our literature review identified several data characteristics that might influence the use of data. If teachers are to use data to improve their instruction, they need to have access to student data relevant to student learning, such as intake information, absence data, and learning results, in an information system; they should be able to find the data they need, and also have access to it. Furthermore, data should be available in a timely manner (Breiter & Light, 2006; Halverson, 2010; Wayman & Stringfield, 2006).

Teachers also need to perceive the available data as usable and of high quality. Teachers need appropriate data to help them plan their lessons, and help them determine the progress and growth of their students. The data that teachers have available should be up to date and accurate (Breiter & Light, 2006; Chen, Heritage, & Lee, 2005; Datnow et al., 2007; Earl & Fullan, 2003; Halverson, 2010; Kerr et al., 2006; Schildkamp & Kuiper, 2010; Sharkey & Murnane, 2006; Wayman et al., 2007; Wayman & Stringfield, 2006; Wohlstetter et al., 2008).

Methodology

To study the use of data and factors promoting or hindering it in schools, a quantitative methodology was employed. We administered a survey in a large sample of Dutch secondary schools.

Context: Dutch secondary education

This study took place in Dutch secondary education. Dutch schools have a great deal of autonomy. They are free to choose the religious, ideological, and pedagogical principles on which their education is based and in organizing their teaching activities. Dutch schools also have considerable freedom regarding the subject matter taught, textbooks, assessments, and the instructional strategies used (Ministerie van Onderwijs, Cultuur & Wetenschappen [Ministry of Education, Culture & Science], 2000). The Netherlands does have an inspectorate that holds schools accountable for the education they provide.

At the end of secondary education, all students take a national assessment. This is the only standardized assessment that is used by all secondary schools in The Netherlands. This final exam


consists of an internal school-based assessment, and an external national assessment. National examinations are school independent and the same for every Dutch student within a track. School-internal examinations are developed by the schools themselves and differ across schools. Both parts of the final examination, the external and internal parts, are considered to be equally relevant: For each subject, the average school-internal examination grade makes up 50% of the final examination grade, while the average external examination grade makes up the other 50%. To obtain a high school diploma, an examinee must have scored passing marks in a specified number of subjects, such as Dutch, English, and mathematics (Schildkamp, Rekers-Mombarg, & Harms, 2012, p. 230). Other important data sources available to Dutch schools include: school inspection reports; school self-evaluation data; data on intake, transfer, and school leavers; student work; curriculum assessments; and student and parent questionnaire and interview data.

Respondents

In The Netherlands, there are 659 secondary education schools with a total of 1,339 locations (according to the Ministry of Education website: https://www.onderwijsincijfers.nl/kengetallen/voortgezet-onderwijs) (e.g., a school might have a different location for upper and lower secondary education, or for different tracks). The survey was sent to a convenience sample of 10% of all the schools in the country: 66 schools and 140 school locations. A total of 27 schools (40.9%) and 69 school locations (51.4%) participated in the survey; 1,073 teachers completed the survey. Languages was the main subject taught for most teachers (27%), followed by science (23%) and social studies (12%). Culture, physical education, economics, and other subjects were each represented by less than 10% of our sample. More teachers (55%) were working in the higher grades of secondary education than in the lower grades (45%).

Using a convenience sample has a bias risk, as the teachers who choose to participate in this study may not fully represent the population from which the sample has been drawn (Cochran, 1997). However, we recruited the schools by using different networks from several educational researchers working at our university, who study different aspects of education (also non-data-use-related topics, such as ICT in education). Moreover, the teachers in our sample are comparable with teachers from an average Dutch school with regard to gender of the teachers, number of years of experience, and average number of students (see Table 1; sources: https://www.trendsinbeeldocw.nl/actueel/nieuws/2014/06/25/nederlandse-vo-leraar-tevreden-over-beroep and https://www.onderwijsincijfers.nl/kengetallen).

Survey

The survey was developed based on the theoretical framework described in this paper and on existing valid and reliable instruments (Allensworth, Correa, & Ponisciak, 2008; Chahine & Childs, 2009; Comenius Project Using Data for Improving School and Student Performance, 2011; Geijsel, Sleegers, Stoel, & Krüger, 2009; Kelly, Downey, & Rietdijk, 2010; Schildkamp, 2007; Schildkamp & Kuiper, 2010; Schildkamp & Teddlie, 2008; Vanhoof, Verhaeghe, Verhaeghe, Valcke, & Van Petegem, 2011; Verhaeghe, Vanhoof, Valcke, & Van Petegem, 2010; Wayman, Snodgrass Rangel, Jimerson, & Cho, 2010). The face validity of the survey was established through peer review by four teachers

Table 1. Comparison of the sample and the population on gender, experience, and number of students.

                               Sample     Population
Gender: Male/female            55%/45%    45%/55%
Average years of experience    17.8       16


and five experts (researchers in the field). Each item was checked to determine whether it adequately reflected the construct in relation to the Dutch education context. In addition, the survey was piloted in three schools. Based on the review and pilot, minor adjustments were made, mostly in terms of formatting the items more clearly and more specifically.

The survey consists of questions with regard to data use for accountability purposes, data use for school development purposes, data use for instructional purposes, perception of school organizational characteristics, perception of user characteristics, and perception of data characteristics. In the survey, we used a broad definition of data (see Introduction), thus including different data sources, such as achievement results, classroom observations, and surveys. Respondents could express their agreement with the items on a 4-point scale: strongly disagree (1), disagree (2), agree (3), and strongly agree (4). Because of the early stage of data use in The Netherlands, we also included the alternative "don't know". For the questions with regard to data use for school development and data use for accountability, we asked teachers to what extent data in their school were used for these purposes. For the questions with regard to "data use for instructional purposes", for validity reasons, we used different response categories. Respondents were asked to indicate how often they used data for specific instructional purposes on a scale from 1 to 6: never, yearly, a couple of times per year, monthly, weekly, and a couple of times per week. Because of the early stage of data use in The Netherlands, we also included the possibility "not applicable" here.

Confirmatory factor analyses and reliability analyses revealed a 7-factor structure nearly consistent with the theoretical framework: school organizational characteristics (vision and norms, leadership, and support); data characteristics (accessibility of timely data, usability, and quality); user characteristics (knowledge and skills, dispositions to use data); and collaboration. Although "collaboration" was initially perceived as a school organizational factor, the related scale did not load strongly enough on either this component or the user characteristics component. However, collaboration is probably shaped by both school organization and individual characteristics of teachers and school leaders and can therefore be considered as a separate factor. Furthermore, 10 items were deleted from the original survey, because these items loaded below 0.5. Two other items from two scales also loaded slightly below 0.5, but these items were kept in the scales for theoretical reasons. Reliability of the scales is sufficient to good (see Tables 2 and 3).

Table 2. Reliability of the data use scales in the survey.

Data use for accountability (3 items, Cronbach's alpha = 0.75). Example items:
● The data we use for accountability purposes (e.g., for parents, the inspection) represent the reality
● Results of our internal evaluations are documented in external reports (e.g., a report for inspectors, state audits)

Data use for school development (9 items, Cronbach's alpha = 0.87). Example items:
● Student achievement results are used to identify gaps in our curriculum
● Detailed data analyses are an essential part of the school improvement process in my school

Data use for instruction (11 items, Cronbach's alpha = 0.91). Example items (To what extent do you use data to):
● set learning goals/targets for individual students
● set the pace of my lessons
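The Cronbach's alpha values reported in Tables 2 and 3 summarize internal consistency, i.e., how strongly the items of a scale covary relative to the scale total. As an illustrative sketch only (the data below are synthetic, not the study's survey responses), alpha can be computed from a respondents-by-items score matrix:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_var = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale totals
    return (k / (k - 1)) * (1 - item_var / total_var)

# Synthetic 4-point responses for a hypothetical 3-item scale (100 teachers):
# a shared latent agreement level plus item-specific noise.
rng = np.random.default_rng(7)
latent = rng.normal(size=(100, 1))
items = np.clip(np.round(2.5 + latent + 0.8 * rng.normal(size=(100, 3))), 1, 4)
print(round(cronbach_alpha(items), 2))
```

Perfectly redundant items yield alpha = 1; uncorrelated items drive it toward 0, which is why the 0.74–0.92 values in Tables 2 and 3 are read as sufficient to good.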


Analyses

For the first research question regarding the extent to which teachers use data, we conducted descriptive analyses presenting mean, standard deviation, minimum, maximum, and median of the data use scales for accountability, school development, and instruction. For the "accountability" and "school development" scales, we included "don't know" as a response alternative. We therefore also reported the mean frequency of "don't know" responses for these scales. For the "data use for instruction" scale, we included "not applicable" as a response alternative. We therefore also reported the mean frequency of "not applicable" responses for this scale.

For the second research question regarding factors influencing data use, we conducted multilevel analyses to determine the extent to which data use for accountability, data use for school development, and data use for instruction are influenced by school organizational characteristics, data characteristics, user characteristics, and collaboration, because the data have a nested structure (teachers are nested within schools) (Hox, 2002; Snijders & Bosker, 1999).

To draw conclusions based on a multilevel analysis, several assumptions must be met (Field, 2009): The predictors should have some variation in value; no perfect linear relationship between two or more of the predictors should exist (i.e., no perfect multicollinearity); the predictors should not be correlated with external variables; the variance of the residual terms should be constant (i.e., homoscedasticity); the residual terms for any two observations should be uncorrelated (i.e., independent errors); the residuals in the model should be random, normally distributed variables with a mean of zero; all values of the outcome variable should be independent; and the mean values of the outcome variable for each increment of the predictors should lie along a straight line. All of these assumptions were checked and met.

We calculated a model with data use for accountability purposes as a dependent variable and school organizational characteristics, data user characteristics, data characteristics, and collaboration as independent variables. We also calculated a model with data use for school development purposes as the dependent variable, and a model with data use for instructional purposes as the dependent variable. On many of the independent variables, respondents answered "don't know" or "not applicable" at least once. For the analyses, therefore, we replaced "don't know"/"not applicable" responses on these factor scales (characteristics of school organization, data, user, and collaboration) with the series mean (Newman, 2009). Both the explanatory and the dependent variables were transformed to z scores for these analyses. Therefore, the reported effects of the explanatory variables can be interpreted in the same way as standardized regression coefficients in a regression model (see Table 5).
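The modeling step described above can be sketched as follows. This is an illustrative reconstruction with simulated data, not the authors' SPSS analysis: the column names, effect sizes, and school/teacher counts are hypothetical, and statsmodels' MixedLM stands in for the HLM routines the study used. It shows the three ingredients the text mentions: series-mean replacement of missing responses, z-standardization, and a two-level random-intercept model (teachers nested within schools).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical teacher-level data: 20 schools x 15 teachers.
rng = np.random.default_rng(42)
n = 20 * 15
df = pd.DataFrame({
    "school": np.repeat(np.arange(20), 15),
    "org": rng.normal(size=n),        # school organizational characteristics
    "user": rng.normal(size=n),       # user characteristics
    "data_char": rng.normal(size=n),  # data characteristics
    "collab": rng.normal(size=n),     # collaboration
})
df["use_acc"] = 0.3 * df["org"] + 0.2 * df["collab"] + rng.normal(size=n)

# Simulate a few "don't know" responses and replace them with the
# series mean, as the authors describe.
df.loc[df.sample(10, random_state=1).index, "user"] = np.nan
df["user"] = df["user"].fillna(df["user"].mean())

# z-standardize so the coefficients read as standardized betas.
cols = ["org", "user", "data_char", "collab", "use_acc"]
df[cols] = (df[cols] - df[cols].mean()) / df[cols].std(ddof=0)

# Random-intercept model: teachers (level 1) nested in schools (level 2).
model = smf.mixedlm("use_acc ~ org + user + data_char + collab",
                    df, groups=df["school"])
result = model.fit()
print(result.params.round(2))
```

With standardized variables, a coefficient of, say, 0.32 on `org` would be read exactly as in the Results section: one standard deviation more on that factor relates to 0.32 standard deviations more data use.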

Table 3. Reliability of the scales for the factors influencing data use in the survey.

School organizational characteristics (16 items, Cronbach's alpha = 0.92). Example items:
● My school leader/head of department is a good example of an effective data user
● There is specific time set aside for me to use data

Collaboration (3 items, Cronbach's alpha = 0.74). Example items:
● I discuss my students' results with other teachers
● I share and discuss my students' results with parents

User characteristics (8 items, Cronbach's alpha = 0.80). Example items:
● Students benefit when instruction is based on data
● I understand the quality criteria and concepts for data use (e.g., correlation, validity, reliability)

Data characteristics (11 items, Cronbach's alpha = 0.85). Example item:
● I have access to student data in an information system


The significance of the multilevel regression coefficients is routinely reported in SPSS. However, when multiple significance tests are conducted, the risk of obtaining "significant" results merely by chance needs to be taken into account. For example, if one conducts 20 significance tests at the 5% level, one would expect to find one "significant" result just by coincidence. A simple (and somewhat conservative) method of correcting for this risk is to apply the Bonferroni correction (Neter & Wasserman, 1974). In the present case, this means dividing α by 12 (we tested the effects of four explanatory variables on three dependent variables). This means that a coefficient should be considered significant at α < .05 if SPSS reports it significant at α < .0042.
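The corrected threshold quoted above is a one-line computation:

```python
# Bonferroni correction: 4 explanatory variables tested against
# 3 dependent variables gives 12 significance tests in total.
alpha = 0.05
n_tests = 4 * 3
adjusted_alpha = alpha / n_tests
print(round(adjusted_alpha, 4))  # → 0.0042
```

Each individual coefficient is therefore judged against .0042 rather than .05, keeping the family-wise error rate at 5% across all twelve tests.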

Results

Data use for accountability, school development, and instruction

The first research question concerns the extent to which teachers use data for accountability, school development, and instruction. Table 4 shows the means, standard deviations, minimum, maximum, and medians for these three scales. Teachers scored the highest on data use for accountability. They also scored relatively high on school development. This means that teachers generally agree with statements such as: In our school we use external evaluations (e.g., from the inspection) for our own improvement (school development); The data we use for accountability purposes (e.g., for parents, the inspection) represent the reality (accountability).

However, teachers seemed to make less use of data for instructional purposes. For this scale, the possible answers were "never, yearly, a couple of times per year, monthly, weekly, and a couple of times per week". Nearly all items were scored between "yearly" and "a couple of times per year". For example, "Formulating learning goals for individual students" was scored 2.53 on average, between yearly and a couple of times per year. One item, "Identifying needs of and planning and adjusting instruction for gifted students", even had a mean score between "never" and "yearly" (1.95). Another item, "Investigating why students make particular mistakes", received a mean score between "yearly" and "a couple of times per year" (2.51). On the other hand, "Providing students with feedback on their learning process" was scored 3.95, so almost "monthly" on average.

In addition, teachers tended to answer "I don't know" on several items with regard to accountability and school development. Concerning accountability, teachers answered "I don't know" on average 1.34 times (out of 3 items). With regard to school development, this was on average 2.31 times (out of 9 items), and with regard to instruction this was 0.97 times (out of 12 items). Thus, teachers are often not aware whether or not data are used for accountability and school development. These "don't know" and "not applicable" responses also explain the reduced number of respondents included in the descriptive analyses on the data use scales.

Table 4. Data use for accountability, school development, and instruction.

                  Accountability (3 items)   School development (9 items)   Instruction (12 items)
Mean (SD) – N     2.96 (.43) – 348           2.64 (.47) – 381               2.96 (1.01) – 708
Min.              1.33                       1.00                           1.00
Max.              4.00                       4.00                           5.73
Median            3.00                       2.67                           3.00

Frequency of      "Don't know"               "Don't know"                   "Not applicable"
Mean (SD) – N     1.34 (1.12) – 1042         2.31 (2.53) – 1047             0.92 (2.05) – 1048

Notes: For "accountability" and "school development", response options were: 1 = strongly disagree; 2 = disagree; 3 = agree; 4 = strongly agree; and "don't know". For "instruction", response options were: 1 = never; 2 = yearly; 3 = a couple of times per year; 4 = monthly; 5 = weekly; 6 = a couple of times per week; and "not applicable".


Factors influencing data use

Table 5 shows the results of the multilevel analyses regarding the variables influencing data use for accountability, school development, and instruction.

For the analysis with data use for accountability as the dependent variable, the results (see Table 5) show that collaboration, school organizational characteristics (p < .001), and data characteristics (p < .0042) significantly influence data use for accountability. These three variables show an effect that remains significant at the .05 level (two-tailed) when applying the Bonferroni correction. This means that a score of one standard deviation higher on “school organizational characteristics”, for example, relates to an increase of 0.32 in the score for “data use for accountability”. User characteristics do not significantly influence data use for accountability, however. The improvement in fit (χ²change = 81.43, dfchange = 4, p = .000) for the accountability model was significant. The variables together explained 26% of the variance in data use for accountability (within which 87% of the variance at the school level was explained and 20% of the variance at the teacher level).
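Because both the explanatory and the dependent variables were transformed to z scores, the reported estimates read as standardized regression coefficients: a one-standard-deviation difference on the predictor corresponds to an effect-sized fraction of a standard deviation on the outcome. A minimal sketch of this equivalence on simulated data (the variables and values here are illustrative, not the study’s):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(10, 2, 500)             # hypothetical raw predictor score
y = 1.5 * x + rng.normal(0, 3, 500)    # hypothetical raw outcome score

z = lambda v: (v - v.mean()) / v.std() # z-score transformation
zx, zy = z(x), z(y)

# slope of the z-scored regression: a 1-SD increase in x corresponds to a
# beta-SD increase in y, i.e., a standardized regression coefficient
beta = np.polyfit(zx, zy, 1)[0]
print(round(beta, 2))
```

In this simple bivariate case the standardized slope equals the Pearson correlation; in the multilevel models the same z scoring makes the estimates comparable across predictors.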

For the analysis with data use for school development as the dependent variable, the results (see Table 5) show that only school organizational characteristics (p < .001) show a significant result. The effect of data characteristics (p < .05) does not reach statistical significance once the Bonferroni correction is applied. The improvement in fit (χ²change = 253, dfchange = 4, p = .000) for the school development model was significant. These variables explained 53% of the variance (within which 70% of the variance at the school level was explained and 50% of the variance at the teacher level).

Finally, the results show that school organizational characteristics, user characteristics, and collaboration significantly influence data use for instruction (p < .001) (see Table 5). The improvement in fit (χ²change = 111, dfchange = 4, p = .000) for the instruction model was significant. The variables together explained 19% of the variance in data use for instruction (within which 27% of the variance at the school level was explained and 16% of the variance at the teacher level).
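The kind of two-level analysis reported above can be sketched as follows: fit an intercept-only model (Model 0) and a model with a predictor (Model 1) to teacher-within-school data, then derive the intraclass correlation, the proportional reduction in teacher-level variance, and the chi-square improvement in fit. This is a hedged illustration in Python with statsmodels on simulated data (the names x, y, school and all numbers are made up; it is not the authors’ actual analysis):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(0)
n_schools, per_school = 60, 12
school = np.repeat(np.arange(n_schools), per_school)
u = rng.normal(0, 0.3, n_schools)[school]        # random school intercepts
x = rng.normal(0, 1, n_schools * per_school)     # z-scored predictor (e.g., "school organization")
y = 0.3 * x + u + rng.normal(0, 0.9, len(x))     # z-scored outcome (e.g., "data use")
df = pd.DataFrame({"y": y, "x": x, "school": school})

# Model 0 (intercept only) and Model 1 (with predictor), random intercept per school;
# ML estimation (reml=False) so the deviances of nested models are comparable
m0 = smf.mixedlm("y ~ 1", df, groups=df["school"]).fit(reml=False)
m1 = smf.mixedlm("y ~ x", df, groups=df["school"]).fit(reml=False)

between0, within0 = m0.cov_re.iloc[0, 0], m0.scale   # school- and teacher-level variance
between1, within1 = m1.cov_re.iloc[0, 0], m1.scale
print("ICC (rho):", between0 / (between0 + within0))
print("variance explained between teachers:", 1 - within1 / within0)

# improvement in model fit: likelihood-ratio (chi-square) test on the added parameter(s)
lr = 2 * (m1.llf - m0.llf)
print("chi2 change:", lr, "p:", chi2.sf(lr, df=1))
```

The variance-explained figures in Table 5 follow this logic: the reduction in each variance component from Model 0 to Model 1, expressed as a proportion of the Model 0 component.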

Conclusions and discussion

Data use for accountability, school development, and instruction

The results of this study show that schools seem to be making greater use of data for accountability and school development than for instructional purposes. The focus seems to be mostly on data use for accountability. It is important to hold schools accountable for their functioning, but we argue that the primary focus should be on the use of data for school development and instruction. A strong focus on data use for accountability can come with the danger of negative side effects, such as focusing only on specific types of students who can help improve a school’s status on accountability indicators (e.g., “bubble kids”), cheating to improve the status on accountability indicators, teaching to the test, excluding certain students from a test, and encouraging low-performing students to drop out (Ehren & Swanborn, 2012; Hamilton, Stecher, & Yuan, 2009).

Moreover, the results show that multiple respondents answered “I don’t know” on items about data use for accountability, data use for school development, and data use for instruction. This suggests a lack of knowledge concerning how to use data in schools. Even on the data use for instruction scale, a number of teachers answered “I don’t know” to several questions about how they use data in their own classroom. The lack of knowledge and skills related to the use of data for instruction can prevent schools from actually improving their outcomes. It is also rather alarming that, on average, teachers only use data for instruction between “yearly” and “a couple of times per year”. This applies, for example, to using data for formulating learning goals for individual students. Adjusting instruction to the needs of the students, on average, only happens between “a couple of times per year” and “monthly”; this also applies to “investigating why students make particular mistakes”. The link between data use and improved student achievement (Campbell & Levin, 2009; Carlson et al., 2011; Lai et al., 2009; McNaughton et al., 2012) usually goes through teachers actually improving their instruction in the classroom based on data.

Table 5. Results of the multilevel analysis: factors influencing data use.

                                  Data use for accountability      Data use for school development   Data use for instruction
                                  Model 0        Model 1           Model 0        Model 1            Model 0        Model 1
Fixed effects, Estimate (SE)
  Intercept                       –.006 (.066)   –.001 (.049)      .006 (.062)    .000 (.041)        .008 (.048)    .000 (.043)
  School organization                            .315*** (.053)                   .659*** (.041)                    .217*** (.038)
  Data characteristics                           .167** (.054)                    .094* (.042)                      .046 (.040)
  User characteristics                           .016 (.052)                      .014 (.038)                       .217*** (.036)
  Collaboration                                  .193*** (.049)                   .042 (.037)                       .155*** (.035)
Variance components, Estimate (SE)
  Between schools                 .074+ (.042)   .010 (.022)       .061 (.043)    .018 (.021)        .037+ (.021)   .027 (.016)
  Between teachers                .925*** (.075) .738*** (.059)    .943*** (.074) .474*** (.038)     .963*** (.053) .808*** (.044)
  ICC (rho)                       .074           .013              .061           .037               .037           .032
Variance explained
  Between schools                                86.6%                            70.4%                             27.8%
  Between teachers                               20.2%                            49.7%                             16.0%
Deviance                          984.068        902.060           1080.862       827.826            2005.997       1895.332
Improvement in model fit (p)                     .000                             .000                              .000
N of school locations             63                               62                                68
N of teachers                     381                              348                               708

+p < .10; *p < .05; **p < .0042; ***p < .001 (two-tailed).

By presenting three types of data use (for accountability, school development, and instruction), it may seem that we present data use as a rather straightforward, linear, rational process, which it is not. As others have stated (Coburn & Turner, 2011; Coburn, Toure, & Yamashita, 2009; Mandinach, Honey, Light, & Brunner, 2008; Schildkamp, Karbautzki, & Vanhoof, 2014), data use is a complex process, involving several different processes, conditions, and contexts that all interact with each other. Moreover, data use is not a case of using data for accountability OR school development OR instruction. We argue that data should be used for accountability AND school development AND instruction. Only then can the ultimate goal of data use, school improvement in terms of increased student learning, be reached. After all, as Earl and Katz (2006) state: “Accountability without improvement is empty rhetoric, and improvement without accountability is whimsical action without direction” (p. 12).

Several data use theories of action exist (see, e.g., Boudett et al., 2005; Earl & Katz, 2006; Lai & Schildkamp, 2013; Mandinach et al., 2008; Marsh, 2012; Schildkamp & Poortman, 2015). Several of these frameworks begin with having a clear purpose for using data. As this study has shown, these purposes should relate to data use for accountability, data use for school development, and data use for instruction. Next, data are collected and analyzed to identify (student learning) problems and possible causes of these problems. These data also need to be interpreted, which involves a sense-making process, as the implications regarding solutions to the problems and consequent actions based on the analysis of the data are often not self-evident (Mandinach et al., 2008; Marsh, 2012). In this sense-making process, teachers connect the data with their own experiences, understanding, knowledge, and expertise. This sense-making process can lead to improvement actions, which need to be evaluated (e.g., was the purpose achieved?), making data use a cyclic and iterative process.

Factors influencing data use

The results of this study show that data use for accountability purposes is influenced by school organizational characteristics, data characteristics, and collaboration. This indicates that certain school organizational characteristics (e.g., school leader support), data characteristics (e.g., having access to data), and collaboration (e.g., teacher collaboration around the use of data) are enablers of data use for accountability.

Data use for school development is influenced by school organizational characteristics and data characteristics, indicating that these factors can enable the use of data for school development. Data characteristics are an important enabler of data use for school development and accountability, but not for instruction (see Figure 2). Given what is at stake in accountability (and school development) decisions, the quality of data plays an important role, implying that the use of objective, reliable, and valid data is necessary (Harlen, 2007). Although data characteristics are important, the results of this study also show that they appear to be less important than school organizational characteristics, collaboration, and user characteristics.

Figure 2. Factors influencing the three types of data use (school organization, data characteristics, user characteristics, and collaboration in relation to data use for accountability, school development, and instruction).

Data use for instruction is influenced by school organizational characteristics, collaboration, and user characteristics. Concerning user characteristics, data use for instruction requires a certain amount of data literacy: how to analyze and use data to improve instruction. Schildkamp and Poortman (2015) found that effective data use required data literacy, but also pedagogical content knowledge (PCK). PCK includes subject-matter content knowledge, but also knowledge on how to teach subject-matter knowledge (Shulman, 1986). Data can help teachers to identify the conceptions and misconceptions of students, but teachers still need their PCK to determine how to alter their instruction accordingly. Mandinach (2012) refers to this as pedagogical data literacy: the ability to analyze data and, based on the data in combination with PCK, take meaningful action.

School organizational characteristics appear to influence all three types of data use, confirming the importance of this factor. Collaboration around the use of data also seems crucial, as others have stated (Copland, 2003; Datnow et al., 2013; Horn & Little, 2010; Means et al., 2010; Nelson & Slavit, 2007; Park & Datnow, 2009; Spillane, 2012; Wohlstetter et al., 2008). Collaboration, trust, and the willingness and capability to address conflict are essential for the use of data (Copland, 2003; Daly, 2012; Datnow et al., 2013; Horn & Little, 2010; Means et al., 2010; Nelson & Slavit, 2007; Park & Datnow, 2009). According to Hargreaves and Braun (2013), conflicts between the purposes of accountability and school improvement in data use are “most likely to be resolved when there is collaborative involvement in data collection and analysis, and collective responsibility for improvement” (p. 13).

Organizational characteristics, data characteristics, user characteristics, and collaboration are likely to influence data use in all countries around the world. However, in some countries, schools will have access to all sorts of data, ranging from student and parent questionnaires and classroom observations to standardized test results, which enables data use; whereas in other countries, access to a range of valid and reliable data might be a problem and thus form a barrier to data use. Depending on the specific context, a factor can become either an enabler or a barrier (Datnow et al., 2013; Schildkamp et al., 2014). For example, our results show that more teacher collaboration may lead to an increase in data use. Derived from this, a lack of teacher collaboration in a certain context may hinder the use of data in schools. Another example: teachers who perceive data characteristics more positively, such as having a wide range of data available, are more likely to use data. In The Netherlands, collecting and using classroom observation data is becoming more and more common. This is less common in other countries, for example, Australia. This might mean that a lack of access to observational data is a hindrance to the use of data in Australian schools. But this is not necessarily the case; it could also mean that there needs to be greater negotiation around the collection of such data in that context (Lai & Schildkamp, 2016).

Limitations of this study

It is important to acknowledge that this study was conducted in a specific context, The Netherlands. This context is different from that of other countries, such as the US or the UK. Schools in The Netherlands have a great deal of autonomy. For example, in The Netherlands there are no national standardized assessments, with the exception of the final examination, whereas these assessments do occur in countries such as the US and the UK. One could also argue that the accountability pressure in countries such as the US and the UK is higher than in The Netherlands, due to the No Child Left Behind Act, and Ofsted inspections and League Tables, respectively, although The Netherlands does have an inspectorate that holds schools accountable for the education they provide. However, we do think that the results of this study also apply to other contexts. For example, Schildkamp et al. (2014) concluded from a qualitative case study in five different countries, including the UK, that schools are more focused on data use for accountability than on data use for school development and instruction.

Moreover, we included only teachers in our study, but to be effective, data use should not be an endeavor by individual teachers. Data use should also involve school leaders, principals, and even students. Using data should not depend merely on each individual teacher, but also on collaboration within a school. This is also, at least partly, reflected in the results of this study, as these show that data-based decision making is influenced by collaboration, school organizational characteristics, data characteristics, and user characteristics. The variables included in this study can explain part of the variance in data use. However, part of the variance in the use of data for accountability, school development, and instruction remains unexplained. To some degree, this could be due to the limited variability in the scores for data use in the schools. It could also be due to (possible) flaws of this study, such as the data-driven composition of the models, the use of self-report data, and the sample of schools participating in this study.

Implications for further research

A potential cause of the remaining unexplained variance in data use may be that our theoretical framework does not include all relevant variables for studying the use of data. Further in-depth qualitative studies need to examine what actually happens in the school when teachers engage with data, and which factors promote data use. Therefore, we argue for mixed-methods studies into data use, combining quantitative and qualitative studies. Following Little (2012, p. 144), the qualitative part can consist of a conceptually robust, methodologically sophisticated, and extensive program of micro-process research on data use that also anticipates the ways in which local practice both instantiates and constructs institutional and organizational structures, processes, and logics. As stated by Hamilton et al. (2009), we need more knowledge on how teachers do use and should use data to make instructional decisions. The quantitative part can consist of a large-scale quantitative study into the use of data in other contexts, as this is needed in order to obtain more generalizable results regarding the factors influencing data use. These types of studies could also take into account whether and how the different factors influence each other (e.g., the influence of organizational characteristics on user characteristics, the influence of user characteristics on collaboration).

Moreover, the results of this study suggest that professional development in the use of data is needed, as it appears that the data are not used to their full potential. However, professional development is often ineffective in terms of improving teachers’ knowledge, skills, and attitude. We therefore need a more fundamental understanding of the characteristics of data use-related professional development (PD) (Desimone, 2009), and the conditions under which this PD is effective (Coburn & Turner, 2012; Marsh, 2012). As Wayman et al. (2007) noted, it is likely that teachers will engage in data use if properly supported. It would be interesting to develop professional development on the use of data and, if this is successful in accomplishing higher levels of data use, to investigate how much variance in the dependent variable is then explained by the variables in our theoretical framework.

Implications for practice

In an ongoing stream of changes and challenges, effective data use is yet another challenge teachers must deal with. Schools struggle with using data effectively (Ingram, Louis, & Schroeder, 2004; Schildkamp & Kuiper, 2010; Schildkamp & Teddlie, 2008; Schildkamp, Visscher, & Luyten, 2009; Wohlstetter, Datnow, & Park, 2008). Schools and teachers need support in the use of data. The results of this study show that collaboration and school organizational characteristics are important promoters of effective data use, which suggests that professional development on the use of data should take these factors into account.


Although the exact content and shape of this professional development is likely to be influenced by the context of the school and the country, similar design guidelines could be applied. For example, we found that user characteristics are an important factor influencing data use for instruction. We also know from previous research (e.g., Schildkamp et al., 2014) that teachers often do not know what data to start with. Therefore, professional development could begin from a problem-solving perspective: instead of starting with data, starting with a problem in the classroom that teachers would like to work on. The next step would be collecting data to find possible causes of this problem. Such an approach has proven to be effective in professional development programs (see, e.g., Lai & McNaughton, 2013; Lai et al., 2009; Poortman, Ebbeler, & Schildkamp, 2015; Schildkamp & Poortman, 2015), and could work in different countries, provided that these countries are willing to invest in these professional development programs for teachers. Only then can the full potential of data-based decision making for increased student learning and achievement be reached.

Finally, it is important not only to develop, implement, and evaluate professional development programs and interventions for data use (e.g., programs such as those discussed by Boudett et al., 2005; Campbell & Levin, 2009; Carlson et al., 2011; Lai et al., 2009; Schildkamp & Poortman, 2015; Slavin, Cheung, Holmes, Madden, & Chamberlain, 2011; Timperley & Parr, 2009) but also to invest in teacher education colleges. Little attention is paid to data use in the curriculum of most teacher education colleges (Mandinach & Gummer, 2013). This needs to change, if we are to realize the full potential of using data to increase student learning.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes on contributors

Kim Schildkamp is an associate professor in the Faculty of Behavioural, Management and Social Sciences of the University of Twente. Kim’s research, in The Netherlands but also in other countries, focuses on “data-based decision making for school improvement”. She has been invited as a guest lecturer and keynote speaker at several conferences and universities, including AERA, the University of Pretoria in South Africa, and the University of Auckland in New Zealand. She is a board member of the International Congress for School Effectiveness and Improvement (ICSEI) and chair of the ICSEI data use network. She has published widely on the use of data.

Cindy Poortman is an assistant professor in the Faculty of Behavioural, Management and Social Sciences of the University of Twente. Her research interests include professional development of teachers in professional learning communities. She has published widely on teacher professional development.

Hans Luyten is an associate professor of education at the Faculty of Behavioural, Management and Social Sciences of the University of Twente, Enschede, The Netherlands. He is an internationally recognized expert on multilevel analysis. His research interests include longitudinal studies both at the individual student level (growth curve analysis) and higher levels (trends at school and system level), international comparisons, educational disadvantage, and the development of methodologies for assessing the effect of schooling on student development.

Johanna Ebbeler was a PhD student at the University of Twente in The Netherlands. Her PhD study focused on the use of data in data teams, the role of the school leader in the use of data, and the effects of data use.

References

Allensworth, E., Correa, M., & Ponisciak, S. (2008). From high school to the future: ACT preparation – Too much, too late. Why ACT scores are low in Chicago and what it means for schools (CCSR Research Report). Chicago, IL: Consortium on Chicago School Research. Retrieved from https://consortium.uchicago.edu/sites/default/files/publications/ACTReport08.pdf

Bernhardt, V. L. (2013). Data analysis for continuous school improvement (3rd ed.). New York, NY: Routledge.
Boudett, K. P., City, E. A., & Murnane, R. J. (Eds.). (2005). Data wise: A step-by-step guide to using assessment results to improve teaching and learning. Cambridge, MA: Harvard Education Press.


Breiter, A., & Light, D. (2006). Data for school improvement: Factors for designing effective information systems to support decision-making in schools. Educational Technology & Society, 9(3), 206–217.

Campbell, C., & Levin, B. (2009). Using data to support educational improvement. Educational Assessment, Evaluation and Accountability, 21, 47–65. doi:10.1007/s11092-008-9063-x

Carlson, D., Borman, G. D., & Robinson, M. (2011). A multistate district-level cluster randomized trial of the impact of data-driven reform on reading and mathematics achievement. Educational Evaluation and Policy Analysis, 33, 378–398. doi:10.3102/0162373711412765

Chahine, S., & Childs, R. (2009). Evaluation of the utility of PSAT/NMSQT tools for educators (Final Report for the College Board Fellowship Project Number: 2009-045). Toronto, Canada: University of Toronto.

Chen, E., Heritage, M., & Lee, J. (2005). Identifying and monitoring students’ learning needs with technology. Journal of Education for Students Placed at Risk, 10, 309–332. doi:10.1207/s15327671espr1003_6

Coburn, C. E., & Talbert, J. E. (2006). Conceptions of evidence use in school districts: Mapping the terrain. American Journal of Education, 112, 469–495. doi:10.1086/505056

Coburn, C. E., Toure, J., & Yamashita, M. (2009). Evidence, interpretation, and persuasion: Instructional decision making in the district central office. Teachers College Record, 111(4), 1115–1161.

Coburn, C. E., & Turner, E. O. (2011). Research on data use: A framework and analysis. Measurement: Interdisciplinary Research and Perspectives, 9, 173–206. doi:10.1080/15366367.2011.626729

Coburn, C. E., & Turner, E. O. (2012). The practice of data use: An introduction. American Journal of Education, 118, 99– 111. doi:10.1086/663272

Cochran, W. G. (1977). Sampling techniques (3rd ed.). New York, NY: Wiley.

Copland, M. A. (2003). The Bay area school collaborative: Building the capacity to lead. In J. Murphy & A. Datnow (Eds.), Leadership lessons from comprehensive school reform (pp. 159–184). Thousand Oaks, CA: Corwin Press.

Comenius Project Using Data for Improving School and Student Performance. (2011). Comparative analysis data use in Germany, the Netherlands, Lithuania, Poland and England. Retrieved from http://www.datauseproject.eu/home/documents

Daly, A. J. (2012). Data, dyads, and dynamics: Exploring data use and social networks in educational improvement. Teachers College Record, 114(11), 1–38.

Datnow, A., Park, V., & Kennedy-Lewis, B. (2013). Affordances and constraints in the context of teacher collaboration for the purpose of data use. Journal of Educational Administration, 51, 341–362. doi:10.1108/09578231311311500

Datnow, A., Park, V., & Wohlstetter, P. (2007). Achieving with data: How high-performing school systems use data to improve instruction for elementary students. Los Angeles, CA: Center on Educational Governance, Rossier School of Education, University of Southern California.

Desimone, L. M. (2009). Improving impact studies of teachers’ professional development: Toward better conceptualizations and measures. Educational Researcher, 38, 181–199. doi:10.3102/0013189X08331140

Diamond, J. B., & Spillane, J. P. (2004). High-stakes accountability in urban elementary schools: Challenging or reproducing inequality. Teachers College Record, 106(6), 1145–1176. doi:10.1111/j.1467-9620.2004.00375.x

Earl, L., & Fullan, M. (2003). Using data in leadership for learning. Cambridge Journal of Education, 33, 383–394. doi:10.1080/0305764032000122023

Earl, L. M., & Katz, S. (2006). Leading schools in a data-rich world. Harnessing data for school improvement. Thousand Oaks, CA: Corwin Press.

Ehren, M. C. M., & Swanborn, M. S. L. (2012). Strategic data use of schools in accountability systems. School Effectiveness and School Improvement, 23, 257–280. doi:10.1080/09243453.2011.652127

Field, A. (2009). Discovering statistics using SPSS. Thousand Oaks, CA: Sage.

Geijsel, F. P., Sleegers, P. J. C., Stoel, R. D., & Krüger, M. L. (2009). The effect of teacher psychological and school organizational and leadership factors on teachers’ professional learning in Dutch schools. The Elementary School Journal, 109, 406–427.

Halverson, R. (2010). School formative feedback systems. Peabody Journal of Education, 85, 130–146. doi:10.1080/01619561003685270

Hamilton, L. S., Stecher, B. M., & Yuan, K. (2009). Standards-based reform in the United States: History, research, and future directions. Santa Monica, CA: RAND Corporation. Retrieved from http://www.rand.org/pubs/reprints/RP1384

Hargreaves, A., & Braun, H. (2013). Data-driven improvement and accountability. Boston, MA: National Education Policy Center.

Harlen, W. (2007). The quality of learning: Assessment alternatives for primary education (Interim Reports Research Survey 3/4). Cambridge, UK: University of Cambridge. Retrieved from www.primaryreview.org.uk

Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. New York, NY: Routledge.

Horn, I. S., & Little, J. W. (2010). Attending to problems of practice: Routines and resources for professional learning in teachers’ workplace interactions. American Educational Research Journal, 47, 181–217. doi:10.3102/0002831209345158


Ikemoto, G. S., & Marsh, J. A. (2007). Cutting through the data-driven mantra: Different conceptions of data-driven decision making. In P. A. Moss (Ed.), Evidence and decision making (Vol. 106, pp. 105–131). Malden, MA: Wiley-Blackwell.
Ingram, D., Louis, K. S., & Schroeder, R. (2004). Accountability policies and teacher decision making: Barriers to the use of data to improve practice. Teachers College Record, 106(6), 1258–1287. doi:10.1111/j.1467-9620.2004.00379.x

Kelly, A., Downey, C., & Rietdijk, W. (2010). Data dictatorship and data democracy: Understanding professional attitudes to the use of pupil performance data in schools. Reading, UK: CFBT Education Trust.

Kerr, K. A., Marsh, J. A., Ikemoto, G. S., Darilek, H., & Barney, H. (2006). Strategies to promote data use for instructional improvements: Actions, outcomes, and lessons from three urban districts. American Journal of Education, 112, 496–520. doi:10.1086/505057

Knapp, M. S., Copland, M. A., & Swinnerton, J. A. (2007). Understanding the promise and dynamics of data-informed leadership. Yearbook of the National Society for the Study of Education, 106(1), 74–104.

Knapp, M. S., Swinnerton, J. A., Copland, M. A., & Monpas-Huber, J. (2006). Data-informed leadership in education. Seattle, WA: Center for the Study of Teaching and Policy, University of Washington.

Lai, M. K., & McNaughton, S. (2013). Analysis and discussion of classroom and achievement data to raise student achievement. In K. Schildkamp, M. K. Lai, & L. Earl (Eds.), Data-based decision making in education: Challenges and opportunities (pp. 23–47). Dordrecht, The Netherlands: Springer.

Lai, M. K., McNaughton, S., Timperley, H., & Hsiao, S. (2009). Sustaining continued acceleration in reading comprehension achievement following an intervention. Educational Assessment, Evaluation and Accountability, 21, 81–100. doi:10.1007/s11092-009-9071-5

Lai, M. K., & Schildkamp, K. (2013). Data-based decision making: An overview. In K. Schildkamp, M. K. Lai, & L. Earl (Eds.), Data-based decision making in education: Challenges and opportunities (pp. 9–21). Dordrecht, The Netherlands: Springer.

Lai, M. K., & Schildkamp, K. (2016). In-service teacher professional learning: Use of assessment in data-based decision-making. In G. T. L. Brown & L. R. Harris (Eds.), Handbook of human and social conditions in assessment (pp. 77–94). New York, NY: Routledge.

Little, J. W. (2012). Understanding data use practice among teachers: The contribution of micro-process studies. American Journal of Education, 118, 143–166. doi:10.1086/663271

Mandinach, E. B. (2012). A perfect time for data use: Using data-driven decision making to inform practice. Educational Psychologist, 47, 71–85. doi:10.1080/00461520.2012.667064

Mandinach, E. B., & Gummer, E. S. (2013). A systemic view of implementing data literacy in educator preparation. Educational Researcher, 42, 30–37. doi:10.3102/0013189X12459803

Mandinach, E., Honey, M., Light, D., & Brunner, C. (2008). A conceptual framework for data-driven decision-making. In E. B. Mandinach & M. Honey (Eds.), Data-driven school improvement: Linking data and learning (pp. 13–31). New York, NY: Teachers College Press.

Marsh, J. A. (2012). Interventions promoting educators’ use of data: Research insights and gaps. Teachers College Record, 114(11), 1–48.

Marsh, J. A., Pane, J. F., & Hamilton, L. S. (2006). Making sense of data-driven decision making in education: Evidence from recent RAND research. Santa Monica, CA: RAND Corporation.

Marsh, J. A., Sloan McCombs, J., & Martorell, F. (2010). How instructional coaches support data-driven decision making: Policy implementation and effects in Florida middle schools. Educational Policy, 24, 872–907. doi:10.1177/0895904809341467

McNaughton, S., Lai, M. K., & Hsiao, S. (2012). Testing the effectiveness of an intervention model based on data use: A replication series across clusters of schools. School Effectiveness and School Improvement, 23, 203–228. doi:10.1080/09243453.2011.652126

Means, B., Padilla, C., & Gallagher, L. (2010). Use of education data at the local level: From accountability to instructional improvement. Washington, DC: U.S. Department of Education, Office of Planning, Evaluation, and Policy Development.

Mingchu, L. (2008). Structural equation modeling for high school principals’ data-driven decision making: An analysis of information use environments. Educational Administration Quarterly, 44, 603–634. doi:10.1177/0013161X08321506

Ministerie van Onderwijs, Cultuur & Wetenschappen. (2000). Wet op het onderwijstoezicht [Education supervision act]. Den Haag, The Netherlands: SDU.

Nelson, T. H., & Slavit, D. (2007). Collaborative inquiry among science and mathematics teachers in the USA: Professional learning experiences through cross-grade, cross-discipline dialogue. Professional Development in Education, 33, 23–39. doi:10.1080/13674580601157620

Neter, J., & Wasserman, W. (1974). Applied linear statistical models. Homewood, IL: R. D. Irwin.

Newman, A. (2009). Missing data techniques and low response rates: The role of systematic nonresponse parameters. In C. E. Lance & R. J. Vandenberg (Eds.), Statistical and methodological myths and urban legends: Doctrine, verity, and fable in the organizational and social sciences (pp. 7–36). New York, NY: Routledge.

Park, V., & Datnow, A. (2009). Co-constructing distributed leadership: District and school connections in data-driven decision making. School Leadership and Management, 29, 477–494. doi:10.1080/13632430903162541


Poortman, C. L., Ebbeler, J., & Schildkamp, K. (2015, April). School improvement effects of a data use intervention for teachers. Paper presented at the Annual Meeting of the American Educational Research Association, Chicago, IL.
Schildkamp, K. (2007). The utilisation of a self-evaluation instrument for primary education (Doctoral dissertation). Enschede, The Netherlands: University of Twente. Retrieved from http://doc.utwente.nl/57803/1/thesis_Schildkamp.pdf

Schildkamp, K., Karbautzki, L., & Vanhoof, J. (2014). Exploring data use practices around Europe: Identifying enablers and barriers. Studies in Educational Evaluation, 42, 15–24. doi:10.1016/j.stueduc.2013.10.007

Schildkamp, K., & Kuiper, W. (2010). Data-informed curriculum reform: Which data, what purposes, and promoting and hindering factors. Teaching and Teacher Education, 26, 482–496. doi:10.1016/j.tate.2009.06.007

Schildkamp, K., & Lai, M. K. (2013). Conclusions and a data use framework. In K. Schildkamp, M. K. Lai, & L. Earl (Eds.), Data-based decision making in education: Challenges and opportunities (pp. 177–191). Dordrecht, The Netherlands: Springer.

Schildkamp, K., Lai, M. K., & Earl, L. (Eds.). (2013). Data-based decision making in education: Challenges and opportunities. Dordrecht, The Netherlands: Springer.

Schildkamp, K., & Poortman, C. L. (2015). Factors influencing the functioning of data teams. Teachers College Record, 117(4). Retrieved from http://www.tcrecord.org/library/abstract.asp?contentid=17851

Schildkamp, K., Rekers-Mombarg, L. T. M., & Harms, T. J. (2012). Student group differences in examination results and utilization for policy and school development. School Effectiveness and School Improvement, 23, 229–255. doi:10.1080/09243453.2011.652123

Schildkamp, K., & Teddlie, C. (2008). School performance feedback systems in the USA and in The Netherlands: A comparison. Educational Research and Evaluation, 14, 255–282. doi:10.1080/13803610802048874

Schildkamp, K., Visscher, A., & Luyten, H. (2009). The effects of the use of a school self-evaluation instrument. School Effectiveness and School Improvement, 20, 69–88. doi:10.1080/09243450802605506

Sharkey, N. S., & Murnane, R. J. (2006). Tough choices in designing a formative assessment system. American Journal of Education, 112, 572–588. doi:10.1086/505060

Shulman, L. S. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(2), 4–14. doi:10.3102/0013189X015002004

Slavin, R. E., Cheung, A., Holmes, G., Madden, N. A., & Chamberlain, A. (2011). Effects of a data-driven district reform model. Retrieved from http://www.cddre.org/_images/Effects%20of%20a%20Data%20Driven%20District%20Reform%20Model%20January%202011.pdf

Snijders, T. A. B., & Bosker, R. J. (1999). Multilevel analysis. An introduction to basic and advanced multilevel modeling. London, UK: Sage.

Spillane, J. P. (2012). Data in practice: Conceptualizing the data-based decision-making phenomena. American Journal of Education, 118, 113–141. doi:10.1086/663283

Supovitz, J. A., & Klein, A. (2003). Mapping a course for improved student learning: How innovative schools systematically use performance data to guide improvement. Philadelphia, PA: Consortium for Policy Research in Education, University of Pennsylvania Graduate School of Education.

Timperley, H. S., & Parr, J. M. (2009). Chain of influence from policy to practice in the New Zealand literacy strategy. Research Papers in Education: Policy and Practice, 24, 135–154. doi:10.1080/02671520902867077

Vanhoof, J., Verhaeghe, G., Verhaeghe, J. P., Valcke, M., & Van Petegem, P. (2011). The influence of competences and support on school performance feedback use. Educational Studies, 37, 141–154. doi:10.1080/03055698.2010.482771

Verhaeghe, G., Vanhoof, J., Valcke, M., & Van Petegem, P. (2010). Using school performance feedback: Perceptions of primary school principals. School Effectiveness and School Improvement, 21, 167–188. doi:10.1080/09243450903396005

Wayman, J. C., Cho, V., & Johnston, M. T. (2007). The Data-Informed District: A district-wide evaluation of data use in the Natrona County school district. Austin, TX: The University of Texas.

Wayman, J. C., Jimerson, J. B., & Cho, V. (2012). Organizational considerations in establishing the Data-Informed District. School Effectiveness and School Improvement, 23, 159–178. doi:10.1080/09243453.2011.652124

Wayman, J. C., Snodgrass Rangel, V. W., Jimerson, J. B., & Cho, V. (2010). Improving data use in NISD: Becoming a Data-Informed District. Austin, TX: The University of Texas.

Wayman, J. C., Spikes, D. D., & Volonnino, M. R. (2013). Implementation of a data initiative in the NCLB era. In K. Schildkamp, M. K. Lai, & L. Earl (Eds.), Data-based decision making in education: Challenges and opportunities (pp. 135–153). Dordrecht, The Netherlands: Springer.

Wayman, J. C., & Stringfield, S. (2006). Data use for school improvement: School practices and research perspectives. American Journal of Education, 112, 463–468. doi:10.1086/505055

Wohlstetter, P., Datnow, A., & Park, V. (2008). Creating a system for data-driven decision-making: Applying the principal-agent framework. School Effectiveness and School Improvement, 19, 239–259. doi:10.1080/09243450802246376

Young, V. M. (2006). Teachers’ use of data: Loose coupling, agenda setting, and team norms. American Journal of Education, 112, 521–548. doi:10.1086/505058
