
Faculty of Social and Behavioural Sciences

Graduate School of Child Development and Education

Towards a comprehensive model of technology integration in education

The influence of will, experience, skill and pedagogy (WESP) on pre-service teachers’ preparedness to integrate technology in education

Research Master Child Development and Education
Research Master Thesis

Student: J.L. van Leeuwen BA BSc

Supervisors: prof. dr. J.M. Voogt (University of Amsterdam/Windesheim University of Applied Sciences) & dr. A.E.H. Smits (Windesheim University of Applied Sciences)
Reviewer 1: dr. J.A. Schuitema (University of Amsterdam)

Reviewer 2: prof. dr. G.A. Knezek (University of North Texas, USA)
Date: 26th of July 2018


Abstract

Although research suggests that technology can enhance learning-related student attributes such as motivation, create more efficient learning environments and improve learning outcomes, the implementation of technology in education is diffuse. In addition, pre-service teachers do not feel prepared to effectively use technology in their classroom. To improve the understanding of how technology in education is influenced and to enable better preparation of pre-service teachers for technology integration, this study proposed and examined a comprehensive model in which the constructs Will (the attitudes and beliefs towards technology), Experience (learning experiences on technology use), Skill (the competency of technology use) and Pedagogy (pedagogical content knowledge) influence technology use (the WESP-model). This model is based on the WST-model (Knezek & Christensen, 2008), and its two recent expansions: WSTP (Knezek & Christensen, 2016) and WEST (Farjon, 2017). A mixed-methods design was used, with an online questionnaire and semi-structured interviews. Applying structural equation modelling to the data of 139 pre-service teachers from one of the largest Teacher Education Institutes in the Netherlands, it was found that the factor Pedagogy was not stable and did not add to a better fitting model. Continuing the quantitative analyses with Will, Experience and Skill (WES-model), and taking into account the qualitative data gathered from nine pre-service teachers, most evidence was found for a mediation model. In this model Experience and Will both have a (strong) direct effect on Technology Integration, whereas the effect of Skill on Technology Integration is fully mediated by Will. Future research, for example using a longitudinal design, can help to establish stronger statements about causality. However, the current study has shown the probable complexity of the influences on technology integration. Furthermore, the qualitative data have indicated practical implications for Teacher Education Institutes. An in-depth investigation of the constructs Will, Experience, Skill and Pedagogy found that pre-service teachers especially need more practical and relevant examples of technology integration, for which teacher educators can function as role models.

Key words: technology integration; pre-service teachers; teacher training; educational


Towards a comprehensive model of technology integration in education: The influence of will, experience, skill and pedagogy (WESP) on pre-service teachers’ preparedness to integrate technology in education

Technology is interwoven with many parts of our society and most of us could not imagine a world without it. Consequently, technology has become an important point of focus for policy makers in education. Many researchers have argued that technology can be used effectively in educational practice (e.g. Abbot, 2007; Harskamp & Jacobse, 2010; Katz, 2018; Lai, 2018; Lewis, Trushell & Woods, 2005; Ralph, 2006; Voogt, Sligte, Beemt, van den Braak, & Aesaert, 2016). In particular, technology has been found to enhance learning-related student attributes such as motivation, to create more efficient learning processes and to improve learning outcomes (Christensen, 1997; Kennisnet, 2013; Knezek, Christensen & Fluke, 2003). Furthermore, technology has been linked to skills and knowledge related to 21st century skills that are deemed important in our society (e.g. critical thinking, creativity, problem solving; Thijs, Fisser & van der Hoeven, 2014) and differ from skills and knowledge enhanced by traditional teaching practices (Lai, 2018).

Numbers suggest that technology integration in education is widespread. For example, in the Netherlands, practically every teacher uses some form of technology. A quarter of them use 13-14 different technology applications, 50% use 6-7 applications and another quarter use 1-2 applications (Kennisnet, 2017). Most frequently, these applications are teacher-centred, involving the use of a digital chalkboard or beamer, the use of the internet for class preparations or the use of digital teaching materials in the classroom. Only a small proportion of Dutch teachers (approximately 25%) let their students work with specific learning programmes or digital material during classes, creating more student-centred and efficient learning environments (Kennisnet, 2017). Teachers align technology use with their established routines and repertoire, resulting in the maintenance of classroom-focused education (Voogt et al., 2016). This kind of technology use is consistent with other (international) findings (Culp, Honey & Mandinach, 2005; Lai, 2018).

Thus, although the number of teachers that use some form of technology is high, the actual implementation of technology in education is diffuse. Teachers do not seem to have sufficient insight into which technology applications are suitable for which learning goals (Voogt et al., 2016). Furthermore, it raises concern that pre-service teachers often do not feel prepared to integrate technology effectively and successfully (Ertmer, Ottenbreit-Leftwich, Sadik, Sendurur & Sendurur, 2012; Tondeur, Pareja Roblin, van Braak, Voogt & Prestridge, 2017). Research has shown that what (pre-service) teachers learn in their educational programmes at Teacher Education Institutes (TEIs) is not effectively transferred to their actual practice (Ertmer et al., 2012). TEIs are thus facing a major challenge to properly prepare pre-service teachers for the practical use of technology in the classroom.

Several factors have been named as potential barriers to the integration of technology in education by teachers. Ertmer (1999) classifies these barriers as first-order and second-order barriers. First-order barriers are extrinsic factors concerning resources, such as access to technology and the acquisition of technology skills, whereas second-order barriers are intrinsic factors such as beliefs about teaching, beliefs about technology use in education and established classroom practices.

Taking into account first- and second-order barriers, the WST-model identifies will (attitudes towards and beliefs about technology integration in education), skill (the competency of technology use) and tool (access to technology) as the three main factors independently influencing technology integration (Knezek & Christensen, 2008). In addition, it has been suggested that actual learning experience with technology integration and exposure to successful technology integration in the pre-service education programme is also a relevant predictor of technology integration by teachers (Agyei & Voogt, 2011; Drent & Meelissen, 2008). Therefore, a recent pilot study that serves as a starting point for the current study added the explanatory variable experience to the model, creating the WEST-model (Farjon, 2017). Although the pilot study could not yet fully support the addition of experience to the model, it did find that tool had limited impact on technology integration (Farjon, 2017), which is consistent with earlier findings (Knezek & Christensen, 2016; Knezek, Christensen & Fluke, 2003). This could be due to the fact that nowadays access to technology is no longer problematic in more developed countries. For example, a study with the WST-model in Mexico found tool to be the most important predictor of technology integration, whereas it was the least important predictor in the United States (Morales, 2006).

Even though studies have found that the W(E)ST-model explains 45-90% of the variance of pre-service teachers’ technology integration (Agyei & Voogt, 2011; Farjon, 2017; Knezek, Christensen & Fluke, 2003; Morales, 2006), the complex relationships between the constructs are not yet fully known and it is debatable whether the included constructs independently influence technology integration. For example, Tondeur et al. (2012) indicated that there could be a positive effect of skill on will – actively practicing your skills could lead to more positive attitudes – and Morales (2006) concluded that the best fitting version of the WST-model had an indirect effect from will on technology integration through skill.

Besides the more technology-orientated factors that are needed for successful technology integration, the pedagogical skills of teachers have been named as important, and recently the WSTP-model with pedagogy as a fourth construct was introduced (Knezek & Christensen, 2016). Knezek and Christensen (2016) define pedagogy in this context as teaching style, teaching approach or instructional strategy, and level of confidence. This embodies concepts from the Technological Pedagogical Content Knowledge (TPACK) framework, which emphasizes the connections between technology, pedagogy and content to establish effective teaching with technology (Mishra & Koehler, 2006). Pedagogical Content Knowledge (PCK) is one of its pillars and can be defined as “an understanding of how particular topics, problems, or issues are organized, represented, and adapted to the diverse interests and abilities of learners, and presented for instruction” (Shulman, 1986, p. 8).

To let the use of technology in education reach its full potential, it is crucial to understand how the complex interplay of will, experience, skill and pedagogy explains pre-service teachers’ technology integration in the classroom. Because this study is based on a Dutch sample, in which access to technology is generally not a barrier, we eliminate tool from the original model, thus creating the WESP-model. The current study aims to develop a more comprehensive and thorough understanding of the factors that influence technology integration in the classroom. With renewed understanding of the model, TEIs can better prepare pre-service teachers for technology integration. The main question is the following:

To what extent and how does the WESP-model explain pre-service teachers’ preparedness to integrate technology in education?

Theoretical framework

Will, Skill, Tool model of technology integration

Researchers have reached consensus that attitudes, beliefs and skills are important for successful use of technology in the classroom (Ertmer et al., 2012; Knezek & Christensen, 2008; Tondeur et al., 2012). Furthermore, access to technology has been named as a prerequisite for proper technology integration (Ertmer et al., 2012). Combining and refining these factors, the will, skill, tool (WST) model of technology integration was developed to explain technology integration in education, in which will, skill and tool independently influence the integration of technology in the classroom (Knezek, Christensen, Hancock & Shoho, 2000; Knezek & Christensen, 2008, 2016). A visual representation of the model is presented in Figure 1.

The concept of will refers to a positive attitude towards technology use in education (Knezek & Christensen, 2016). Attitudes and beliefs have widely been recognized as important contributors to successful technology integration (Agyei & Voogt, 2011; Ertmer et al., 2012; Marshall & Cox, 2008; Prestridge, 2012). For example, teachers named attitudes and beliefs as the strongest barriers preventing other teachers from integrating technology in education (Ertmer et al., 2012).

However, attitude is not the only factor influencing technology integration. Teachers can have beliefs they do not act upon in practice. According to Chen (2008), teachers do not always practice what they preach with regard to technology integration, because of external factors, limited theoretical understanding or other conflicting beliefs. In the first place, this highlights the importance of technology skills. In the WST-model skill is defined as the ability to use technology and the confidence and preparedness to do so (Knezek & Christensen, 2016). This can be accomplished mostly by professional development and training (Chen, 2008; Knezek & Christensen, 2016). Furthermore, the availability of and access to technology, tool, at home and in the classroom, is important (Knezek & Christensen, 2016).

Finally, technology integration is defined in the model as “the self-perceived level of adoption of technology for educational purposes” (Knezek & Christensen, 2016, p. 311). These levels of adoption have been classified by several models. First, the concerns-based adoption model (CBAM) Levels of Use determines seven levels teachers can exhibit when they adopt an innovation (Hall and Rutherford, 1974). The levels range from no use (not knowing that the innovation exists) to active and effective use (Hall and Rutherford, 1974). Second, Apple Classrooms of Tomorrow (ACOT) identifies entry, adoption, adaptation, appropriation, and invention as five levels of integration (Dwyer, Ringstaff & Sandholtz, 1989). Third, Stages of Adoption has six stages referring to the perceived use of technology in education, ranging from awareness to successful integration and innovative use (Christensen, 1997). Additionally, to understand what kind of knowledge teachers need to successfully integrate technology in education, the Technological Pedagogical Content Knowledge (TPACK) framework can be used (Mishra & Koehler, 2006). The model consists of seven knowledge domains (see Figure 2), with TPACK as the domain that encompasses the complex interaction and coordination between the other domains (Koehler & Mishra, 2009). TPACK is being able to combine technological, pedagogical and content knowledge to enhance teaching with relevant technology applications (Voogt, Fisser, Tondeur, & van Braak, 2013). Higher levels of technology adoption are thus associated with higher levels of TPACK.

Adding new constructs: Experience and Pedagogy

Recent studies have emphasized the need to extend the WST-model with technological learning experience and pedagogy (Agyei & Voogt, 2011; Farjon, 2017; Knezek & Christensen, 2016; Tondeur et al., 2012). In the first place, the quantity and quality of technology experiences in TEIs seems crucial (Tondeur et al., 2012). Therefore, a recent study by Farjon (2017), which serves as a pilot study for the current study, defined this as experience and added it as a new construct to the WST-model, creating the WEST-model. In the second place, pedagogy was successfully added as a new construct by the developers of the WST-model, creating the WSTP-model (Knezek & Christensen, 2016). We combine these two developments by adding both experience and pedagogy to the WST-model. The constructs are discussed below.

Experience. TEIs are faced with the pressing challenge to prepare pre-service teachers for successful technology integration in education. Key variables related to effective teaching with technology include teacher learning and knowledge about the use of technology for educational purposes (Riel & Becker, 2008). These could be acquired at a TEI. In many TEIs, introductory ICT courses have been included in the curriculum (Polly et al., 2010). However, research has shown that for a proper preparation of pre-service teachers, ICT has to be integrated in the entire curriculum of a TEI. With such a curriculum approach, it becomes clear to pre-service teachers what the educational reasons for ICT use are and it lets them experience how ICT can be used across subject domains (Tondeur et al., 2012). The same integrative approach is intended by TPACK, which emphasizes the dynamic interactions between technological knowledge (TK), pedagogical knowledge (PK) and content knowledge (CK) (Mishra & Koehler, 2006). To be able to successfully integrate technology in education, gaining technological knowledge is not enough; teachers need to reach an understanding of how TK, PK and CK can be combined to support teaching and learning.

A synthesis of recent qualitative studies on strategies to prepare teachers for technology integration in the classroom has resulted in the Synthesized Qualitative Data (SQD) model (Tondeur et al., 2012; see Figure 3). The SQD-model contains twelve key themes of successful preparation for technology use. Six themes are related to the preparation of the pre-service teachers themselves: using teacher educators as role models, reflecting on attitudes about the role of technology in education, learning technology by design, collaborating with peers, scaffolding authentic technology experiences, and moving from traditional assessment to continuous feedback. The six other themes are related to the necessary conditions at the institutional level: technology planning and leadership, co-operation within and between institutions, staff development, access to resources, systematic and systemic change efforts, and aligning theory and practice. The pre-service teachers thus need to have learning experiences in their TEIs that include role models, reflection, instructional design, collaboration, authentic experiences and feedback.

Using SQD to investigate the effect of experience on technology integration, Farjon (2017) found that adding experience to the WST-model did not substantially improve the model. However, this could be due to the fact that in his study, only pre-service teachers in their first year of teacher education were included. The variance of experience was low in this group, probably because they had not (yet) encountered that many experiences at their TEI. It is therefore still interesting to include experience in the model and investigate its added value in a more diverse group of pre-service teachers.

Figure 3. The SQD-model to prepare pre-service teachers for technology use (Tondeur et al., 2012).

Pedagogy. The developers of the WST-model suggest that the inclusion of pedagogy as a separate construct can improve the model (Knezek & Christensen, 2016). For the purpose of the model, they defined pedagogy as teaching style, teaching approach or instructional strategy, and teachers’ confidence in the use of instructional strategies for technologies. This encompasses concepts from Pedagogical Content Knowledge (PCK), introduced by Shulman (1986) and covered by TPACK (Mishra & Koehler, 2006). PCK unites pedagogical knowledge and content knowledge, whereas TPACK adds the technological knowledge component to the pedagogy and content. Knezek and Christensen (2016) tested the renewed WSTP-model with three datasets of teachers gathered in the years 2011 (n = 1648), 2014 (n = 466) and 2015 (n = 226). The 2011 dataset included measures for will, skill and tool, and the 2014 and 2015 datasets included measures for skill, tool and pedagogy. Pedagogy was validated as a construct and respectively explained 30%, 33% and 35% of the variance of technology integration (Knezek & Christensen, 2016), thereby proving to be a good and even the strongest predictor in the tested models. In line with Farjon (2017) and Knezek, Christensen & Fluke (2003), tool had only a small effect in these models.

The WESP-model of technology integration

In the current study, the recent developments are combined and both experience and pedagogy are added to the model. Furthermore, the construct tool is eliminated from the model based on the aforementioned theoretical considerations (Farjon, 2017; Knezek & Christensen, 2016; Knezek, Christensen & Fluke, 2003). The resulting WESP-model, with will, experience, skill and pedagogy independently influencing technology integration, is presented in Figure 4.

However, it seems unlikely that the pictured relations are independent in practice and previous studies have indicated possible mediation effects between will and skill (Tondeur et al., 2012; Morales, 2006). The small effect of experience on technology integration as found by Farjon (2017) could also be due to a more complex dependency. In addition, the model could act differently in different regions and for different levels of technology integration; in less developed regions, tools can be an important predictor (Morales, 2006) and when teachers are near the highest level of technology integration, will is a more important predictor (Knezek & Christensen, 2016). Therefore, in this study, we wish to establish a more comprehensive understanding of the complex interplay between will, experience, skill and pedagogy and their influence on technology integration in education, in more developed regions.

Current study

In the current study, multiple theoretically founded models are compared. We start with a base model, based on the pilot study of Farjon (2017), with will, experience and skill independently influencing technology integration. In the second model, we add pedagogy as an independent explanatory variable. Then, we apply mediation effects: first between will and skill in models three and four. This is followed by the addition of will as mediator of the effect of experience on technology integration – model five – and the addition of skill as mediator of the effect of experience on technology integration – model six. Thus, these are the six hypothesized models:

1. Base model WES (will, experience, skill);

2. Base model WES (will, experience, skill) with added explanatory variable pedagogy (WESP);

3. WES(P), where the effect of skill on technology integration is partially mediated by will;

4. WES(P), where the effect of will on technology integration is partially mediated by skill;

5. WES(P), where the effects of skill and experience on technology integration are partially mediated by will;

6. WES(P), where the effects of will and experience on technology integration are partially mediated by skill.

The specific research questions are:

1) Which of the proposed models best explains the technology integration of pre-service teachers?

2) To what extent and how do the attitudes and beliefs of pre-service teachers (will) explain their technology integration?

3) To what extent and how do the characteristics of the curriculum at TEIs (experience) explain the technology integration of pre-service teachers?

4) To what extent and how do the competencies of technology use of pre-service teachers (skill) explain their technology integration?

5) To what extent and how does the pedagogical content knowledge (pedagogy) of pre-service teachers explain their technology integration?

These questions will be subject to both a quantitative study evaluating the ‘to what extent’ and a qualitative study evaluating the ‘how’. It is expected that pedagogy and skill have the largest effect on technology integration, following Knezek & Christensen (2016) and Morales (2006). Furthermore, it is expected that one of the mediation models explains technology integration better than the base model(s).

Method

Procedure

This study used a mixed-methods design. In the first place, a quantitative approach was used, consisting of an online questionnaire. This online questionnaire was distributed through email and through the digital learning environment at one of the largest TEIs in the Netherlands. In the online questionnaire, participants gave active consent for the use of their data in this study. In the second place, to answer the ‘how’, a qualitative approach was used, consisting of semi-structured interviews. These interviews were conducted partly at the TEI and partly through video calling, and took approximately thirty minutes each. With consent of the participants, the semi-structured interviews were audiotaped to make the data processing and analysis easier. For the interviews, participants signed an extra consent form. The study and its consent procedures have been approved by the Ethics Committee of the Faculty of Behavioural and Social Sciences at the University of Amsterdam. Data management and protection were applied following the rules of the Ethics Committee, storing the data in a safe working environment and keeping a respondent key and the data in separate files.

Sample

A selective sample of 139 pre-service teachers studying at one of the largest TEIs of the Netherlands is included in this study. First- (5.7%), second- (25.2%), third- (64.0%) and fourth-year (5.0%) students in teacher training programmes were recruited and invited to fill in the online questionnaire. Responses came from pre-service teachers who were enrolled in the bachelor programme to become elementary school teachers (5%) or in the bachelor programmes to become secondary school teachers in Dutch (12.9%), English (54.7%), French (4.3%), German (4.3%), History (18.0%), and Biology (0.7%). Of these students, 38% were male and 62% female; the mean age was 21.7 years.

In the online questionnaire, students could mark whether they would like to be invited for a semi-structured interview. The sample for the semi-structured interviews is thus self-selective. All 28 students who wanted to be invited received an email with an invitation. In the end, nine students agreed to an interview.

Operationalization and instruments

The online questionnaire consisted of 65 questions measuring the constructs will, experience, skill, pedagogy and technology integration with five- and six-point Likert scales, ranging from 1 (strongly disagree) to 5-6 (strongly agree). For technology integration, there was one question with an eight-point Likert scale. A visual representation of the theoretical base models and their operationalizations is presented in Figure 5 (WES) and Figure 6 (WESP). An English translation of the original Dutch questionnaire can be found in Appendix A.

In the first place, experience was operationalized as active learning experiences with technology in the curriculum of TEIs. This was measured with the Synthesized Qualitative Data questionnaire, a unidimensional scale of 24 questions with six-point Likert scales for six domains of the curriculum that are important for technology integration: Role Model (ROL), Reflection (REF), Instructional Design (DES), Collaboration (COL), Authentic Experiences (AUT) and Feedback (FEE) (Tondeur, van Braak, Siddiq & Scherer, 2016; Tondeur et al., 2012). Example items are ‘In my study programme I have seen many practical examples of educational ICT use’, ‘In my study programme we discussed the difficulties with ICT use in education’, and ‘In my study programme I was stimulated to gain experience with ICT in practice’. The internal consistency of this scale is high (Cronbach’s α = .98; Tondeur et al., 2016). Second, skill was operationalized as technology knowledge and measured with the Technological Knowledge (TK) scale of the TPACK-questionnaire (TPACK-NL, based on Schmidt et al., 2009). This scale has seven questions, measured on a five-point Likert scale. Example questions are ‘I can solve my own ICT-problems’, ‘I easily learn new things about ICT’, and ‘I have the technical skills I need to use ICT’. This scale has sufficient internal consistency (α = .82).

Third, will was operationalized as the attitudes towards and beliefs about technology integration in the classroom. It was measured by two questionnaires: the attitudes toward the integration of ICT scale, consisting of five questions (ATI; Avidov-Ungar & Iluz, 2014; α = .82), and the anxiety subscale of the teachers’ attitudes towards computers questionnaire, consisting of six questions (TAC; Christensen & Knezek, 2009; α = .96). Both were measured with five-point Likert scales. To be able to perform our intended analyses, TAC was reverse coded so that ATI and TAC had the same connotation. Example questions of TAC are ‘I experience ICT as intimidating’ and ‘I feel uncomfortable when I work with ICT’. Example questions of ATI are ‘ICT aligns with my ideas about the role of the teacher’ and ‘I can reinforce my (future) educational practice by using ICT’.

Fourth, pedagogy was operationalized as “an understanding of how particular topics, problems, or issues are organized, represented, and adapted to the diverse interests and abilities of learners, and presented for instruction” (Shulman, 1986, p. 8). It was measured with an adaptation of the PCK-scale (four questions) of the TPACK-questionnaire (TPACK-NL, based on Schmidt et al., 2009) and a Dutch adaptation of the Isakson Survey of Academic Reading Attitudes, which consists of seven questions (ISARA; Isakson, Isakson, Plummer, & Chapman, 2016). Both scales were measured with a five-point Likert scale. Example questions of PCK are ‘I have enough pedagogical knowledge for the course/courses I am teaching’, and ‘I can choose the right pedagogies for the courses I am teaching’. Example questions of ISARA are ‘It is important for me to read academic articles on pedagogics’ and ‘I use professional literature to solve pedagogical issues I encounter’. Both scales showed sufficient internal consistency, respectively α = .93 and α = .85.

Finally, technology integration was operationalized as knowing how to and being ready to integrate technology. This was measured with the Technological Pedagogical Content Knowledge (TPACK) core scale, consisting of eight questions measured with a five-point Likert scale (Fisser, Voogt, van Braak & Tondeur, 2013), and with a unified construct scale (COMB) of three questions: the Concerns Based Adoption Model (CBAM; eight-point Likert scale) Levels of Use (Hall and Rutherford, 1994), the Stages of Adoption of Technology (Christensen, 1997; six-point Likert scale), and Apple Classrooms of Tomorrow (ACOT; Dwyer, Ringstaff & Sandholtz, 1989; five-point Likert scale). All items in the unified construct require respondents to choose the described level of ICT integration that best fits their actual integration. For TPACK, example questions are ‘I am able to choose ICT applications that support the content of the course(s) I am teaching’ and ‘I am able to choose ICT applications that reinforce what and how I teach’. Both the TPACK core (α = .94) and the unified construct of CBAM, Stages of Adoption and ACOT (α = .84) have high internal consistency (Farjon, 2017; Hancock, Knezek & Christensen, 2007).

For the semi-structured interviews, the same constructs and operationalizations were used. The interview items were based on the items of the online questionnaires and were designed to clarify findings of the quantitative study. Questions were asked about (a) pre-service teachers’ experiences in the teacher training programme, (b) their attitudes towards technology integration in education, (c) their technological skills, (d) their pedagogical skills, and (e) their perceived technology integration in the classroom. Examples of these questions are ‘What do you think of the preparation your study programme offers you to use ICT in your education?’, ‘What kind of feeling do you get when you think of integrating ICT in education?’, ‘Do you feel that you have the skills that are necessary for the integration of ICT in education?’, ‘Do you think that pedagogics are related to the integration of ICT in education?’, and ‘Can you describe your level of ICT-integration?’. An English translation of the original Dutch interview protocol can be found in Appendix B.

Statistical analyses

Mokken Scale Analysis. Because we use many different questionnaires to measure our constructs, we start the statistical analyses with a Mokken Scale Analysis (MSA; Mokken, 1971) for each separate scale and for the combination of all scales within one construct. MSA is a useful method to identify (sub)scales (Sijtsma & van der Ark, 2016) and to investigate the fundamental measurement properties of assessments (Wind, 2017). Thus, it can give insight into whether all items in predetermined measurement scale(s) measure the same underlying (latent) construct. It is a nonparametric approach to Item Response Function (IRF) theory, which is important in the social sciences, where using parametric methods can be doubtful, given that underlying response patterns are often not well understood (Wind, 2017). The non-parametric models use ordinal scales for persons and items, based on the observed scores. Three assumptions underlie MSA. First, the assumption of unidimensionality indicates a single latent construct measured by all items in a scale. Second, monotonicity implies that when someone has a higher score on the latent construct, it should be more likely that the observed scores of this person represent typical scores at a higher level of the latent construct, compared to someone who has a lower score on the latent construct. Third, local independence means that the observed items are independent, except for covariance that is due to the latent construct.

MSA provides scalability coefficients, denoted H, that separate items of low and high quality in relation to the test-score distribution (Sijtsma & van der Ark, 2016). In the current study, MSA is performed with an automated item selection procedure (AISP) in RStudio Version 1.1.383 with R version 3.5.1, package mokken (van der Ark, 2007). AISP identifies one or more scales, selects items for each scale and identifies deviating items that might not be suitable to measure the underlying latent construct(s). AISP uses scalability coefficients for pairs of items (Hjk) and individual items (Hj) (Mokken, 1971; Sijtsma & Molenaar, 2002). Furthermore, a scalability coefficient H for the total scale is calculated. The following rules of thumb are applied: .3 ≤ H < .4 is considered a weak scale; .4 ≤ H < .5 a medium scale; and H ≥ .5 a strong scale; a set of items for which H < .3 is seen as unscalable (Sijtsma & Van der Ark, 2016). Because MSA only works with datasets without missing data, we deleted the seven cases in our dataset with missing values for this purpose (but not for the other analyses).
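To make this procedure concrete, the sketch below shows how such an analysis could be run with the mokken package. This is an illustrative example only, not the study’s actual script: the data frame questionnaire and the item names (TAC1–TAC6, ATI1–ATI5) are hypothetical placeholders.

```r
# Illustrative Mokken Scale Analysis with the 'mokken' package (hypothetical item names).
library(mokken)

# Listwise deletion of cases with missing values, as MSA requires complete data
will_items <- na.omit(questionnaire[, c(paste0("TAC", 1:6), paste0("ATI", 1:5))])

# Automated item selection procedure: partitions the items into Mokken scales,
# treating items with H < .3 as unscalable
scales <- aisp(will_items, lowerbound = 0.3)
print(scales)

# Scalability coefficients for item pairs (Hjk), single items (Hj) and the total scale (H)
coefH(will_items)

# Check of the monotonicity assumption
summary(check.monotonicity(will_items))
```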

Structural Equation Modelling. For the quantitative research questions of the study, structural equation modelling is the preferred method of analysis. With structural equation modelling, we are able to compare models and use mediation effects. After checking whether the model is identified, whether the data is (multivariate) normally distributed, and whether there are outliers, the hypothesized path models are tested using RStudio Version 1.1.383 with R version 3.5.1, package lavaan (Rosseel, 2012). To fit a structural equation model, first the measurement model has to be fitted to the data, allowing only for covariances between the latent constructs and not implying any other relationships between them. Therefore, the measurement models for WES and WESP are separately fitted to the data. Identification is assured using Unit Loading Identification (ULI) constraints for each factor. The measurement models are in the first place fitted without any residual covariances. Then, we allow for model modification, using residual correlations > .1 and significant modification indices (using a significance level of α < .05) to indicate possible modifications. Only residual covariances between items within the same scale are allowed in this stage, to foster interpretability. Based on their fit, the measurement model of either WES or WESP is selected for our subsequent analyses. We assess model fit using the χ2-test of exact fit, the Root Mean Square Error of Approximation (RMSEA; including the 90% confidence interval) and the Comparative Fit Index (CFI). A good model fit consists of a non-significant χ2. RMSEA < .05 indicates close fit, and RMSEA < .08 acceptable fit. A CFI > .90 indicates good fit. Individual parameters are considered significant at α < .05. The estimation procedure used is maximum likelihood (ML), which yields parameter estimates and model fit indices.
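As an illustration of this step, a minimal lavaan sketch for a WES measurement model could look as follows. The data frame questionnaire and the abbreviated item names are hypothetical, and only a few indicators per factor are written out; the study’s actual specification (e.g. the exact indicators and residual covariances) may differ.

```r
# Minimal sketch of a WES measurement model in lavaan (hypothetical variable names;
# the real model uses all questionnaire items as indicators).
library(lavaan)

wes_measurement <- '
  will        =~ tac1 + tac2 + tac3 + tac4 + tac5 + ati1 + ati2 + ati3 + ati4 + ati5
  experience  =~ sqd1 + sqd2 + sqd3 + sqd4
  skill       =~ tk1 + tk2 + tk3 + tk4 + tk5 + tk6 + tk7
  integration =~ ti1 + ti2 + ti3 + ti4
'

# lavaan fixes the first factor loading of each factor to 1 by default, which corresponds
# to the Unit Loading Identification constraints described above
fit_meas <- cfa(wes_measurement, data = questionnaire, estimator = "ML")

fitMeasures(fit_meas, c("chisq", "df", "pvalue", "cfi",
                        "rmsea", "rmsea.ci.lower", "rmsea.ci.upper"))

# Candidate residual covariances for model modification
modificationIndices(fit_meas, sort. = TRUE, minimum.value = 3.84)
```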

When the fit of the measurement model is acceptable, the structural path models are included and their fit is assessed. When we compare the fit of nested models, a χ2-difference test is used. Since two models with mediation effects – those with mediations between will and skill – are so-called ‘equivalent models’, we cannot directly compare their model fit as it will be equal by definition (MacCallum, Wegener, Uchino, & Fabrigar, 1993). Therefore, we also check the model parameters and use the qualitative data to gain more insight into the direction of the relationship between will and skill.

The required sample size for structural equation modelling depends on many factors (Wolf, Harrington, Clark & Miller, 2013). However, as a rule of thumb, 10-15 respondents per estimated parameter is often seen as the minimum. Having fewer participants decreases the power to detect effects. Therefore, to increase power, when the measurement model fits the data properly and the factor structure thus holds with an indicator for each question of the questionnaire, we continue our analyses with the sum scores of the questions per scale. This yields a smaller number of estimated parameters and thus increases power.
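A simplified sketch of how the base model and one of the mediation models could be specified on these sum scores and compared is given below; the data frame sum_scores and its column names are hypothetical, and the exact specification used in the study (for example the residual covariance added later between the SQD and ATI scales) may differ.

```r
# Illustrative path models on scale sum scores (hypothetical data frame 'sum_scores'
# with columns will, experience, skill and integration).
library(lavaan)

model_base <- '
  integration ~ will + experience + skill
'

# Model 2: the effect of skill on technology integration partially mediated by will
model_mediation <- '
  will        ~ a * skill
  integration ~ b * will + c * experience + d * skill
  indirect := a * b      # indirect effect of skill via will
  total    := d + a * b  # total effect of skill
'

fit_base <- sem(model_base,      data = sum_scores, estimator = "ML")
fit_med  <- sem(model_mediation, data = sum_scores, estimator = "ML")

anova(fit_base, fit_med)                   # chi-square difference test
parameterEstimates(fit_med, level = 0.90)  # unstandardized estimates with 90% CIs
standardizedSolution(fit_med)              # standardized estimates (effect sizes)
```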

Qualitative analyses. For the qualitative research questions of the study – the ‘how’ – the audiotapes of the semi-structured interviews have been transcribed verbatim. We used two cycles of open and axial coding, and a within-case and cross-case analysis (Miles & Huberman, 1994). First, there was an open coding cycle, in which we created a list of codes that was derived from the data. This list was subject to revision whenever a new interview was coded, and already coded interviews were frequently revisited based on new codes. Using the created list of codes, axial coding was performed. In the axial cycle, the codes were clustered and categorized, resulting in overarching codes and subcodes and a list with descriptions of the codes. Coding was applied using Atlas.ti version 8.2. The final list of codes and descriptions can be found in Appendix C.

For each participant, we created a case report and looked for relationships between the variables and noticeable findings within the case. These individual case reports were the basis for the cross-case analysis, in which we looked for patterns between the cases. The coding and analyses were primarily conducted by one of the researchers. To assure reliability of the coding procedure, the coding scheme, interpretations and choices were frequently discussed with the other involved researchers.

Results

In this section, the outcomes of the quantitative and qualitative study are separately discussed, starting with the quantitative part. In this first part, we discuss the outcomes of the Mokken Scale Analysis and the descriptive statistics of the scales, followed by the model fit comparison of the hypothesized models. Then, we describe two possible final models and their estimated parameters. In the second, qualitative part, we start with the findings for technology integration, which provide more in-depth information on the relations between technology integration and will, experience, skill and pedagogy. Finally, we provide an exploration of the findings within the constructs will, experience, skill and pedagogy, investigating more specific relevant elements of these constructs and laying the foundations for practical recommendations for TEIs.

Quantitative study

Mokken Scale Analysis and Descriptive Statistics. An exploratory Mokken Scale Analysis (MSA) was performed on the combined scales for each separate construct. For will, both TAC and ATI were identified as separate scales, although the sixth item of TAC fell out of both scales. This indicates that this item might better be removed from the questionnaire. Removing this item, for TAC H = .63 (strong), and for ATI H = .63 (strong). For Experience, the SQD-questionnaire was found to be a one-scale questionnaire (H = .45; medium strength). In addition, for Skill, the T-scale was found to be a one-scale questionnaire (H = .60; strong). Furthermore, for Pedagogy, PCK and ISARA were found to be separate scales (H = .55 and H = .53; strong). For Technology Integration, TPACK and the combined technology measurement were identified as one scale (H = .50; strong). It thus appears to be possible to use all items of TPACK and the combined measurement as one scale of ‘Technology Integration’ in the measurement model, which we applied in the subsequent analyses. However, because CBAM was measured with an eight-point Likert scale and Stages of Adoption with a six-point Likert scale, the scores on these items were linearly transposed to fit the five-point Likert scale of the other items.
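As an illustration, such a linear transposition can be expressed with a small helper function (a hypothetical sketch, not the study’s actual code):

```r
# Linearly rescale a score x from a k-point scale (1..k) to the 1-5 range of the other items
rescale_to_five <- function(x, k) {
  1 + (x - 1) * (5 - 1) / (k - 1)
}

rescale_to_five(8, k = 8)  # highest CBAM level maps onto 5
rescale_to_five(4, k = 6)  # Stages of Adoption score 4 maps onto 3.4
```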

In addition, we performed tests of internal consistency with Cronbach’s α. These indicated satisfactory to excellent reliability of the scales (see Table 1). However, for TAC it was found that removing the sixth item would lead to a higher internal consistency of the scale. This finding, combined with the outcomes of the MSA, led us to definitively remove the sixth item from the scale and to continue the analyses with a five-item scale. In the descriptive statistics of Table 1, item 6 has already been removed. The descriptive statistics of the data distribution indicate that on average, respondents rated their technology knowledge predominantly positive (T-scale), had predominantly positive attitudes and beliefs (TAC and ATI), were slightly negative about their experiences at the TEI (SQD), were predominantly positive about their pedagogical content knowledge (PCK), were slightly positive about their academic reading on pedagogy (ISARA) and rated their technology integration predominantly positive (Technology Integration).
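The reliability check, including the ‘alpha if item dropped’ inspection that motivated removing the sixth TAC item, could be run as in the following sketch; the psych package and the data frame tac_items are illustrative assumptions rather than the study’s documented code.

```r
# Cronbach's alpha for the six TAC items (hypothetical data frame 'tac_items')
library(psych)

rel <- alpha(tac_items)
rel$total$raw_alpha   # alpha of the full six-item scale
rel$alpha.drop        # alpha if each item is dropped; motivated removal of item 6
```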

Table 1

Scale | N | Min | Max | Median | Mean | Cronbach’s α
T-scale | 139 | 1.43 | 5.00 | 3.43 | 3.42 | .90
TAC* | 139 | 1.00 | 5.00 | 4.01 | 4.00 | .90
SQD | 139 | 1.00 | 5.75 | 3.08 | 3.07 | .95
ATI | 139 | 1.20 | 5.00 | 3.40 | 3.40 | .86
PCK | 132 | 2.25 | 5.00 | 3.75 | 3.72 | .77
ISARA | 132 | 1.00 | 5.00 | 3.00 | 2.99 | .87
Technology Integration** | 128 | 1.13 | 4.84 | 3.45 | 3.43 | .91

* reversely coded; ** TPACK, ACOT, CBAM (linearly transposed) and Stages of Adoption (linearly transposed)

Model evaluation.

Measurement model. We fitted the measurement models for the WES and WESP model separately. In the measurement models, we did not include covariances between the observed variables – the questions in the online questionnaire – in the first place, but allowed for model modification after fitting the model without these covariances. In the modification process, using residual correlations and modification indices as described in the methods section, we allowed for the addition of covariances between the observed variables within the same measurement scales. This resulted in the WES-model with an acceptable model fit: χ2 = 1806.593 (df = 1211, p < .001), CFI = .88, RMSEA = .060 [.054; .065]. For the WESP-model, we used the same procedure, which also resulted in an acceptable model fit: χ2 = 2877.04 (df = 1822, p < .001), CFI = .82, RMSEA = .065 [.060; .069]. We cannot test whether the difference between both models is significant with the χ2-difference test, because the WESP-model uses extra data for pedagogy and therefore the models are not nested. Nevertheless, the fit of the WESP-model appears to be worse than the fit of the WES-model, as the χ2 and RMSEA are higher and the CFI is lower. In addition, the factor loadings of the PCK-scale and ISARA-scale were not significant (respectively λ = .128, p = .239 and λ = .156, p = .224). Thus, the scales are not very indicative of the common factor pedagogy on which they load. Therefore, it appears that pedagogy is not a stable factor in our model and also does not improve the model. On these grounds, the decision was made to continue the subsequent analyses with the WES measurement model.

Structural model. Based on the MSA and the fitting of the measurement model, it can be concluded that the proposed factor model with will, experience and skill holds. Therefore, in the fitting of the structural model, we used the sum scores of the measurement scales as the indicators, instead of the individual questions. First, we fitted the hypothesized models to the data, without allowing for covariances between the indicators. For each model, we inspected the residual correlations and modification indices. This led us to include the residual covariance between the SQD-scale and the ATI-scale as a parameter in the models. In Table 2, the model fit of the five models and the applied χ2-difference tests are presented.

Table 2

Model fit (χ2 (df, p); RMSEA [95% CI]; CFI) and χ2-difference tests for the fitted models:

Model 1 (WES base): χ2 = 72.474 (df = 4, p < .001), RMSEA = .35 [.28; .43], CFI = .61; χ2-difference with model 2: 67.024 (df-difference = 1, p < .001)
Model 2 (Mediation Skill – Will): χ2 = 5.450 (df = 3, p = .14), RMSEA = .08 [.00; .18], CFI = .99; χ2-difference with model 1: 67.024 (df-difference = 1, p < .001)
Model 3 (Mediation Will – Skill): χ2 = 5.450 (df = 3, p = .14), RMSEA = .08 [.00; .18], CFI = .99; χ2-difference with model 1: 67.024 (df-difference = 1, p < .001)
Model 4 (Mediation Experience & Skill – Will): χ2 = 5.393 (df = 2, p = .07), RMSEA = .12 [.00; .23], CFI = .98; χ2-difference with model 1: 67.081 (df-difference = 2, p < .001); χ2-difference with model 2: .06 (df-difference = 1, p = .81)
Model 5 (Mediation Experience & Will – Skill): χ2 = 4.313 (df = 2, p = .12), RMSEA = .09 [.00; .21], CFI = .99; χ2-difference with model 1: 68.161 (df-difference = 2, p < .001); χ2-difference with model 2: 1.14 (df-difference = 1, p = .29)

As indicated in Table 2, models 2, 3, 4, and 5 had a significantly better fit than model 1. The non-significant χ2 and the value of CFI of models 2 to 5 indicate good fit. However, RMSEA indicates poor fit. This could be due to the fact that RMSEA has the tendency to falsely indicate poor fit too often for models with small df and small sample size (Kenny, Kaniskan, & McCoach, 2014). Therefore, we can still conclude that the model fit of those models is sufficient. Comparing the model fit of models 2 and 3 (equivalent fit) with models 4 and 5, the χ2-difference tests did not indicate a significant difference. Thus, adding a mediation effect of either skill or will on the effect of experience on technology integration does not seem to improve the model. Models 2 and 3 are therefore the most parsimonious models for our data. Although goodness of fit in structural equation modelling is assessed by model fit measures such as χ2, CFI and RMSEA, it is also possible to calculate R2. In models 2 and 3, R2 = .51, thus they explain 51% of the variance in technology integration.
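In lavaan, this R2 can be obtained directly from the fitted model object, for example (with fit_med denoting a fitted model object as in the earlier, hypothetical sketch):

```r
# Proportion of explained variance per endogenous variable in the fitted lavaan model
lavInspect(fit_med, "rsquare")
```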

These outcomes imply that the relation between skill and will is more complex than suggested by their independence in the original WES(T)-model. However, due to model equivalence, we could not compare model fit between models 2 and 3 and thus cannot make statements about the direction of the effect between will and skill. Both models are pictured (simplified) in Figures 7a and 7b.

Taking a closer look, in both mediation models, the direct effect of skill on technology integration is not significant. This could either mean that there is an indirect effect of skill on technology integration, through will (Model A), or that there is no evidence for an effect of skill on technology integration (Model B). Theoretically, it appears more evident that Model A better fits reality, as, for example, skill has explained about 30% of the variance of technology integration in earlier research (Knezek & Christensen, 2016). We will further investigate which model seems more appropriate in the qualitative data analysis.

Figure 7a. Model A: WES-model with mediation of will on the effect of skill on technology integration.

Figure 7b. Model B: WES-model with mediation of skill on the effect of will on technology integration.

Note. Dotted lines indicate non-significant effects. *p < .05. R2 = .51.


Final models. The estimated unstandardized parameters, their 90% Confidence Intervals (CI) and the standardized parameters of the two possible mediation models are presented in Table 3. For each unstandardized parameter, when the exogenous – independent – variable (on the vertical axis) increases by one unit, the endogenous – dependent – variable (on the horizontal axis) increases by the parameter estimate, conditionally on the other variables. For example, in Model A, when will increases by one unit, technology integration increases by .490, conditionally on the other variables. So, when pre-service teachers have reported more positive attitudes and beliefs towards the integration of technology in education, their self-reported technology integration is significantly higher.

For each standardized parameter, when the Standard Deviation (SD) of the exogenous variable increases by one unit, the SD of the endogenous variable increases by the parameter estimate, conditionally on the other variables. In addition, standardized parameters can be interpreted as effect sizes, with an effect of < .1 indicating a very small effect, .1 – .2 indicating a small effect, .3 – .4 indicating a medium effect and > .4 indicating a large effect. For example, in Model B, when the SD of experience increases by one unit, the SD of technology integration increases by .412, conditionally on the other variables. This is a large effect. Thus, the size of the effect that experience, the quantity and quality of experiences with technology in the TEI, has on the self-reported technology integration is large.
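To illustrate how the direct and indirect effects in Table 3 relate to each other, the usual mediation decomposition can be written as follows. This is a worked check using the Model A estimates, where a is taken to denote the path from skill to will, b the path from will to technology integration, and c' the direct path from skill to technology integration; these symbol names are introduced here for illustration and do not appear in the original tables.

```latex
\[
\text{indirect effect} = a \times b = 0.896 \times 0.490 \approx 0.439,
\qquad
\text{total effect} = c' + a \times b .
\]
```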

Table 3

Model A
Exogenous variable | Will: Unstd. [90% CI] | Will: Std. | Technology Integration: Unstd. [90% CI] | Technology Integration: Std.
Will | | | .490** [.162; .818] | .644
Experience | | | .315*** [.215; .414] | .412
Skill, direct | .896*** [.793; 1.199] | | -.069 [-.434; .296] | -.080
Skill, indirect | | | .439* [.084; .795] | .513
Skill, total | | | .754** [.385; 1.123] | .925

Model B
Exogenous variable | Skill: Unstd. [90% CI] | Skill: Std. | Technology Integration: Unstd. [90% CI] | Technology Integration: Std.
Will, direct | .699*** [.579; .820] | .796 | .484** [.167; .802] | .644
Will, indirect | | | .220*** [.141; .299] | .328
Will, total | | | .704*** [.366; 1.043] | .972
Experience | | | .315*** [.215; .414] | .412
Skill | | | -.069 [-.434; .296] | -.080

Note. *p < .05, **p < .01, ***p < .001. Unstd. = unstandardized parameter estimate, Std. = standardized parameter estimate.

Qualitative study

Participants. An overview of the characteristics and scale scores of the nine participants of the qualitative study is given in Table 4. The self-perceived level of technology integration as indicated in the online questionnaire is given in column IntegrationQ. In the interviews, we asked participants to grade their technology integration on a scale from 1 to 10. These grades are presented in the column IntegrationI. Interestingly, only a relatively small correlation of ρ = .34 between IntegrationQ and IntegrationI was detected. It is therefore unclear whether the grading in the interviews yielded reliable indications of the actual technology integration.

Table 4

Name | Sex | Age | Study | Year | TAC* | ATI | T-scale | PCK | ISARA | SQD | IntegrationQ | IntegrationI
Alan | M | 23 | English | 2 | 3.80 | 3.00 | 4.14 | 4.00 | 1.50 | 3.75 | 3.46 | 6
Becky | F | 17 | English | 2 | 3.46 | 3.00 | 3.14 | NA | NA | 2.29 | 3.84 | 6-7
Chloe | F | 19 | English | 3 | 2.20 | 3.00 | 2.14 | 3.00 | 3.00 | 2.42 | 3.24 | 6.5
Diane | F | 21 | English | 3 | 4.20 | 3.80 | 3.86 | 2.75 | 2.50 | 3.58 | 3.97 | 7
Eric | M | 20 | History | 3 | 4.80 | 4.60 | 4.71 | 4.00 | 3.67 | 2.50 | 3.55 | 8
Filip | M | 20 | Elementary | 3 | 5.00 | 3.80 | 4.86 | 3.25 | 3.00 | 2.67 | 4.13 | 7.5-8
Gabi | F | 22 | Elementary | 4 | 4.40 | 3.00 | 2.86 | 2.75 | 4.00 | 2.33 | 2.70 | 7
Hugo | M | 21 | History | 3 | 4.40 | 3.60 | 3.43 | 4.00 | 3.83 | 3.08 | 3.82 | 7.5
Ilsa | F | 25 | Dutch | 3 | 4.40 | 4.60 | 3.86 | 4.00 | 1.83 | 3.38 | 3.34 | 6
Mean | | 20.9 | | | 4.07 | 3.60 | 3.67 | 3.47 | 2.92 | 2.89 | 3.56 | 6.91

*Note. TAC is reversely coded.

Furthermore, no large deviations were found between the means of the measurement scales for this subgroup of nine participants and the means for the whole sample (Table 1). It thus appears that the subsample for the qualitative analysis is representative of the whole sample.

The following results are based on the semi-structured interviews with all nine respondents. We first discuss the findings for the concept technology integration and the factors that contribute to a successful integration, as suggested by the respondents. This provides in-depth insight into how the constructs of will, experience, skill, pedagogy and technology integration are related. Although in the quantitative analyses we proceeded with a model without pedagogy, it is still included here, because the qualitative analysis might provide more insight into the relevance and definition of this construct. Then, we explore the findings for the constructs will, experience, skill and pedagogy in more detail. This provides insight into the elements of these constructs that are important for technology integration and concretizes the needs of pre-service teachers in this regard.

Technology Integration: what is happening in the classroom of pre-service teachers

WESP. Considering what is most important for technology integration in the classroom, the respondents gave diverse responses. Four of them are similar, as they indicated that you need to know what your goal is, whether it is of added value for the students, and why you do it. One of them explicitly mentioned TPACK and stated that pedagogy, content and technology have to be interwoven to make up a harmonious whole. The other respondents believed that it is most important that everything functions, that the people behind education (e.g. policy makers at the national level) have positive attitudes, that teachers have a self-exploring attitude and are not afraid to try new things, that technology is used as an auxiliary tool and not as the basis for education, and what the policy of the school is.

More specifically, when they were asked to choose between will, experience, skill and pedagogy, seven respondents chose will as the most important factor influencing technology integration, two chose experience and one chose pedagogy. The respondents who chose will feel that will is the starting point; when you are positive, you are more inclined and less afraid to try new things: ‘you have to turn the switch […] when you are more willing to do something, it is easier to do it in an effective way’ (Becky). One respondent linked will to experience and sees a role for the TEI to improve the attitude of pre-service teachers. The two respondents who chose experience feel that it is most important that the TEI provides necessary and useful tools which the pre-service teachers can use in practice. Finally, one respondent stated that pedagogy, knowing what your technology adds to the learning experience, is most important.

Linking these findings to the quantitative findings, the prominent role of will is confirmed, followed by an important role of experience, reflecting the relevance of the TEI. Furthermore, there is a marginal role of skill. When asked more specifically about skill, respondents often indicated that technology integration is only a little or not at all about acquiring technical skills, but more about confidence, not being afraid to try new things and to fail once in a while. This again points in the direction of will. Therefore, the qualitative data appear to support the mediation model in which the effect of skill on technology integration is fully mediated by will. In addition, the role of pedagogy and Pedagogical Content Knowledge should not be marginalized: according to the respondents, knowing what the goal of technology is and what it adds is important for successful technology integration. Thus, the concept of pedagogy seems important, but might need another operationalisation and measurement to gain a significant position in the theoretical model.

Level of integration and use. The respondents were asked to indicate their level of technology integration in the classroom by providing a grade between 1 and 10. They gave themselves grades between 6 and 8. The higher grades were mainly given by respondents who identified themselves as ‘progressive’, ‘not afraid’ and ‘willing to experiment’, whereas lower grades were given by respondents who identified themselves as ‘knowing the basics’, ‘doing what we all know’, and ‘only working with what I already know’. Secondly, respondents were asked to name the largest barriers for technology integration in the classroom. Most frequently, they mentioned materials and conditions: malfunctioning laptops, iPads and WiFi, unavailable laptops or iPads, slow WiFi, outdated tools, and not being able to download programmes. This is an interesting finding, as it points towards contextual factors that are not taken into account by the WESP-model or by the previous (WES, WST, WSTP) models. Some of these factors might be covered by the construct tool, but with a different conceptualization than in the WST-model: access to technology and technological conditions in the classroom. Furthermore, the teacher training programme and lack of knowledge were mentioned as barriers: “there is not enough attention paid to it, causing that you have to figure out a lot by yourself” (Eric) and “I just don’t know all the possibilities” (Ilsa). This could be handled by the TEI, through the facilitation of knowledge about and skills on technology integration.

Third, most situations in which technology was integrated were teacher-centred. Technology is mainly used for clarifying instruction, for activating knowledge and as a reward: “you have worked well, now we can do a quiz” (Becky). One respondent uses technology in a more student-centred way, for example by using formative tests and facilitating individual practice on laptops. He (Eric) notes that, in his opinion, technology is too often used as some kind of reward, as a ‘little party’ without a broader goal, and less often for the more effective purpose of having students work independently.

Experience

Most respondents noted that their general preparation for technology integration in the teacher training programme is minimal. Tools and gadgets are presented to them, but these are often not the most relevant or easily usable ones. For example, there is a lab at the TEI where pre-service teachers can try out several technological gadgets, but the primary and secondary schools where the respondents (will) work do not own these gadgets. Also, many of these gadgets are not seen as useful in secondary education, as they are mostly ‘toys’. However, there are some positive notes. For example, individual teacher educators do show useful tools and try to inspire their students. Still, the students have to explore the practical application of these tools themselves. It appears that for some of the students, receiving only this inspiration and having to translate it to their own practice is sufficient. However, the majority of the students seem to need more concrete examples and practical applications.

In line with this, most respondents do not see the teacher educators as role models. There are few practical examples, with the exception of the basics: using pictures and videos, PowerPoint, and quiz programmes such as Kahoot. Two students feel that enough examples were presented to them, but these students are also more negative about the integration of technology in education and therefore probably do not feel the need to learn more about technology. In addition, most respondents did not experience (enough) opportunities to reflect on the use of technology in education, assistance in instructional design with technology, or feedback on their technology use, although such feedback is provided more frequently at the primary and secondary schools during the internships.

Asked what they would like to improve, respondents frequently and explicitly expressed the need for more examples of specific tools and their application in practice. They want to learn more about the possibilities technology can offer. Specific applications they would like to learn more about are ‘everything that is not Word, PowerPoint or Prezi’ (Becky), ‘digital blackboard’ (Diane), ‘student tracking systems’ (Diane), ‘programming’ (Filip), ‘video recording’ (Chloe) and ‘video editing’ (Becky). Taking this into account, it seems reasonable to recommend that the TEI facilitate these examples.


Will

The majority (six) of the respondents demonstrated a positive attitude towards the integration of technology, but they also showed doubts and insecurities. These respondents were enthusiastic about the possibilities of technology for education and in this context mainly highlighted the increased motivation of their students as a result of technology use. Their doubts and insecurities indicate that they “do not always know what you have to do” (Becky). In contrast, three respondents were generally neutral to negative towards the integration of technology in education. One feels that “It is ok, you always have some technology use, but I would not like the digitalization of education” (Alan). Another respondent does not trust technology: “A lot can go wrong with technology in your lesson, you could have a power failure, and then you need to have a back-up plan” (Chloe).

Some of the respondents with a more positive attitude implied that their teacher training programme has been an inspiration and has helped spark their attitudes: “the programme gives you a push, but it is a small push, so it won’t reach every student” (Diane). The majority of the respondents, however, agreed that their attitudes are more related to their personal interests and intrinsic motivation, and that the TEI could strengthen positive attitudes, especially in their more negative peers, by showing more practical examples and offering more technology in the curriculum.

In addition, all respondents stated that technology use can reinforce educational practices. The largest contribution of technology would be stimulating the motivation and enthusiasm of students. Besides this, respondents mentioned ‘better remembering’, ‘creating an extra dimension’, ‘differentiation’ and ‘programming’ as contributions technology can make to education. However, pitfalls were also illustrated. For example, Eric stressed the importance of knowing the purpose of technology use: “You have to think about why you use technology, and not [only use it] because it is the future”. Two other respondents have experienced that computer and iPad use leads to distraction and that the teacher has no control over what is happening on the computers: “When students have a book, you know it when they are working or when they are distracted. When students work on a computer, you never know what they are doing, because they are quiet anyway.” (Gabi).

Although they believe in the reinforcement of educational practices through technology, not all respondents feel confident with all forms of technology. Respondents repeatedly mentioned that their confidence depends on the type of technology: when they have used a tool before or are familiar with it, they feel more confident than with new tools. Some respondents therefore only use what they know and believe that the TEI could assist them in gaining more confidence: “they should offer more and teach us to use it well, if you know it and you become familiar with it, you will feel more secure to use it” (Ilsa).

Skill

When asked about the skills necessary for technology integration, all but two respondents believed they have the necessary skills. However, a majority of the respondents implied that it is not merely a skill you need, but that it is all about practice, confidence, enthusiasm, a positive attitude and the fearlessness to just try things out. Nevertheless, five respondents also indicated that you need some technical skills: you need to know something about computers and how to solve technical problems. In addition, four respondents mentioned that being able to reflect on the purpose of your technology use is an important skill. These necessary skills are taught in the teacher training programme to a certain extent: “you are being challenged, resulting in more experience, but this differs per student” (Diane) and “they inspire you to dare and try new things in a general sense, but not specifically for technology” (Ilsa).

Pedagogy

In contrast with the quantitative findings, the respondents believed that pedagogical content knowledge is an important contributor to technology integration. They referred to TPACK and reflection: “you have to have insight into whether the technology you apply reinforces your education” (Chloe) and “teachers should not be afraid to look back: did you make the right decision and why did you choose to use technology or not” (Diane).

Discussion

The current study aimed to investigate to what extent and how will, experience, skill and pedagogy (WESP) explain pre-service teachers’ preparedness to integrate technology in education, in order to facilitate a more comprehensive understanding of the influences on technology integration in education. A mixed-methods design was used to gain extensive insights into the relationships between the constructs. We proposed five possible theoretically founded models. The findings of the quantitative study first of all indicate that a factor model with will, experience and skill (WES) fitted the data better than a factor model with will, experience, skill, and pedagogy (WESP). Therefore, the subsequent analyses were performed with the WES-model. The findings of the structural equation modelling point towards two possible mediation models. As these are so-called ‘equivalent models’, it was not possible to compare their model fit. In the first model, experience and will independently influence technology integration, whereas the effect of skill on technology integration is fully mediated by will. In the second model, experience and will independently influence technology integration, whereas will influences skill, but there is no evidence for any effect of skill on technology integration.
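To make the difference between these two equivalent models concrete, their structural parts can be sketched as regression equations. This is only an illustrative sketch: the symbols W (will), E (experience), S (skill) and TI (technology integration) and the β coefficients are placeholders, not the estimates reported in this study. Model 1 (skill fully mediated by will): TI = β1·W + β2·E + ε1, with W = β3·S + ε2. Model 2 (will predicts skill, no path from skill to technology integration): TI = β1·W + β2·E + ε1, with S = β3·W + ε2. Because the models are equivalent, they imply the same covariance structure, which is why their fit cannot be compared.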

Looking at the ‘to what extent’ question for the specific influences of will, experience and skill: in both mediation models the direct effect of will was the strongest, with a large effect size; the effect size of experience was also large; and the direct effect of skill was not significant. Because pedagogy was not tested in the final model, we cannot answer the quantitative question for this construct. In retrospect, our hypotheses were partly confirmed: we did find the expected better fit of a mediation model compared to a base model. However, we did not find an effect of pedagogy, and the effect of skill on technology integration was not large and only indirect. Furthermore, our study has provided evidence that experience is a relevant and significant predictor of technology integration in the model, which Farjon (2017) could not yet establish.

Our findings contrast with previous research, such as the studies of Morales (2006) and Agyei and Voogt (2011), which found skill to be the strongest predictor. This could be because (pre-service) teachers nowadays already have more basic technical knowledge and skills. Considering other previous studies, Morales (2006) did find that the best fitting model was our second model, but in that study the direct effect of skill on technology integration was significant as well. The non-significance of this direct effect in our second model makes the first mediation model, with a significant indirect effect of skill, more appealing, especially since skill has been found to explain over 30% of the variance in technology integration in other studies (Knezek & Christensen, 2016).

Adding to the quantitative outcomes, the findings of the qualitative study also point towards a primary influence of will and a large influence of experience, accompanied by a small influence of skill and pedagogy on technology integration. Most respondents named will as the most important factor for technology integration in the classroom, followed by experience. When asked about skills, some respondents named not being afraid, daring to take risks and try new things, and enthusiasm as most important. This reflects the close ties between will and skill. In addition, the role of pedagogy should not be marginalized. The interviewed respondents do believe that knowing how to use technology from a pedagogical perspective is an important asset for successful technology integration. Therefore, although our model did not identify pedagogy as a stable factor, future studies should look for ways to define and operationalize pedagogy in a more suitable way and include it in the model. For example, they could follow
