
Machine learning as computational thinking in primary school education

Layout: typeset by the author using LaTeX.


Roos J. Vervelde 11278218

Bachelor thesis
Credits: 18 EC

Bachelor Kunstmatige Intelligentie

University of Amsterdam
Faculty of Science
Science Park 904
1098 XH Amsterdam

Supervisor: dr. B. Bredeweg
Informatics Institute
Faculty of Science
University of Amsterdam
Science Park 904
1098 XH Amsterdam

June 26th, 2020

Abstract

This thesis discusses how computational thinking skills can be acquired by primary school students through teaching about artificial intelligence, classification in particular. A digital tool is created that consists of four components: an image classifier, a learning method, a system for individualized formative assessment and a teacher-interface. The classification model uses a convolutional neural network to train an image classifier with images collected by the student. The training time has to be short to keep the student engaged. Furthermore, data augmentation handles the small amount of data that is foraged. This classifier is embedded in a learning method, which uses global variables for questions, narration and parameters to create an interactive tkinter program. Classification is found to be a relevant subject when teaching computational thinking skills. The classifier is accessible to students, and shows the most important components necessary to understand the concept of machine learning.


Acknowledgements

I would like to thank my supervisor dr. B. Bredeweg for giving me the opportunity to write this thesis on a subject that is so close to my interest. His honest and thorough feedback has been a great guide throughout the process of writing this thesis report. Furthermore, I would like to thank prof. dr. J.M. Voogt, whose many studies have been an inspiration when reading about the topic. Talking to her about the process of creating learning materials has really helped me to understand its social implications and complexity. Lastly, I would like to thank all teachers and other experts who evaluated the digital learning system. Their analysis helped to understand the success of the user experience, and to see which changes could be made in future research.

Contents

1 Introduction

2 Theoretical background
2.1 Learning computational thinking skills
2.1.1 Classification tools in education
2.1.2 Teaching computational thinking with classification
2.2 Individualized Formative Assessment
2.3 Individualized learning systems

3 Design and Implementation
3.1 Learning Goals
3.2 Designing interactivity and engagement
3.2.1 The classification model
3.2.2 The learning method
3.2.3 Individualized formative assessment
3.2.4 The teacher-interface

4 Results and Evaluation
4.1 The classification tool
4.2 Incorporating a learning method
4.3 The teacher-interface
4.4 Technical evaluation
4.5 User evaluation

5 Conclusion and Discussion
5.1 Conclusion
5.2 Discussion and future work

Chapter 1

Introduction

Designing digital systems for solving complex problems involves systematic and abstract thinking. This thought process is called computational thinking [31]. It involves solving problems by drawing on concepts fundamental to computer science, but it can also be used by non-scientists. It is a fundamental skill for everyone [31]. Therefore, computational thinking should be actively implemented in education [22, 31].

Education on computational thinking in Dutch primary schools is almost nonexistent [12]. Often, teachers are not qualified to teach these subjects [29]. There is a learning gap between what children encounter about computational thinking in primary school and what they are expected to know when taught in secondary school. The need to close this learning gap is apparent in a report on computational thinking in Dutch education [28]. This thesis focuses on primary school education.

Artificial intelligence (AI) is a branch of computer science, and it too requires computational thinking. Even though AI is omnipresent in society, there is still a lot of unfamiliarity with the subject [30]. Algorithms can exert a form of automatic decision making, which can have ethical implications. Understanding the science and limits of AI is an important step towards understanding the implications it has for our society [26]. Therefore, understanding machine learning is an essential component of teaching AI in primary school education. This calls for the creation of resources that students and teachers can use to learn about AI. Classification is a common form of machine learning and is apparent in everyone's digital environment [2].

This thesis report discusses how a classification tool can be used to teach children computational thinking skills using the concept of classification. The educational tool has aspects of individualized learning and formative assessment. The tool is expected to foster self-regulated learning, and to contribute to the acquisition of computational thinking skills.

Chapter 2 discusses the context of computational thinking in education, reviews other digital tools on classification and explains the subject of individualized formative assessment. Chapter 3 explains the creation of the digital learning tool. It describes the different components of the tool, the choices that were made and the reasoning behind them. In Chapter 4, the tool is explained and evaluated. Chapter 5 presents the conclusion and discussion.



Chapter 2

Theoretical background

2.1 Learning computational thinking skills

Computational thinking draws upon the thought processes involved in designing machinery for solving problems. A common misconception is that computer programming is the basis of computational thinking [10, 16, 31]. Computational thinking is an approach to problem-solving that is needed for programming, but it can also be applied to various other problems. For instance, unplugged education has been successful in teaching computational thinking skills [10]. Moreover, both teamwork and creative thinking are important for creating an efficient environment for learning about computational thinking [17, 29].

2.1.1 Classification tools in education

An example of an online digital tool on machine learning for children can be found on the website machinelearningforkids.co.uk [13]. It contains a digital tool where children can work with a machine learning instrument and implement it in Scratch programs. Scratch is a visual programming language which teaches creative thinking, systematic reasoning and teamwork [15]. Children create the data set by inserting images, text or numbers into different categories, from which the computer can learn. The website offers many worksheets that explain different programs that can be created using this website and Scratch. This digital tool is a possible way of teaching machine learning concepts [26].

Another way of exploring the subject of classification is AI Experiments. This is a showcase of simple experiments that make it easier for anyone to use machine learning algorithms for pictures, drawings, language, music, and more [5]. It also has an experiment called 'Teachable Machine'. This is a web tool that makes it fast and easy to create machine learning models for projects, without the use of coding (figure 2.1). The focus is not so much on learning about machine learning, but it can still be used by young students to understand the process of machine learning [26].

Figure 2.1: Layout of the Google AI classifier. Students can forage their own data, train, and test their training model using their webcam.

2.1.2 Teaching computational thinking with classification

These two examples show that there are already tools which could be used by teachers and students to learn about machine learning and classification. Having students forage their own data in data exploration exercises can create a higher engagement [14]. The idea of bringing in one's own data is used by both educational materials. Furthermore, the usage of a classifier creates the possibility to use the system creatively, which is an important aspect of learning materials on computational thinking. Still, it is unclear to what extent these tools contribute to the formation of computational thinking skills. They do not supply accompanying learning material, limiting the control over specific learning goals. This also limits the accessibility of these systems for teachers. It expects teachers to create their own learning materials, which requires them to understand concepts that are fundamental to machine learning, which is often not the case [29].

Classification is closely linked to computational thinking, because it involves concepts that are related to computational thinking, like abstraction [4]. When creating learning materials on computational thinking, it is important to focus on understanding the higher-level concepts [29]. Nevertheless, using computational thinking vocabulary is not beneficial when students are still learning to understand human language [7]. The learning material that supports a classifier should therefore not use a complicated vocabulary. Students can adopt a personal perspective on abstract computational concepts like abstraction, decomposition, generalization, pattern recognition and the application of algorithms [28].

This results in the follow-up question: how can primary school students successfully learn computational thinking skills on machine learning, and how can this be assessed?


2.2 Individualized Formative Assessment

To test a student's learning progress, their understanding has to be assessed. The two most common forms of assessment are formative assessment and summative assessment [1]. Summative assessment means measuring a student's performance by testing the success of the specific learning goals with the student. Formative assessment also focuses on achieving these goals, but tries to gain insights into the learning processes during the course and uses this knowledge to adapt the learning strategy to the student [1, 8]. An important aspect of formative assessment is feedback. The goal of giving feedback is to close the gap between what is understood and what was aspired to be understood [6, 19]. It refers both to feedback about a student's learning to the teacher, and to directives intended for the student, meant to affect their learning process directly [19]. In this way, formative assessment is a more individualized manner of assessment compared to summative assessment, because the process of learning is unique for each student. It is known as an effective medium to support learning, because it teaches students to analyze their own competences and improve their work during the process of learning.

However, the success of this method is questioned [9]. It was found to be mostly ineffective, due to its often incorrect implementation [8]. It is a complex and social exercise, which also expects the student to participate in the assessment process [8]. The usage of digital systems could provide a solution to some of these problems [27]. There has been an increased interest in using online tools for assessment [1]. Still, translating a student's progress data into feedback that is useful to the student is a complex subject of ongoing research [27].

One way of using digital systems in assessment is using them for game-based learning [20]. In gaming, a player's success is often tracked by analyzing the interaction with the game environment. Similarly, a digital learning environment can collect information on the performance of a student, such as hint requests and response times, to generate individualized formative assessment, known as stealth assessment [20]. Stealth assessment is a method of embedding assessment into a learning environment, where the assessment is invisible to the learner being assessed. This way of assessing reduces the anxiety commonly associated with traditional assessment, which results in better insight into a student's knowledge level [21]. These systems are often digital.

2.3 Individualized learning systems

A collective word for digital learning tools is e-learning. E-learning can be a solution to the complex problem of implementing individualized learning [20, 25, 27]. Personalization is explained to be one of the three main pedagogical benefits of using digital learning tools, together with interactivity and engagement [23]. Still, complete individual learning is highly complex, because a lot of individualized content would be necessary. A solution to this problem can be to reduce the number of levels at which students can be placed, limiting the total number of learning material versions that have to be produced [25]. For instance, by representing the different learning segments as nodes that can be combined in an optimal manner for every individual student.

A digital formative assessment tool that is already used in primary school education in the Netherlands is Snappet [24]. This tool gives formative feedback to the student, while also giving the teacher insight into the progress of the students and giving adaptive assignments. It is a successful tool for high-performing students, when applied to learning mathematics [3]. This could be explained by the fact that Snappet has the possibility of assigning more learning material when a student is finished sooner than other students. Furthermore, it gives the teacher a complete overview of a student's performance by tracking their current progress, while also creating perspective on a student's overall performance (figure 2.2).

Figure 2.2: This figure shows the overview a teacher gets of his or her students when using Snappet [24].

Another formative assessment tool that assesses computational thinking skills is called Dr. Scratch [18]. This tool analyzes projects in Scratch to assess the level of development of computational thinking. Nevertheless, a lot of computational thinking skills cannot be assessed by Dr. Scratch, like creativity, originality and correctness. Therefore, it should be used as a tool for teachers to enhance their assessment.

Reflecting upon the aim of the project, it is apparent that classification can be used as an effective means for teaching computational thinking skills. It is in line with Dutch curriculum plans on digital literacy and is deemed to be the most apparent form of machine learning [2]. Learning computational thinking skills does not necessarily involve computer programming [10, 16, 31]; programming is not necessary for teaching children the concept of classification and therefore does not have to be part of the digital tool. In contrast, teamwork and creative thinking are deemed important for creating an effective environment for learning computational thinking skills [17, 29]. A classification tool encourages creative thinking, while also giving the possibility of working together in teams. Furthermore, the classification tool creates the possibility of foraging your own data. This demands creativity and has been shown to increase engagement [14]. Still, when a classification model is used by young students to understand the process of machine learning, it does not focus on that understanding completely [26]. It is deemed necessary to create a learning method around the system, in order to reach more advanced learning goals.

In order to reach these goals, forms of individualized formative assessment are used, similar to stealth assessment. This method reduces the amount of test anxiety in a student [21]. Parameters like answering time or hint requests can be used to determine a student's level [20]. This creates an individualized experience. Furthermore, formative assessment also involves feedback from the student to the teacher, giving the teacher the possibility to adjust their lesson effectively [19].

Chapter 3

Design and Implementation

This chapter describes the creation of the digital tool. Firstly, it describes the creation of the concepts fundamental to the content of the digital tool. Secondly, the design of the digital tool is defined, focusing on the interactive and engaging aspects of the tool. Thirdly, the individualized aspect of the design is explained.

3.1 Learning Goals

Three learning goals are chosen. These are in consonance with learning goals on digital literacy, like the learning goals designated for the upcoming curriculum change in the Dutch plans. Each goal is in line with one of the following topics from the curriculum report: "Data and Information", "Operation and creative usage of digital technologies" and "Digital citizenship and identity", because they touch upon the subject of classification. The goals are adjusted to be applicable to machine learning and classification and are defined as follows:

1. Data and Information

(a) Students understand that machines learn from data.

(b) Students learn that one cannot always draw correct conclusions from given data, because the amount and quality of the data are relevant for the likelihood that the results are significant.

2. Classification Tools

(a) Students understand the steps that a classifier takes to analyse data.

(b) Students can operate a system that uses image classification. They can collect their own data, and draw conclusions from this data using the classifier.

(c) Students can identify classification in their own digital environment.

3. Problems with Classification

(a) Students learn about the downfalls of a classification model, like discrimination.


(b) Students are aware of the influence of classification on their own digital identity.

These learning goals are attained by looking from four different perspectives, which are defined by the curriculum report [28]. These four aspects are intertwined with the learning goals to create an interactive and engaging learning method. The aspects are:

1. Students gain knowledge of digital technology.
2. Students learn how to utilize digital technology.
3. Students learn to think critically about digital technology.
4. Students gain the ability to create using digital technology.

The subject of data and information focuses mostly on gaining knowledge of digital technology, by focusing on the process of classification. Examples of subjects are data collection and creating classes. The second learning goal also revolves around knowledge of the process of classification, but mostly focuses on the usage of the classification tool by giving step-by-step instructions. The third and final learning goal describes the problems with classification, raising questions regarding racism and ethics, and explaining the influence of classification systems on society, which requires critical thinking. The final perspective is the ability to create using digital technology. In the final phase of the learning model, the student gets total freedom over the classification tool, making it possible to create their own experiments by foraging data and observing the outcome of the classification tool.

3.2 Designing interactivity and engagement

In order to create an interactive and engaging learning method, the system is divided into four different components that interact with each other: the classification model, the learning method, the individualized formative assessment and the teacher-interface. The classification model is used throughout the learning method to enhance the content, create interactivity and increase engagement. Extra learning material is added to the learning method by the individualized assessment when this is necessary according to the level of the student. The assessment parameters are then used by the teacher-interface to give the teacher feedback about the students (figure 3.1). These components are explained in more detail below.

3.2.1 The classification model

The digital learning environment focuses on classification, which is explained by using an interactive classification model. The goal of this classifier is to explain classification in an interactive manner and encourage creative thinking.

Figure 3.1: The basic components that form the system, and how these interact with each other.

The classifier includes a part where the student forages their own data, a part where this data is used to train a neural network, and a part where new data can be tested.

The first part consists of data collection. The entire system is implemented using the tkinter module in Python, which makes it possible to create an interactive interface. Using the cv2 VideoCapture module, the webcam can be activated inside the tkinter display, which gives the student the possibility to take photos. These are saved on the device, because they are needed to create the training model. Students are requested to take at least twenty pictures in two different classes, in order to increase the chance that the training model will be successful.
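As an illustration, a minimal sketch of this capture step follows. It is not the author's exact code: the function and file names are hypothetical, and the embedding of the video feed in the tkinter display is omitted for brevity.

# Minimal sketch of the photo-collection step; names are hypothetical and
# the tkinter embedding of the video feed is left out for brevity.
import cv2

def collect_photos(class_name, n_photos=20):
    """Save n_photos webcam frames to disk as training data for one class."""
    cap = cv2.VideoCapture(0)              # open the default webcam
    saved = 0
    while saved < n_photos:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow(class_name, frame)      # live preview for the student
        # Press the space bar to keep the current frame as a training photo.
        if cv2.waitKey(1) & 0xFF == ord(" "):
            cv2.imwrite(f"{class_name}_{saved}.png", frame)
            saved += 1
    cap.release()
    cv2.destroyAllWindows()

collect_photos("cat")
collect_photos("dog")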

The second part is the training model. The classification tool consists of a convolutional neural network. The pictures that were made by the student create a large number of trainable parameters. The images are preprocessed to reduce complexity, by normalizing the size of the pictures and converting them to grayscale. It is important that the system is relatively quick, because the student must stay engaged while the system is training. Furthermore, there is most likely no external processor available on primary school computers to decrease training time. The Adam optimization algorithm for stochastic gradient descent is computationally efficient and requires little memory [11]. The twenty pictures taken by the student are not enough for creating an optimal image classifier. Therefore, the system executes data augmentation. This way, the limited amount of data that the student has foraged is supplemented with images that are based on the images the student took. These images are, for instance, slightly shifted, rotated or zoomed-in versions of the original data. The final training model is saved using the pickle module. This way, the model can be quickly accessed by the test algorithm.
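A minimal training sketch in Keras follows. It is not the author's exact code: the image size, network layout and augmentation ranges are assumptions, and model.save stands in for the pickle-based storage described above.

# Minimal training sketch (assumed sizes and ranges): grayscale-normalized
# photos, a small CNN trained with Adam, and data augmentation to
# supplement the roughly twenty photos per class.
import glob
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = 64  # assumed normalized image size

def load_class(class_name, label):
    """Load, grayscale and resize all saved photos of one class."""
    images, labels = [], []
    for path in glob.glob(f"{class_name}_*.png"):
        img = cv2.imread(path)
        img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
        images.append(img.astype("float32") / 255.0)
        labels.append(label)
    return images, labels

xs_a, ys_a = load_class("cat", 0)
xs_b, ys_b = load_class("dog", 1)
x = np.array(xs_a + xs_b).reshape(-1, IMG_SIZE, IMG_SIZE, 1)
y = np.array(ys_a + ys_b)

model = models.Sequential([
    layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(2, activation="softmax"),   # two classes
])
model.compile(optimizer="adam",              # Adam optimizer [11]
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Data augmentation: shifted, rotated and zoomed copies of the originals.
augmenter = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=15, width_shift_range=0.1,
    height_shift_range=0.1, zoom_range=0.1)

# Batch size 30 and three epochs, as in the technical evaluation (4.4).
model.fit(augmenter.flow(x, y, batch_size=30), epochs=3)
model.save("classifier.h5")  # the thesis stores the model with pickle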

The third part concerns the testing of the trained model. Using the cv2 VideoCapture module again, the webcam is requested, which creates an interactive test option. The students see themselves in the camera, while screen-images from the camera are constantly being tested using the model that was trained before. These screen-images are first preprocessed in the same way as the training pictures, after which the guessed class and its probability are printed onto the live webcam image continually.
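The continuous test loop can be sketched as follows; this is a minimal illustration with hypothetical names, reusing the model saved in the training sketch above, not the author's exact code.

# Minimal sketch of the continuous test loop (hypothetical names).
import cv2
import numpy as np
from tensorflow.keras.models import load_model

IMG_SIZE = 64                          # same size as during training
class_names = ["cat", "dog"]           # the class names the student chose
model = load_model("classifier.h5")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess the screen-image in the same way as the training pictures.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (IMG_SIZE, IMG_SIZE)).astype("float32") / 255.0
    probs = model.predict(small.reshape(1, IMG_SIZE, IMG_SIZE, 1), verbose=0)[0]
    label = f"{class_names[int(np.argmax(probs))]}: {probs.max():.2f}"
    # Print the guessed class and its probability onto the live image.
    cv2.putText(frame, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                1.0, (0, 255, 0), 2)
    cv2.imshow("test", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()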


if self.which_question.get() == 1:
    self.question.set("1. When you saw a cat for the first time, why would you be able to think it was a dog?")
    self.questionA.set("A. The cat looked a lot like a dog.")
    self.questionB.set("B. You did not yet know the concept of 'cat'.")
    self.questionC.set("C. The cat acted like a dog.")
    self.questionD.set("D. You thought that 'dog' meant 'pet'.")

Figure 3.2: Code example of the way questions are implemented, where all self.question variables are interchangeable text variables.


3.2.2 The learning method

A learning method is built around this classification tool. It consists of questions and a supporting narrative, which makes it easier to incorporate and track the learning goals. The narration and question boxes contain an interchangeable text variable, which creates the possibility to change the text of the box while keeping the same box (figure 3.2). This text variable is global, and can be changed throughout the course. This way, the format of the model stays the same, while the content of the objects changes. This is an efficient method to change the course of the narrative.

Other global variables are used throughout the program to track the progress and success of the student. Examples are the variable that tracks which question the student has to answer next, which narrative text has to be used next, and after which question the narration has to start again. Furthermore, these variables can trigger the classification model within the learning method. All elements are attributes of the class object of the program, making it possible to use them globally throughout the program. These are variable classes, which creates the possibility to update them continually within the tkinter program, for instance by pressing a button. This way, the narrative and accompanying questions can be displayed at the correct moment. The objects containing the text variables are also global variables. When necessary, these can be removed from the display or placed back. This is useful, because the narration and question sections alternate, which makes it clearer to the student where he or she should focus. These objects are replaced and removed when the student presses buttons that go to the next question or the next narrative.
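A minimal sketch of this mechanism, with hypothetical content: a tkinter StringVar holds the current narrative text, and pressing a button swaps in the next text while the widget itself stays in place.

# Minimal sketch (hypothetical content) of the global text variables:
# a StringVar holds the current narrative, and pressing a button swaps
# in the next text while the label widget itself stays in place.
import tkinter as tk

class Course(tk.Tk):
    def __init__(self):
        super().__init__()
        self.narratives = ["Welcome!", "Computers learn from data.", "The end."]
        self.index = 0                               # progress tracker
        self.text = tk.StringVar(value=self.narratives[0])
        tk.Label(self, textvariable=self.text, wraplength=300).pack()
        tk.Button(self, text="Next", command=self.next_narrative).pack()

    def next_narrative(self):
        self.index = min(self.index + 1, len(self.narratives) - 1)
        self.text.set(self.narratives[self.index])   # same box, new text

Course().mainloop()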


3.2.3 Individualized formative assessment

After the creation of the content of the tool, the individualized aspect of the tool was constructed. Two important aspects are how the system tracks a student's performance, and how this performance affects the course of the learning method. In order to create an individualized experience, the student is constantly tracked. A performance level is measured using chosen parameters, similar to game-based learning. This level affects the course of the learning method by giving additional supporting questions, which help the student to improve their understanding of the topic. It is a form of stealth assessment, because the student is never assessed consciously and does not actively choose to answer more questions. Still, the student can see whether he or she answered a question correctly, but not the level the system has assigned to them.

The level of a student is influenced by three parameters: hint requests, answering time, and correctness. It is a number between zero and three, three being a high-level student. The answering time is measured by taking into account a student's reading speed. On the first three pages with narrative, the time is measured, which shows roughly at which speed a student reads a number of characters. Depending on the length of the question, the expected reading time is predicted. When a question appears, the time is measured. When students check their answer, the time is measured again. The time in between these timestamps is the time the student took to answer the question. The expected reading time of the student is subtracted from this answering time, resulting in the total time it took to think about the question. The level is calculated using this formula:

level = (1 − t/10) + (1 − h/tq + tc/tq)

Here, t stands for thinking time, h stands for the total number of hints that were requested, tq stands for the total number of questions answered thus far, and tc stands for the total number of answers that were correct.

Because both the hints and the correctness are divided by the total number of questions answered thus far, the level of the student does not change drastically after a single question. This way, when a student makes one mistake, their level and the manner in which they are treated in the program do not change completely. Similarly, a low-level student will not be assigned a higher level after one correct answer.
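This calculation translates directly into Python. The sketch below is an illustration: the clamp to the documented zero-to-three range is an assumption, and the tier mapping anticipates the level groups described next (figure 3.3).

# Direct transcription of the level formula; the clamp to the documented
# 0..3 range is an assumption. t: thinking time in seconds, h: hints
# requested, tq: questions answered so far (at least 1), tc: correct answers.
def compute_level(t, h, tq, tc):
    level = (1 - t / 10) + (1 - h / tq + tc / tq)
    return max(0.0, min(3.0, level))

# Support tiers (figure 3.3): up to two extra supporting questions.
def extra_questions(level):
    if level <= 1:
        return 2      # most support
    if level <= 2:
        return 1      # a little support
    return 0          # no extra support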

The levels are divided into three groups: the level up to one, which gets the most support, the level between one and two, which gets a little support, and the level between two and three, which does not get any extra support. This support is represented by additional questions that support the content of the original question, and help with the understanding of that subject. The additional questions are automatically assigned to the student, because the level is automatically measured when the student checks his or her answer. In level one, a student receives two additional questions, in level two a student receives one additional question, and in level three, a student does not receive any additional questions (figure 3.3).

Figure 3.3: An abstract reconstruction of the effect of the different levels on the questions. Level 1 receives the most support; level 3 receives the least support and continues directly to the next question.

The levels of every student for every question are stored in a CSV file. This file is used by the teacher-interface to create graphs of the students' progress.
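A minimal sketch of such a log; the column layout is an assumption, not the author's exact format.

# Minimal sketch of the progress log; the column layout is an assumption.
import csv

def log_progress(student, question, level, answer_time, correct, hinted):
    """Append one answered question to the shared progress file."""
    with open("progress.csv", "a", newline="") as f:
        csv.writer(f).writerow(
            [student, question, level, answer_time, correct, hinted])

log_progress("student1", 3, 2.4, 12.8, True, False)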

3.2.4 The teacher-interface

The system gives the teacher the possibility to track the performance of the students. The system saves the progress of every student. If the student answered a question, the level after answering the question is saved, together with the answering time, correctness and whether the student asked for a hint. If a student has not answered all questions, the unanswered ones are not shown. The teacher can log in to the system and immediately sees a popup screen with a graph showing the progress of all students, including the level they had after answering their last question. The teacher also has the possibility to type in the name of one specific student and see their level change over time. This allows the teacher to give extra support to students who need it.
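A minimal sketch of the per-student graph, assuming the CSV layout sketched in section 3.2.3; matplotlib stands in for the actual interface.

# Minimal sketch of the per-student progress graph, assuming the CSV
# layout sketched above; matplotlib stands in for the actual interface.
import csv
import matplotlib.pyplot as plt

def plot_student(student):
    """Plot the level a student had after each answered question."""
    levels = []
    with open("progress.csv") as f:
        for row in csv.reader(f):
            if row[0] == student:
                levels.append(float(row[2]))
    plt.plot(range(1, len(levels) + 1), levels, marker="o")
    plt.xlabel("question")
    plt.ylabel("level")
    plt.title(f"Progress of {student}")
    plt.show()

plot_student("student1")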

Chapter 4

Results and Evaluation

The digital learning tool is created using the learning goals 'Data and Information', 'Classification Tools', and 'Problems with Classification'. The main aim of these goals is to teach students that machines can learn from data but are limited in their learning abilities. Furthermore, the tool teaches students to interact with a classification system and think critically about its outcomes. Students have the possibility to forage their own data, which creates a higher engagement. It requires creative thinking, and can be executed in teams.

4.1 The classification tool

The main aspect of the digital learning environment is an interactive classification system that uses a webcam. The students can forage their own data by taking pictures with the webcam, which have to be sorted into two classes. The student can choose what he or she wants to portray, and what the classification will be about. The student chooses the names of these classes and trains the model (figure 4.1). Later, the student can test the success of the model by opening the test window. Here, the video footage from the webcam is continually tested. The test window displays the live webcam footage with the class and the probability that this class is correct according to the training model (figure 4.2). This is a number between zero and one, one meaning the model is completely certain it is the correct class.

4.2 Incorporating a learning method

Thus far, the classification model resembles the Google AI classifier and the classifier from machinelearningforkids.co.uk. In order to achieve the learning goals, a learning method is built around the classification tool, which explains the tool to the student, but also raises questions that encourage critical and creative thinking. It is built as a multiple choice system (figure 4.3). When answering the multiple choice questions, students can check their answer and receive an explanation. They can also ask for hints.

Figure 4.1: Layout of the classification model in the digital tool which uses the webcam.

Pictures are used to increase the student's engagement and function as visual support for questions. The questions cover a multitude of related subjects. For instance, at the beginning of the course, a student sees a table with different animals and their attributes (figure 4.4). The student is asked to draw conclusions from this table by answering questions. For example: if an animal has legs, it lives in the sea. Intuitively, students do not agree with this statement. A computer that does not know more about the world than the information in this table does agree. This shows the student that a computer needs the right amount of information in order to draw conclusions. Furthermore, when explaining the concept of classification, a student sees pictures of animals that were sorted into two groups using one particular feature (figure 4.5). The student should find the feature that was used to sort the two groups. This shows that classification can be done using different features, while still having the same data set. It also shows that one could try to classify on a particular feature, while the computer uses a different feature. This touches upon the subject of discrimination, which is also present in the multiple choice questions.

The individualized system unknowingly gives students extra support when they need it. This is done by adding supporting questions after every question when the student has a level of two or lower. The parameters used to calculate the level are: hint requests, answering time and correctness of answers (see section 3.2.3).

4.3 The teacher-interface

The system also gives teachers the possibility to track the performance of the students. This allows them to give extra support to students who need it. They can see the performance of all students in a graph, and also request a graph of the progress of one specific student (figure 4.6).

Figure 4.2: Pop-up test screen that shows the webcam with the assigned class and the corresponding probability.

Figure 4.3: Interface of the digital tool when handling a multiple choice question. At 1, a student can ask for hints. At 2, the student can ask for an explanation.

4.4 Technical evaluation

The classification tool is functional, but can be faulty when the images in the two classes do not differ much from each other. The training model could not train effectively due to the lack of data and training time. The training time has to be short in order to reduce the waiting time for a student and preserve engagement. The student is obliged to take at least twenty pictures per class before the training button appears, to increase the chance of a successful training model, but this is still relatively little data to train an image classifier.

Even though the Adam algorithm for stochastic gradient descent is useful, its parameters have to be reduced to save training time, which reduces the success of the system. The batch size, which is the number of training samples that are processed before the internal parameters of the training model are updated, is reduced to thirty. The number of epochs is reduced to three. This way, the total training time with twenty pictures per class is about one minute. This results in a model that is overfitted: the accuracy of the model is almost always one. Still, the speed of the model is deemed more important. When testing these models, the system works well when the two classes differ enough from each other. Students are encouraged by the system to take pictures for the classes that are clearly different from each other.

Figure 4.4: This figure is used in the digital tool when explaining the importance of the amount of data necessary to draw conclusions. It shows animals with some of their features.

Figure 4.5: Integration of the exercise on classification in the digital tool.

4.5 User evaluation

The system has been evaluated by four primary school teachers, two computer scientists and one educational advisor. They analyzed three different aspects of the system: the format, the content and the teacher-interface.

On the format, the interactivity is mostly praised. The ability to playfully learn about machine learning and to freely experiment with the classifier are seen as attractive features of the system. Furthermore, the possibility of teamwork is noticed. The self-regulated aspect of the learning system makes it attractive for teachers, because they do not yet need to know much about the system. Still, the format is deemed too messy and confusing for primary school students. It is noted that students often do not take the time to read explanations, which will make it difficult for them to follow the course. Also, many buttons are visible, which could be distracting and confusing. Furthermore, it is unclear whether privacy can be guaranteed, because a webcam is used. Not all school computers have a webcam, so this could also be a problem.

(a) Interface showing all students. Hovering over a bar gives the level that student had after answering the last question.

(b) Graph showing one student's progress.

Figure 4.6: These graphs can be requested by the teacher, to provide insight into the performance of the students.

On the content, the subject of classification is complimented. The concept of classification is deemed to remain relevant in the future, in contrast to other technologies that lose their relevance more easily. It also invites a philosophical conversation about digital identity, which is seen as an important subject. Still, the content is deemed too difficult for the student. Concepts like 'data', 'training', and 'testing' are too difficult for a student to understand in the small number of questions and explanations he or she receives about them.

The teacher-interface is found to be useful during lessons. The parameters are appropriate, and the graph showing all students gives a good overview of everyone, showing quickly who might need more help. Also, an adjusted version of this interface could give teachers the ability to teach remotely. Still, the parameters are criticized because they give a one-dimensional view of a student's success. Being able to see the precise answers and the thought process of a student would be more useful. Furthermore, the precise meaning of every level is arbitrary and has to be defined better.

Chapter 5

Conclusion and Discussion

5.1 Conclusion

The aim of this project was to create a classification tool that can be used to teach primary school students computational thinking skills. A learning method has been created that incorporates an image classifier. Students can forage their own data, which requires creative thinking and increases engagement. Furthermore, a student learns to utilize digital technology. The learning goals were very much in line with Dutch curriculum plans on digital literacy. This showed that the topic of classification is very relevant when teaching about digital literacy and computational thinking.

The user evaluation praised the interactivity of the program. Using the classifier freely demands creativity, and gives the possibility to work together as a class or in smaller teams. These aspects are important when learning computational thinking skills. This showed that this digital tool can be a successful learning tool on computational thinking.

The classifier that is incorporated within the learning method had to handle a small amount of data and train in a short amount of time. Due to the limitations of the computer's processor and the amount of training time, this resulted in an overfitted system. This was no problem, because the system did not have to be very accurate. The goal of the classifier was to explain the concept of classification, and how these systems can be faulty. Because this system did not always work, the student had the possibility to challenge the system and see what it could manage. Because the test display was continually testing the training model, the student could easily see when the system went wrong and better understand its limitations.

Furthermore, this system was enhanced using stealth assessment, creating a more optimal learning environment. The student was unknowingly supported by the system through additional questions when they often asked for hints, did not answer questions correctly, or took too much time to answer. The individualized formative assessment aspect of the system was recognized in the user evaluation as self-regulative, creating the possibility to use this system when a teacher is uninformed about the subject. The chosen parameters were considered useful when assessing the students. An important aspect of assessment is the feedback the teacher receives about the students. The possibility of requesting graphs about the students was praised in the user evaluation, mostly for the possibility of overseeing all students at once.

5.2 Discussion and future work

The user evaluation showed that the didactic implementation of this system needs improvement in order to be more successful with primary school students. Moreover, the format of the system was deemed too messy for students. Students have the tendency to overlook explanations, so having a lot of buttons will confuse them. Setting a timer during which an explanation must be read can function as a solution to this problem. The system does not have an engaging presentation, which is expected to decrease motivation and engagement. Furthermore, some concepts mentioned in the program were expected to be too difficult for primary school students. The parameters were suspected to be useful, but criticized for their one-dimensional view of a student's success. Implementing these proposed changes could create a system that is more engaging, understandable and at the level of the students.

The technical evaluation showed that the classification model needs some improvements. The necessity to utilize a training model that trains relatively fast prevented the formation of a perfectly performing system. Furthermore, the system could not handle too many parameters due to this lack of training time, which limited, for instance, the number of classes. The neural network of the system could be optimized in future work to make the system less dependent on the correct collection of data by students. Furthermore, the system could be expanded to handle multiple classes. Another solution could be to use the waiting time for the training model to show an educational video or give the student more questions. This could also maintain engagement.

This learning system is not available online. Creating a similar system that runs online could gather information on the success of the system with students directly, simplifying the collection of user data. Moreover, this would improve the teacher-interface, because the student information would no longer be saved only on the student's own computer.

Furthermore, not only regular primary school students could benefit from a learning system on classification. Creating similar systems for different age groups could also have a positive effect on the formation of computational thinking skills among a greater group of students. Equally, creating a similar system for students with a learning disability would be an interesting field of study.

Bibliography

[1] Geoffrey T Crisp. Integrative assessment: reframing assessment practice for current and future learning. Assessment & Evaluation in Higher Education, 37(1):33–43, 2012.

[2] Pedro Domingos. A few useful things to know about machine learning. Communications of the ACM, 55(10):78–87, 2012.

[3] Janke M Faber, Hans Luyten, and Adrie J Visscher. The effects of a digital formative assessment tool on mathematics achievement and student motivation: Results of a randomized experiment. Computers & Education, 106:83–96, 2017.

[4] George Gadanidis. Artificial intelligence, computational thinking, and mathematics education. The International Journal of Information and Learning Technology, 2017.

[5] Google Creative Lab. AI Experiments: Teachable Machine. 2019.

[6] John Hattie and Helen Timperley. The power of feedback. Review of Educational Research, 77(1):81–112, 2007.

[7] Clint Andrew Heinze, Janet Haase, and Helen Higgins. An action research report from a multi-year approach to teaching artificial intelligence at the k-6 level. In First AAAI Symposium on Educational Advances in Artificial Intelligence, 2010.

[8] Mary C Heitink, Fabienne M Van der Kleij, Bernard P Veldkamp, Kim Schildkamp, and Wilma B Kippers. A systematic review of prerequisites for implementing assessment for learning in classroom practice. Educational research review, 17:50–62, 2016.

[9] MA Hendriks, J Scheerens, and P Sleegers. Effects of evaluation and assessment on student achievement: A review and meta-analysis. The influence of school size, leadership, evaluation, and time on student outcomes, pages 127–174, 2014.

[10] Byeongsu Kim, Taehun Kim, and Jonghoon Kim. Paper-and-pencil programming strategy toward computational thinking for non-majors: Design your solution. Journal of Educational Computing Research, 49(4):437–459, 2013.


[11] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

[12] Josina I. Koning, Hylke H. Faber, and Menno D. M. Wierdsma. Introducing computational thinking to 5 and 6 year old students in Dutch primary schools: an educational design research study. In Proceedings of the 17th Koli Calling International Conference on Computing Education Research, pages 189–190, 2017.

[13] Dale Lane. Machine learning for kids. machinelearningforkids.co.uk. Accessed April 9th, 2020.

[14] Irene Lee, Fred Martin, and Katie Apone. Integrating computational thinking across the K–8 curriculum. ACM Inroads, 5(4):64–71, 2014.

[15] MIT Media Lab Lifelong Kindergarten Group. Scratch. 2003.

[16] James J Lu and George HL Fletcher. Thinking about computational thinking. In Proceedings of the 40th ACM technical symposium on Computer science education, pages 260–264, 2009.

[17] Punya Mishra, Aman Yadav, Deep-Play Research Group, et al. Rethinking technology & creativity in the 21st century. TechTrends, 57(3):10–14, 2013.

[18] Jesús Moreno-León, Marcos Román-González, Casper Harteveld, and Gregorio Robles. On the automatic assessment of computational thinking skills: A comparison with human experts. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, pages 2788–2795, 2017.

[19] D Royce Sadler. Formative assessment and the design of instructional systems. Instructional Science, 18(2):119–144, 1989.

[20] Valerie Shute, Fengfeng Ke, and Lubin Wang. Assessment and adaptation in games. In Instructional techniques to facilitate learning and motivation of serious games, pages 59–78. Springer, 2017.

[21] Valerie J Shute, Eric G Hansen, and Russell G Almond. You can't fatten a hog by weighing it – or can you? Evaluating an assessment for learning system called ACED. International Journal of Artificial Intelligence in Education, 18(4):289–316, 2008.

[22] Leen-Kiat Soh, Duane F Shell, Elizabeth Ingraham, Stephen Ramsay, and Brian Moore. Learning through computational creativity. Communications of the ACM, 58(8):33–35, 2015.

[23] K Udaya Sri and V Krishna. E-learning: Technological development in teaching for school kids. International Journal of Computer Science and Information Technologies, 5(5):6124–6126, 2014.


[25] Djamshid Tavangarian, Markus E Leypold, Kristin Nölting, Marc Röser, and Denny Voigt. Is e-learning the solution for individual learning? Electronic Journal of E-learning, 2(2):273–280, 2004.

[26] David Touretzky, Christina Gardner-McCune, Fred Martin, and Deborah Seehorn. Envisioning AI for K-12: What should every child know about AI? In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 9795–9799, 2019.

[27] Fabienne van der Kleij and Lenore Adie. Formative Assessment and Feedback Using Information Technology, pages 1–15. Springer International Publishing, Cham, 2018. ISBN 978-3-319-53803-7. doi: 10.1007/978-3-319-53803-7_38-2. URL https://doi.org/10.1007/978-3-319-53803-7_38-2.

[28] CNV Onderwijs, de Algemene Onderwijsbond, de Federatie van Onderwijsvakorganisaties, de Algemene Vereniging Schoolleiders, het Landelijk Aktie Komite Scholieren, SLO, VO-raad, de PO-Raad. Leergebied digitale geletterdheid. Curriculum.nu, 2019.

[29] Joke Voogt, Petra Fisser, Jon Good, Punya Mishra, and Aman Yadav. Computational thinking in compulsory education: Towards an agenda for research and practice. Education and Information Technologies, 20(4):715–728, 2015.

[30] Darrell M West and John R Allen. How artificial intelligence is transforming the world. Report, April 24, 2018.

[31] Jeannette M Wing. Computational thinking. Communications of the ACM, 49(3):33–35, 2006.
