
Amsterdam University of Applied Sciences

Automating Reference-based Feedback for Learning Conceptual Knowledge in Secondary Education

Kada, Yasmina; Bredeweg, Bert

Publication date: 2018

Document Version: Final published version

Published in: Benelux Conference on Artificial Intelligence, Den Bosch, Netherlands

Link to publication

Citation for published version (APA):

Kada, Y., & Bredeweg, B. (2018). Automating Reference-based Feedback for Learning Conceptual Knowledge in Secondary Education. In Benelux Conference on Artificial Intelligence, Den Bosch, Netherlands (pp. 95-96). (Belgian/Netherlands Artificial Intelligence Conference).

General rights

It is not permitted to download or to forward/distribute the text or part of it without the consent of the author(s) and/or copyright holder(s), other than for strictly personal, individual use, unless the work is under an open content license (like Creative Commons).

Disclaimer/Complaints regulations

If you believe that digital publication of certain material infringes any of your rights or (privacy) interests, please let the Library know, stating your reasons. In case of a legitimate complaint, the Library will make the material inaccessible and/or remove it from the website. Please contact the library:

https://www.amsterdamuas.com/library/contact/questions, or send a letter to: University Library (Library of the University of Amsterdam and Amsterdam University of Applied Sciences), Secretariat, Singel 425, 1012 WP Amsterdam, The Netherlands. You will be contacted as soon as possible.

Download date: 26 Nov 2021


30th Benelux Conference on Artificial Intelligence

BNAIC 2018 Preproceedings

November 8-9, 2018

Jheronimus Academy of Data Science (JADS), ’s-Hertogenbosch, The Netherlands


Editors:

Martin Atzmueller, CSAI/JADS, Tilburg University

Wouter Duivesteijn, Eindhoven University of Technology


Automating Reference-based Feedback for Learning Conceptual Knowledge in Secondary Education

Yasmina Kada

Vrije Universiteit Amsterdam, Department of Computer Science

Amsterdam, The Netherlands. Email: Y.Kada@student.vu.nl

Bert Bredeweg

University of Amsterdam, Informatics Institute

Amsterdam, The Netherlands. Email: B.Bredeweg@uva.nl

Abstract

Knowledge construction tools provide students with a symbolic vocabulary to construct logic-based models. However, students using these tools typically rely on the teacher for feedback about their model, which often hampers their learning process due to the limited availability of the teacher in a classroom. In this contribution, we present an automated and domain-independent feedback system that effectively supports students in assessing the correctness of their model.

Introduction

Knowledge construction tools based on Qualitative Reasoning have been developed for education [2, 3]. These tools use a symbolic vocabulary to support students in constructing their understanding [4, 5]. DynaLearn is such a tool and focusses on learning about systems and their behaviour [1, 6]. Students formulate hypotheses, create cause-effect models, assess their models through simulation, reflect on the outcomes, and continue developing their understanding until they reach the desired result.

Currently, the feedback provided by DynaLearn is limited to simulation outcomes. When support is needed, e.g. to understand and interpret the simulation results, the student has to either self-check or ask the teacher or a classmate for help. Hence, an outstanding challenge is to provide students with automated feedback throughout the modelling process and optimise their learning experience.

In this thesis, a feedback module is presented for the DynaLearn environment, focussing on use in secondary education. The module takes reference models as the basis for providing feedback about the correctness of a student's model. The feedback module has been evaluated with students on usability, utility and comprehensibility.

Feedback module

In many cases, students in secondary education are required to learn a specific model of some knowledge system. Because the target model is specific, recommendations can be based on a reference model created for a particular learning goal and tailored to the students' level. Since the correctness of such a reference model has already been validated by the teacher, the recommendations derived from it are relevant to the student.

An automated, domain-independent feedback system has been developed and implemented that compares the student model to a predefined reference model, maps the matching elements between the two models, identifies deviations, and communicates the feedback visually using different icons and colour codes (Figure 1). The feedback module provides students with immediate feedback on the correctness of model elements. Feedback is communicated visually, using icons to convey distinct information: whether an element is correct, and whether elements or relations need to be added or removed.
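To illustrate the comparison step, the sketch below matches a student model against a reference model and classifies each element as correct, missing or superfluous. It is a minimal, hypothetical reconstruction: the Element and compare_models names, and the representation of model elements as (kind, label) pairs with exact matching, are assumptions made for illustration and do not reflect the actual DynaLearn implementation, whose mapping is more elaborate.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    CORRECT = "correct"          # element matches the reference model
    MISSING = "missing"          # reference element without a counterpart in the student model
    SUPERFLUOUS = "superfluous"  # student element without a counterpart in the reference model

@dataclass(frozen=True)
class Element:
    kind: str   # e.g. "entity", "quantity", "causal relation" (illustrative element types)
    label: str  # e.g. "Wolf", "Population size"

def compare_models(student: set[Element], reference: set[Element]) -> dict[Verdict, set[Element]]:
    """Map matching elements between the two models and classify the deviations."""
    return {
        Verdict.CORRECT: student & reference,        # mapped elements are considered correct
        Verdict.MISSING: reference - student,        # the student still has to add these
        Verdict.SUPERFLUOUS: student - reference,    # the student should remove or relabel these
    }

# Tiny illustrative fragment of a trophic-cascade model (labels are hypothetical).
reference = {Element("entity", "Wolf"), Element("entity", "Elk"),
             Element("causal relation", "Wolf population -> Elk population (negative)")}
student = {Element("entity", "Wolf"), Element("entity", "Deer")}

for verdict, elements in compare_models(student, reference).items():
    print(verdict.value, sorted(e.label for e in elements))
```

In this sketch the deviations directly drive the visual feedback: correct elements keep their normal appearance, superfluous elements receive a removal icon, and missing elements are reported back to the student.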



Figure 1. Example of the communication provided by the feedback module. LHS shows the reference model. RHS shows the student-created model and the feedback the student receives. The subject matter concerns trophic cascades, using information about Yellowstone Park (USA). The diagram shows the initial part of the full model, which has over 50 ingredients.

Results and Conclusions

An evaluation study was done to assess the usability, utility and comprehensibility of the feedback module. Participants (N=7) completed a biology assignment on trophic cascades and answered questionnaires. Three of the seven participants were high-school students (age 15-18), while four were university students (age 19-27). The high-school students were enrolled in pre-university education with a science and mathematics track; the university students were following a science or technology major.

The reception of the feedback module was positive. The participants in the evaluation study considered the feedback useful and argued that it helped them improve their model.

This also became apparent from the number of questions they asked the teacher: participants asked progressively fewer questions about the correctness of their model towards the final stages of the modelling process and relied mostly on the automated feedback to improve the model.

Some confusion occurred regarding the colour codes used. After the teacher reminded the participants to look carefully at the differences between the icons and consider their meaning, the students started using them effectively.

The current implementation covers 50% of the elements in DynaLearn, which is sufficient for the first modelling level (standard). For full use in education, it will be necessary to also implement feedback for the second (advanced) and third (advanced+) levels. Furthermore, additional flexibility could be investigated by comparing student-created models to more than one reference model.

References

[1] Bredeweg, B., Liem, J., Beek, W., Linnebank, F., Gracia, J., Lozano, E., Wißner, M., Bühling, R., Salles, P., Noble, R., Zitek, A., Borisova, P., Mioduser, D. (2013). DynaLearn – An Intelligent Learning Environment for Learning Conceptual Knowledge. AI Magazine, 34(4), 46-65.

[2] Forbus, K.D., Carney, K., Sherin, B.L., Ureel II, L.C. (2005). VModel: A visual qualitative modeling environment for middle-school students. AI Magazine, 26(3), 63.

[3] Leelawong, K., Biswas, G. (2008). Designing learning by teaching agents: The Betty's Brain system. International Journal of Artificial Intelligence in Education, 18(3), 181-208.

[4] Papert, S. (1980). Mindstorms: Children, Computers, and Powerful Ideas. NY: Basic Books.

[5] Schwarz, C.V., White, B.Y. (2005). Metamodeling Knowledge: Developing Students’ Understanding of Scientific Modeling. Cognition and Instruction, 23(2), 165-205.

[6] https://create.dynalearn.nl, last visited September 23rd, 2018.


Communication of feedback

Once all elements have been through the mapping process and all errors have been assigned, the feedback is generated based on the errors found. Model checking and feedback generation are done after every change in the model. Feedback is then shown without delay, but only after the student requests it. An example of how the feedback is communicated can be found in Fig. 9.

An element can have multiple errors assigned to it. The communication model consists of a set of rules describing the priority as well as the manner of communication. The priority is relevant when there are conflicting errors. An element is always considered superfluous when it cannot be mapped. However, when there is still a need for an instance of that particular element type, a label error is assigned to it. Showing both errors (superfluous element and label error) would convey a message that is likely to cause confusion. Therefore, the message that is communicated is based on the desired student behaviour: changing the label to a correct one is preferred over removing the element instance and creating a new one. The superfluous error is resolved by a correct change of label.

When multiple errors of the same class are assigned to an element, they are all communicated through a single icon. For instance, if a relation element has a mistake in both direction and sub-type, only one icon is displayed to communicate that the element needs to be fixed. Errors about the absence of different element types are all shown, each type using its specified representation. There are two reasons for this: (1) too many icons would clutter the model and (2) the number of missing element instances can be tracked through the element counter bar, positioned at the bottom of the screen (see section 3, Fig. 2). When an error is resolved, the icon disappears. The disappearance of an icon can be interpreted as implicit positive feedback, indicating that the student has performed an action that improved the model.
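To make these communication rules concrete, the sketch below resolves conflicting errors on a single element, collapses same-class errors into one icon, and tracks missing instances per element type for the counter bar. This is a hypothetical sketch of the rules described above, assuming Python as the implementation language; the ErrorKind names, icon identifiers and helper functions are illustrative and not taken from the actual DynaLearn code.

```python
from collections import Counter
from enum import Enum, auto

class ErrorKind(Enum):
    SUPERFLUOUS = auto()  # element cannot be mapped to the reference model
    LABEL = auto()        # an instance of this type is still needed, but the label is wrong
    DIRECTION = auto()    # relation points the wrong way
    SUBTYPE = auto()      # wrong relation sub-type

def icon_for_element(errors: set[ErrorKind]) -> str | None:
    """Collapse all errors assigned to one element into a single icon.

    Priority rule: a label error overrides the superfluous error, because
    correcting the label is the desired student action and resolves the
    superfluous error as a side effect. Any other combination of same-class
    errors (e.g. direction and sub-type) is summarised by one generic icon.
    """
    if not errors:
        return None                      # no icon: implicit positive feedback
    if ErrorKind.LABEL in errors:
        return "icon:wrong-label"
    if errors == {ErrorKind.SUPERFLUOUS}:
        return "icon:superfluous"
    return "icon:fix-element"

def missing_type_icons(missing_element_types: list[str]) -> set[str]:
    """Absence errors are shown once per element type, each with its own representation."""
    return {f"icon:missing-{t}" for t in missing_element_types}

def counter_bar(missing_element_types: list[str]) -> Counter:
    """The number of missing instances per element type is tracked in the counter bar."""
    return Counter(missing_element_types)

# Conflicting errors on one element: only the label error is communicated.
print(icon_for_element({ErrorKind.SUPERFLUOUS, ErrorKind.LABEL}))   # icon:wrong-label
# A relation with a direction and a sub-type mistake gets a single icon.
print(icon_for_element({ErrorKind.DIRECTION, ErrorKind.SUBTYPE}))   # icon:fix-element
print(counter_bar(["quantity", "quantity", "causal relation"]))
```

Resolving an error simply removes the corresponding entry from the element's error set, so the icon disappears on the next feedback cycle, which is the implicit positive feedback mentioned above.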

Fig. 9. Example of the communication of all feedback types

