Explainable AI for automated cause-and-effect reasoning

Paul A.J. Schmidt 11850590

Bachelor thesis
Credits: 18 EC

Bachelor Kunstmatige Intelligentie

University of Amsterdam
Faculty of Science
Science Park 904
1098 XH Amsterdam

Supervisor: dr. B. Bredeweg
Informatics Institute
Faculty of Science
University of Amsterdam
Science Park 904
1098 XH Amsterdam

June 26th, 2020

Abstract

Qualitative reasoning models are used in educational settings to further student understanding of dynamic system processes. Due to complex internal mechanisms, conclusions derived from algorithms can be difficult to understand for the student. This thesis provides a system for information storage and explanation generation, with a focus on student engagement. The main goal of this paper is to present the approach taken to perform automatic explanations within Garp3, a workbench for qualitative modelling. If expanded, the


Contents

1 Introduction
2 Literature
3 Technical Design
  3.1 Value assignment and development
  3.2 Information storage
  3.3 Explanation mechanisms
    3.3.1 Explaining contradictions
    3.3.2 Explaining values in successful models
4 Concept and implementation
  4.1 Explanation medium and formulation
  4.2 Access Mechanism
  4.3 Student-invoked explanations
5 Technical evaluation
  5.1 Input values
  5.2 Influence resolutions
  5.3 Correspondences
  5.4 Value constraints on derivatives
6 User Experience
7 Discussion
Appendices


1 Introduction

To understand any complex system in natural or social sciences, an understanding of the processes and causal relations within that system is required. Qualitative modelling is thought to be an effective tool for the representation of such knowledge, both for experts and novice students [1]. In educational settings especially, qualitative modelling and simulation represent promising tools for students to attain an understanding of complex systems. However, a drawback of these complex knowledge constructions is the complexity of their internal mechanisms. Integral mechanisms like logical inferences and causal reasoning are realised internally, without explicit communication to the user. Due to this, conclusions derived by algorithms are sometimes difficult to understand. To make qualitative models more insightful for students, there is a need for explanation of these automated reasoning processes that facilitate the models. Moreover, these explanations should be presented in a manner that is optimally constructive to student understanding.

This thesis provides a prototype for the integration of automatic explanation of internal reasoning mechanisms in Garp3 [2], a workbench for qualitative modelling. Key inferences performed within the internal reasoning engine are structurally collected throughout simulations, which enables automatic explanation of system values post-simulation. The implementation of automated explanations is performed and displayed in an environment external to Garp3. This environment mimics Dynalearn, the web-based counterpart of Garp3 which specifically targets education.

The research goal of this thesis is captured in the following research question:

How can the technical implementation of complex constraint-based solvers be optimally employed to enhance student understanding in educational settings?

In this paper, this question is addressed through the design and implementation of automated explanation mechanisms. The following section provides an overview of related work and theoretical background in qualitative modelling, interactive learning environments and explanation theory. Section 3 presents the technical design for the collection of key information regarding model outcomes. Section 4 addresses choices made in the conceptual design surrounding the presentation of explanations. A prototypical example is used here for illustration purposes. Section 5 evaluates the design from a technical perspective, while section 6 briefly discusses effectiveness from the perspective of the user. Finally, section 7 discusses and concludes the thesis.

2 Literature

The majority of the work done for this thesis took place within Garp3 [2]. This workbench employs knowledge-based techniques to accommodate qualitative modelling and the simulation of complex system behaviour. The workbench is an implementation of Qualitative Reasoning, a research area within Artificial Intelligence that allows for automated reasoning about continuous aspects of the world with little numerical information [3]. Garp3 and other qualitative modelling implementations have been used in educational settings in order to boost student understanding of complex physical systems common in natural sciences [4, 5].

Researchers generally regard explanations as ’the process of generating a mechanistic or causal analysis of why something happened or how something works’ [6]. However, as [7] argues, explanations range across a wide spectrum and the concept is not easily unified under one comprehensive definition. The term ’explanation’ is best viewed as an umbrella term, describing a cluster of related processes. A dominant factor in all views on explanations is the emphasis on causal relations. Explanations often refer to causal relations, and in many cases causal analyses alone suffice in fulfilling the explanatory objective [1, 8].

Generally, an explanation is deemed satisfactory if it conforms with empirical observations, has a sufficiently large explanatory scope and is internally consistent [9]. Novices place more value on simplicity, while experts value precision and formality. Explanations need some medium of communication, either through natural language or graphical means. Both approaches have merit, and often the two can be used to complement each other [1]. In online learning environments, additional factors rise to prominence. [10] places extra emphasis on the access mechanism, the means through which the student is granted access to an explanation. This can either be student-invoked - where the student explicitly requests an explanation - or automatically performed - where the system decides when to provide the student with an explanation. Increasing barriers with regards to explanation access increases the student’s likelihood to perceive the explanation process as a separate process, rather than an integral part of learning and understanding. This, in turn, makes the student less likely to utilise explanation mechanisms. This concept is closely tied to the cognitive effort perspective, which states that the perceived amount of effort is an important, deciding factor in an agent’s strategy selection. This means that students will not invoke an explanation if they perceive the mental effort of doing so to outweigh the benefit of accessing the explanation [11]. Darlington concludes that access mechanisms that require less cognitive effort are more effective with respect to the achievement of learning objectives.

On the other hand, student performance is thought to improve with higher cognitive student engagement [12]. Simplifying the task can be counterproductive, with a small likelihood of student improvement [13]. Other concerns specific to interactive learning environments include ’gaming the system’, where students attempt to succeed in an educational task by exploiting properties of the system rather than by learning the material [14]. [15] recognises four key principles that an automatically generated explanation must adhere to: conciseness, self-containment, completeness and context-dependence. According to [16], an automated explanation system must determine what information constitutes a satisfactory answer, which it should then formulate in a coherent order, simplified to the minimum necessary to grasp the process description.

3 Technical Design

In this thesis, two situations that necessitate further explanation are recognized: contradictory models and successfully built models. Contradictions occur when quantity values - the representation of entity features in a system - or derivative values in input states are irreconcilable with one another, leading to the absence of model states. Successful models are models that simulate successfully, resulting in any number of model states. To design a conclusive system for the gathering of key information concerning these situations, there is a need to identify and account for all ways in which values can originate and develop.

This section clarifies how this account of value is made possible. Firstly, all ways in which values can be assigned in input states are identified. Secondly, value developments in later states are listed, after which the process of information storage is outlined. Lastly, the mechanisms for explanation provision are stated. Any models used for illustration purposes in this section - and the continuation of this thesis - are based on the Frog Population model provided in [2].


3.1 Value assignment and development

Slight differences exist in the possible value assignment processes in input states and all subsequent states. Both will be handled in turn in this section.

With regards to input states, the following value assignment types are recognized for derivatives:

• The user assigns the input value directly.

• The resolution of one or multiple influences or proportionalities determines the value.

• The value is derived through a correspondence.

• The value is derived through an inequality.

• The corresponding quantity value is in the lower landmark limit, constraining the derivative value to ≥ 0.

• Conversely, the corresponding quantity value is in the upper landmark limit, constraining the derivative value to ≤ 0.

• Influences on the derivative cannot be resolved unambiguously, causing all possible values to be assigned in alternate states.

The possibilities for quantity value assignments are more limited. Unlike derivatives, quantity values are not yet affected by external influences in input states. Furthermore, constraints imposed on derivative values don’t apply to quantity values. Thus, only direct user input, correspondences and inequalities have the potential to assign input values to quantity values.

With regards to the value development of derivatives, nothing changes in subsequent states. Since all value assignment processes have an immediate effect on the derivative, changes in derivative value work identically in input states and subsequent model states. For quantity values however, derivative values from previous states now affect value development. Along with the previously identified assignment processes, derivative value assignments and developments from previous states comprise all ways in which quantity values are assigned or developed.
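To make these categories concrete, the sketch below represents them as a small Python enumeration. The names are hypothetical illustrations and do not correspond to identifiers used inside Garp3 or Dynalearn.

from enum import Enum, auto

class AssignmentType(Enum):
    """Hypothetical labels for the value assignment processes listed above."""
    USER_INPUT = auto()              # the user assigns the input value directly
    INFLUENCE_RESOLUTION = auto()    # influences or proportionalities determine the value
    CORRESPONDENCE = auto()          # the value is derived through a correspondence
    INEQUALITY = auto()              # the value is derived through an inequality
    LOWER_LIMIT_CONSTRAINT = auto()  # quantity value in lower landmark limit: derivative >= 0
    UPPER_LIMIT_CONSTRAINT = auto()  # quantity value in upper landmark limit: derivative <= 0
    AMBIGUOUS_RESOLUTION = auto()    # ambiguous influences: all possible values in alternate states

# Only a subset of these processes can assign input values to quantity values:
QUANTITY_INPUT_TYPES = {
    AssignmentType.USER_INPUT,
    AssignmentType.CORRESPONDENCE,
    AssignmentType.INEQUALITY,
}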

3.2 Information storage

Information storage thus amounts to storing value assignments, along with the necessary information to account for these assignments. For simplicity, higher order derivatives and value assignments through inequalities have not been taken into account. An overview of stored information is displayed in Table 1.

Table 1: Information storage per assignment type

Input by user:
  the resulting value

Correspondence:
  the resulting value, the corresponding quantity, the current state

Influence resolution:
  the resulting value, the type and sign of the influence(s), the influencing quantities, the individual and overall effect of the influencing quantities, the current state

Derivative change due to value in limit:
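One way to keep such records is sketched below: a small dataclass whose fields mirror the columns of Table 1. The field names are hypothetical and this is not the actual internal representation used by Garp3; it merely illustrates what a stored assignment could contain.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class StoredAssignment:
    """One stored value assignment, with fields mirroring the columns of Table 1."""
    assignment_type: str                  # e.g. "input", "correspondence", "influence resolution"
    resulting_value: str                  # the resulting qualitative value or derivative sign
    state: Optional[int] = None           # the current state, where applicable
    corresponding_quantity: Optional[str] = None                      # for correspondences
    influencing_quantities: List[str] = field(default_factory=list)   # for influence resolutions
    influence_signs: List[str] = field(default_factory=list)          # type and sign of each influence
    overall_effect: Optional[str] = None  # combined effect of the influencing quantities

# Illustrative record for a correspondence assignment (values are made up):
example = StoredAssignment(
    assignment_type="correspondence",
    resulting_value="plus",
    state=1,
    corresponding_quantity="Number of",
)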


3.3 Explanation mechanisms

Based on this information, explanations can be provided. The explanatory structure for contradictory and successful models is laid out in this subsection.

3.3.1 Explaining contradictions

As stated, contradictions occur when multiple input values are irreconcilable with one another. Through the lens of section 3.1, this means that multiple input state value assignment processes have a different effect on a single value. Potential combinations of these value assignment processes are numerous, as assignments can be propagated through several orders of influence. Figure 1 illustrates this. A conclusive list of model configurations that lead to contradictions is thus not feasible. However, if all values derived from assignment processes are stored structurally, all contradictions can be explained through an explanation of the conflicting value assignment processes.

Figure 1: Value assignments get propagated through multiple orders of influence. The quantity Birth has a positive effect on the derivative of Number of. This, in turn, has a correspondence with the derivative of Biomass, which has a negative input value. The combination of these three assignment processes leads to a contradiction.

Once a simulation concludes without any model states, the stored value assignments are searched for any quantity values or derivatives with double assignments. This conflict can then be communicated to the user through an account of both value assignments. As the simulation is halted once a contradiction is encountered, only the first encountered contradiction is explained in models that contain multiple contradictions.
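A minimal sketch of this search for double assignments is given below. It assumes assignments are stored as simple records with a target (the quantity value or derivative) and a resulting value; the function and attribute names are hypothetical placeholders, not the prototype's actual code.

def find_first_contradiction(assignments):
    """Return the first pair of stored assignments that give the same target
    (a quantity value or derivative) two different values, or None.

    `assignments` is an iterable of records with `.target` and
    `.resulting_value` attributes, in the order in which they were stored.
    """
    seen = {}
    for a in assignments:
        earlier = seen.get(a.target)
        if earlier is not None and earlier.resulting_value != a.resulting_value:
            # Conflicting double assignment: both records are reported to the user.
            return earlier, a
        seen.setdefault(a.target, a)
    return None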

3.3.2 Explaining values in successful models

Unlike contradictory models, explanations for successful models need an account of both value assignments in input states and further value development in subsequent states. However, since explanations with regards to successful models only take first order influences into account (see section 4.1), the number of distinct situations is limited. An if-then-else structure is used to discriminate between distinct value development processes, which provides the basis for the explanations in each situation. The if-then-else structures for derivative and quantity values are displayed below.

If-then-else structure for the explanation of derivative values:

if quantity value in limit then
    derivative is constrained.
else if derivative has influences then
    if unambiguous then
        resulting effect is assumed.
    else if ambiguous then
        all possible derivatives are explored in alternate states.
else if no previous derivative then
    if derivative has correspondence then
        corresponding value assumed.
    else
        no possibilities for value assignment.
else
    input value is maintained.
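To show how such a structure can drive an explanation choice, the sketch below encodes the derivative case as a plain Python function. The attribute names on `q` are hypothetical placeholders, not the interface of the actual reasoning engine.

def explain_derivative(q):
    """Pick the explanation branch for the derivative of quantity `q`,
    following the if-then-else structure above.

    `q` is assumed to expose simple boolean flags and attributes; the
    names are illustrative only.
    """
    if q.value_in_limit:
        return "derivative is constrained by the quantity value being in a limit"
    elif q.has_influences:
        if q.influences_unambiguous:
            return "the resulting effect of the influences is assumed"
        else:
            return "all possible derivatives are explored in alternate states"
    elif q.previous_derivative is None:
        if q.has_derivative_correspondence:
            return "the corresponding value is assumed"
        else:
            return "no possibilities for value assignment"
    else:
        return "the input value is maintained"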

If-then-else structure for the explanation of quantity values:

if no input value then
    if value has correspondence then
        corresponding value assumed.
    else
        no possibilities for value assignment.
else
    if derivative has influences then
        if current state = input state then
            influences cannot affect value yet, input value is maintained.
        else
            if unambiguous then
                value changes in direction of previous state.
            else if ambiguous then
                all possible derivatives were explored. Value change is the result of one of several options.
    else if derivative has input value then
        value changes in direction of derivative input value.
    else
        original input value is maintained.


4 Concept and implementation

Two didactic goals guide the choices made on the design. In the first place, the design should aid the student in understanding the processes and causal relations within the model at hand. Secondly, the design should foster a more general feeling for the mechanisms within the Dynalearn modelling environment. This means that the design should work towards the acquisition of both model-specific and generalizable knowledge. With these goals in mind, two issues need to be addressed. First of all, it should be determined through which medium and in which format the information should be presented to the student. On top of that, the means and difficulty of access to an explanation must be determined. This section discusses the choices made regarding these two issues, building on the outcomes of the literature review.

An implementation of the discussed concept has been carried out based on the scenario given in Figure 2. States and explanations shown in this section are taken from this implementation.

Figure 2: The scenario on which all figures in section 4 are based

4.1 Explanation medium and formulation

Provision of explanation in online environments can be performed through either natural language or visual means. As interaction in the Dynalearn learning environment almost exclusively consists of visual and graphical means, the added value of a visual explanation is deemed minimal. Thus, explanations are provided through textual dialogue.

In creating the format for textual dialogue, the four principles for basic help - conciseness, self-containment, completeness and context-dependence - identified by [15] are adhered to. Additionally, an emphasis is placed on the simplicity of the explanations. These criteria, along with the general didactic objectives, lead to several choices regarding the exclusion of particular information.

For the sake of conciseness and simplicity, explanations of causal processes within the model cover only the first link in any causal chain. Any influences or proportionalities beyond direct causal ingredients are thus not considered in an explanation. The student has the option to inquire about quantities further down the causal chain of their own accord (Section 4.2). This does not apply to contradictory models, where the entire cause for contradictions is stated in a single statement.

Similarly, explanations will not seek to provide complete accounts of a quantity value from the first state to the current state. Instead, the goal is to provide an explanation regarding the quantity value in the current state, relative to the previous state. This has two reasons. Firstly, in the same vein as previously, this serves to keep the explanations concise and sufficiently simplified for the student. Secondly, and more importantly, the processes within a model remain the same throughout states. Since the first didactic goal is to aid the student in understanding model processes, there is no added value in backtracking all the way to the first state, since the explanation will largely be identical throughout the state path.

Other information is included based on the aforementioned criteria and didactic goals. For the sake of self-containment and the second didactic goal, there is at times a need for the inclusion of information regarding general system mechanisms. General knowledge about the relation between value and derivative, or criteria for when values can be derived from correspondences are thus included in the explanation dialogue.

The provided information should be presented in a coherent manner, walking through reasoning steps in such a way that each next step logically follows from the previously provided information. To assure coherency in the explanation dialogues, the order of information is standardised. To begin with, a general rule for the context at hand is stated. Based on this rule, the relevant model ingredients and processes are determined. Finally, the overall effect is stated.

The stated principles and criteria have been applied to each distinct situation identified in Section 3.3.2, leading to explanation templates for each situation. These templates are listed in appendix A.
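As a small illustration of how such a template could be instantiated, the sketch below fills an English rendering of template 1(b) from Appendix A. The template string and function are hypothetical; the actual templates used in the prototype are formulated in Dutch.

# Hypothetical English rendering of template 1(b) from Appendix A; the
# placeholders follow the wording of the appendix (Q, Influences, Effect).
TEMPLATE_1B = (
    "The derivative of {q} is influenced by {influences}. "
    "The overall effect is {effect}, which causes the derivative of {q} to be {effect}."
)

def render_influence_explanation(quantity, influences, effect):
    """Fill template 1(b) with the relevant model ingredients and overall effect."""
    return TEMPLATE_1B.format(q=quantity, influences=", ".join(influences), effect=effect)

print(render_influence_explanation("Biomass", ["Number of"], "positive"))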

4.2 Access Mechanism

With regards to the access mechanism, a mixture of student-invoked and automatically performed explanations is used. In the case of contradictory models, explanations will be provided automatically. Several factors contribute to this design choice. Firstly, models containing contradictions often indicate a significant misunderstanding of system processes. Additionally, the pre-simulation phase is the phase in which the student possesses the least amount of information, due to the fact that there are no simulation values that the student can use to deduce system mechanisms. These two observations have led to the conclusion that time spent in this situation should be limited. Moreover, the scope of potential explanation targets is limited in contradictory models, since the cause of a contradiction is readily identified by the system. It is thus simpler to provide the needed information in contradictory models than in successfully simulated models. In the latter, any values present in the system could potentially be the source of confusion for the student. As such, student-invoked explanations are the more effective choice in successful models. The next section discusses the procedure concerning student-invoked explanations.

4.3 Student-invoked explanations

As discussed in the literature review, active student engagement is thought to improve student performance. Some safeguard is thus required against passive behaviour, where the student requests an explanation before giving any thought to the issue first. This is implemented in the form of a selection procedure that the student has to complete before gaining access to the explanation.


Whenever a student is unsure about the processes that led to a certain quantity value in a certain state, they can request an explanation on that front. If any external model ingredients affected the current quantity value in any way, the student is prompted to select any ingredients that they deem relevant to the quantity value at hand (Figure 3). Again, any influences and proportionalities beyond the first link in any causal chain are not considered in the selection of the correct quantities. If the selection is correct, the explanation dialogue is provided directly after that attempt (Figure 4). If it is (partially) incorrect, any incorrect selections should be deselected, after which the student is prompted to try once more. This process is repeated at most three times, after which the explanation dialogue is provided regardless.

This imposed limit is the outcome of a conflict between concerns regarding the cognitive effort perspective and concerns regarding the potential for gaming the system. The limit leaves room for exploitative student behaviour by selecting at random until the explanation is provided. However, opting for a limitless selection procedure raises the difficulty of access tremendously, making students less likely to utilise the explanation mechanism. Moreover, safeguards against the exploitation of an imposed limit very quickly turn into obstacles for genuinely puzzled students, once again resulting in information loss as the explanation mechanism might not be fully utilised. As access to information is deemed more valuable than efficient safeguards against exploitation of the selection procedure, the current limit is proposed as a compromise.
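The sketch below captures this access mechanism in outline, assuming the interface provides callbacks for reading the student's selection and for showing the explanation dialogue; all names are hypothetical placeholders rather than the prototype's actual interface.

MAX_ATTEMPTS = 3  # compromise between cognitive effort and gaming-the-system concerns

def run_selection_procedure(correct_ingredients, ask_selection, show_explanation):
    """Outline of the student-invoked access mechanism.

    `ask_selection()` returns the set of ingredients the student selected and
    `show_explanation()` displays the explanation dialogue; both stand in for
    the actual interface calls.
    """
    for attempt in range(MAX_ATTEMPTS):
        selection = ask_selection()
        if selection == set(correct_ingredients):
            break  # correct selection: the explanation is provided directly
        # incorrect selections must be deselected before the next attempt
    # after a correct selection, or after three attempts, the dialogue is shown
    show_explanation()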

Within the student-invoked explanations, a distinction is made with regards to the difficulty of access to an explanation. Explanation requests for quantity values that have at least one external system ingredient as their direct cause follow the previously discussed selection procedure. However, explanation requests for quantity values that do not have any external system ingredients as their direct cause are explained instantly. This is motivated by the fact that these situations can once again be an indication of lesser understanding of system mechanisms. If, for instance, a student requests an explanation for a value that has not changed since the input state, since it is not influenced by any external ingredients, it can be assumed that this student does not yet sufficiently grasp the system mechanisms. As such, it is deemed beneficial to aid the student with an instant explanation. This choice is reinforced by the setup of the selection procedure. Since the student is explicitly requested to select the system ingredients that are in direct causal relation to the current value at hand, the student will be primed to select one ingredient at minimum, rather than none at all. Applying this procedure to values that do not have any direct external causes is thus deemed counterproductive.


Figure 3: The student has requested an explanation for the value of Biomass in State 1. Since Biomass is influenced by Number of, the selection process is started.

Figure 4: The student has made the correct selection, prompting a popup with an explanation dialogue.


5 Technical evaluation

In this section, the storage of value assignments for each of the categories defined in Section 3.1 is evaluated. The standard model used throughout this paper is expanded in order to increase the capacity for different value assignments in three ways. Each model is specifically designed to include a multitude of assignments of a particular kind: input values, influence resolutions and correspondences. The last value assignment process - constraints on the derivative based on the quantity value - appears throughout these three models and is evaluated based on all three models.

5.1 Input values

The model displayed in Figure 5 contains 14 different input value assignments. All possibilities for input values are represented, with input values in upper limit landmarks, lower limit landmarks and intervals. 14 out of 14 input values have been recognized and stored.

5.2 Influence resolutions

The model shown in Figure 6 results in 8 total states with 3 input states. Due to the repetitive nature of influence resolutions, only input states are taken into account here. Five unique influence resolutions can be found in these input states:

• the unambiguous negative effect on Birth.

• the ambiguous effect on Number of.

• the three alternate possibilities for an unambiguous effect on Biomass.

Five out of five cases have been recognized and stored. All aspects mentioned in Table 1 are present.

5.3 Correspondences

The model shown in Figure 7 results in three states. A combination of directed correspondences, undirected correspondences, value correspondences and derivative correspondences is used. Through the simulation, four quantity value assignments and four derivative assignments occur, for a total of eight. Out of these eight, all four value correspondences are recognized and stored. The derivative correspondences are not recognized by the system.


Figure 7: The standard model is expanded here to include eight value assignments through correspondences across three states.

5.4 Value constraints on derivatives

The three models created for evaluation contain a total of three instances where a value is in the limit of the quantity space. In this count, only the input values for Figure 5 have been taken into account, due to the substantial size of this simulation. All three instances are recognized and stored in the system.

6 User Experience

To assess the usability and effectiveness of the explanation prototype shown in Section 4.3, a small examination of user experience has been conducted. Three subjects with no prior experience with qualitative modelling have tested the prototype to provide a preliminary indication of the effectiveness of the approach.² The subjects received assistance and explanation on the basic mechanisms in Dynalearn, after which they were left to explore the simulation results of the model in Figure 2. Any model values that caused confusion in this phase were noted, after which the same scenario was explored within the environment that allows for automated explanations. No assistance was provided in this phase. From this procedure, several points of feedback have been noted.

A number of model aspects that were unclear in the Dynalearn environment were resolved with the help of the explanation procedure. Notably, confusion about the value of influenced quantities in input states was resolved, as well as confusion about the number of input states. On the latter, it was noted that the explanation for this could be clearer. One of the subjects did not understand why the derivative of Number of was set to zero in state 6 (Figure 8), which was clarified after using the explanation prototype.

Points of critique include confusion about when the selection procedure is required, as opposed to the instant explanation that is provided in many cases. Furthermore, the prototype does not explain why the simulation stops at states 2, 4, 6 and 7, as opposed to any other states.

² The coronavirus safety measures created too many obstacles for more rigorous testing.


Figure 8: The derivative of Number of has been set to zero here, constrained by the fact that the quantity value is in the lower limit. This was clarified through an automated explanation.

7 Discussion

The research goal of this thesis was to leverage the internal mechanisms of the Qualitative Reasoning workbench Garp3 in such a way that it can enhance student understanding of dynamic systems. The approach taken has been to construct a conclusive value history for all quantities in a model. The mechanisms for value assignment and development were identified and a system for structural information storage of these mechanisms has been put in place. An evaluation of the storage system suggests that the result is largely successful, although derivative values assigned through correspondences were not detected. Moreover, inequality relations were left out of consideration in this paper, which limits the generalizability of the current results. An approach for the explanation of system values was designed, with a focus on student engagement. This focus is expressed through a selection procedure that follows an explanation request, urging the student to actively hypothesise about probable causes. This approach has been realised in the form of a prototype, which has allowed for initial tests on effectiveness. Due to the limited nature of these tests, any conclusions drawn are subject to uncertainty. An additional source of potential confusion was identified through subject input, as the reasoning behind state path endings is not included in model explanations. Nevertheless, results do seem to indicate that the taken approach furthers understanding of the prototypical model.

Building on this thesis, future work includes the completion of information storage on value assignment and development, as well as more rigorous testing of the taken approach with regards to explanations. Further down the line, concerns surrounding ’gaming the system’ could be more adequately addressed, since the current selection procedure leaves room for this counterproductive behaviour.

This paper has presented a method to provide automatic explanation generation based on model simulations in Garp3. The explanations provided aim to support students in gaining a better overall understanding of the complex systems at hand. It is concluded that the approach taken, though still unfinished, could successfully stimulate student understanding of complex systems.


Appendix A

Explanation templates

Since Dynalearn targets Dutch students, the original templates are formulated in Dutch. For clarity, they have been translated to English here. The order corresponds to the if-then-else structures provided in Section 3.3.2.

1. (a) The value of Q is in the limit. Since the value can’t Direction any further, the derivative must be Relation than or equal to zero.

(b) The derivative of Q is influenced by Influences. The overall effect is Effect, which causes the derivative of Q to be Effect.

(c) The derivative of Q is influenced by Influences. Since the overall effect of these influences is ambiguous, all possible derivative signs for Q are explored. The other possibilities can be found in AlternativeStates.

(d) The derivative of Q has no previous value. Furthermore, no model ingredients influence Q. However, since the derivative of Q has a correspondence with CorrespondingD, the value CorrespondingV can be assumed for the derivative of Q.

(e) None of the model ingredients has an effect on the derivative of Q. Furthermore, no input value was given. Therefore, no value for the derivative of Q can be assigned.

(f) None of the model ingredients has an effect on Q. Therefore, the derivative’s input value is maintained.

2. (a) Q has no previous value. This can only change through correspondences and inequalities. Since Q has a correspondence with CorrespondingQ, the value CorrespondingV can be assumed for Q.

(b) Q has no previous value. This can only change through correspondences and inequalities. Since Q has no correspondence or inequality relations, it is not possible to assign any value to Q.

(c) A quantity value is based on its derivative in the previous state. While the derivative of Q is influenced by Influences, this cannot change the value of Q yet because the current state is an input state.

(d) A quantity value is based on its derivative in the previous state. The derivative of Q in state PreviousState is influenced by Influences. Since the overall effect of these influences is ambiguous, all possible derivative signs for Q are explored, including PreviousD in PreviousState. Because of this, the value of Q is RelativeValue than in state PreviousState.

(e) A quantity value is based on its derivative in the previous state. The derivative of Q in state PreviousState is influenced by Influences. The overall effect is Effect, which causes the derivative of Q in state PreviousState to be Effect. Because of this, the value of Q is RelativeValue than in state PreviousState.

(f) None of the model ingredients has an effect on Q. Because of that, the input value for the derivative of Q is still InputD. Therefore, the value of Q is RelativeValue than in state PreviousState.

(g) None of the model ingredients has an effect on Q. On top of that, there is no input value for the derivative of Q. Therefore, Q has maintained its original input value.


References

[1] Anders Bouwer. Explaining behaviour: using qualitative simulation in interactive learning environments. 2005.

[2] Bert Bredeweg et al. Garp3 — Workbench for qualitative modelling and simulation. Ecological Informatics, 4(5-6):263–281, 2009.

[3] Kenneth D. Forbus. Qualitative reasoning. In CRC Handbook of Computer Science, 1997.

[4] Gautam Biswas, Daniel Schwartz, John Bransford et al. Technology support for complex problem solving: From SAD environments to AI. 2001.

[5] Erika Schlatter et al. Can learning by qualitative modelling be deployed as an effective method for learning subject-specific content? Data Driven Approaches in Digital Education, Lecture Notes in Computer Science, pages 479–485, 2017.

[6] Shane T. Mueller, Robert R. Hoffman, William Clancey, Abigail Emrey, Gary Klein. Explanation in human-AI systems: A literature meta-review, synopsis of key ideas and publications, and bibliography for explainable AI. CoRR, 2019.

[7] Tania Lombrozo. Explanation and abductive inference. Oxford handbook of thinking and reasoning, pages 260–276, 2012.

[8] Douglas S. Krull and Craig A. Anderson. The process of explanation. Current Directions in Psychological Science, 6(1):1–5, 1997.

[9] William F. Brewer, Clark A. Chinn, Ala Samarapungavan. Explanation in scientists and children. Minds and Machines, 8(1):119–136, 1998.

[10] Keith Darlington. Aspects of intelligent systems explanation. Universal Journal of Control and Automation, 1(2):40–51, 2013.

[11] Shirley Gregor and Izak Benbasat. Explanations from intelligent systems: Theoretical foundations and implications for practice. MIS Quarterly, pages 497–530, 1999.

[12] Robert M. Carini, George D. Kuh, Stephen P. Klein. Student engagement and student learning: Testing the linkages. Research in higher education, 47(1):1–32, 2006.

[13] Jennifer Hammond and Pauline Gibbons. What is scaffolding. Teachers’ voices, 8:8–16, 2005.

[14] Ryan Shaun Baker, Albert T. Corbett, Kenneth R. Koedinger. Detecting student misuse of intelligent tutoring systems. In International conference on intelligent tutoring systems, pages 531–540. Springer, 2004.

[15] Wouter Beek and Bert Bredeweg. Context-dependent help for novices acquiring conceptual systems knowledge in DynaLearn. In International Conference on Intelligent Tutoring Systems, pages 292–297. Springer, 2012.

[16] Vladan Devedzic and Ljubomir Jerinic. Explanation in intelligent tutoring systems. Bulletins for Applied Mathematics, 1196(96):183–192, 1996.
