
Information, Complex Topics, and Search Patterns: An inquiry into how search patterns can help create Adaptive Information systems.

Author: N. V. van der Veere
Coordinator: Frank Nack
University of Amsterdam, Faculty of Natural Sciences, Mathematics and Computer Science
Student ID: 11612592
Date: 03/07/2020
Abstract word count: 236
Word count: 9622


Table of Contents

1. Introduction
   1.1 Research Question
2. Theoretical Background
   2.1 Complex topic(s)
   2.2 Development aid
   2.3 Interactive Storytelling
3. Method
   3.1 Information source dataset
   3.2 Experiment design and Implementation
4. Experiment Performance
   4.1 Materials
   4.2 Participants
   4.3 Procedure
5. Results
   5.1 Experiment dataset
   5.2 Analysis
   5.3 Dataset descriptives
   5.4 Paired Samples T-test
   5.5 Wilcoxon Signed Rank Test
   5.6 Pearson correlation
   5.7 Participant Patterns
   5.8 Vector Magnitudes and Directions
6. Discussion
   6.1 Interpretation of Results
   6.2 Limitations
7. Conclusion and Future work
8. References
Appendix A: Design
Appendix B: Statistics


Abstract

The purpose of this study is to gain an understanding of how people search for information on complex topics. Storytelling is considered the optimal method for explaining complex topics, but with the shift to non-linear, indirect media, the elements of storytelling are in decline, which has made complex topics harder to fathom. To this end, Interactive Storytelling was used to create an environment that facilitates storytelling. The research question was therefore: ‘To what degree can Interactive Storytelling be employed to discover if people follow patterns when moving through information sources on complex topics, specifically pertaining to development aid?’ Answering this question required an understanding of how people move through information, which patterns exist in these movements, and what people’s preferences are. For this, a custom web-based cross-sectional survey simulating an adaptive information system was created. The data were analysed with three statistical tests, the Paired Samples T-test, the Wilcoxon Signed Rank test and the Pearson R correlation, and were further used to construct the patterns. The statistical tests showed conflicting and partially inconclusive results. A total of 6 main patterns were identified. No one-sided norm for information preferences was found and patterns could not be explicitly linked to demographic groups, but dominant within-group patterns were identified. The identified patterns contribute to the further development of an adaptive system, and the research also benefits the wider domain of Interactive Storytelling.

Keywords: Interactive Storytelling, Adaptive Information System, Simulation, Development Aid, Information Search Patterns, Complexity, Sentiment.

1. Introduction

People have a notoriously hard time wrapping their heads around complex topics, because complex topics consist of large and ever-increasing amounts of information. People used to receive their information through linear, non-interactive media such as news and television. Although this still happens, it is increasingly being substituted by information found on the internet. The internet is a vast source of information, but that information is presented in a non-linear, indirect and in some cases heavily personalized manner, whereas newspaper articles are linear, direct in nature and story-driven. The ways of searching on the internet are indiscriminate, and any search can net a multitude of different sources (Hölscher & Strube, 2000). This non-linear, indirect structure goes against the optimal way of human understanding, making it increasingly hard to fathom complex topics, because it strips away the elements of storytelling. This story element is important because people understand complex content better through stories.

To research this, the theory and application of Interactive Storytelling (IS) is employed. Although IS has been used in various social contexts, there is still sparse research on how IS can help people understand information, how information is chosen, and what people’s preferences for information are. Nonetheless, since storytelling is the optimal method of conveying complex content (Nix & Spiro, 1990), and interactivity is, in most cases, the optimal way of learning new information (McNett, 2016), it becomes an important method for creating an adaptive information system.

However, as of writing there are no known systems which have implemented the notion of an adaptive information system. Thus, the investigation into an adaptive IS system for information dissemination is of importance to furthering the use of IS. The complex topic used in this research is that of development aid. This complex topic is chosen because it is perhaps one of the most influential subjects of the past decades, but its explicit effects are disputed.

The aim of this research is to make an inquiry into three different aspects of how people can understand complex topics. The first is in what ways people move through information sources collected by other people; the second is whether people show explicit behavior when moving through these sources; and the third is in what format people want their information.

1.1 Research Question

The following research question was formulated according to the stated aim and the chosen complex topic, both of which are incorporated into it.

‘To what degree can Interactive Storytelling be employed to discover if people follow patterns when moving through information sources on complex topics, specifically pertaining to development aid?’

Several sub-questions have been formulated to further structure the research:

1. Which elements of interactive storytelling can be used to explain a complex topic?

2. Which patterns could potentially exist or are created when people receive information sources chosen by the researcher? And what can these patterns tell us?

3. Do people show explicit behavior when moving through these sources, or can it be related to certain behavior? And are they influenced in any way by the information?

To examine and answer these questions, further theoretical context on social problems, interactive storytelling and complex topics will need to be given. These three will be discussed in the following section, which builds the theoretical framework.

2. Theoretical Background

This section builds the theory behind the concepts implemented further in the research. First, an explanation of what complex topic(s) are and how they are defined will be given, including the description of development aid and an overall view on why development aid is a complex topic. Afterwards the link will be made with interactive storytelling: what it is, and how it can be used in this research.

2.1 Complex topic(s)

Complex topics are not run-of-the-mill subjects; they require more in-depth knowledge and explanation to be comprehensible. Complex topics involve many different interrelated parts which make up a whole (Cambridge Dictionary, 2020). Social problems likewise consist of different angles and facets, interrelated parts of a whole, and are in this respect complex topics.

The definition of a social problem is, “A social problem is any condition or behavior that has negative consequences for large numbers of people and that is generally recognized as a condition or behavior that needs to be addressed” (University of Minnesota Libraries Publishing, 2016). But how does a condition or behavior become a social problem? The development of a social problem starts when someone (a claims maker) makes an argument (a claim) that a condition or behavior is harmful and tries to convince others why something must be done about it and what specific actions are needed (Defronzo & Gill, 2015). This often leads to different information sources making differing claims as to the causes, effects and influences of social problems. This creates a clouded environment of information overload, in which it becomes ambiguous what information is true and what is false. It has become even more confusing with the addition of fake news. This ambiguity makes social problems complex topics.

2.2 Development aid

The chosen complex topic for this research is development aid. Development aid is one of the most influential concepts of our time and it plays a significant role in today’s world. Development aid is “aid that promotes and specifically targets the economic development and welfare of developing countries” (Organization for Economic Co-operation and Development (OECD), 2019). This has led to advocates pushing for a doubling of international assistance, which currently stands at 50 billion dollars annually for Africa alone (Moyo, 2009a, 2009b). However, aid can have both positive and negative consequences for a country’s development, socio-economic environment and economic infrastructure.

As it now stands, countries, institutions and the World Bank have, in the last decade, donated a sizeable 1 trillion US dollars to African countries. This is strange, because evidence shows that the aid to Africa and other countries has only made the poor poorer and economic growth slower (Rajan & Subramanian, 2005; Moyo, 2009a, 2009b; World Economic Forum & Dreher, 2015).

One of the reasons that foreign aid continues without change is that, even when struck by headwinds, the moral imperative stays upright. This moral imperative is the strong driving force behind development aid and the need to give resources to people in underdeveloped, war-torn countries or countries hit by natural disasters (Opeskin, 1996). Burnside and Dollar (2000), however, contradict this downward spiral and have come to the conclusion that aid equal to 1 percent of gross domestic product can increase growth by a sustainable 0.5 percentage points in a well-managed country. This is however a big ‘if,’ because no countries have thus far been successful in realizing this. The notion that aid helps is nonetheless further substantiated by Levine (2004), who conducted a series of case studies and found that a World Bank funded project in China averted 30,000 cases of tuberculosis per year. These findings are in turn contradicted by Ferro and Wilson (2011), who show that development assistance for health has no effect on mortality and that its effectiveness has not improved over time, illustrating that there is no consensus.

Foreign aid also has a multitude of other functions, such as investments for diplomatic approval, military allies, rewards for admired behavior, the extension of cultural influence, or the provisioning of infrastructure needed by the donor country to extract resources from the recipient country, or other such commercial gains (Cheru & Obi, 2011). This makes development aid look like a type of trade or negotiation mechanism.

Development aid is thus a complex topic which encompasses many facets and angles, and it is therefore ideal as the complex topic for this research. To explain these facets and angles, the theory and application of IS can be used.

2.3 Interactive Storytelling

Storytelling is a comprehensive method to bring structure to problems that have a multitude of facets and angles. It gives people the opportunity to comprehend information naturally from different sources. However, since a story is being told in a social context and a look is taken at how people search for, select and move through information, a twist is needed: traditional storytelling alone is not the solution. To create a well-rounded information system and to help people understand information, it has to be interactive. To accomplish this, IS is employed. The premise is that IS can be used to give people options to orient their way through information in a naturalistic manner, whilst still giving them the ability to control part of the narrative and not be overwhelmed by the sheer amount of information that exists on complex topics such as development aid.

Interactive storytelling encompasses many disparate fields, including psychology, sociology, cognitive science, linguistics (both human and computer), user experience and interface design, and computer science. Inherently IS is a form of digital entertainment in which the storyline the users experience is not predetermined. The storytelling in this case refers to the process of creating and authoring rather than the final passive reception of a product. This defines interactive storytelling as any type of media that allows the user to generate unique dramatic narratives.

Interactive storytelling can be traced back to the 1970s and the research efforts of Roger Schank, which helped produce arguably the first early storytelling system, called TaleSpin (Meehan, 1977). These efforts were further elaborated on in the 1980s by Michael Lebowitz with the development of a conceptual system called ‘Universe’ (Lebowitz, 1984). Brenda Laurel extended this work with her PhD dissertation, ‘Toward the Design of a Computer-Based Interactive Fantasy System’ (Laurel, 1986). Eventually, during the 1990s, interactive storytelling gained a following, which led to several research projects such as the Oz Project (Kelso, et al., 1993), the Software Agents group, the Improv Project and the Virtual Theater group. Roger Schank (Wyer, et al., 1995) went on to develop a model which creates interrelationships between stories and memory, describing how stories can be understood and how people create stories from the gist of memory. Storytelling systems seek to model the knowledge processes necessary to tell stories (Mateas & Sengers, 2003). These developments have partially led to ‘Façade,’ an unauthored story path generator that uses characters’ output and the user’s input for story creation (Mateas & Stern, 2003).

Furthermore, extensive research is and has been done toward the implementation and improvement of adaptive storytelling within the context of games (Paul, 2011; Durairaj, 2020). The challenge for games is the need for stories to be seamless and ongoing; thus, adaptation is needed when certain storylines are invalidated by events in the game world. The techniques associated with adaptive storytelling systems could be implemented in different contexts.

3. Method

The research employed both a quantitative and, to a certain extent, a qualitative design. The quantitative design was used to provide systematic information on the observed phenomenon. For this a cross-sectional survey was used. No explicit goal is stated to pinpoint a cause-and-effect relationship, which mitigates the drawback of using a cross-sectional survey. The qualitative design of the research stemmed from the quantifying of opinions into scores for the creation of patterns corresponding to participants; this interpretation of opinions is the qualitative element.

The experiment, which is part two of the survey, is fundamentally structured around two storytelling elements: experience, operationalized as sentiment, and complexity. The information sources and the participants were all assigned scores on these two metrics. These metrics were used because the literature made clear that these two elements structure storytelling. Specifically, the first metric was experience: the experience of the information, how it was perceived, ‘do I like what I read?’ The other element was complexity: how complex the information is, ‘do I understand what I read?’ The experience metric was translated to sentiment, but complexity was kept the same. These metrics were chosen because they could be actively measured and quantified for both the information sources and the participant. This in turn made it possible to place, and change the placement of, a participant within the information source dataset.

The experiment ultimately simulated an adaptive information system. This was a simulation because, due to time and resource constraints, a full system could not be implemented. It is adaptive to a certain degree, but not in the way the standard definition would define adaptiveness (Oxford, 2020). Nonetheless, the simulation was able to demonstrate the basic concept and its possible implementation.

3.1 Information source dataset

To build the survey, the first order of business was to create a dataset filled with information sources. With the collection of the information sources we effectively simulate the creational work that the author of an IS system would do; a person building an adaptive information system would have to create such an initial dataset too. The information source dataset consisted of 52 individual textual and audiovisual sources: 11 news articles, 11 YouTube videos, 12 opinion pieces, 4 blog posts, and 14 informative articles from either newspapers, blogs or organizations involved in development aid. These sources were collected because the information was readily accessible, meaning that a participant would have been able to simply look up these articles with the Google search engine by using variations on the term ‘development aid.’

The sources in the information source dataset were classified on both sentiment and complexity, both defined as scores. This classification was based on the decision that, to facilitate an adequate entry point for participants in the information source dataset, both the participant and the sources should be measured on the same scales. With the use of these scales it became possible to facilitate an entry point. The sources were assigned their sentiment scores through a hybrid method combining a rule-based sentiment analysis model made with the Python NLTK library (NLTK Project, 2020) and a sentiment analysis API (MonkeyLearn, 2020). Further, the complexity scores were calculated using several textual scoring metrics (Generation Ready, 2013) from the Python package Readability (Dimascio, 2020): Flesch-Kincaid Grade Level, Flesch-Kincaid Reading Ease, Gunning Fog, Coleman Liau Index, Automated Readability Index (ARI) and Linsear Write. The choice was made to use such a broad variety of scores because it gives a better overall picture of every article’s complexity. Several other scoring systems were left out because they were incompatible with the length of some information sources. Figure 1 represents the distribution of the documents in the sentiment and complexity space.

Figure 1: Visualization of the dataset as a populated grid of information sources.

Description: The blue dots represent the information sources. The x-axis goes from simple to complex, measured between 0 to 1, 0 being the lowest and 1 being the highest. The y-axis goes from negative to positive sentiment, measured between -1 to 1, -1 being completely negative and 1 being completely positive.
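To make the scoring step concrete, the sketch below shows how a single source could be assigned a (sentiment, complexity) pair. It is a minimal sketch, assuming NLTK's VADER analyzer stands in for the rule-based model, omitting the MonkeyLearn API step, and using the py-readability-metrics package for the readability scores; the normalisation of the averaged grade levels to the 0–1 complexity scale is an illustrative assumption rather than the exact recipe used in the thesis.

```python
# Sketch of scoring one information source on sentiment and complexity.
# Assumptions: VADER replaces the rule-based model + MonkeyLearn hybrid,
# and the min-max squashing of grade levels to [0, 1] is illustrative only.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from readability import Readability  # py-readability-metrics

nltk.download("vader_lexicon", quiet=True)


def score_source(text: str) -> tuple[float, float]:
    """Return a (sentiment, complexity) tuple for one information source."""
    # Sentiment: VADER's compound score already lies on the -1..1 scale.
    sentiment = SentimentIntensityAnalyzer().polarity_scores(text)["compound"]

    # Complexity: average several readability grade levels, then squash to 0..1.
    r = Readability(text)  # requires at least 100 words of text
    grades = [
        r.flesch_kincaid().score,
        r.gunning_fog().score,
        r.coleman_liau().score,
        r.ari().score,
        r.linsear_write().score,
    ]
    mean_grade = sum(grades) / len(grades)
    complexity = min(max(mean_grade / 20.0, 0.0), 1.0)  # assume grade 20 ~ maximum
    return sentiment, complexity
```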

The drawback of the current information source dataset was that sources were not evenly distributed. This meant that if someone went to the very extremes of the dataset and had a high starting point, they would likely not have received the desired complexity and sentiment. The dataset could coincidentally also have turned out differently, i.e. more clustered, more positive, less complex, or other variations, due to researcher bias. To guard against such bias, sources were taken from as wide a range of places and authors as possible. The dataset will inherently still be minimally biased because we live in an increasingly polarized society (Spohr, 2017; McCoy, et al., 2018): if the researcher does not write the texts themselves, they are going to get polarizing views.

3.2 Experiment design and Implementation

Experiment design

The main requirements for this survey were the following. One, the survey should be able to measure both the sentiment and the complexity of a participant in terms of scores. Two, these sentiment and complexity scores should be able to place a participant in the information source dataset. Three, once the participant was placed in the information source dataset using his or her own sentiment and complexity scores, they should be able to receive information sources. Four, the first information source is selected based on the Euclidean distance between the person’s scores for sentiment and complexity and the sentiment and complexity scores of the nearest information source.1

Five, the participant should receive a new information source up to 5 times (Tan, et al., 2016).

1 The formula for the Euclidean distance is d(p, q) = √((q₁ − p₁)² + (q₂ − p₂)²), with which the next nearest article was calculated, where q₁ is the source’s sentiment, q₂ is the source’s complexity, and p₁ and p₂ are the participant’s sentiment and complexity scores.

Six, sources 2, 3, 4 and 5 are given based on the participant’s opinion of the source that came before. Seven, the survey should be interactive to best simulate an adaptive system. A sketch of the placement mechanism behind requirements three and four is given below.
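The following sketch illustrates requirements three and four: matching a participant's (sentiment, complexity) point to the nearest information source using the Euclidean distance from the footnote above. The data layout and function names are illustrative assumptions, not the survey's actual implementation (the survey itself was built in ReactJS, as described later).

```python
# Sketch of the initial placement: find the source closest to the participant's
# (sentiment, complexity) point by Euclidean distance. The list-of-dicts layout
# and the function names are assumptions made for illustration.
import math


def euclidean(p: tuple[float, float], q: tuple[float, float]) -> float:
    """d(p, q) = sqrt((q1 - p1)^2 + (q2 - p2)^2), as in the footnote above."""
    return math.sqrt((q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)


def first_source(participant: tuple[float, float], sources: list[dict]) -> dict:
    """Return the source whose (sentiment, complexity) is closest to the participant."""
    return min(sources, key=lambda s: euclidean(participant, (s["sentiment"], s["complexity"])))


# Example: a participant who scored sentiment 0.2 and complexity 0.6.
dataset = [
    {"title": "Source A", "sentiment": -0.4, "complexity": 0.3},
    {"title": "Source B", "sentiment": 0.1, "complexity": 0.7},
]
print(first_source((0.2, 0.6), dataset)["title"])  # -> "Source B"
```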

Only two requirements were fulfilled by existing survey solutions; thus, the decision was made to build a custom survey. The design of the survey was based on the identified components of storytelling and the idea of the ideal dataspace structure. As for the variables, these warrant their own explanation, given in the order of independent, dependent, control and extraneous variables.

The independent variable was the information sources. The information sources were consistently changed based on the prior sentiment and complexity of section one and the opinion(s) given by the participant. The information sources served to change the sentiment, but also to show a pattern of movement throughout the information source dataset.

The dependent variable was the change in the sentiment and complexity of the sources pertaining to development aid. Both sentiment and complexity were constantly changed at the same time through the change in the information sources. This allowed the participant to move through the information source dataset. Effectively it took on a tuple structure of both scores, i.e. (sentiment, complexity). The sentiment of the participant was measured in pre- and post-test conditions, with the second part (the simulation) being the test. This was done to see if a significant change in sentiment would occur after the participant received information sources.

The control variables were the demographics of the participants: age, gender and education. Questions on these points were found in section one. These datapoints let us subdivide the sample population for more in-depth analysis.

The extraneous variables were prior knowledge, motivation, and informative searches. These variables could have potentially influenced the results of the survey. Prior knowledge could have prematurely influenced the opinion of the participant, meaning that a person could have had a prior opinion influencing their assessment of the sources. The sentiment measurements (pre- and post-test) could also have been influenced by such an existing opinion. Motivation was another variable which could have influenced the resulting pattern(s). This pertained mostly to the motivation to educate oneself, which was also a driving factor in moving through the information source dataset. If a person was not motivated, they could not be assumed to search for novel information, look at the sources critically or choose a best-fitting opinion. Since the survey was on the web, it could not be controlled whether participants would undertake informative searches, by which is meant searches for the right answers to the knowledge and/or complexity questions.

Following these variables, the structure of the survey was determined and several assumptions were made.

1. The assumption that participants will follow certain definable patterns, because they tend to traverse information sources in habitual ways.

2. The assumption that different demographic groups in the sample population will show different patterns. Such that lower educated participants will move toward less complexity and more positive content.

3. The assumption that, because of the information sources, participant’s sentiment will momentarily change to either more positive or more negative in regard to the before – after experiment condition.

These assumptions were further incorporated into the design of the survey. The design created an eventual result space of 48 initial datapoints for each participant.

The survey was set up following a three-part structure, and each part served a different function. Part one served to ask demographic, knowledge, sentiment and complexity questions to create groups within the sample population. A total of 16 questions were asked (see Appendix A, part 1). The initial questions asked the participants for age, gender and education, with 5, 3 and 6 options respectively. These were the control variables and were not changed throughout the experiment. The participant was then quizzed on their knowledge of the subject with 2 multiple-choice questions with 4 options each. These questions were designed so that only a participant with in-depth knowledge could answer them correctly (not accounting for chance).

The participant was hereafter questioned on their sentiment toward development aid. This was done with 8 sentiment questions. The set of questions consisted predominantly of statements to invoke a person’s opinions and values. Answers were given on 5-point Likert scales ranging from ‘Very Bad’ to ‘Very Good.’

Lastly, the participant was asked 3 questions to measure their complex thinking. The first question tested mathematical reasoning and was an open question. The second tested reading comprehension and was a multiple-choice question with 4 options. The third tested pattern recognition and was a multiple-choice question with 2 options. The questions were inspired by a set of samples from AssessmentDay (2020). The use of such questions is substantiated by Hegarty (2000) and Hoffman (2007).
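These part-one answers later determine the participant's starting point in the information source dataset (see part two below). The sketch here shows one way the answers could be mapped onto the same scales as the sources; the exact mapping (averaging the Likert items onto -1..1 and the share of correct complexity answers onto 0..1) is an illustrative assumption, not the thesis' formula.

```python
# Sketch of turning part one's answers into the participant's starting point.
# The normalisation used here is an assumption made for illustration.
def participant_point(likert_answers: list[int], complexity_correct: list[bool]) -> tuple[float, float]:
    """Map 8 Likert answers (1-5) and 3 complexity answers to (sentiment, complexity)."""
    mean_likert = sum(likert_answers) / len(likert_answers)        # 1 .. 5
    sentiment = (mean_likert - 3) / 2                               # -1 .. 1
    complexity = sum(complexity_correct) / len(complexity_correct)  # 0 .. 1
    return sentiment, complexity


print(participant_point([4, 4, 3, 5, 4, 3, 4, 4], [True, True, False]))  # -> (0.4375, 0.666...)
```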

The second part of the survey simulated an adaptive information dissemination system, but was in actuality an interactive information dissemination system (see Appendix A, part 2). It consisted of information sources which were given to the participant, each coupled with an opinion question. The participant saw a minimum of 1 and a maximum of 5 sources. These sources were the independent variable and partly contained the dependent variable, because with each article a change in both sentiment and complexity occurred. The minimum was set because the participant could exit the second part whenever he or she wanted, but data still had to be gathered.

The design was structured in such a way that the participant was first directed to examine the information source and then give their opinion on it. The opinion question was a 4-option multiple-choice question, in which each answer was a combination of ‘I liked the content,’ ‘I disliked the content,’ ‘I liked the complexity,’ and ‘I disliked the complexity.’ The choice the participant made influenced which information source would be presented next, meaning that the participant received, based on their choice for sentiment and complexity, a lower or higher sentiment and complexity source.

The participant was given an initial article based on the answers to the sentiment and complexity questions asked in part 1. After this initial source, the opinion questions were used by the participant to navigate the information source dataset. The mechanism facilitating this was based on three measures: the Euclidean distance, a percentile score, and the opinions indicating which way to move on the sentiment and complexity scales. For example, if one chose the opinion ‘I disliked the content, but liked the complexity,’ one received a random next article from the 25th percentile of Euclidean distances. However, if a person chose ‘I liked the content and the complexity,’ they would get a random article from the 50th percentile of Euclidean distances. All information sources were further

The reasoning behind this 25th and 50th percentile rule was that if a person liked their current source, i.e. (+1, +1), they would be pushed to see if they liked a higher level too, and vice versa for disliking the source. This made it possible to push people to the outer areas of the dataset; for this the 50th percentile was used. If one was unsure about either sentiment or complexity, they would be left to adjust their current levels until they were happy. For this, one could not move too far from the original source, so the 25th percentile was used.
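A minimal sketch of this selection rule is given below. It assumes that "from the Nth percentile" means sampling among the closest N percent of candidate sources by Euclidean distance; the exact pool definition, the handling of already-seen sources, and the direction of movement on each scale are assumptions, not the confirmed implementation.

```python
# Sketch of the next-source rule: a confident opinion ("liked both") samples from
# the 50th percentile of Euclidean distances, a mixed opinion from the 25th.
# How direction per scale and already-seen sources are handled is an assumption.
import math
import random


def next_source(current: dict, candidates: list[dict], liked_both: bool) -> dict:
    here = (current["sentiment"], current["complexity"])
    ranked = sorted(
        candidates,
        key=lambda s: math.dist(here, (s["sentiment"], s["complexity"])),
    )
    # 50th percentile if the participant liked both content and complexity
    # (push them further out), otherwise stay within the 25th percentile.
    cutoff = 0.50 if liked_both else 0.25
    pool = ranked[: max(1, int(len(ranked) * cutoff))]
    return random.choice(pool)
```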

The third part of the survey consisted of negatively formulated sentiment questions mirroring the sentiment questions of part one. These questions served to measure the change in the sentiment or opinion of a participant after part two, in which information sources were presented. The participant was asked a total of 8 negatively formulated sentiment questions, following the same structure of statements and Likert scales as part one (see Appendix A, part 3).

Survey Implementation

This section describes how the survey was implemented and structured. The survey itself was created with ReactJS (Facebook, 2020), and the SurveyJS library (Devsoft Baltic OÜ) was used (see Appendix A for the codebase and illustrations). ReactJS was chosen because the researcher has prior experience with the JavaScript language and libraries. The SurveyJS library was used because it had existing templates which could be integrated easily for efficiency purposes. Because of the adaptive nature of the survey, it had to be conducted online.

4. Experiment Performance

4.1 Materials

Internet capable device

Due to the nature of the survey, and it being solely available online, participants needed access to an internet-capable device, meaning the participant only needed a mobile phone, tablet or computer to fill in the survey. Computers were, however, preferred due to better functionality.

Web survey

The web survey was custom made for the experiment. It was generally designed following Gonzalez-Bañales and Adam’s (2007) recommendations on survey design. The paper was used as a rough guideline to create both the structure as well as the questions in the survey. The styling was directly attributable to the SurveyJS library (Devsoft Baltic OÜ). Screenshots of the web survey can be found in Appendix A.

4.2 Participants

The experiment aimed to use predominantly Dutch participants between the ages of 20 and 69 years old who had either a High School, College, Bachelor’s or Master’s degree. This excluded persons with lower or higher ages, as well as groups with lower or higher levels of education. No further distinctions were made within education levels, meaning that the levels of education within High School, College or Bachelor’s degrees were not taken into consideration.

Participants were garnered through two methods of sampling: purposive sampling and voluntary response sampling. The underlying choice to use two sampling methods was that, when combined, they would yield a more diverse sampling group. The purposive sampling was used at the onset of the research to target specific groups of potential participants for increased diversification, for the most part based on age. Participants were asked to share the survey with their respective circles as a means to spread it. This first sampling method created a potential base for the voluntary response sampling to grow from.

Participants were further incentivized to fill in the survey with the prospect of winning monetary compensation. This was decided due to initial feedback from trial runs in which participants mentioned the loss of motivation due to the time-consuming nature of the survey.

4.3 Procedure

An invitation to fill in the survey was initially sent by email to N=20 people on the morning of 15/05/2020, to start the purposive sampling process. The invitation email was sent under the signature of the author and through the University of Amsterdam student email account. The email included a short introduction to the survey, a description of the incentive and a link to the survey. No subsequent email reminders were sent.

On 22/05/2020 the survey was posted on the social media websites Facebook, LinkedIn and Instagram to kickstart the sampling again; this was the voluntary response sampling method. It had a total potential reach of N=684 on 22/05/2020. This was done due to the low response rate in the purposive sampling stage.

The survey was closed on 15/06/2020. The eventual field period for the survey was from 15/05/2020 to 15/06/2020. This is a total of 31 days.

5. Results

The aim of this research was to make an inquiry into different aspects of how people can understand complex topics using IS as the basis for an information system. To find this out several methods of analysis have been employed. The results of this research are explained in this section of the research paper.

5.1 Experiment dataset

The experiment dataset held all the data from the survey and was the dataset from which all conclusions are inferred. The experiment dataset counted a sample size of N=87 and contained 48 datapoints for every individual participant. These 48 datapoints were subdivided into 7 encompassing variables.

These encompassing variables were Demographics (of which 3), Knowledge (of which 2), Complexity (of which 3), Sentiment before (of which 8), Sentiment after (of which 8), Movements (of which 3) and Information Source opinions (of which all participants saw 19 out of the possible 52). The variable Information Source opinions contained all the opinions of participants on the information sources.

The distribution within the dataset concerning the Demographic variable was skewed; the sample had an overrepresentation in the gender and age datapoints. The gender distribution was skewed, with men outnumbering women by 3 to 1. This was also the case for age groups, where participants in the 20 – 29 years old group outnumbered the other respective groups by a ratio of at best 7 to 1 (for 30 – 39) and at worst 37 to 1 (for 60 – 69). The distribution of levels of education was more spread out: there were 32 participants with a High School diploma, 32 with a Bachelor’s degree, 15 with a College degree and 8 with a Master’s degree.


5.2 Analysis

Before data analysis could commence, some preprocessing was in order. A total of 2, 3 and 8 datapoints from the variables Knowledge, Complexity, and Sentiment (after the experiment), respectively, were recoded for each participant. The Knowledge datapoints were recoded into a summed score between 0 and 2, the Complexity datapoints were summed to a score between 0 and 3, and the Sentiment datapoints (after the experiment had taken place) were reverse coded. This was done because these datapoints were negatively formulated and had to conform with the first set of Sentiment datapoints (before the experiment had taken place).
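A minimal sketch of this recoding step is shown below, assuming the experiment dataset is loaded as a pandas DataFrame; the column names and the file name are illustrative assumptions, not the actual survey export.

```python
# Sketch of the preprocessing: sum the Knowledge and Complexity items and
# reverse-code the negatively formulated post-test Sentiment items.
import pandas as pd

df = pd.read_csv("survey_export.csv")  # hypothetical export of the experiment dataset

# Knowledge: two 0/1-scored items summed to 0-2.
df["knowledge_score"] = df[["knowledge_1", "knowledge_2"]].sum(axis=1)

# Complexity: three 0/1-scored items summed to 0-3.
df["complexity_score"] = df[["complexity_1", "complexity_2", "complexity_3"]].sum(axis=1)

# Post-test sentiment: reverse-code the 5-point Likert items (1 <-> 5, 2 <-> 4)
# so they line up with the positively formulated pre-test items.
post_items = [f"sentiment_after_{i}" for i in range(1, 9)]
df[post_items] = 6 - df[post_items]
```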

All preprocessing was done in Python using the NumPy (The SciPy community, 2020) and Pandas (NumFOCUS, & McKinney, 2020) packages. Python was further employed for the analysis of the dataset, using the packages SciPy (SciPy developers, 2020), Seaborn (Waskom, 2020), Plotly (Plotly, 2020) and Matplotlib (Matplotlib development team, et al., 2020). SciPy was used for the statistical testing, and Seaborn, Plotly and Matplotlib were used to visualize the data.

The data was initially checked for normality and was determined to be partially normally distributed (Sentiment (before) was a point of doubt); thus, the statistical tests Paired Samples T-test, Wilcoxon Signed Rank Test and Pearson R correlation were used to analyze the data. Due to the before – after nature of the experiment, a paired samples t-test was used to analyze whether a relation exists. Because it was unclear whether the distribution of sentiment in the pre-test condition was entirely normally distributed, a Wilcoxon signed rank test was also used. This was done to confirm or disconfirm the result of non-significance which the t-test produced.
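The three tests map directly onto SciPy calls, as in the sketch below; it assumes the DataFrame and column names from the preprocessing sketch above.

```python
# Sketch of the three statistical tests on the mean pre- and post-test sentiment.
from scipy import stats

pre = df[[f"sentiment_before_{i}" for i in range(1, 9)]].mean(axis=1)
post = df[[f"sentiment_after_{i}" for i in range(1, 9)]].mean(axis=1)

t_stat, t_p = stats.ttest_rel(pre, post)   # Paired Samples T-test
w_stat, w_p = stats.wilcoxon(pre, post)    # Wilcoxon Signed Rank Test
r_val, r_p = stats.pearsonr(pre, post)     # Pearson R correlation

print(f"t-test: t = {t_stat:.3f}, p = {t_p:.3f}")
print(f"Wilcoxon: W = {w_stat:.3f}, p = {w_p:.3f}")
print(f"Pearson: r = {r_val:.2f}, p = {r_p:.3f}")
```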

The second step was the analysis of the datapoints in the Movement variable to create the patterns of movement that participants produced. The patterns were plotted in both 2D- and 3D-space to see what types of patterns were produced and how these could indicate certain behavior. Specific patterns were identified using intuition and backward reasoning; no coherent framework was created to analyze or classify the potential patterns. This was not the aim of the research and can be subject to further research.

Furthermore, the Movement variable was used to show the direction of each participant’s movements and their vector magnitude (length).
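A short sketch of these movement summaries is given below. It treats each opinion as a (Δsentiment, Δcomplexity) step, takes the per-axis sum of absolute steps as the vector magnitude and the signed sum as the direction; this is one plausible reading of the definitions used here, and the step values are invented for illustration.

```python
# Sketch of the Movement summaries: per-axis vector magnitude (total absolute
# distance travelled from the (0, 0) starting point) and the signed summation
# used for the direction plots. The exact definition in the thesis may differ.
import numpy as np

# One participant's five steps through the dataset (illustrative values).
steps = np.array([
    [+1.2, +0.8],
    [+0.9, -0.4],
    [+1.1, +0.6],
    [-0.3, +0.7],
    [+0.8, +0.5],
])

magnitude = np.abs(steps).sum(axis=0)   # how far they travelled on each scale
direction = steps.sum(axis=0)           # where they ended relative to the start

print("magnitude (sentiment, complexity):", magnitude)
print("direction (sentiment, complexity):", direction)
```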

The whole sample was used for the results. None of the participants were excluded from the statistical tests or from the pattern classification.

5.3 Dataset descriptives

Table 1: Descriptive statistics from the Experimental dataset.

                                                     Men     Women   Overall
Spread of articles seen                              2 – 5   4 – 5   2 – 5
Average articles read (out of 5)                     4.4     4.7     4.5
Average sentiment of person (pre-test) (out of 5)    3.138   3.208   3.155
Average sentiment of person (post-test) (out of 5)   3.239   3.304   3.254
Average complexity of person (out of 1)              0.788   0.857   0.805

Description: This table shows some of the statistics found in the experimental dataset. These statistics were used to see what the overall state of the dataset was (overall and for men and women).


5.4 Paired Samples T-test

The results from the pre-test (M = 3.155, SD = .434) and post-test (M = 3.254, SD = .389) sentiment scores indicated that receiving information sources did not significantly change overall sentiment, t(86) = 1.801, p = .075 (see Appendix B), at a significance level of p < .05.

5.5 Wilcoxon Signed Rank Test

The Wilcoxon signed rank test indicated that the sentiment before the experiment (mean rank = 49.25) was significantly different from the sentiment after the experiment (mean rank = 37.53), Z = -2.263, p = .023 (see Appendix B), at a significance level of p < .05.

5.6 Pearson correlation

The mean sentiments of the participants in the pre-test and post-test conditions were found to be weakly positively correlated, r(85) = .22, p = .035 (see Appendix B). From this it was concluded that no noteworthy relationship exists between the two conditions.

5.7 Participant Patterns

In total, 6 main patterns illustrating the participants’ movements were identified. These were classified under the following names, with the corresponding counts: 17 V-pattern, 2 O-pattern, 2 Oscillating-O-pattern, 5 W-O-pattern, 53 Linear-pattern and 8 Undefinable Behavior (see figure 2). The set of Undefinable Behavior patterns consists of movements for which no clear pattern could be assessed; these did not conform to the other main patterns and were therefore defined as undefinable behavior, actually meaning ‘not having a pattern.’

Figure 2: Main patterns found due to participant movement.

a. V-pattern b. O-pattern (sub-type Oscillating)

c. Oscillating-pattern d. W-pattern

e. Linear-pattern f. Undefined behavior

Description: a) shows the V-pattern in its most basic shape, giving an overview of it without any sub-type combinations. b) shows the O-pattern; this always has a sub-type because there are no purely O-shaped patterns, and this one has the Oscillating sub-pattern. c) shows the Oscillating-pattern; this specific pattern shows that the participant is constantly oscillating between going up and down on both the complexity and sentiment scales. d) the W-pattern shows how someone chooses to stay between two values of content or complexity, but consistently goes down or up on the opposing scale. e) the Linear-pattern is the most common pattern; it denotes a constant up or down movement on both scales, i.e. someone moves consistently down and down, or up and up. f) shows Undefined behavior, from which it is not possible to see any logical behavior. All patterns move from red to yellow, to green, to blue and finally to purple. If at any point the participant stopped before the 5th source, the pattern ends at an earlier color.

Figure 3 shows a different representation of figure 2c. It is plotted in 3D-space to demonstrate the harder-to-visualize patterns, such as the Oscillating-pattern. Here a clear back-and-forth movement can be seen, indicating oscillating behavior between two states. The participant is effectively alternating constantly between two opinions which cancel each other out.

Figure 3: Changed representation of figure 2c.

Description: This pattern is precisely that of figure 2c, but given a 3-dimensional representation through plotting it on a new z-axis defined as the number of movements. In the figure one can see the back and forth motion which represents oscillating behavior.

Sub-dividing the patterns, as in figure 2, into six main patterns only served to simplify the range of possible patterns; in actuality more granularity could be added. This is because 38 of the 87 patterns produced by participants contained 1 sub-pattern next to the main pattern, and 1 pattern contained 2 different sub-patterns. These sub-patterns were of the same types as the main patterns but were not deemed dominant, hence the name sub-pattern. Taking them into account would have given a total of 11 distinct patterns.

Another distinction was made in the patterns by using the number of Movement datapoints, i.e. the opinions of the participant on the sources, as a characteristic of patterns. This ranged, as mentioned, from 1 to 5. If this is taken into account, an even more fine-grained distinction can be made between patterns, giving a total of 21 distinct patterns (see Appendix C for all the produced patterns). These patterns were however still conglomerated under the main patterns in figure 2.

5.8 Vector Magnitudes and Directions

We now turn to the vector magnitudes and directions of participants. These were plotted in figure 4 as the vector magnitude (length), which shows the absolute distance travelled from the starting point, defined as (0, 0).

There seems to be no one-sided norm of vector magnitude. This creates the opportunity to identify individual groups in figure 4, where the areas of vector magnitude are color coded to show how they belong together. This hints at an underlying mechanism that facilitates the grouping of participants. Color codes were assigned according to magnitude ranges of 2 to 4, 4 to 6 and 6 to 8 for both the complexity and sentiment magnitudes. Furthermore, a sub-division was made in the magnitude ranges with two colors. This was done to also show what the dominant direction of movement was, i.e. the dots (participants) in the red area have a higher vector magnitude for complexity than the dots in the blue area.

Figure 4: Magnitude of vector movements.

Description: The scatter plot shows the distance, measured in vector magnitude, that the participant ended at relative to the starting point set at (0, 0). The datapoints in the white squares indicate participants who had the same magnitude for both sentiment and complexity. The red and blue squares indicate participants that had either a large sentiment or complexity magnitude, but not both. The green and yellow squares indicate a participant who had a moderate magnitude for both sentiment or complexity, but one was higher than the other. The datapoints in the orange and purple squares had a low vector magnitude for both sentiment and complexity, but one was higher than the other.

Figure 4 has no factor taking into account the direction of the sentiment or complexity; thus, datapoints simply illustrate how far participants moved in absolute numbers from their starting point, not which sentiment (negative or positive) and complexity level (simple or complex) they ended with. This illustrates both the spread and the potential groupings of persons with like-minded mentalities or characteristics.

The direction of sentiment and complexity is illustrated in figure 5. To accomplish this, the summation of vector movements, the -1’s and +1’s, was taken and translated into endpoints and directions. This was done to keep the direction of both scales intact. The figure shows the negative or positive direction in which participants ended, and it shows a dispersed and divided sample population.

Figure 5: Overall direction of sample population.

Description: This graph shows the overall direction based on the summation of movements for the whole sample population. The x-axis shows the content direction and the y-axis shows the complexity direction.

Figures 6 and 7 are subsets of figure 5 and show the difference between men and women, albeit with a dataset that is unbalanced in regard to women. Figure 6 shows the spread of directions which men followed. It can be inferred that men do not follow a conclusive general direction, indicating non-restrictive behavior in regard to sentiment and complexity. This was different for women. The bottom-left of figure 7 is empty, illustrating that women showed somewhat more restrictive behavior than men. Women further seemed to care less for positive content with a low complexity, but matched men equally in the other regions. Women also tended to be more in the extremes than men, indicating they read more sources and have more Linear-patterns.

Figure 6: Overall vector direction of Men.

Description: Content and complexity directions of men. Overall, these are very dispersed and do not follow a coherent direction.

Figure 7: Overall vector direction of Women.

Description: Content and complexity directions of women. Overall, these seem to lean more to a certain side and are less dispersed than the men’s directions.

6. Discussion

6.1 Interpretation of Results

The following piece is the interpretation of the results. The aim of the research was to make an inquiry into different aspects of how people can understand complex topics using IS as the basis for an information system. This will be partially guided by the sub-questions posed earlier.

Statistical tests.

The statistical tests were used to see if the sentiment of the participants was significantly altered between the pre- and post-test conditions of the survey (before and after part 2).

The Paired Samples T-test, with p = .075, showed no significance at the p < 0.05 level, meaning the change in sentiment was either too small or did not exist. This result is contradicted by the Wilcoxon signed rank test, which indicated that the change was significant at the p < 0.05 level with p = .023, meaning that the sentiment in the pre- and post-test conditions differed significantly.

Furthermore, the Pearson R correlation, with r(85) = .22, showed only a weak positive linear relation between the pre- and post-test conditions, implying that the sentiment in the pre-test condition has little to no effect on the post-test sentiment.

The statistical results were important to show that participants could not be influenced by information sources given by the researcher. What this means for the adaptive system is that if the system does not explicitly aim to influence, there is only a small chance of this happening. These conclusions must however take into account the sample size and population: the sample size of N=87 is not large enough to give a definitive conclusion, though it does give some indication that the system could be used for such a purpose. The current experiment would however need to gather more data on the choices, attitudes and opinions (Kelman, 1961; Wood, et al., 1985) given by participants to make this conclusive.

Patterns

The research showed that participant behavior could be modelled into identifiable patterns. Six distinct main patterns were found that best describe the movements of participants. Technically speaking these patterns had sub-patterns and differences in the number of datapoints, producing 21 different patterns, but these can be conglomerated under the 6 main patterns, so these 6 will be used from now on. This proves that participants can follow identifiable behavior. The fact that identifiable patterns could be found is in itself crucial because it proves that there are certain behaviors participants follow and that an adaptive system based on patterns can be made.

The current pattern counts show that most participants follow either a Linear- or a V-pattern, but most patterns are actually composed of two or more sub-patterns, including the Linear- and V-patterns.

Per the design, participants were only allowed to go through 5 sources, which is why patterns are a minimum of 1 and a maximum of 5 points long. This puts a restriction on the variability in the patterns, possibly influencing the overall number of unique patterns which could have been found. If, for example, a person were given 50 sources, what behavior would be displayed, and consequently, what patterns would be produced? The expectation is that longer and more diverse patterns would start to emerge, but that these movements would consist predominantly of one of the 5 previously defined main patterns (excluding Undefinable Behavior). However, then the concern of scalability comes into play: will it still be possible to break the big pattern down and analyze it on the sub-level? If the answer to this is yes, it would mean that an adaptive engine does not need to cover everything, every time, but could take small snippets of the big patterns, say the last 4 datapoints, and predict the next move. The system would look ahead using the most recent 4 movements and then predict the next (namely the 5th) source using the established patterns, so that the participant follows the last created pattern; for example, a W-pattern, so the system provides the information for the W-pattern. If the participant then suddenly changes direction, the system will again start to react and will try to predict the next step.

If the answer to the earlier question is no, then this would pose a significant problem for the adaptive system, because larger patterns with diverging movements will not be coherently classifiable. The experiment would effectively have to be run again, but with a much larger dataset of information sources, to create new, longer patterns as a basis to predict from.
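As an illustration of the look-ahead idea sketched above, the snippet below classifies the most recent window of moves on one scale against a few pattern templates and predicts the next move. The window length of four, the templates and the matching rule are assumptions made for illustration, not a design tested in this research.

```python
# Sketch of the look-ahead idea: classify the last few +1/-1 moves on one scale
# against the main-pattern templates and predict the next step. Illustrative only.
def classify_window(window: list[int]) -> str:
    """Classify a short window of +1/-1 moves on one scale."""
    flips = sum(1 for a, b in zip(window, window[1:]) if a != b)
    if flips == 0:
        return "linear"        # keeps moving in one direction
    if flips == 1:
        return "v"             # reverses direction once
    if flips == len(window) - 1:
        return "oscillating"   # alternates every step
    return "undefined"


def predict_next(window: list[int]) -> int:
    """Predict the next +1/-1 move from the last four observed moves."""
    pattern = classify_window(window)
    if pattern == "oscillating":
        return -window[-1]     # keep alternating
    return window[-1]          # linear, v and undefined: repeat the last direction


print(predict_next([+1, +1, +1, +1]))   # -> +1 (linear pattern continues)
print(predict_next([+1, -1, +1, -1]))   # -> +1 (oscillation continues)
```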

Defining the behavior and goals of the participants who follow the patterns is important to find out the underlying motivation for their movements. Unfortunately, due to the current size of the patterns it cannot be inferred how the patterns, discussed above, change beyond the limited 5 datapoints. However, the potential behavior behind patterns can, within bounds, be inferred from their movements.

The V-pattern, when occurring with a dominant Linear-pattern, implies that a conscious choice is made to change direction on either sentiment, complexity, or both. Participants make this change when they reach their lower or upper limit on either or both scales. The second pattern, the Oscillating-pattern, is predominantly a sub-pattern of the O- or Linear-pattern. Figure 3 (in section 5.7, Participant Patterns) shows the back-and-forth behavior of the pattern as a zigzag; it can also be imagined as a pendulum. From this behavior it can be inferred that the participant either does not know what he or she wants, or tries to find their ideal level of sentiment and complexity by changing each level independently.

Unfortunately, none of the main patterns can be specifically linked to one predominant demographic group; only dominant within-group patterns can be identified. This is most likely due to the dataset being skewed, the lack of a sufficient sample size, or the relation not existing. It can also not be confirmed whether patterns can be used to predict user movements repeatedly; this will have to be tested in a different scenario in which an adaptive system is designed around predicting participants’ movements based on the identified patterns.

Movements and Directions

Although specific patterns cannot be attributed to demographic groups, a form of grouping is visible in figures 4, 5, 6 and 7 (section 5.8). This grouping is based on vector magnitude (figure 4) and on the summation of vectors (figures 5, 6 and 7).

From figure 4 it can be inferred that participants moved with differing vector magnitudes. Higher vector magnitudes corresponded to Linear-patterns and lower ones to O- and Oscillating-patterns. Following this reasoning, it seems that a higher vector magnitude and/or vector summation leads to increasingly linear patterns; these participants can be found in the red and blue areas of figure 4. The reverse also applies, meaning that a lower vector magnitude and/or vector summation leads to less linear patterns, namely the Oscillating- and O-patterns, found in the orange and purple sections of figure 4.

Figure 5 looks at the directions of participants using vector summation, as opposed to the absolute vector magnitude. Figure 5 shows that the sample population does not move in a shared direction. This indicates that there is no consensus on the right sentiment or complexity for information, meaning that participants have different information preferences and that not every complex topic can be explained in the same manner. This begs the questions: how can these preferences be identified, and why do participants choose one direction (opinion) over another?

These differences in preferences already indicate that an adaptive system, which adapts to one’s preferred level of sentiment and complexity, can be of great help in explaining complex topics, especially if different demographic groups are involved.

Figures 6 and 7 (section 5.8) make it evident that, when splitting on gender, there are fewer women than men in the dataset. We can see that the women in figure 7 do not move into the bottom-right corner and that they look more to the extremes than the men in figure 6. The male side seems to be far more diverse in its reading.

On closer examination, the predominant within-group pattern for women is the Linear-pattern. This could mean that women have a different way of looking at information. Women have a spread of 4 to 5 sources read, whilst men have a broader spread of 2 to 5 sources (see Table 1, section 5.3). We can conclude from this that women seem to be more diligent and instruction-abiding, or simply have the tendency to read longer. Men might consider that they are done at an earlier stage and have had enough information on the topic after having read fewer sources. In order to confirm this, more data is needed, but it might be that women take longer to be satisfied, follow up until they know, and are more thorough. This would explain the smaller and higher spread of women’s reading.

Overall, the experiment would have to be redone with a more balanced information source dataset, a larger sample size and a different sample population to confirm these inferences. But, assuming these inferences are correct, it would mean that women have much different needs for information than men do, making gender a defining characteristic to be used in an adaptive information system.

These conclusions and inferences give an initial indication of what an adaptive system needs to look at in order to create understanding of complex topics.


6.2 Limitations

This section discusses the shortcomings of the currently implemented design. Several constraints were imposed on the design to make the experiment feasible with regard to time and complexity.

The current implementation is a simplified approximation of an adaptive system, so the results are more constrained than a full experiment would produce, and the identified patterns offer only a limited view of how people behave in such systems. The sample population also plays a role: because it is limited across all demographics, the outcomes of the statistical tests cannot be deemed entirely reliable, and significance alone does not necessarily tell a reliable story.

The first imposed constraint is that participants were only allowed to see 5 sources, and no more. This limited the maximum length of the patterns to 5 datapoints. This limitation was consciously implemented for two reasons: first, the survey would otherwise have been too tedious, and second, the article by Tan et al. (2016) explained how a maximum of 5 arguments can be given to convince a person. Sources were seen as arguments, so a maximum of five sources was imposed.

The second imposed constraint concerned the limits on the age and education ranges, meaning that a large part of the population was automatically excluded. This was done to limit the size of the research given external time constraints.

The third, which is a design constraint, is that the system currently focuses on a single subject. The ideal system would eventually be able to explain a multitude of subjects, with someone asking the system "what does this mean," and the system giving back information that explains the topic. This was also not implemented due to the higher complexity and external time constraints.

The fourth noteworthy constraint is that in the current design the participant did not have the option to stay on the same level of complexity and/or sentiment; one always had to move either up or down on each scale. Flatlines were therefore never possible. It could thus be that the current patterns portray the movements correctly, but that flatlines were left out, which could have influenced the findings.

The fifth constraint, regarding the reliability of the results, is that there is no framework to trace back how patterns were classified. This was done using intuition and backward reasoning, which is not an entirely quantifiable method. This means the current method will not scale to larger datasets, as manual classification would become tedious. A coherent framework should therefore be created for identifying patterns in larger-scale adaptive information systems.
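As a first step towards such a framework, movements could be encoded numerically and classified with explicit rules. The hypothetical sketch below (Python) encodes each source-to-source move on a single scale as +1 (up) or -1 (down) and maps the number of direction reversals to a pattern name; the rules and thresholds are assumptions made purely for illustration and do not reproduce the intuitive classification applied in this research.

from typing import List

def classify(moves: List[int]) -> str:
    # moves: one +1 (up) or -1 (down) per transition on a single scale.
    if len(moves) < 2:
        return "Undefinable Behavior"
    reversals = sum(1 for a, b in zip(moves, moves[1:]) if a != b)
    if reversals == 0:
        return "Linear-pattern"       # keeps moving in one direction
    if reversals == 1:
        return "V-pattern"            # single turning point
    if reversals == 2:
        return "O-pattern"            # turns back towards the starting level
    if reversals == 3:
        return "W-pattern"            # two turning points in each direction
    # With only five sources there are at most four moves, so this branch
    # would only fire for longer sequences in a larger-scale system.
    return "Oscillating-pattern"

print(classify([-1, -1, +1, +1]))     # -> "V-pattern"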

The sixth and seventh constraints pertain to the information source dataset and the experiment dataset. Currently the information dataset has a sparse middle and upper ground, which creates a limited space that participants can traverse. This is already evident from the limited space that was traversed in this setup: only 19 out of 52 articles were seen. This could have been different if more middle-ground papers had been included. The method of source sampling was also potentially subjective due to the unstructured selection of the sources, which is a concern given our polarizing society (Spohr, 2017; McCoy et al., 2018).

The experiment dataset suffers from a similar problem: it is too sparse with regard to all three demographic identifiers. Factors that might have contributed to this are the sampling methods and the external incentive. This limits the degree to which patterns can be explicitly linked to demographic groups.


7. Conclusion and Future work

The aim of this research was to inquire into different aspects of how people can understand complex topics, using IS as the basis for an adaptive information system. The research question formulated was: 'To what degree can Interactive Storytelling be employed, to discover if people follow patterns when moving through information sources on complex topics, specifically pertaining to development aid?' This research was for the most part descriptive because of its explorative nature, and this was reflected in the results.

The research concludes that patterns can indeed be identified. It can therefore be assumed, with some confidence, that if the same mechanism as used here were employed, such as the sentiment and complexity questions, these patterns can be predicted.

Six main patterns were identified: the V-pattern, O-pattern, Oscillating-pattern, W-pattern, Linear-pattern and Undefinable Behavior. Of these, the Linear-, V-, O- and Oscillating-patterns were most insightful, as their behavior was the easiest to infer. Identifying these patterns highlights the potential for predicting a person's movements and preferences, which is crucial for the workings of an adaptive system. The identified patterns contribute to the development of an adaptive system from the perspective of users and their information needs. However, due to the small sample size, there are likely more patterns to be found.

The research was unfortunately unable to explicitly link the patterns to demographics through correlations. Therefore, only minimal inferences could be made linking patterns to demographic groups.

Furthermore, the theory of IS was used throughout the experiment to create the simulation of the adaptive system. The research demonstrated, through simulation, that IS could be used for such applications. This extends the uses of IS, but mostly substantiates the initial proof that systems built using IS can function to explain complex topics.

As for future work, the following suggestions can be given. Regarding the experiment, the first suggestion pertains to the question 'Were the sentiment and complexity metrics the right choice?' These metrics were initially used because they allow a participant to be placed within a dataset of information sources in which the sources are measured on the same scales as the participants. It could be beneficial to see whether other metrics, such as the Knowledge variable, or no metrics at all, can place participants in the correct spot.

The second suggestion pertains to the survey and recommends that future research implement more qualitative elements in it. More qualitative elements, especially in part two (the simulation) when evaluating the sources, can help identify why and how people decide on the opinions they give. Through such questions a better understanding of choices can be created, which in turn influences how the resulting patterns are interpreted. A further change to the survey could be the addition of an option in the simulation to not move up or down with regard to the sources, but to flatline.

The third suggestion for future work concerns the current system and how it uses Euclidean distance percentile scores and randomness to give participants the next article. This system needs evaluative research, not only because the current approach was chosen for being the most convenient, but also because it should be tested whether systems implemented with, for example, the Manhattan distance or the Minkowski distance, function better or produce different results.
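To illustrate what such an evaluation could compare, the sketch below (Python, using NumPy) selects a next article by distance in (sentiment, complexity) space. The article coordinates, function names and the percentile-plus-randomness selection are illustrative assumptions rather than the actual implementation; changing the parameter p between 1, 2 and higher values switches between the Manhattan, Euclidean and other Minkowski distances.

import random
import numpy as np

def minkowski(a, b, p=2):
    # Minkowski distance; p=2 gives Euclidean, p=1 gives Manhattan.
    return np.sum(np.abs(np.asarray(a) - np.asarray(b)) ** p) ** (1.0 / p)

def next_article(user_pos, articles, seen, p=2, pool_percentile=25):
    # Pick a random unseen article from the closest pool_percentile percent.
    candidates = [(aid, minkowski(user_pos, pos, p))
                  for aid, pos in articles.items() if aid not in seen]
    cutoff = np.percentile([d for _, d in candidates], pool_percentile)
    pool = [aid for aid, d in candidates if d <= cutoff]
    return random.choice(pool)

# Hypothetical articles as (sentiment, complexity) coordinates.
articles = {"a1": (0.2, 0.8), "a2": (-0.5, 0.3), "a3": (0.1, 0.4), "a4": (0.7, 0.9)}
print(next_article(user_pos=(0.0, 0.5), articles=articles, seen={"a3"}, p=1))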


8. References

1. AssessmentDay Ltd. (2020). Diagrammatic Reasoning Tests, Free Online Practice Tests. Retrieved from https://www.assessmentday.co.uk/aptitudetests_diagrammatic.htm

2. Burnside, C., & Dollar, D. (2000). Aid, Policies, and Growth. American Economic Review, 90(4), 847–868.

3. Cambridge Dictionary. (2020, 17 June). Complex definition. Retrieved from https://dictionary.cambridge.org/dictionary/english/complex

4. Cheru, F., & Obi, C. (2011). De-coding China-Africa Relations: Partnership for development or '(neo) colonialism by invitation'? The World Financial Review, (Sep/Oct), 72–75. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:nai:diva-1433

5. Defronzo, J., & Gill, J. (2015). Social Problems and Social Movements. Chapter 1: The Sociology of Social Problems, 1–24. ISBN: 9781442221543. Retrieved from https://rowman.com/ISBN/9781442221543/Social-Problems-and-Social-Movements; for the specific chapter, see https://rowman.com/webdocs/SP_CH1.pdf

6. Devsoft Baltic OÜ. (2020). SurveyJS: Free Online Survey and Quiz Tools. Retrieved from https://surveyjs.io/

7. Dimascio, C. (2020, 20 April). Cdimascio/py-readability-metrics. Retrieved from https://github.com/cdimascio/py-readability-metrics/blob/master/README.md

8. Durairaj, M. (2020, 15 June). An Adaptive and Interactive Storytelling System for Mobile Augmented Reality Ambiance (AISTMAR). Retrieved from https://www.semanticscholar.org/paper/An-Adaptive-and-Interactive-Storytelling-System-for-Durairaj-Aurelia/8452501df6c02b55b527a635280001197162c6e2

9. Ferro, E., & Wilson, J. S. (2011). Foreign aid and business bottlenecks: A study of aid effectiveness. Policy Research Working Papers. https://doi.org/10.1596/1813-9450-5546

10. Generation Ready. (2013). A Beginner's Guide to Text Complexity. Retrieved from https://www.generationready.com/wp-content/uploads/2014/02/Beginners-Guide-to-Text-Complexity.pdf


11. Gonzalez-Bañales, D. L., & Rodenes Adam, M. (2007, 24 June). Web Survey Design and Implementation: Best Practices for Empirical Research. Retrieved from http://www.suristat.eu/document/documentArticle/web_surveys_valencia.pdf

12. Hegarty, M. (2000, 1 September). Capacity Limits in Diagrammatic Reasoning. Retrieved from https://link.springer.com/chapter/10.1007/3-540-44590-0_19

13. Hoffmann, M. H. G. (2007, February). Cognitive Conditions of Diagrammatic Reasoning. Retrieved from https://smartech.gatech.edu/handle/1853/23809

14. Hölscher, C., & Strube, G. (2000). Web search behavior of Internet experts and newbies. Computer Networks, 33(1–6), 337–346. https://doi.org/10.1016/s1389-1286(00)00031-1

15. Kelman, H. C. (1961). Processes of Opinion Change. Public Opinion Quarterly, 25(1), 57. https://doi.org/10.1086/266996

16. Kelso, M. T., Weyhrauch, P., & Bates, J. (1993). Dramatic Presence. Presence: Teleoperators and Virtual Environments, 2(1), 1–15. https://doi.org/10.1162/pres.1993.2.1.1

17. Laurel, B. K. (1986). Toward the Design of a Computer-Based Interactive Fantasy System (PhD dissertation). Department of Theater, Ohio State University.

18. Lebowitz, M. (1984). Creating characters in a story-telling universe. Poetics, 13(3), 171–194. https://doi.org/10.1016/0304-422x(84)90001-9

19. Levine, R. (2004). ‘Millions Saved: Proven Successes in Global Health’. In: Kinder, M. (ed.) What Works Working Group. Washington DC: Centre for Global Development.

20. Mateas, M., & Sengers, P. (2003). Narrative Intelligence. Narrative Intelligence: Advances in Consciousness Research, 1–25. https://doi.org/10.1075/aicr.46.01mat

21. Mateas, M., & Stern, A. (2003). Façade: An experiment in building a fully realized interactive drama (PDF). Proceedings of the Game Developers Conference, Game Design track.

22. Matplotlib development team, Hunter, J., Dale, D., Firing, E., & Droettboom, M. (2020, 8 April). Matplotlib: Visualization with Python. Retrieved from https://matplotlib.org/

23. McCoy, J., Rahman, T., & Somer, M. (2018). Polarization and the Global Crisis of Democracy: Common Patterns, Dynamics, and Pernicious Consequences for Democratic Polities. American Behavioral Scientist, 62(1), 16–42. https://doi.org/10.1177/0002764218759576

24. McNett, G. (2016). Using Stories to Facilitate Learning. College Teaching, 64(4), 184–193. https://doi.org/10.1080/87567555.2016.1189389

25. Meehan, J. R. (1977). TALE-SPIN, an interactive program that writes stories. Proceedings of the Fifth International Joint Conference on Artificial Intelligence. CiteSeerX 10.1.1.74.173.

26. MonkeyLearn. (2020). MonkeyLearn API documentation. Retrieved from https://monkeylearn.com/api/v3/

27. Moyo, D. (2009a). Dead Aid: Why Aid Is Not Working and How There Is a Better Way for Africa. New York: Farrar, Straus and Giroux. Moyo, D. (2009b). Why Foreign Aid Is Hurting Africa. The Wall Street Journal, 11 March. Retrieved from https://www.wsj.com/articles/SB123758895999200083

28. NLTK Project. (2020, 13 April). Natural Language Toolkit — NLTK 3.5 documentation. Retrieved from https://www.nltk.org/

29. Nix, D., & Spiro, R. J. (1990). Cognition, Education, and Multimedia. Chapter 7: Cognitive Flexibility and Hypertext: Theory and Technology for the Nonlinear and Multidimensional Traversal of Complex Subject Matter. Hillsdale, New Jersey: L. Erlbaum.

30. NumFOCUS, & McKinney, W. (2020, 28 May). Pandas - Python Data Analysis Library. Retrieved from https://pandas.pydata.org/

31. Opeskin, B. (1996). The Moral Foundations of Foreign Aid. World Development, 24, 21–44. https://doi.org/10.1016/0305-750X(95)00118-V. Retrieved from https://www.researchgate.net/publication/222917027_The_Moral_Foundations_of_Foreign_Aid

32. Organization for Economic Co-operation and Development. (2019). Official Development Assistance (ODA) - OECD. Retrieved from https://www.oecd.org/dac/financing-sustainable-development/development-finance-standards/official-development-assistance.htm#:%7E:text=Official%20development%20assistance%20(ODA)%20is,and%20welfare%20of%20developing%20countries.
