
The influence of supporting web credibility evaluation through automated evaluation techniques

SUBMITTED IN PARTIAL FULFILLMENT FOR THE DEGREE OF MASTER OF SCIENCE

FRANK LICHT

10575952

MASTER INFORMATION STUDIES

HUMAN-CENTERED MULTIMEDIA

FACULTY OF SCIENCE

UNIVERSITY OF AMSTERDAM

July 5, 2017

1st Supervisor: Dr. A.C. Nusselder
2nd Supervisor: Dr. D. Heinhuis


The influence of supporting web credibility evaluation through automated evaluation techniques

Frank Licht

University of Amsterdam,

Graduate School of Informatics

Science Park 904, Amsterdam

frank.licht@student.uva.nl

1. ABSTRACT

Dutch citizens use the internet on a daily basis to find information, communicate and use different services. When searching for information, it is up to web users to evaluate the credibility of the information they retrieve. The skills for evaluating the credibility of online information are part of information literacy, a skillset that is not fully developed among many Dutch citizens with lower educational backgrounds. In this thesis research, three prototypes of the CredibleWeb application, a system that automatically evaluates the credibility of webpages, are used by members of the target group. Results from this research indicate that an automated system which shows details on how a website's credibility was evaluated can influence the target group's decision to trust a website, but cannot substitute for other influences in the user's credibility decision.

Categories and Subject Descriptors

H.4.1 [Information systems]: World Wide Web: Web searching and information discovery

E.3.4.1 [Software and its engineering]: Software creation and management: software verification and validation: software prototyping

Keywords

Information literacy, online credibility, automated credibility evaluation

2. INTRODUCTION

The internet has been expanding rapidly since its debut in the early 1990s and has become an important source of information and leisure for Dutch citizens. In 2013, 86% of Dutch citizens reportedly used it to find information about goods and services, 77% to access information about news and 83% used the internet for online banking [17]. In early 2016, it was estimated that about 3 billion people had access to the internet worldwide [28], and the Netherlands ranks among the top countries in internet "penetration", the percentage of the population that has access to the internet at home or at work [5]. This percentage was about 96% in 2015 [27, 40].

Search engines are an important part of present-day internet: 97% of internet users indicated they use search engines to search for information online [17]. Google is the most-used search engine in

the Netherlands, with almost 94% of traffic from search engines toward other websites coming from Google [35]. There are, however, two important factors at play when users click on links to find information through search engines.

Firstly, the majority of users click on a link in the top 3 of search results, as shown in earlier research [7, 15, 20]. Based on big data analysis, this Click Through Rate (CTR) lies between 61.5% and 68% for the top 3 search results, with approximately 33% of internet traffic going to the first result, given that users see a result related to their search query. According to the same reports, 75.7% to 87% of users click on one of the first five organic search results [6, 39].

Secondly, when search engine users search for information as laymen (i.e. on a topic in which they are not experts), they aim to "maximize the outcome of the search task while minimizing the effort it takes to come with acceptable results" [24]. This affects the evaluation of the search results: "(…) in trying to find an optimal trade-off between cognitive effort and efficient outcome, instead of a systematic, that is, thorough and complete knowledge-based evaluation of all given information individuals often evaluate information in a rather heuristic way" [19].

Since search engines are used so widely by internet users and the top search results have the highest CTR scores, there is an incentive for many different parties to invest in Search Engine Optimization (SEO). Through SEO, organizations and individuals focus on showing up in the top search results for many different search queries. With the possibility for anybody to publish information on the internet and economic investments in SEO allowing parties to rank high in search results, it is up to the internet user to evaluate the credibility of the information they retrieve. Throughout this thesis, the terms credibility and trustworthiness are used. Credibility is defined by Webster as "the quality or power of inspiring belief", related to the term credible, which is defined as "offering reasonable grounds for being believed". Trustworthiness is defined by Webster as "worthy of confidence". When internet users were asked whether or not they found a webpage trustworthy, the term credibility could equally have been used.

This thesis will answer the question "how can a digital information system that automatically evaluates the credibility of web pages influence the perceived trust of Dutch citizens with an educational background of Mbo level 4 or lower?". To this end, a prototype of such a system, named CredibleWeb, was created and used in an experiment with participants from the target audience. The thesis first focuses on the different methods of online credibility evaluation available in the literature and on how internet users validate the credibility of web pages. It then describes the experiment that was set up to measure the influence of the prototype on online credibility evaluation among participants and presents the results from the experiment.



3. RELATED WORK

3.1 Information literacy skills

The skills of internet users to evaluate the credibility of webpages lie within the domain of information literacy. In 2013, the International Computer and Information Literacy Study (ICILS) was conducted in 21 countries around the world, including the Netherlands. This study focused on the computer and information literacy of young people who had had about 8 years of education, meaning they were in the age range of 12 to 14 years old: the first and second year of secondary school in the Netherlands. The researchers defined computer and information literacy (CIL) as "an individual's ability to use computers to investigate, create, and communicate in order to participate effectively at home, at school, in the workplace and in society" [13].

Dutch results in the study showed that a majority of students did not score higher than level two of four on the hierarchical computer and information literacy scale [21]. This scale describes the skill level of students on several aspects of computer and information literacy, including the knowledge and usage of computer hardware, computer programs, finding and evaluating online information and the creation of digital products. The table in appendix A shows the CIL scale and descriptions of its levels. The difference between the lower two and upper two levels of the CIL scale is mainly a difference in type of skills and self-reliance. Students with scores on the lower two levels are expected to have basic knowledge about computers, digital security and the usage of programs, which are skills mostly related to computer literacy (the ability to use ICT tools and applications [29]). Students with scores on levels 3 or 4 can be expected to use computers independently as information gathering tools; these skills are typically related to information literacy. All students participating in the study were asked whether or not they had learned information literacy skills at school. With the exception of the skills to present information with a computer, no significant differences between the different educational levels were found (see table 1). Even so, the results of the tasks students needed to perform during the study clearly show the difference in scoring between students at different educational levels. As figure 1 shows, 52% of the students at the Pro educational level did not reach the first CIL level, while the majority of students (53%) at the Vwo level reached level 3 of the CIL scale.

Although students at all levels indicated they had learned about information literacy skills, this is not reflected in their scoring, as the majority of students at the Pro and Vmbo levels did not score higher than level 2 on the scale, meaning they mostly hold computer literacy skills instead of information literacy skills.

Figure 1. Percentage of Dutch students who reached different CIL levels, divided per education level.

Table 1. Students' indications of digital literacy skills learned at school, per educational level, in percentages, weighted [21].

Activities                                                       Pro    Vmbo   Havo   Vwo    All
a. Referencing online sources                                    68.4   63.6   63.5   68.7   65.1
b. Collecting information with a computer                        78.5   77.0   75.8   77.2   76.3
c. Presenting information with a computer to a particular
   audience or with a particular goal                            68.7   78.5   77.8   83.1   78.7
d. Determining if information on the internet is credible        60.5   55.5   59.6   61.2   57.9
e. Determining if information is relevant for your school
   assignment                                                    55.8   57.8   58.7   62.5   59.3
f. Arranging information from different internet sources         59.7   55.6   49.5   50.6   53.2
g. Determining where to look for information on a new subject    71.3   62.9   60.2   59.0   61.4
h. Searching for different types of digital information          69.7   65.5   66.0   64.5   65.1

Through research by van Deursen and van Dijk in 2009 and 2010, different experiments regarding internet skills were conducted among the Dutch population between the ages of 18 and 80 with different backgrounds. Van Deursen and van Dijk describe internet skills as "a range of skills that users can possess to successfully retain digital information" [8]. These skills cover the medium that is used (such as computers, tablets and phones) and the content (information) that the user is focused on. Medium-related skills are further split up into operational and formal internet skills; content-related skills are split up into information and strategic skills. The table in appendix B explains the range of skills that users can display in this model.

The medium-related internet skills can be linked to computer literacy, focusing on the manner in which individuals interact with computers. The content-related internet skills in the model are more in line with information literacy, focusing on searching, evaluating and selecting information to complete a task. Participants of the studies were asked to perform assignments with different tasks to measure operational, formal, information and strategic internet skills [26].

Like the ICILS 2013 study, van Deursen & van Dijk measured a significant positive correlation between the education level of participants and number of tasks completed in their experiments [8]. A small but significant negative correlation between an individual’s education background and the time to complete tasks was found for tasks of the operational, formal (medium-related) and information (content-related) internet skills types.



Overall, the results from van Deursen & van Dijk show that younger participants are better at operating web browsers and navigating around the web (medium-related internet skills, or computer literacy), but scored the same as older participants on tasks concerning finding and evaluating online information sources (content-related internet skills, or information literacy). No significant differences between genders were found. The results describe how years of internet experience and time spent online do not increase internet skills, with the exception of operational internet skills.

The results from ICILS and van Deursen & van Dijk further show that the skills to use computers and the skills to find and use information are in fact two distinct skillsets. Van Deursen & van Dijk conclude that "it is important to understand that operational and formal internet skills are not sufficient for effective use of the internet and that information and strategic internet skills are more troublesome [to learn]" [8].

As defined by van Deursen & van Dijk, the evaluation of internet sources is one of the skills internet users need to possess to locate the information they require. The CIL scale includes the evaluation of online sources as part of levels 3 and 4, which the majority of students at the Pro and Vmbo educational levels do not reach. These results show that although many Dutch citizens have access to the internet as a source of information, many individuals insufficiently possess the skills to evaluate online information. This lack of skills is compounded by the belief that top results in search engines are unbiased and accurate [16, 19, 32], leaving individuals who lack the skills to evaluate information on credibility vulnerable to misinformation. As such, this research focuses on possibilities for Dutch citizens with a lower educational background to still find confirmed credible information on the internet.

3.2 Evaluating the credibility of web sources

Over the years, different methods for web credibility evaluation have been created, which either have users evaluate the information themselves or help them through automatic evaluation. The different methods have been analysed and described by Shah et al. [36]. Through a literature review, they created a list of eight categories for web credibility assessment, shown in table 2. The different web credibility evaluation methods each use a subset of the categories described in the table.

At the turn of the century, researchers linked the evaluation of online sources on credibility to the work of librarians and described the checklist approach for credibility evaluation. This approach requires the internet user to evaluate a webpage on five internal criteria using checklists: authority, scope, objectivity, accuracy and currency [1, 3, 14, 33]. The checklists are extensive and cover all categories of web credibility evaluation seen in table 2, but are not used by internet users, as shown by Flanagin & Metzger [10, 11] and Metzger et al. [23]. As Shah et al. state: "(..) Most internet users more often assessed a website's credibility only on currency, comprehensiveness and objectivity, while verifying author's identity, qualifications, and contact information were evaluated the least. From these results, it can be said that the users prioritized criteria that were easy to evaluate rather than ones which were time consuming" [36]. Additionally, in a 2003 study by Fogg et al. [12], internet users mentioned the "design look of the website" most often (in 46.1% of the cases) as a way to determine the credibility of online sources. Fogg et al. mention that this evaluation technique leaves internet users vulnerable to potential scams, as long as the scam website is "well designed" [12].

1 Metzger et al. describe social computing as "individuals engage in widescale communication, collective resource building, and collaboration online".

The notion that "a common strategy employed by Internet information seekers is to minimize cognitive effort and mitigate time pressures through the use of heuristics", as mentioned in the introduction of this thesis, comes from the 2010 research by Metzger et al. [24]. This heuristic approach to web credibility assessment describes four general strategies internet users seem to use, mostly based on social computing1. Table 3 lists the different strategies identified through their study. Due to the focus on social computing, this method uses many different factors that identify the popularity of a particular webpage for the credibility assessment. However, the method leaves evaluation categories regarding the aesthetics, professionalism and currency of a webpage out of the credibility assessment. The data from Metzger et al. suggest most internet users use the identified social strategies for credibility evaluation.

In 2004, Meola proposed a contextual approach for the evaluation of online information sources, which focused on comparing different sources, such as library sources and sources found on the "free web" (i.e. websites that do not require paid subscriptions to access the information), and used corroboration as a means of confirming whether information is reliable. This method of evaluation addresses three of the eight categories for credibility evaluation identified by Shah et al. [36]: the accuracy, impartiality and quality of information. Meola proposed this alternative approach since "the checklist approach is based on false presumptions regarding available information online, mistaken beliefs about student evaluation skills and an exaggerated sense of librarian expertise in evaluating information" [22].
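Meola's corroboration strategy can be illustrated with a toy sketch. This is not from the thesis: corroboration is a manual reading strategy, and the threshold, claims and sources below are invented for illustration. The idea is simply that a claim counts as corroborated when enough independent sources support it.

```python
# Toy sketch of corroboration in Meola's contextual approach: a claim
# is treated as corroborated when at least `k` independent sources
# support it. Entirely illustrative; the data is invented.

def corroborated(claim: str, sources: list, k: int = 2) -> bool:
    """`sources` is a list of claim-sets, one per consulted source."""
    return sum(claim in s for s in sources) >= k

sources = [
    {"vaccines are safe", "earth orbits the sun"},
    {"earth orbits the sun"},
    {"vaccines are safe", "earth orbits the sun"},
]
print(corroborated("earth orbits the sun", sources))    # supported by 3 sources
print(corroborated("moon is made of cheese", sources))  # supported by 0 sources
```

In practice Meola has readers perform this comparison by hand across library and free-web sources; the sketch only makes the underlying decision rule explicit.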

Table 2. Categories of web credibility evaluation by [36].

Category         Explanation
Accuracy         Correctness of the information provided by the author
Authority        Experience and popularity of the author, including the author's qualifications and credentials in the Web community
Aesthetics       Combination of colours, layout, images, videos, fonts, use of bulleted lists, or presentation of tabular data used on the web page
Professionalism  Policies and features available on the website
Popularity       Website's reputation among Web users and reviewers
Currency         Frequency of updates applied to the content
Impartiality     Lack of bias in the content
Quality          Whether the article has been peer-reviewed or not, and other factors contributing towards the quality of the content
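As a rough illustration (not part of Shah et al.'s work), the eight categories can be thought of as fields of a per-page report, where a given evaluation method fills in only the subset of categories it actually covers. A minimal Python sketch, with invented scores:

```python
from dataclasses import dataclass

# The eight evaluation categories from Shah et al. [36].
CATEGORIES = (
    "accuracy", "authority", "aesthetics", "professionalism",
    "popularity", "currency", "impartiality", "quality",
)

@dataclass
class CredibilityReport:
    """Per-category scores in [0, 1] for one webpage (hypothetical)."""
    scores: dict

    def overall(self) -> float:
        """Naive unweighted mean over the categories that were evaluated."""
        return sum(self.scores.values()) / len(self.scores)

# A checklist-style method would fill all eight categories; a heuristic,
# social-computing method might only cover a subset, e.g.:
heuristic = CredibilityReport({"popularity": 0.8, "accuracy": 0.6, "impartiality": 0.5})
print(round(heuristic.overall(), 2))  # 0.63
```

The unweighted mean is an assumption made for the sketch; none of the surveyed methods prescribe a specific aggregation.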


Table 3. Heuristics for web credibility evaluation by [24].

Strategy                                      Explanation
Social information pooling                    Using the number of testimonials or reviews online, and the proportion of positive versus negative reviews
Social confirmation of personal opinion       Finding like-minded individuals to aid in the search for credible information
Enthusiast endorsements                       Relying on "enthusiasts" or presumed experts on a particular topic (on forums, wikis, blogs, etc.)
Resource sharing via interpersonal exchange   Recommendations made by friends and family, both online and offline

Meola continues his argument stating that "the list of questions to fully evaluate an online source is very long in order to catch all nuances, making it an impractical tool to evaluate sources that are readily available" and states that website evaluation is "inherently qualitative, not quantitative", referring to tools that automate the checklist approach. He supports this point by stating that it is not possible to evaluate websites in this manner "because otherwise big companies would already have done it". As Olteanu et al. [30] showed within the next decade, automated credibility checks are technologically possible by means of Artificial Intelligence (AI). According to Meola, however, automating the process of credibility validation would be questionable anyhow, since "assessing the quality and value of information is an important part of the research process. (…) If we teach students to surrender evaluation to a mechanical process, we teach them to sacrifice part of their autonomy as learners and knowers". This may hold true for students and graduates at the bachelor's and master's levels, but can certainly not be expected of the majority of Dutch workers who have never been taught these skills (as shown through the ICILS 2013 research) and who use their internet access as a source of information in their daily lives, rather than in the educational environment Meola describes.

The aforementioned models of credibility assessment all focused on internet users evaluating the credibility of webpages themselves by means of different evaluation categories. In recent years, automated web credibility evaluation has made progress. The Visual Cues approach is one such example. This method provides internet users with background information on the credibility of web sources based on the automated evaluation of search results and web pages. The extra information is provided to the user on search result pages as well as on the webpages themselves, as shown in the research by Schwarz et al. [34]. This method uses factors across all eight categories of website evaluation, as analysed by Shah et al. [36]. The researchers opted for implementing the credibility information inside the search result pages and webpages, as earlier research had shown that browser toolbars had limited effects when trying to warn internet users about security issues [42]. The approach by Schwarz et al. was effective on search result pages with regards to the accuracy of credibility evaluation, but did not have an effect on the perceived credibility of websites by users; their confidence in search results remained high across the board. A possible explanation for the limited effect could be the lack of prominence, as coined by Olteanu et al.: "showing many features to users is undesirable, each feature would become less prominent due to information clutter" [30].

In 2013, Olteanu et al. presented how a supervised learning algorithm could be used to create a general-purpose and fully automated web credibility assessment framework. With the framework, the researchers were able to reach a minimal score of 70% precision and recall on the binary classification of websites as credible or non-credible. In regression analysis, they reduced the mean absolute error (MAE) by about 53% [30]. The framework uses 22 factors as significant predictors for the credibility of a website, as shown in appendix C. These factors are divided into two categories: content-related and social-related factors.
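The general shape of such a supervised approach can be sketched as follows. This is a hedged, self-contained illustration with synthetic features and labels, not Olteanu et al.'s actual framework or its 22 significant factors: it fits a linear scorer by least squares and reports precision and recall on held-out pages.

```python
# Sketch of supervised binary credibility classification in the spirit
# of Olteanu et al. [30]. Features and labels are synthetic stand-ins
# for content- and social-related factors of webpages.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))                    # synthetic feature vectors
true_w = np.array([0.9, 0.7, 0.0, 0.0, 0.0])   # assumed "true" weights
y = (X @ true_w + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Train/test split and a least-squares fit of a linear scoring function.
X_tr, y_tr, X_te, y_te = X[:700], y[:700], X[700:], y[700:]
w, *_ = np.linalg.lstsq(X_tr, y_tr - 0.5, rcond=None)
pred = (X_te @ w > 0).astype(int)              # credible = 1

tp = np.sum((pred == 1) & (y_te == 1))
precision = tp / max(1, np.sum(pred == 1))
recall = tp / max(1, np.sum(y_te == 1))
print(f"precision={precision:.2f}, recall={recall:.2f}")
```

Olteanu et al. evaluated several learners (and a regression variant); the least-squares scorer here only demonstrates the pipeline of features, labels, training and precision/recall evaluation.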

The research by Olteanu et al. was expanded on by Wawer et al., focusing on using linguistic features of content to predict credibility [41]. The General Inquirer (GI) was used to map the text of webpages to 183 different categories that are linked to psycholinguistic and psychosocial categories. "The list of GI categories includes for example topic-based ones (politics, economy, religion), several emotion-related categories such as pleasure, pain, feelings or arousal. Two of the categories, Positive and Negative, represent evaluative dimension (sentiment). Category membership is binary: words either belong to a category or not. (…) For each document in the dataset, the GI application produces a vector of 183 numbers, which represent counts of each category in every analysed text." [41]. Through this research, the researchers were able to increase the precision and recall of the system's predictions, compared to the results reported by Olteanu et al.
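The GI mapping described above amounts to counting, per document, how many word occurrences fall into each category. A toy sketch with an invented five-category lexicon (the real General Inquirer uses 183 categories and a far larger dictionary):

```python
# Toy sketch of a General Inquirer-style category count vector, in the
# spirit of Wawer et al. [41]. The lexicon and categories below are
# invented for illustration.
from collections import Counter

GI_LEXICON = {  # word -> categories it belongs to (binary membership)
    "profit": {"economy"},
    "tax":    {"economy", "politics"},
    "vote":   {"politics"},
    "joy":    {"pleasure", "positive"},
    "fraud":  {"negative"},
}
GI_CATEGORIES = ["economy", "politics", "pleasure", "positive", "negative"]

def gi_vector(text: str) -> list:
    """Return per-category occurrence counts for one document."""
    counts = Counter()
    for word in text.lower().split():
        for cat in GI_LEXICON.get(word, ()):
            counts[cat] += 1
    return [counts[c] for c in GI_CATEGORIES]

print(gi_vector("Vote on the new tax and the profit it brings joy"))
# [2, 2, 1, 1, 0]
```

Such vectors would then be used as additional input features for the credibility classifier.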

The work of Olteanu et al. was analysed by Shah et al. and mapped to the different categories for credibility evaluation identified in the research, as shown in table 4.

Table 4. Factors identified by [36] to evaluate online credibility with the machine learning approach of [30].

Category         Factors identified
Accuracy         Metadata features, link structure
Authority        Author details under metadata
Aesthetics       Appearance, web page structure
Professionalism  Content features, including text-based features
Popularity       Social features, including the online popularity of the web page
Currency         Metadata features
Impartiality     Reviews by experts
Quality          Reviews by experts

The work by Olteanu et al. and Wawer et al. shows that the credibility of websites can be predicted with reasonable precision and recall, although more research is needed to improve the algorithms. This fits the trend of startups and big companies investing in Artificial Intelligence to support individuals and businesses in many different aspects of their lives, such as, but not limited to, virtual assistants [18], preventing financial fraud [25] and the development of autonomous cars [31]. 2016 was a record year for AI startup investments, with large companies like Google, Apple, Microsoft and Intel buying AI startups [37, 38]. Whether or not businesses are currently interested in using AI to identify credible information online, the broad investor interest in AI solutions that address many different aspects of human lives suggests potential for this technology.


4. DEVELOPING THE CREDIBLEWEB PROTOTYPE

With a large portion of the lower-educated workforce in the Netherlands lacking the skills to critically evaluate web pages [8, 21], internet users generally using the effort-minimizing evaluation methods described by Metzger et al., and automated credibility evaluation showing potential, this thesis addresses the effects of showing automated credibility information in different manners to the target audience. To this end, a prototype for CredibleWeb was created. This application can be installed in modern web browsers as an add-on and analyses websites using the framework developed by Olteanu et al. to rate the credibility of websites and place them on a 5-point Likert scale. By using this framework for automated credibility evaluation, users of the application are shown the end result of the automated evaluation. Research on the use of security warnings in web browsers against phishing websites showed that active security warnings did have an effect on internet users [9]. With this in mind, the application was developed to remain in the top browser bar when the credibility score of a webpage is high, while showing an active warning to the user when the credibility score is low or doubtful.
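The display logic just described can be sketched as follows. This is an illustrative reconstruction, not the prototype's actual code; the mapping from a continuous score to Likert buckets and the warning threshold are assumptions.

```python
# Sketch of the CredibleWeb display decision: map a continuous
# credibility score in [0, 1] onto a 5-point Likert scale and decide
# whether to show an active warning. Boundaries are assumed.

def likert(score: float) -> int:
    """Map a score in [0, 1] to a Likert value 1..5."""
    return min(5, int(score * 5) + 1)

def show_active_warning(score: float) -> bool:
    """Warn actively when credibility is low (1-2) or doubtful (3);
    stay passively in the browser bar when it is high (4-5)."""
    return likert(score) <= 3

for s in (0.15, 0.55, 0.9):
    print(likert(s), show_active_warning(s))
```

In the prototype itself these scores were placeholders rated by hand, so only the presentation side of this logic was actually exercised in the experiment.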

For this thesis, several versions of the CredibleWeb prototype are shown to participants to measure the influence of the application on the perceived credibility of websites among participants. Each of the versions shows a different level of detail on how the application calculated the score, even though the underlying method of evaluation is the same. Figures 2-4 show the different versions of the application layout used in the experiment. Versions A and C only show the end score of the automated credibility evaluation, with version C including extra information on the scoring by other internet users and experts on the topic, as well as the option for users to provide their own credibility score of the website. Version B shows extensive information on how the credibility of a web page is evaluated through the framework, but leaves out the scores of other internet users and experts on the topic. To remove possible language barriers for the Dutch target audience, the choice was made to create the application in Dutch. All credibility scores used in the prototypes are placeholder scores; more on this in section 5.

The application is created with Axure RP 7, a wireframing tool with extended scripting possibilities, allowing the creation of an interactive prototype that resembles the final product as closely as possible. In case the credibility scoring is doubtful or the webpage is not credible, the application shows a popup to the user in the top-right corner of the browser screen. The credibility score can always be seen in the top-right corner of the web browser.

Figure 2. Version A: only showing the credibility scoring to the user; in this case, the score is positive.

2 https://trends.google.com/trends/explore

Figure 3. Version B: showing more detailed information on the credibility scoring. Each category contains more details.

Figure 4. Version C: showing the credibility scoring and the scoring from other users and experts, and allowing the user to score the credibility of the webpage as well.

5. EVALUATION

Participants in the study were each shown three different versions of CredibleWeb, in different orders and with different credibility scores for the websites included in the experiment. Before the prototypes of CredibleWeb were shown to the audience, different websites were selected for inclusion in the experiment.

To select a data set, three topics were chosen in which internet users can search for information as laymen: Health, Finance and Politics. These three topics are based on the research by Schwarz et al. [34], although their research also included the topics "Celebrity News" and "Environmental Science"; due to time constraints, these topics were not included in this experiment. The search data used for this selection were retrieved from the Google Trends tool2, since Google is the primary search engine used in the Netherlands [35]. For Health and Politics, two websites each were chosen for the experiment; for Finance, two web pages on one website were selected, to measure


how different credibility scores on one website can influence the target group.

For the three topics, subtopics were selected from the Google Trends tool, and the top 3 search queries of the past five years in the Netherlands for each subtopic were retrieved. The search queries were then entered into Google, using a clean installation of the Firefox web browser in a new user account on a PC running Microsoft Windows 10, so as to minimize the risk of retrieving personalized search results from Google. The top 3 search queries for each of the categories, including the selected websites for the experiment, can be found in appendix D.

The scores in the CredibleWeb prototype are placeholder scores based on an approximation of the credibility scoring done by the framework. Using the factors from the framework (see appendix C) and information from the literature as a guide, the author of this thesis rated the selected websites. The author is an expert web user who uses search engines and information from different websites on a daily basis in both work and academic environments, and holds a bachelor's degree in Computer Science.

5.1 Experiment setup

Participants of the experiments were welcomed on a special webpage3, where they received an introduction to the experiment and an explanation of what was expected. When clicking the button on this page to start the experiment, a JavaScript web script randomly referred each participant to one of four experiment variations. The web page for each experiment variation held an online questionnaire and three buttons to open different browser windows, each with one CredibleWeb variation, as shown in figure 5. The first page of the questionnaire focused on collecting personal information: participants were asked to enter their age, gender, province (location) and educational background before being able to continue the experiment. Participants with a higher educational background were shown a thank-you page and could not continue the experiment after the first page of the questionnaire.

Figure 5. The experiment set-up. On the left side participants can answer the questionnaire, while buttons on the right side open the different websites for the participant’s version of the experiment in a popup screen.

After the first questionnaire page, the four experiment variations each had a different order of assignments. This set-up was chosen after feedback on earlier experiment set-ups with two variations and six assignments, which took participants too long to complete and did not account for the order in which participants were shown the different CredibleWeb versions. Table 5 shows the set-up of the four different experiment variations.

3 http://www.franklicht.nl/scriptieonderzoek

For each assignment, participants had to open the special web browser with one of the three CredibleWeb prototypes and were asked to casually explore the web page. Afterwards, participants were asked whether they found the web page they had just seen trustworthy or not and were asked to indicate on a Likert scale which aspects of the web page had influenced their decision: the credibility score of CredibleWeb, the content of the web page, the reputation of the website, the expertise of the writer, the design of the website, the language used on the web page and the date the website was updated.

Table 5. CredibleWeb variations and the credibility score per assignment, per experiment variation

Assignment   Experiment 1   Experiment 2   Experiment 3   Experiment 4
1            A (+)          A (-)          C (-)          C (+)
2            B (±)          B (+)          B (±)          B (+)
3            C (-)          C (+)          A (+)          A (-)

In order to measure whether CredibleWeb had an influence on the perceived credibility of websites with the target group, three hypothesis sets were formulated to evaluate the use of the application. These hypotheses all concern the use of the CredibleWeb browser plugin by Dutch internet users with a lower educational background.

4.2.1 Hypothesis A

H0: When showing only the computed credibility score, the perceived credibility of the website is not influenced more strongly by the CredibleWeb application than by traditional influences on perceived credibility.

HA: When showing only the computed credibility score, the CredibleWeb application has a stronger influence on the user's perceived credibility of a website than traditional influences.

4.2.2 Hypothesis B

H0: When showing the expanded credibility score, the perceived credibility of the website is not influenced more strongly by the CredibleWeb application than by traditional influences on perceived credibility.

HB: When showing the expanded credibility score, the CredibleWeb application has a stronger influence on the user's perceived credibility of a website than traditional influences.

4.2.3 Hypothesis C

H0: When including credibility scores from other web users and experts, the perceived credibility of the website is not influenced more strongly by the CredibleWeb application than by traditional influences on perceived credibility.

HC: When including credibility scores from other web users and experts, the CredibleWeb application has a stronger influence on the user's perceived credibility of a website than traditional influences.

4.3 Participants

Based on the target group of the research, participants were selected based on their educational background. Participants could not have finished a bachelor's or master's degree - or the equivalent from earlier Dutch school systems - leaving individuals with an Lbo or Mbo degree. The same applied for degrees from secondary school: only individuals with a Pro or Vmbo degree were selected for this research.

The research was shared on Facebook by friends, family and (former) colleagues. In addition, potential participants who were part of the target group were contacted through different channels such as Facebook Chat, WhatsApp, e-mail and telephone. No participants were paid to participate in the research.

5. RESULTS

A total of 131 participants took part in the research, divided over the four experiment variations. The median age group was 30-39 years, which was the same in all four experiment versions. In experiment versions 2 and 3, most participants were between 20 and 29 years old, while in experiments 1 and 4 this was 30 to 39 years old and 40 to 49 years old respectively. As the research automatically rejected participants who had a higher educational background, participants either had a Pro, Vmbo, Lbo or Mbo degree, with Mbo being the most common degree held by participants (47.3%), followed by Vmbo (22.9%). The ratio of male to female participants was almost 1:1, with 48.1% male and 51.9% female participants. The participants came from all 12 Dutch provinces, but mostly from the Randstad area, with Noord- and Zuid-Holland, Utrecht and Noord-Brabant accounting for 73.3% of all participants.

As table 5 showed, all participants were shown at least one website which received a high credibility score and one website which received a low credibility score. Experiments 1 and 3 also showed participants a website where the CredibleWeb application raised questions about the credibility of the website and provided a doubtful score. For three of the six websites included in the research, the majority of participants agreed with the CredibleWeb score: 50.7%, 67.7% and 75.4% respectively. For the website that received a doubtful credibility score, the majority (52.2%) of participants indicated they did not find the website a credible source of information, although there were major differences between the results of the two experiment settings that included this website. In two cases, participants did not follow the CredibleWeb score and indicated they did not find the website credible, while the CredibleWeb application provided a high credibility score. Table 6 shows the credibility scoring by CredibleWeb and the participants per website.

Table 6. Credibility scores of websites in research and agreement of participants

Website topic   CredibleWeb score      Participant score      Similar result?
Finance 1       Credible               Credible (50.7%)       Yes
Finance 2       Not credible           Not credible (67.7%)   Yes
Health 1        Doubtful credibility   Not credible (52.2%)   Somewhat
Health 2        Credible               Not credible (67.7%)   No
Politics 1      Credible               Not credible (62.9%)   No
Politics 2      Not credible           Not credible (75.4%)   Yes

5.1 CredibleWeb prototype A

Spearman's rank correlation coefficient was used to measure the correlation between participants' indications on an ordinal Likert scale of the influences on their decision to find the website credible or not. Version A of the CredibleWeb application was used with both Finance web pages and was shown to participants as their first website in experiments 1 and 2, or as their third (last) website in experiments 3 and 4. Experiments 1 and 3 showed participants the financial website with a high credibility score, while experiments 2 and 4 showed participants the financial website with a low credibility score.
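The statistic reported throughout this section can be reproduced as follows. This is a plain-Python sketch of Spearman's rho (in practice one would typically use a statistics library such as SciPy), and the Likert answers below are invented for illustration only; they are not the experiment data.

```python
# Spearman's rank correlation: Pearson correlation of rank-transformed
# data, with tied values receiving the average of their ranks.

def ranks(values):
    """Assign 1-based ranks, averaging the ranks of tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    result = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            result[order[k]] = avg_rank
        i = j + 1
    return result

def spearman_rho(x, y):
    """Pearson correlation of the rank-transformed data (handles ties)."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mean_rx, mean_ry = sum(rx) / n, sum(ry) / n
    cov = sum((a - mean_rx) * (b - mean_ry) for a, b in zip(rx, ry))
    var_x = sum((a - mean_rx) ** 2 for a in rx)
    var_y = sum((b - mean_ry) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical 5-point Likert answers from ten participants: the
# credibility decision and the reported influence of the application.
decision = [4, 2, 5, 3, 1, 4, 5, 2, 3, 4]
influence = [3, 2, 5, 3, 1, 4, 4, 2, 2, 5]
print(round(spearman_rho(decision, influence), 3))
```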

Results from experiment 1 showed a significant correlation (ρ (33) = .45, P < .01) between the decision of participants to find the website credible or not and the influence of the application in this decision. Combined with the results from experiment 3, ρ (67) dropped to .125 with P > .10, meaning the combined results are not statistically significant. In experiment 3, the influence of the application shows a moderate positive correlation with the influence of the website’s reputation (ρ (32) = .403, P < .05). The data from experiment 1 show similar results, though with weak significance.

For the second financial web page (settings 2 and 4), the combined results from the two experiments yield ρ (60) = .194 with P > .10. A moderate significant correlation between the decision to find the website credible or not and the content of the site is shown by the combined data from the two experiments, with ρ (60) = .434 and P < .01. Experiment 4 showed several moderate and significant negative correlations between the influence of the application and the influences of the content of the website, the language used on the website and the currency of the website, with Spearman's rho of -.496, -.357 and -.361 respectively.

5.2 CredibleWeb prototype B

The CredibleWeb prototype B was shown as the second website to participants in all four experiment settings. In settings 1 and 3, the prototype was used in combination with the health 1 website, while in experiments 2 and 4, this version of the prototype was shown in combination with the politics 1 website (see table 5 for usage in experiments; appendix D for the website details). The combined data from the two experiments per website showed no significant correlation between the prototype and participants' decisions to find a website credible or not. Individual experiment data did show significant correlations however, especially with the health 1 website, which received a doubtful credibility score from the application. The data from experiment 1 showed ρ (33) = .355 with P < .05, while results from experiment 3 showed ρ (32) = -.358 with P < .05.

Other significant correlations for the health 1 website in experiment 1 were found between the content of the website (ρ (33) = -.367, P < .01) and participants' decision, as well as between the reputation of the website (ρ = -.314) and participants' decision, although with a weak significance due to the P-value between .10 and .05. Significant correlations with the participants' credibility decision for the health 1 website in experiment 3 were shown for the reputation (ρ (32) = .35, P < .05) and the design of the website (ρ (32) = .377, P < .05).

With the health 1 website, the influence of the application did show significant correlations with the influence of the language used on the website in experiment 1 (ρ (33) = .369, P < .05), as well as with the design of the website in experiment 3 (ρ (32) = .502, P < .01).


For the politics 1 website, which received a credible score from the application, participants in the two experiments were divided as well. Data from experiment 2 showed a moderate negative correlation between the CredibleWeb prototype and the credibility decision, although the results were not significant: ρ (28) = -.268 with P > .10. Results from experiment 4 showed a positive correlation: ρ (30) = .582 with P < .01. While data from experiment 2 show only the currency of the website as another (weakly) significant correlation with the participants' credibility decision (ρ (28) = .351, P = .057), results from experiment 4 point to significant correlations of reputation (ρ (30) = -.59, P < .01), design (ρ (30) = -.349, P < .05), expertise (ρ (30) = .356, P < .05) and language (ρ (30) = .534, P < .01) with the participants' decision to find the website credible or not. With the politics 1 website, the influence of the application showed correlations with the expertise of the writers of the website (ρ (28) = .539, P < .01) and the design of the website (ρ (28) = -.319, P < .01).

5.3 CredibleWeb prototype C

Like prototype A, prototype C was shown to participants as the first version of the application (experiments 3 and 4) or as the third and last version of the application (experiments 1 and 2). In none of the experiment settings did prototype C show a significant correlation between the application and participants' credibility decision. The correlations from the combined experiment results for both websites with prototype C were weak and not significant. For the politics 2 website, Spearman's rho (67) showed a correlation of .044 with P > .10; for the health 2 website, the overall correlation was ρ (60) = .186 with P > .10.

For the politics 2 website, a strong correlation between the content of the website and participants' credibility decision did seem to occur, with an overall correlation of ρ (67) = .419 and P < .01. Although weaker, both the language used on the website and the currency of the website showed correlations with the participants' decision. The currency showed a positive correlation of ρ (67) = .265, P < .05, while the language used on the website indicated a negative correlation of ρ (67) = -.307, P < .05.

The correlations between the influence of the application and other influences showed contradictory results for prototype C: with the politics 2 website in setting 1, the influence of the application was positively correlated with the design of the website (ρ (33) = .426, P < .05) and the language used (ρ (33) = .417, P < .05), while results from experiment 3 showed the opposite: both the influences of the design and the language used were negatively correlated with the influence of the application, with ρ (32) = -.327 and ρ (32) = -.351 respectively, both with P < .05. The health 2 website showed similarly contradictory results between the influence of the application and the content influence, with a positive correlation in experiment 2 and a negative correlation in experiment 4.

6. DISCUSSION

6.1 Validity of results

The experiment for this thesis research was set up with the Proximal Similarity Model [4] in mind, to make sure results from the experiment could be generalized to the Dutch population with an Mbo level 4 as highest educational degree. To this end, actions were taken to ensure participants with a higher degree would not be able to fill in the questionnaires, so as to keep the number of participants outside the target group to an absolute minimum.

By actively pursuing particular subgroups within the population, participants from all age groups between 16 and 70+ took part in the experiments, although most participants were in the age group of 20-29. The Dutch statistics bureau CBS recently released 2016 numbers on the education level of Dutch workers, albeit for slightly different age groups (15-25, 25-45, 45-75 and 75+). When comparing similar age groups of this thesis' participants with the CBS statistics, younger participants of the experiment (those in age groups 20-29 and 30-39) were shown to mostly have finished Lbo and Mbo levels of education, similar to those in the CBS research age group 25-45. Older participants in the experiment (those in age groups 40-69) were shown to have completed mostly Vmbo and Pro or similar previous versions of these education levels, very similar to the 2016 CBS results in age group 45-75 [2].

Other aspects of the Proximal Similarity Model are the setting and place of the experiment, both of which are related to the manner of searching for information on the internet. The application used in this research is first and foremost intended to be used in a traditional web browser, which can be installed on a laptop or desktop. Even though mobile phones and tablets have become a popular new way to access the internet, laptops and desktop PCs are still among the top ways in which Dutch citizens access the internet, as is shown by the CBS [17]. In this research, participants were able to use the application on their own PC or laptop and in the work or home environment they were familiar with.

6.2 Results from CredibleWeb prototype A

Overall, a weak correlation between application prototype A and participants' perceived credibility of the shown websites was measured. In three of the four settings where participants were shown this prototype version, no significant correlation was found. Since the correlation was weak and did not hold over the different experiment settings for this prototype version, the null hypothesis for this prototype cannot be rejected at this time.

In the two experiment settings (2 and 4) where participants received an active warning from the application prototype, the influence of the website's content did show a significant, albeit moderate, positive correlation with participants' credibility assessment. It seems the active warning from the application about the content of the website in these settings might have "pushed" participants in the experiment to take an extra careful look at the content, and those who decided this was a credible source ultimately saw the content of the website as the largest influence on their decision. This theory is further supported by the moderate negative correlation between the influence of the website's content and the influence of the application. Although the correlations between these influences and the credibility decision of participants have been measured, a causal effect has not been proven by these measurements and will require further research to be confirmed.

6.3 Results from CredibleWeb prototype B

Prototype B showed very interesting results in the different experiment settings. With the health 1 website, which received the doubtful credibility score from the application, both experiment settings showed a significant but moderate correlation between the application's influence and participants' credibility decision, but in opposite directions: one setting showed a positive correlation while the other showed a negative correlation. This means that participants in one setting seemed to rely more on the application in their decision to trust the website, while participants in the other setting did not find the application a significant influence on their decision to find the website trustworthy. Both experiment settings further showed another significant correlation between the website's reputation and the credibility decision made by participants, again in opposite directions. No direct significant correlation was found between the two influences on participants' decision though.


Looking at what could have caused the different correlation directions between the experiment settings, age, location, gender and educational background were all factors that were explored, but offered no plausible explanation for the differences. The order in which participants were shown the different prototype versions could, however, have influenced participants towards this outcome. Participants in setting 3 had already seen a pop-up from the application with the first website, indicating the CredibleWeb application did not find that website credible, whereas this particular website received a doubtful score. Participants in setting 1 had not seen this pop-up, as the website for their first assignment was deemed credible by the application and thus did not show a warning, meaning they were warned actively for the first time by the application in this setting. As the correlation in setting 3 was negative, this could mean users of the application are less inclined to act on doubtful credibility scores when they know the credibility score could be worse. Due to the small dataset used in the experiment however, this indication is not conclusive.

The results with prototype B and the politics 1 website are inconclusive: there was a significant but moderate correlation between the application's influence and the participants' credibility decision in experiment setting 4, but this did not hold in setting 2. Overall, a moderate positive correlation was measured between the language used on the site and participants' credibility decision. Prototype B does show the strongest correlation between the application's observed influence and participants' credibility decisions. This correlation is significant in three of the four settings, but did not prove to be the strongest correlation in all settings. The reputation of the health 1 website showed a similar correlation with the participants' decision, while the influence of the language used on the politics 1 website showed a similar correlation to the application's observed influence. Due to these results, the null hypothesis cannot be rejected for this prototype at this point.

6.4 Results from CredibleWeb prototype C

No significant correlations were found between the application's influence and participants' credibility decision. Both the influences of content and language used showed a moderate negative correlation with users' credibility decision for the politics 2 website. This means participants were less inclined to find the website a credible source when indicating these factors had an influence on their decision. For the health 2 website, participants were heavily divided on whether or not to trust the website, showing no correlation between any but one of the influences and participants' credibility decisions. Like with the health 1 website though, the reputation of the website showed a significant correlation with participants' credibility decision. As the reputation influence mainly showed significant correlations with the participants' decision for health websites, it seems this factor is important for web users when searching for health information on the web. This will have to be proven by experiments with a larger dataset of websites though.

This version of the application seems to have scored the worst with participants, showing no significant correlation with their credibility decisions. Based on these results, one can conclude that the credibility evaluations of other web users and experts have the least effect on the population. This means the null hypothesis cannot be rejected for this version of the application.

6.5 Future challenges

The algorithm that the CredibleWeb prototypes are based on has thus far only been used in experimental environments and has yet to be used in a large-scale, live environment. As mentioned in chapter 2.2, interest in AI solutions to everyday problems has spiked in recent years, with many new startups tackling everyday problems with AI solutions, one of which could be finding credible information online. A financial incentive will have to be present for an automated credibility evaluation to become available from a privately held company.

The current algorithm would have to be further improved as well, since it currently predicts the credibility of websites correctly in about 70% to 80% of the cases. For users to be able to trust the application and the credibility scores it provides, this percentage would ideally be 95% or higher.
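The 70% to 80% figure corresponds to straightforward classification accuracy, which could be measured against a manually labeled set of websites as sketched below. The labels and predictions here are invented for illustration and do not come from the actual evaluation.

```python
# Sketch of measuring prediction accuracy against a labeled website set.
# Labels and predictions below are invented for illustration.

def accuracy(predicted, actual):
    """Fraction of websites whose predicted label matches the true label."""
    matches = sum(p == a for p, a in zip(predicted, actual))
    return matches / len(actual)

# Hypothetical ground-truth labels for ten websites.
truth = ["credible", "not credible", "credible", "credible", "not credible",
         "credible", "not credible", "credible", "credible", "not credible"]
# Hypothetical predictions from the credibility algorithm.
preds = ["credible", "not credible", "not credible", "credible", "not credible",
         "credible", "credible", "credible", "credible", "not credible"]

print(f"accuracy = {accuracy(preds, truth):.0%}")
```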

6.6 Research limitations

The choice to use an online questionnaire for participants to report on their influences has enabled the 131 participants to take part in the experiments. In order for the online questionnaire to work, limitations were set on the actions participants could take. This also meant excluding the option for participants to enter queries in search engines and select sources based on these queries. This is however an important part of online information retrieval, and the author of this thesis suggests including this in future experiments. Another limitation of the use of online questionnaires was the reliance on self-observed influences of the participants. A more qualitative method of research with participants is suggested to retrieve more nuance in the decision-making process of web users on the perceived credibility of websites.

7. CONCLUSION

Although this thesis research was conducted under the assumption that intelligent information systems could replace the credibility evaluation with the target audience, the results from the experiment did not support this theory. None of the CredibleWeb prototypes proved to be such a significant influence on participants' credibility decision that they would trump other influences. It seems the use of an intelligent system for automated online credibility evaluation supports rather than replaces the decision-making process of internet users, making them more aware of the possible dangers of misinformation while searching for credible information on the internet. In this sense, the algorithm developed by Olteanu et al. and theoretically used by the CredibleWeb application helps to assess aspects of websites which are not easily evaluated by web users themselves, supporting them in their decision to trust a website as a credible source.

With this supportive goal in mind, it seems the best way for a digital information system which automatically evaluates the credibility of webpages to influence Dutch citizens with an educational background of Mbo level 4 or lower is to provide them with details on how the automatic score of a webpage is generated. The results with version B of the CredibleWeb prototype - which provided these details to users - showed a significant correlation with participants' credibility decision in three of the four settings where it was used, providing enough indications that this version of CredibleWeb might influence web users enough to help them with finding credible information on the internet. The results with this prototype indicate users appreciate the details about how the automated score was composed, while ignoring the application more often when this information is not included. A causal connection between the application's influence and users' credibility assessment has not been proven by this experiment though, and will need further research to be confirmed.


8. ACKNOWLEDGEMENTS

The author of this thesis would like to thank Dr. A.C. Nusselder for his thoughts, feedback and the interesting conversations on the research and design thinking in general. Furthermore, my thanks go to all participants who helped with the experiments for this thesis research, as well as my family, friends and (former) colleagues who supported me throughout the entire process.

9. REFERENCES

[1] Alexander, J.E. and Tate, M.A. 1999. Web wisdom: How to evaluate and create information quality on the Web. Lawrence Erlbaum.

[2] Bevolking; hoogstbehaald onderwijsniveau en onderwijsrichting: 2017. http://statline.cbs.nl/Statweb/publication/?DM=SLNL&PA=82816ned&D1=0&D2=a&D3=0-1,5,9-10&D4=0&D5=a&D6=0&D7=64,69&HDR=G3,G6,G1,G2&STB=G5,T,G4&VW=T. Accessed: 2017-07-02.

[3] Brandt, S. 1996. Evaluating information of the internet. Computers in Libraries. 16, 5 (1996), 44–46.

[4] Campbell, D.T. 1986. Relabeling internal and external validity for applied social scientists. New Directions for Evaluation. 31, (1986), 67–77.

[5] Country comparison: Internet Hosts: 2012. https://www.cia.gov/library/publications/the-world-factbook/rankorder/2184rank.html. Accessed: 2016-06-20.

[6] CTR Study: 2016. https://www.advancedwebranking.com/cloud/ctrstudy/. Accessed: 2017-05-26.

[7] Cutrell, E. and Guan, Z. 2007. What are you looking for? An Eye-tracking Study of Information Usage in Web Search. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '07. (2007), 407.

[8] Van Deursen, A.J.A.M. and Van Dijk, J.A.G.M. 2010. Internet skills and the digital divide. New Media & Society. 13, 6 (2010), 893–911.

[9] Egelman, S. et al. 2008. You've Been Warned: An Empirical Study of the Effectiveness of Web Browser Phishing Warnings. Proceedings of the twenty-sixth annual CHI Conference on Human Factors in Computing Systems - CHI '08. (2008), 1065.

[10] Flanagin, A.J. and Metzger, M.J. 2000. Perceptions of Internet Information Credibility. Journalism & Mass Communication Quarterly. 77, 3 (2000), 515–540.

[11] Flanagin, A.J. and Metzger, M.J. 2007. The role of site features, user attributes, and information verification behaviors on the perceived credibility of web-based information. New Media & Society. 9, 2 (2007), 319–342.

[12] Fogg, B. et al. 2003. How do users evaluate the credibility of Web sites?: a study with over 2,500 participants. Proceedings of the Conference on Designing for User Experiences. (2003), 1–15.

[13] Fraillon, J. et al. 2013. International Computer and Information Literacy Study - Assessment Framework.

[14] Fritch, J.W. and Cromwell, R.L. 2001. Evaluating internet resources: Identity, affiliation, and cognitive authority in a networked world. Journal of the American Society for Information Science and Technology. 52, 6 (2001), 499–507.

[15] Granka, L.A. et al. 2004. Eye-tracking analysis of user behavior in WWW search. Proceedings of the 27th annual international conference on Research and development in information retrieval - SIGIR '04. (2004), 478.

[16] Hargittai, E. et al. 2010. Trust online: Young adults' evaluation of web content. International Journal of Communication. 4, (2010), 468–494.

[17] ICT gebruik van personen naar persoonskenmerken, 2005-2013: 2016. http://statline.cbs.nl/statweb/publication/?vw=t&dm=slnl&pa=71098ned&d1=33,55-59,93-95,97-100,102-104,106-109,112-113,119,124-133&d2=0&d3=0,l&hd=130422-1124&hdr=g2,g1&stb=t. Accessed: 2017-05-26.

[18] Julie Desk, an "AI virtual assistant" that helps you schedule meetings and more, scores €2.5M funding: 2017. https://techcrunch.com/2017/06/01/julie-desk-an-ai-virtual-assistant-that-helps-you-schedule-meetings-and-more-scores-e2-5m-funding/?via=indexdotco. Accessed: 2017-06-05.

[19] Kammerer, Y. and Gerjets, P. 2014. The Role of Search Result Position and Source Trustworthiness in the Selection of Web Search Results When Using a List or a Grid Interface. International Journal of Human-Computer Interaction. 30, 3 (2014), 177–191.

[20] Keane, M.T. et al. 2008. Are people biased in their use of search engines? Communications of the ACM. 51, 2 (2008), 49–52.

[21] Meelissen, M.R.M. et al. 2014. Digitale geletterdheid van leerlingen in het tweede jaar van het voortgezet onderwijs - Nederlandse resultaten van ICILS-2013.

[22] Meola, M. 2004. Chucking the Checklist: A Contextual Approach to Teaching Undergraduates Web-Site Evaluation. 4, 3 (2004), 331–344.

[23] Metzger, M.J. et al. 2003. College student Web use, perceptions of information credibility, and verification behavior. Computers and Education. 41, 3 (2003), 271–290.

[24] Metzger, M.J. et al. 2010. Social and heuristic approaches to credibility evaluation online. Journal of Communication. 60, 3 (2010), 413–439.

[25] Nasdaq invests in artificial intelligence: 2017. http://www.zdnet.com/article/nasdaq-invests-in-artificial-intelligence/. Accessed: 2017-06-05.

[26] New Media and Society assignments: 2009. http://alexandervandeursen.nl/nms/. Accessed: 2016-07-04.

[27] Newman, N. et al. 2016. Reuters Institute Digital News Report 2016.

[28] Number of Internet Users: 2016. http://www.internetlivestats.com/internet-users/. Accessed: 2016-02-05.

[29] OECD 2013. OECD Skills Outlook 2013. OECD Publishing.

[30] Olteanu, A. et al. 2013. Web credibility: Features exploration and credibility prediction. Lecture Notes in Computer Science. 7814 LNCS, (2013), 557–568.

[31] Oxford's AI firm Oxbotica gets £8.6m to lead driverless car consortium: 2017. http://techcitynews.com/2017/04/24/oxfords-ai-firm-oxbotica-to-lead-driverless-car-consortium-following-8-6m-innovate-uk-grant/?via=indexdotco. Accessed: 2017-06-05.

[32] Purcell, K. et al. 2012. Search Engine Use 2012. PEW Research Center. February (2012), 42.

[33] Scholz-Crane, A. 1998. Evaluating the future: A preliminary study of how undergraduate students evaluate web sources. Reference Services Review. 26, 3/4 (1998), 53–60.

[34] Schwarz, J. and Morris, M. 2011. Augmenting web pages and search results to support credibility assessment. Proceedings of the 29th SIGCHI Conference on Human Factors in Computing Systems. (2011), 1245–1254.

[35] Search Engine Market Share in Netherlands 2016: 2017. http://gs.statcounter.com/search-engine-market-share/all/netherlands/2016. Accessed: 2017-05-26.

[36] Shah, A.A. et al. 2015. Web credibility assessment: Affecting factors and assessment techniques. Information Research. 20, 1 (2015).

[37] The 2016 AI Recap: Startups See Record High In Deals And Funding: 2017. https://www.cbinsights.com/blog/artificial-intelligence-startup-funding/. Accessed: 2017-06-05.

[38] The Race For AI: Google, Twitter, Intel, Apple In A Rush To Grab Artificial Intelligence Startups: 2017. https://www.cbinsights.com/blog/top-acquirers-ai-startups-ma-timeline/. Accessed: 2017-06-05.

[39] The Value of Google Result Positioning: 2013. https://chitika.com/google-positioning-value. Accessed: 2017-05-26.

[40] Usage and Population Statistic: 2016. http://www.internetworldstats.com/europa.htm#nl. Accessed: 2016-06-20.

[41] Wawer, A. et al. 2014. Predicting webpage credibility using linguistic features. Proceedings of the companion publication of the 23rd international conference on World Wide Web companion. (2014), 1135–1140.

[42] Wu, M. et al. 2006. Do security toolbars actually prevent phishing attacks? CHI '06: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
