Requirement evaluation for expert-based ranking of web interface environments: the ZEEF.com case study


Requirement evaluation for expert-based ranking of web

interface environments: the ZEEF.com case study

Submitted in partial fulfillment of the degree of Master of Science

Yana Ledeneva

10837418

Master Information Sciences

Business Information Systems

Faculty of Science

University of Amsterdam

June 16, 2015

Thesis Supervisor:

Dr. Frank Nack

Second Reader:

Dr. Peter Weijland


Table of Contents

Abstract
1. Introduction
2. Related work and definitions
   2.1 Overview of content curation
   2.2 Ranking of web-pages
      2.2.1 Definition of ranking
      2.2.2 Search engines ranking
      2.2.3 Human-based ranking
      2.2.4 Factors for evaluating web pages
   2.3 Concepts of trust and agreement
   2.4 Conclusion and research gap
3. Conceptual Framework and research question
4. The ZEEF use case
   4.1 Why ZEEF
   4.2 Survey for the curators
      4.2.1 Design
      4.2.2 Participants
   4.3 Survey for the users
      4.3.1 Design
      4.3.2 Participants
   4.4 Results of the surveys
      4.4.1 The process of creating a curator page
      4.4.2 Commission and fraud
      4.4.3 Ranking factors
      4.4.4 Visitors’ general perception
      4.4.5 Conclusions
5. Interface experiment
   5.1 Design
   5.2 Results
   5.3 Discussion
6. Overall discussion
   6.1 General discussion and limitations
   6.2 Guidelines for the curators
7. Conclusions and future work
Appendix 1. Preliminary list of questions for surveys


Requirement evaluation for expert-based ranking of web

interface environments: the ZEEF.com case study

Yana Ledeneva

10837418

Abstract

This research analyzes how expert-based rankings are created and how they are perceived by the users of a human-based search environment. The first part of the research investigates the main criteria that influence the ranking of websites in human-based search engines, as well as the main factors that influence trust, agreement and the visitors’ general perception of these rankings. The second part analyzes the main use case of this study, the human-based search engine ZEEF.com, in which curators create pages that rank websites on one or several topics. The empirical part of the study consisted of two surveys. The first addressed how curators of ZEEF pages build their pages, examining their reasoning, the effort invested and the time needed to establish a ranking. The second addressed existing and potential visitors of ZEEF pages, checking how, in their opinion, the rankings were established, and whether the results are satisfactory and trustworthy. Based on this evaluation, a list of the main ranking factors taken into account by both parties was compiled. Finally, an interface test was performed to check which aspects of web pages influence visitors’ perception of page quality. As a result, recommendations for curators regarding the process of creating ZEEF pages were designed.

1. Introduction

In the era of information overload and heavy use of search engines, the problem of filtering the world’s information is becoming more and more important. However, the algorithmic search implemented in the main search engines, social networks and other websites does not always provide perfect retrieval quality. Taking into account that people usually tend to trust other people more than machines, especially when those people are experts in their field, it is possible to come up with an alternative to algorithmic search: a human-based search environment. Nowadays such an environment is available in the form of content curation or a human-based search engine. One example of such a search engine is ZEEF.com. However, the main mechanism of ZEEF, a human-based ranking of links to different websites, has its risks. For instance, the curator may be too subjective or may use ranking factors that do not align with those applied by visitors of the page, or the visitor may distrust the curator, his page and/or the platform in general. Therefore, in order to analyze the process of ranking websites in a human-based search environment in general, and to reduce the business risks of the ZEEF company in particular, the main factors and parameters that curators take into account when making lists of links, and their methods of constructing rankings, need to be analyzed.

The aim of this research is to understand what influences visitors’ perception of the quality of a human-based search environment, and to check the difference between the ranking criteria curators take into account and the criteria visitors think should be taken into account. A significant difference between these criteria may lead to misunderstandings, which in turn may decrease the quality of the content from the visitors’ point of view.


2. Related work and definitions

2.1 Overview of content curation

Due to today’s information overload, it has become much more challenging to find the information one is interested in (Stephen Dale, 200). One solution to this problem is content curation, which is becoming more popular as a way to filter a massive torrent of information (Nancy K. Herther, 2012a). Content curation is “the process of sifting through information on the web, organizing, filtering and making sense of it and sharing the best and most relevant content on a specific issue within your network” (Nancy K. Herther, 2012b, p. 41).

A content curator is a person who continually finds, groups, organizes, and shares the best and most relevant content on a specific issue online (Nancy K. Herther, 30). It is worth noting that the key component of this definition is the word “continually”: data on the internet is updated incessantly, and staying up to date is essential for providing the most relevant, correct and useful information.

Automatic, algorithm-based approaches to searching for information (such as Google or Bing search) lack some very important features of content curation. For example, websites can easily be optimized to appear at the top of Google’s list, which means that the searcher will not necessarily find the best content on a topic; instead, they will find the websites that invested the most resources into Search Engine Optimization. A content curator, on the other hand, organizes and categorizes information for easier search and manipulation, which automatic search engines do not yet do.

However, most content curation tools today, such as scoop.it, pinterest.com or curata.com, lack clear internal organization, meaning that users are still left to browse through long lists of content within a theme (Nancy K. Herther, 2012a). This means that it is more convenient and useful to combine content aggregation (in which content is discovered from different sources, classified into a pre-established order, and low-quality sources are filtered out) with content curation (which focuses on human involvement in managing content, adding an editorial point of view and insights on why content is important) (Heidi Cohen, 2014). As a result, curators not only collect the most relevant and high-quality information, but also provide their personal insight and comments on why a particular piece of information is important. One example of such insight is a ranking of links to websites.

2.2 Ranking of web-pages

2.2.1 Definition of ranking

Ranking is the reordering of search results so that the most relevant information appears higher in the list. Put simply, ranking can be defined as “reordering of a result set (i.e. a search that has already been carried out) based on certain criteria” (Alain Materne, p. 5). The criteria (ranking factors) taken into account are critical: if a person makes two queries where each features different criteria (e.g. the price of products in the shop and the distance of the shop from the customer), he or she will receive two completely different rankings.
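The criterion-dependence described in this definition can be sketched in a few lines of code (a minimal illustration; the shop data and the two criteria are invented for the example):

```python
# Minimal sketch of criterion-dependent ranking: the same result set,
# reordered under two different criteria. The shop data are invented.
shops = [
    {"name": "A", "price": 30, "distance_km": 1.0},
    {"name": "B", "price": 10, "distance_km": 5.0},
    {"name": "C", "price": 20, "distance_km": 2.5},
]

def rank(results, criterion):
    """Reorder an already-retrieved result set by one criterion (ascending)."""
    return [item["name"] for item in sorted(results, key=criterion)]

by_price = rank(shops, criterion=lambda s: s["price"])           # ['B', 'C', 'A']
by_distance = rank(shops, criterion=lambda s: s["distance_km"])  # ['A', 'C', 'B']
```

The two calls return completely different orderings of the same three shops, which is exactly the criterion-dependence noted in the definition above.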

2.2.2 Search engines ranking

Most search engines work with query-dependent ranking factors, such as the frequency of occurrence of the search terms, the position of the query terms in the document, the relevance between a document and a query, etc. However, there are also query-independent criteria, such as PageRank, defined as “a measure of popularity and importance of web page by counting number of links to this page” (Altman, p. 2), as well as the number of clicks on the link, the size of the website, etc. (Lewandowski, 2005). These criteria help provide accurate, relevant and useful results for a user searching on a particular query, which leads to a high level of satisfaction among search engine users: in 2014, customer satisfaction with portals and search engines was 80%, 5.3% higher than in 2013 (ACSI, p. 3). However, even the most advanced current algorithms have their disadvantages. The main drawback is that search engines cannot detect link manipulation and intrusive or spam-like promotion of a website (Lewandowski, 2005). For instance, a user looking for Java tutorials may find as the first link a website that has spent money on deliberately manipulating search engine indexes to affect its visibility in the search results, even though the quality of its lessons or content may be worse. The best tutorials may rank much lower because they pay attention to the quality of the content, not to Search Engine Optimization (SEO). One of the solutions to this problem is human-based (or expert-based) ranking.
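The PageRank measure cited above can be illustrated with a short power-iteration sketch (a simplified version, not any search engine's production algorithm; the three-page link graph is invented, and the damping factor of 0.85 is the commonly cited default):

```python
# Simplified PageRank via power iteration; the tiny link graph is invented.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if outgoing:
                share = rank[page] / len(outgoing)
                for target in outgoing:
                    new[target] += damping * share
            else:  # dangling page: spread its rank over all pages
                for target in pages:
                    new[target] += damping * rank[page] / n
        rank = new
    return rank

ranks = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
```

In this toy graph, page "c" receives links from both other pages and therefore ends up with the highest rank, matching the intuition that more incoming links signal more importance.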

2.2.3 Human-based ranking

“Human-based models do not usually work as an alternative for all other models, but rather as a stopgap to fulfill a need while the other models are not available to present a high-quality solution” (Engler, 17). In particular, the human-based method of ranking websites is used as an additional method to filter irrelevant search results and to facilitate a deeper analysis of website content. However, with this method it is impossible to analyze a large number of websites (Vaughan, 2004), and the experts who work on the ranking may be too subjective, relying solely on personal opinion or personal benefit, which means that expert-based ranking may be biased. Finally, it is difficult to identify the factors an expert uses in their individual ranking process: if the expert takes into account ranking factors that do not match the user’s information need, the ranking may become irrelevant or, in the worst case, erroneous.

2.2.4 Factors for evaluating web pages

Human-based ranking of websites involves particular factors (criteria) that curators take into account when evaluating web pages. An analysis of the literature (Bailey & Pearson, 1983; Fogg, 2002; Dholakia, 1998; Hsu & Tsou, 2011; Evaluating Web Pages: Techniques to Apply & Questions to Ask, 2012; Clewley, Chen & Liu, 2009; More guidance on building high-quality sites, 2011; Mehra, 2012; Lin, 2007) resulted in the following list of factors that influence visitors’ motivation and enjoyment of the web experience (see Table 1). These factors have been shown to be the most relevant for establishing the validity (the quality of being logically or factually sound), authorship, timeliness, integrity (the state of being whole and undivided) and presentation of information found on the Internet.

Table 1. Factors used for evaluation of web pages

1. Accuracy of the content: the correctness of the output information. (Establishes: validity)
2. Completeness of the content: the comprehensiveness of the output information content. (Establishes: integrity)
3. Information credibility: the perceived quality of the information by consumers. (Establishes: validity)
4. Information about the author: information about the author, his contacts and background, plus any reference needed for future correspondence. (Establishes: authorship)
5. Links to other sources (references): footnote numbers or links that may refer to documentation and are worth exploring. (Establishes: authorship)
6. Timeliness of the content: the availability of the information at a time suitable for its use. (Establishes: timeliness)
7. Relevancy of the content: the degree of congruence between what the user wants or requires and what is provided by the information products and services. (Establishes: validity)
8. Confidence in the system / personal trust in the web site: the user’s feelings of assurance or certainty about the website. (Establishes: validity)
9. Understanding of a system: the degree of comprehension of a user. (Establishes: validity)
10. Size of the web site: the amount of separate pages/articles/content in general. (Establishes: integrity)
11. Position in the rankings of the main search engines: including PageRank and other query-related quantitative factors. (Establishes: validity)
12. Visual layout: the material design of the layout and display of the output contents. (Establishes: presentation)
13. Performance of the web site: availability, response time, processing speed, memory consumption and utilization of computing resources, throughput, etc. (Establishes: presentation)
14. Money received as commission: the amount of money received as commission from affiliate links, or directly from the site owners. (Establishes: validity)
15. Recommendations by external parties: the web site may be recommended to the user by other trusted web sites, people, friends and relatives. (Establishes: validity)
16. Adequate URL: the unique address of any web page or document, including the domain extension. (Establishes: presentation)
17. Amount of promotional material: the amount of advertisements and pop-ups, and the tone of writing (should be clear, direct, sincere and free of “sales-speak”). (Establishes: presentation)
18. Personal experience: previous positive experience using this web site. (Establishes: validity)
19. Prices of the products offered (for online shops only). (Establishes: validity)
20. Uniqueness of the products offered (for online shops only): whether the product is unique (original) or manufactured by the company. (Establishes: validity)
21. Quality of the products offered


2.3 Concepts of trust and agreement

One of the main reasons why people go online and visit particular websites is to search for information (Thomas Young, 2009) and entertainment (Why People Go Online, 2009). In other words, the main interest of visitors coming to a particular website is to find the information they need about a particular topic or to spend time entertaining themselves (Yong-Soon Kang & Yong Jin Kim, 2006; Rik Crutzen et al., 2014). Therefore, the main goal of search engines, including human-based search environments, is to satisfy the visitors’ need for the most relevant, useful and high-quality information. The informativeness of web pages and the quality of the content are the most important aspects that positively affect users’ attitude towards a website and its traffic. In order to increase traffic and get people to start using the website, the content of the page (in the case of search engines, the rankings of the retrieved links) should be relevant and useful for the visitors.

The next step is to make people continue using the website in the future. The engagement process of the visitors may be initiated by interest or an information need (O’Brien, Toms, 2008); however, it is also important to take into account the users’ trust and their agreement with the curator’s ranking. According to (Mirjam Seckler, p. 40), trust is a multidimensional construct consisting of three facets:

- benevolence (the user’s belief that a website is concerned with the present and future interests, desires and needs of its users and gives useful advice and recommendations);

- honesty (there are no false statements and the information on the site is sincere and honest);

- competence (the website has the resources [whether technical, financial, or human] and the capabilities needed for the successful completion of the transaction and the continuance of the relationship).

Visitors’ trust in web pages may be influenced by multiple factors, such as the design of the page, personal information about its creator and his/her domain knowledge, the visitors’ background and expertise, statistics of the page (including the number of views and/or clicks), feedback from other people, the quality of the content, and so on (Ye Diana Wang, Henry H. Emurian, 2005). In human-based curation platforms, however, these factors have not been tested yet, because on these platforms the pages are personalized and the curators play a more significant role in establishing trust. If visitors do not trust a particular curator, they can leave that curator’s page and switch to a page on the same topic by another curator. However, if the topic is covered by only one curator (due to the limited number of curators), the visitor has no alternative page to visit, and in this case stops using the platform altogether.

Besides that, visitors may lose trust in the content curation platform in general if they find further curators untrustworthy as well. They may distrust the platform itself because they may think that the platform does not verify the expertise and/or experience of the curators in the fields they create their pages about. Moreover, this may happen if the platform does not guarantee to visitors that only experts (professionals) can create pages on the platform. Therefore, the design of the website and personal information about the curator (including a profile picture, a description and links to social network profiles) play important roles in establishing the visitor’s trust. Finally, the fact that a content curator is usually a stranger to a visitor negatively influences the level of trust in curation platforms in general, because people tend to distrust strangers more than their friends and relatives (Mirjam Seckler, 2014). If visitors distrust the content curation platform, they will leave the website and never return, decreasing the site’s traffic and revenue.


Trust represents one side of the visitors’ engagement process, while agreement with the content represents the other. The concept of trust correlates with the information about the curator, the design of the page and the overall perception of the curation platform, while the concept of agreement relates to the content itself and to the level of satisfaction in finding the most relevant and useful information in a short time. According to the Oxford dictionary, agreement is “the state of sharing the same opinion or feeling” (Agreement, Oxford Dictionary of English). For the curators of the pages it is extremely important to keep the level of disagreement as low as possible, and to achieve this it is important to see how the content is made. For human-based search environments, it is necessary to understand which factors the creators take into account when making up the rankings of websites, and whether visitors of the page perceive these rankings in a similar way to the creators.

The failure of one or both sides of the engagement process in human-based search environments leads to a decrease in traffic and, consequently, in the revenue stream of the website. This assumption follows from the affiliate marketing program implemented in such search environments, in which the company’s revenue depends on the number of people clicking on the links and purchasing products via those links.

2.4 Conclusion and research gap

The problem addressed in this research covers two main areas: human-based methods for creating rankings, and the evaluation of websites. In this research, these topics are combined. The literature review revealed that there are still very few articles and studies on the human-based ranking of websites, or on the factors and methods used for such rankings. Moreover, this paper has practical value: based on the results of my research, I can derive practical recommendations for the curators of human-based search environments, with the help of which they can increase the quality of their rankings, lower the level of disagreement among visitors and increase traffic (and, consequently, their revenue). Although only one human-based search environment, namely ZEEF, is tested in this investigation, I assume that the results can be useful for other content curation platforms as well.

3. Conceptual Framework and research question

In this research, the issue of the differing perception of human-based rankings by curators and visitors is addressed. The main aspect analyzed is the set of factors taken into account both by the users of the rankings (the visitors) and by their creators (the curators). The main and only case on which my research is based is the environment provided by ZEEF. Picture 1 describes the conceptual framework of the research. Starting from the right side: the revenue of the company and of the curators who create pages on ZEEF depends on traffic, which is directly related to the visitors’ perception of the quality of the content on ZEEF pages and their intention to use these pages in the future. The perception of quality depends on many different aspects, but in this research only two are analyzed: trust (which mostly concerns the design of human-based search environments and the curators themselves) and agreement with the ranking. A visitor’s agreement with a ranking may depend on the difference between the factors that underlie the ranking from the curator’s point of view and from the visitor’s. The arrows in the scheme represent the connections between the topics; the red and green arrows represent the hypotheses tested in this paper.

Picture 1. Conceptual framework of the research

With this conceptual framework, I intend to answer the following questions.

Main research question:

“What are the trust and agreement factors that influence both curators’ and users’ perception of human-curated recommender environments?”

In order to answer this question, the following sub-questions should be answered (see Table 2).

Table 2. Sub-questions of the research

Curators:
C1. What factors do they consider when making rankings?
C2. Why do they create the lists?
C3. How do they create the lists?
C4. Is there a risk of fraud?
C5. Do they see themselves as experts?
C6. Commission: do they want to receive it, and how?
C7. Do they care about the ranking of the links?

Visitors:
V1. What factors do they believe were taken into account?
V2. Do they agree with the ranking?
V3. Do they trust the curator?
V4. Do they see the curators as experts?
V5. How do they think the curators receive the commission? How do they feel about it?
V6. Do they understand the ranking of the links?


Based on the literature, the following hypotheses will be tested in this research:

H1. There is a positive correlation between the visitor’s level of disagreement and the level of discrepancy between the curator’s and the visitor’s ranking factors (C1-V1 vs V2).

H2. A visitor who disagrees with the ranking will not use it for future search (V2 vs V7).

H3. A visitor who does not trust the curator will not use his page for future search (V3 vs V7).

H4. Rankings by curators who are prone to being bribed, or who do not put enough effort into making rankings, have a higher level of disagreement among the visitors (C2, C3, C4, C5 vs V2).

H5. Rankings by curators who are prone to being bribed, or who do not put enough effort into making rankings, have a lower level of trust among the visitors (C2, C3, C4, C5 vs V3).

Taking into account the assumption that the background, experience and knowledge of the visitors of human-based curated environments could potentially influence their perception of the page (see section 2.2.4), it was decided to test additional hypotheses that were not covered by the sub-questions and framework:

H6. Experienced and inexperienced users take different ranking factors into account.

H7. There is a significant difference between the perception of the page by experienced and inexperienced users.
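As an illustration of how H1 could later be tested, the correlation between a per-page factor discrepancy (C1 vs V1) and visitor disagreement (V2) can be computed with a Pearson coefficient; the numbers below are invented sample values, not survey results:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented per-page values for illustration:
# discrepancy  = mean absolute difference between curator and visitor
#                Likert scores over the ranking factors (C1 vs V1)
# disagreement = mean visitor disagreement with the ranking (V2)
discrepancy = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
disagreement = [1.2, 1.8, 2.1, 2.9, 3.2, 3.8]

r = pearson(discrepancy, disagreement)  # close to +1 for these toy values
```

A strongly positive r on real survey data would support H1; the trust hypotheses (H3, H5) could be checked in the same way using the V3 scores.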

4. The ZEEF use case

The main use case of this research is ZEEF, a human search engine based on a directory whose results are selected and ranked by curators: experts in the particular topic on which they create their pages. It is a curated search and advertising directory monetized via affiliate marketing.

The methodology of the research includes both qualitative and quantitative analysis. The qualitative part includes a literature review (mostly used for the theoretical background and for compiling the list of factors generally used for the evaluation of websites) and interviews with ZEEF employees for a deeper understanding of the company’s business processes.

The quantitative part includes two surveys. The first was addressed to the curators of active ZEEF pages and helped to investigate the process of making pages and developing rankings. The second survey was addressed to potential and current visitors of ZEEF pages. The respondents saw real ZEEF pages (those of the curators who took the first survey) and were asked to analyze how, in their opinion, the rankings on these pages had been made. The respondents could click on the links to check out the content of the web pages. For this survey, the same Likert scale over the same factors was used as in the experts’ survey. In addition, the visitors were asked general questions about their perception of a particular ZEEF page (see Appendix 1).

To make the research as representative and reliable as possible, the number of respondents was crucial. That is why it was decided to create surveys and disseminate them through the Internet. This method provided easy access to people who were willing to participate in the research. In addition, survey results are much easier to turn into quantitative data and to analyze using statistical software such as Excel, SPSS and Stata.


4.1 Why ZEEF

Aside from the opportunity to do an internship at this company, the reason for choosing ZEEF as the main use case of this research is that it perfectly combines curated content and human-based ranking of websites.

ZEEF tries to solve the problem of information overload and filtering: the company adds a human filter to existing search algorithms by crowdsourcing the expertise of trusted curators, and builds a new, better-performing online advertising model on the back of this. Each ZEEF page has a clear, well-filtered, SEO- and spam-free ranking of links to websites with the best content on a particular topic, curated by real people.

ZEEF is made for people, and if they do not use it, it becomes abandoned and unprofitable. Therefore, the main risks of the company relate to people: a small number of curators who generate high-quality content and guarantee topic diversity, and a small number of visitors who use the pages and generate traffic and the revenue stream for the company. To reduce these risks, it is critically important to analyze how the curators make their lists, whether visitors are interested in curated content and expert-based rankings, and whether they agree with the rankings made by the curators. If the visitors do not trust or do not agree with a ranking, they will most likely stop using the page, because it brings them no value. In addition, it is important to increase the quality of the content the curators make (particularly the quality of the rankings), because without good and valuable content ZEEF may struggle to attract more visitors. Therefore, these risks should be analyzed from two sides: from the point of view of the curators and of the visitors.

4.2 Survey for the curators

4.2.1 Design

The first survey consists of 13-16 questions (some questions appeared depending on the answers chosen). First, the curators were asked to type in the URL of their page (if they had multiple pages on the ZEEF platform, they were asked to consider only one) and to choose the topic of their page. After that, they were asked to answer questions covering the following topics (see Table 3).

Table 3. The structure of the survey for the curators

Creation of the page (included to understand the process of creating the page by the curators, in order to see possible flaws or mistakes that may influence the visitors’ perception of the quality of the page; sub-questions C2, C3, C5):
- What influenced you to make the page? (multiple choice)
- How did you choose the topic of your page? (multiple choice)
- Do you consider yourself a professional in this topic and why? (open question)
- Where do you take the links for the ranking lists you make? (multiple choice)
- How much time does it take you to come up with the very best link?
- How often do you check and/or revise the rankings or update the lists after finishing them? (multiple choice)

Ranking (included to understand whether the lists on the pages are ranked or not; C7):
- Is the first link of each list (at the top of the list) the best among all the links in the list? (multiple choice)

Ranking factors (included to see which factors the curators take into account when creating the ranked lists; C1):
- How important are the following factors for evaluating the websites and the links you add and rank on your page? (Likert scale)

Bribery (included to evaluate the possibility of the curators being bribed into changing their lists for money; C4):
- Would you (theoretically) put a link in 1st (2nd, 3rd) place for money secretly received from a third party? (multiple choice)
- If yes: how much money (minimum) would you take for changing your ranking? (multiple choice)

Official commission from ZEEF (included to check the curators’ attitude towards an official commission from ZEEF and how it may influence the quality of the pages; C6):
- Would you like to be paid by ZEEF (officially) for maintaining your page? (multiple choice)
- If yes: what would you like to receive commission for? (multiple choice)

The survey consists mostly of closed questions, making it easier for respondents to complete. However, some questions that need a deeper analysis of the respondents’ opinion are open (for example, in the question “Do you consider yourself a professional in this topic and why?” the respondents had to provide the reasons why they answered positively or negatively).

4.2.2 Participants

The first survey was disseminated via e-mail to all curators who had published their own ZEEF pages. The overall sample consisted of 1017 active curators registered on ZEEF who had created pages there. All curators received an e-mail with a link to the survey, and after one week a reminder was sent to those who had not yet taken it. Thirty e-mails were not delivered due to technical reasons. In total, 78 ZEEF curators took part in the survey, a response rate of 7.6%. Such a low response rate can be explained by the fact that surveys in general have low response rates (Robson K, 2011), and e-mails from the company to the curators can be classified as spam, so many recipients may simply have had the message in their spam folder.

From the whole population of possible respondents one group (stratum) was taken: people who have created and published pages on the ZEEF platform. The e-mail with the link to the survey was sent to all the curators; however, it was impossible to predict who exactly would take the survey. Thus, the sampling technique used for the first survey was stratified random sampling.

4.3 Survey for the users

4.3.1 Design

The second survey was designed for both current and potential visitors of ZEEF pages. It helped to evaluate how the quality of the pages, and of curated content in general, is perceived by the visitors.


At the beginning of the survey, the respondents were asked to choose the topic they were most interested in, to visit the corresponding page, and to click on the links to examine the websites they lead to.

Six pages from the first survey were chosen for the visitors' survey based on the overall quality and completeness of the page, representing the six most popular topics from the first part of the research (see Table 4).

Table 4. The characteristics of the pages chosen for the second survey

| Page | Ranked? | Time spent on finding the best link | Updates | Bribery | Commission |
|---|---|---|---|---|---|
| Business: https://energy-startups.zeef.com | No | Several days | Every month | Yes, 10 EUR | Yes |
| Travelling: https://amsterdam.zeef.com/willem.ijssel.de.schepper | Yes | Less than 1 minute | Never | No | No |
| Technology: https://office-365.zeef.com | No | Several days | Every month | No | Yes |
| Food: https://omega3.zeef.com | No | Several hours | Never | Yes, 100 EUR | Yes |
| Education: https://teacher-playground.zeef.com | No | Several days | Every week | No | No |
| Shopping: https://shopping.zeef.com/robert.w.hall | Yes | Several days | Every month | No | Yes |

In order to compare different aspects of the pages and their curators, and their influence on how visitors perceive the page, pages were chosen that differ from each other in the following characteristics:

1. Type of ranking used (whether the links are ranked based on their quality or on the time at which they were added),
2. Time spent on making up the ranking,
3. Frequency of reviews/updates,
4. Attitude towards the official commission from ZEEF and towards bribery.

After viewing the page, the respondents answered general questions about the quality of the page, the ranking and ranking criteria, the curators of the pages, and their intention to use such curated pages in the future. The structure of the survey is shown in Table 5.

Table 5. The structure of the survey for the visitors

Evaluation of the page
- "How much do you agree with the rankings on this page?" (Likert scale). Purpose: to understand the visitors' perception of the quality of the page and their level of agreement with the content (V2).
- "How often do you think the curator revises/updates the list of the links after finishing the page?" (Mult. choice).

Ranking factors
- "In your opinion, how relevant are the following factors for ranking the links to the web sites on the page?" (Likert scale). Purpose: to see which factors the visitors think should be taken into account when creating the ranked lists, in order to compare them with the curators' opinion (V1).

Ranking
- "Do you think the order of the links depends on the date when they were added or on the quality of web sites?" (Mult. choice). Purpose: to understand whether the visitors see the lists on the pages as ranked or not (V5).

Trust
- "Do you think the curator is an expert in this topic and why?" (Open question). Purpose: to check the level of the visitors' trust in the curator and the content of the page (V2).
- "Do you trust the curator's choice?" (Mult. choice).

Bribery
- "Continue the sentence: If I find out that the curator was secretly paid (bribed) by a third party for changing the lists of the links..." (Mult. choice). Purpose: to check the visitors' reaction to the curators being bribed (V2).

Official commission from ZEEF
- "Imagine this curator receives commission from ZEEF. How do you think the curator may be paid?" (Mult. choice). Purpose: to check the visitors' attitude towards official commission from ZEEF and how it may influence the quality of the pages (V4).
- "How do you feel about the fact that some of the curators may receive commission from ZEEF?" (Mult. choice).

Intention to use in the future
- "Would you (theoretically) use similar ZEEF lists for your search in the future?" (Likert scale). Purpose: to check the possibility that the visitor will use ZEEF in the future (V6).

The majority of the questions were multiple-choice, which makes the survey easier to complete; however, some required open answers to gain deeper insight and more feedback from the respondents. In addition, a Likert scale was used in some questions for a better evaluation of the respondents' attitudes. For example, in the questions "How much do you agree with the rankings on this page?" and "Would you (theoretically) use similar ZEEF lists for your search in the future?", a 10-point Likert scale was used to estimate as accurately as possible the visitors' level of agreement and the likelihood that they would use curated content pages in the future.

4.3.2 Participants

The target audience of ZEEF is everyone who uses the Internet for information search. As the respondents of the second survey are both existing customers and potential users of the platform, it was decided to accept all incoming replies from respondents who could potentially use ZEEF for information search. During the analysis of the data, however, the respondents were clustered into basic demographic groups based on age, gender and education. The main group consisted of friends, relatives and colleagues of the author, as well as students and other people who were easy to reach via social networks (e.g. facebook.com or vk.com) and other related websites and blogs (e.g. zeef.org). The respondents were selected via simple random sampling. In addition, the respondents were asked to promote the survey among their friends, which corresponds to the snowball technique (Goodman, 1961).

Finally, in order to achieve a representative sample, it was decided to divide the population into specific groups based on age, country of residence and education level. In addition, the goal was to reach the same number of respondents for each of the six topics of ZEEF pages; to achieve this, the quota technique (Moser, 1952) was used. In total, 174 people started the survey, but only 80 answered all the questions. The average age of the respondents is 26 years, the median 23, and the mode 22; the minimum age is 18, the maximum 48. 46.3% of the respondents are male, 53.7% female.


4.4. Results of the surveys

The most popular topics of the pages in the first survey were “Technology, gadgets, games and internet” (24 respondents), “Business, management and companies” (19 respondents), “Travelling and culture” (14), “Food and drinks” (4), “Education” (4) and “Shopping” (3). These topics were subsequently chosen for the second survey for the visitors in order to check the difference in ranking factors and other aspects among the topics. In the second survey the most popular topic among the respondents was Travelling (20 responses), followed by Technology (14) and Food (13). Business, Education and Shopping got 11 responses each.

4.4.1 The process of creating a curator page

The most frequent reasons the curators gave for creating a page on ZEEF are "the opportunity to help people find information" (chosen 27 times), "to build online authority and to show expertise in the topic" (19 choices) and "to bookmark and organize personal favorite links" (18 choices). The particular topic of the page was chosen because the curators believe that they have considerable expertise and experience in it, and because they are passionate about the topic. These results support the main idea and concept of ZEEF.com:

“The content curators on ZEEF are passionate people with knowledge on a certain topic. They search the Internet and find the best content on their topic. Then the curators organize, filter and rank the links into a convenient list. Simple as that.”6

The majority of the curators take links for their lists from their personal bookmarks (59.5%) or simply know them by heart (45.9%). In addition, 39.2% of the curators use search engines like Google or Bing when looking for links for their rankings, which means that those links are filtered both by algorithmic search engines and by the curators. This may result in a better quality of websites, because the curators choose the best links from Google, Bing or other search engines, and these links are the best according to quantitative factors such as PageRank, the frequency of keywords (queries) in the content and their position in the document, the number of clicks on the link, the size of the website, etc. In addition, the curators add the human factor to the ranking: they evaluate websites from a qualitative point of view, starting with the design of the website and the resulting user experience, and ending with the completeness and correctness of the content and the credibility of the source - factors that are not checked by algorithmic methods.

For 80% of the curators it takes less than one day to establish the ranking and come up with the best link in the list, and 31.1% of the respondents know beforehand which link will be the best (they need less than 1 minute to decide which link will be first in the ranking).

As for revisions, the sample was split into three parts: 30% of the curators never revise their rankings, 40% update them every month, and 29% update them every week. The visitors, on the other hand, assume that the lists are updated every week (40% of the respondents) or every month (35%). Analysis of this variable across topics revealed two interesting observations. First, 73% of the visitors who analyzed the Shopping page stated that, in their opinion, the rankings on the page are updated every day, whereas the curator of this page stated that he updates the ranking once per month. The same happened with the Travelling page: 60% of the visitors thought that the page is updated at least once per week, while the curator stated that he never updates it.

Finally, 54% of the curators do not rank the links in their lists (so the order of the links does not depend on the quality of the websites they lead to). Instead, they order the links based on the date (time) they were added, alphabetical order, or popularity (number of clicks by the visitors). On the other hand, 61% of the visitors believe that the links are ranked according to the quality of the websites they lead to.

In the sample of pages shown to the visitors, only 2 out of 6 have ranked lists of links: the Travelling and Shopping pages. However, the visitors stated that the links are ranked not only on the Travelling and Shopping pages but on the others as well. In fact, the only page the visitors saw as unranked was the Technology page (57% of the respondents stated that its lists are not ranked). Thus, the majority of ZEEF visitors see almost any list as ranked based on the quality of the website, company or product the links lead to. The curators, on the other hand, tend to create the lists without comparing the links with each other.

4.4.2 Commission and fraud

The question about the risk of fraud among the curators was stated as "Would you (theoretically) put the link on the 1st (2nd, 3rd) place for money secretly received from a third party?" 31% of the curators answered this question positively and were then asked for the minimum sum of money they would accept for changing their ranking - a multiple-choice question with the options 1, 10, 100 and 1000 EUR. Only two respondents chose the minimum amount of 1 EUR, and only two the maximum of 1000 EUR, while both the mode and the median for this variable are 100 EUR. The average sum is 140.5 EUR.

The question about the possibility of fraud has a corresponding question in the survey for visitors, who were asked whether they would change their attitude towards ZEEF if they knew that the curator had been bribed. 54% of the respondents would be disappointed that the ranking on the page is biased, but would still use it for their search; 17% of the visitors would not pay attention to it, and 29% would never use the page of a bribed curator again. Thus, ZEEF risks permanently losing about 9% of its users: among all curators, 31% may be bribed, and because of them 29% of the visitors would never use the page again. Multiplying these figures gives the probability of losing a visitor because of curator bribery: 31% * 29% = 9%.
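The 9% figure can be reproduced with a minimal sketch, assuming (as the calculation above implicitly does) that the two events are independent; the percentages come from the two surveys.

```python
# Minimal sketch of the estimate above, assuming independence of the two
# events: a curator being bribable, and a visitor permanently abandoning
# a page once bribery is discovered.
p_curator_bribable = 0.31  # share of curators who would accept a bribe
p_visitor_leaves = 0.29    # share of visitors who would never return

p_lose_visitor = p_curator_bribable * p_visitor_leaves
print(round(p_lose_visitor, 2))  # → 0.09, i.e. roughly 9%
```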

Besides secret, "illegal" payments, we asked whether the curators would like to be paid officially by ZEEF for maintaining their pages and attracting visitors. 53.8% of the respondents answered "yes". Among those who answered positively, 70.7% would agree to have a special sign on their page showing that they are paid by ZEEF. The most popular proposed bases for payment were the number of clicks, the number of views and the click-through rate (CTR), i.e. the clicks/views ratio. The same options were chosen by the visitors; however, the visitors also paid attention to the number of purchases made via the links provided by the curators. Interestingly, the visitors' perception is much closer to reality than the curators', because in practice paid publishers on ZEEF receive commission according to the purchases made by visitors through the affiliate links.

From the visitors’ point of view, official commission encourages curators to make better rankings (57.5% of the respondents chose this answer), and only 5% think that it negatively affects the quality of the ranking.

Finally, the visitors were asked whether information about how the curator is paid would influence their behavior (e.g. whether they would click on more links or share the page on social networks). The majority (70%) of the respondents stated that this information would not influence their behavior: they would click only the most interesting links and would not share the page if they did not like it. However, eight respondents stated that they would try to help the curator earn money by clicking additional links, "liking" or sharing the page, but only if they had a positive personal attitude towards the curator and/or really loved the content on the page.

4.4.3 Ranking factors

The most important outcome of this survey is the list of factors the curators take into account while making the ranking lists, and the factors the visitors think have been taken into account. The respondents of both surveys were asked to evaluate each factor on a Likert scale from 1 (not important at all) to 5 (very important). The list of the ranked factors is presented in Table 6 in Appendix 2.

The factors presented in Table 6 (see Appendix 2) are general for all topics. The table shows clear differences between the visitors' and the curators' perceptions; the last column represents the difference between the rankings of the curators' and the visitors' factors. The average absolute difference between the rankings of the factors is 4.63, with a standard deviation of 3.89. However, there are 4 factors where the difference in rankings is more than the sum of the average and the standard deviation (4.63 + 3.89 = 8.52). These factors show the biggest discrepancy between the two parties' opinions. For example, for the visitors it is very important how much advertising a website has, but for the curators this factor is the second least important. The curators, on the other hand, pay more attention to the quality of the products, their uniqueness and the personal experience/knowledge that contribute to the ranking of the website, whereas for the visitors these factors are the least important.
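The discrepancy analysis above can be sketched as follows. The factor names and ranks here are hypothetical placeholders (the real values are in Table 6, Appendix 2); the cutoff of mean plus one standard deviation mirrors the 4.63 + 3.89 threshold used in the text.

```python
# Sketch of flagging the factors with the largest curator/visitor rank
# discrepancy. The ranks below are hypothetical, not the Table 6 data.
curator_ranks = {"advertising": 20, "product quality": 3, "uniqueness": 4,
                 "credibility": 2, "design": 6, "up-to-dateness": 5}
visitor_ranks = {"advertising": 5, "product quality": 15, "uniqueness": 16,
                 "credibility": 3, "design": 7, "up-to-dateness": 4}

diffs = {f: abs(curator_ranks[f] - visitor_ranks[f]) for f in curator_ranks}
mean = sum(diffs.values()) / len(diffs)
sd = (sum((d - mean) ** 2 for d in diffs.values()) / len(diffs)) ** 0.5

# A factor is flagged when its rank difference exceeds mean + one standard
# deviation, the cutoff analogous to 4.63 + 3.89 in the text.
flagged = [f for f, d in diffs.items() if d > mean + sd]
print(flagged)  # → ['advertising']
```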

The ranking of the curators' factors does not differ considerably among the topics of the pages. The average correlation coefficient is 0.86, and the correlation between the business, technology and travel pages and the general data ranges from 0.96 to 0.97. There are, however, some peculiarities in the Education and Food pages, which can be explained by the small number of respondents with pages on these topics (4 respondents each). In particular, for Food pages recommendations from third parties and the performance of the website are more important than in general (the grade is 4.25 - third place in the ranking, whereas across all topics these factors are only 11th and 12th respectively). For Education pages, ease of understanding (usability), quality of the products and their uniqueness are vital (the mean and median for these factors are 5 out of 5), while in the ranking for all topics these factors are 10th, 8th and 9th respectively.

The same analysis for the visitors' estimations revealed that the correlation coefficient among the topics is 0.8, which can be considered high. However, there is one exception: for visitors of the Education page, such factors as the amount of advertisement on a website and up-to-dateness are the most important (1st and 2nd respectively, while in the general ranking they are 5th and 4th).

The standard deviation of the mean rankings of each factor among the curators is almost twice as high as among the visitors. This shows that the visitors, in general, are more consistent with each other, while the curators are more critical in evaluating each factor. Interestingly, for both the curators and the visitors, the standard deviation is lower for the factors that are ranked higher; in other words, the respondents have a higher level of agreement on which factors are the most important.

4.4.4 Visitors’ general perception

In general, the average level of agreement of the visitors is 7.14 out of 10 (minimum 3, maximum 10, median 7.5, mode 8, standard deviation 2.04). The highest level of agreement was with the Education page (7.82), the lowest with the Shopping page (6.27). These results correlate with the level of trust in these pages: the Education page has the highest level of trust (90% of the visitors stated that they trust the page), whereas the Shopping page has the lowest percentage (63%). On average, 75% of all the respondents stated that they trust the curators and their choice of links. The levels of trust and agreement do not correlate with such variables as the curator's tendency to get bribed or to receive commission from ZEEF, the reasons for creating the page and choosing its particular topic, the time spent on making the page, or the frequency of updates: the Spearman correlation coefficients for all these variables are less than 0.2. (The Spearman correlation coefficient was used here because some variables were turned into ranked data: each variable was given a particular rank depending on the answer chosen by the respondent. For example, for the question "How often do you update/revise your ranking?" the answers were coded as follows: 1 = "Once per day", 2 = "Once per week", 3 = "Once per month", 4 = "Never".) Therefore, hypotheses H4 and H5 are refuted. However, because only 6 pages were analyzed by the visitors, this result may be biased, as the sample is too small to be representative.
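The coding-plus-correlation step can be sketched as follows. The answer codes match the scheme above, while the sample data, variable names and the plain-Python implementation are illustrative assumptions (in practice a library such as scipy.stats.spearmanr would typically be used).

```python
# Sketch of Spearman correlation on ordinally coded survey answers.
# Update frequency is coded 1 = "Once per day" ... 4 = "Never"; the data
# below are hypothetical, not the thesis sample.
def rank(values):
    # Assign average ranks, handling ties.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the ranks.
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

update_freq = [1, 2, 2, 3, 4, 4]  # coded update frequency
agreement = [9, 8, 7, 6, 5, 4]    # agreement on a 1-10 scale
print(round(spearman(update_freq, agreement), 2))  # → -0.97
```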

The level of trust relates to whether the visitor sees the curator as an expert. The Phi coefficient between these two variables is 0.65, the largest correlation coefficient among all the variables. The fact that the correlation is moderate (more than 0.5, but less than 0.8) means that even though the visitors see the curators as experts, they do not necessarily trust them. The most popular reasons why the visitors do not trust the curators, according to the survey, were:

- "I don't know this man personally / I don't know who this man is" (22% of the respondents).
- "His appearance does not have credibility / non-professional photo" (19% of the respondents).
- "Too many lists, like he himself doesn't know which ones are the best" (13% of the respondents).
- "He doesn't have any reliable/scientific sources" (9% of the respondents).
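The Phi coefficient used above is simply the Pearson correlation for two binary variables, computable from a 2x2 contingency table. The counts below are hypothetical, not the thesis data:

```python
# Sketch of the Phi coefficient for two binary variables (trusts curator /
# sees curator as expert). The 2x2 counts are hypothetical.
def phi(a, b, c, d):
    # 2x2 contingency table:
    #            expert=yes  expert=no
    # trust=yes      a           b
    # trust=no       c           d
    num = a * d - b * c
    den = ((a + b) * (c + d) * (a + c) * (b + d)) ** 0.5
    return num / den

print(round(phi(40, 10, 8, 22), 2))  # → 0.53
```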

68.8% of the visitors had never used expert-based rankings and curated content before. With the help of the Phi coefficient, this variable was compared to the other variables, but the largest correlation coefficients were 0.29 (previous use of curated content vs. trust in the curators), 0.22 (previous use vs. seeing the curator as an expert) and -0.18 (previous use vs. seeing the list of links as ranked). These weak correlations mean that:

- Non-experienced visitors tend to trust the curators more (84% of non-experienced visitors trust the curators, compared to 56% of experienced visitors).
- Non-experienced visitors are more likely to believe that the curators are experts (67% against 44% for experienced visitors).
- Non-experienced visitors more often see the lists of links as ranked based on the quality of the websites they lead to (67% against 48% for experienced visitors).

Regarding the ranking factors, there is no significant difference in the importance of each factor between the two types of visitors. The average difference in the rankings of the factors is 2.31, and the largest difference concerned the factor "Size of the website": for non-experienced users this factor ranks 15th, whereas for experienced users it ranks 5th. The average rating of this factor is 3.66 for experienced users and 3.24 for non-experienced ones. The t-test for equal means (applied to the rankings of the factors rather than their means) resulted in a p-value of 0.5, which is much higher than any common significance level (be it 0.05 or 0.1). This means that the null hypothesis (that the difference in the rankings of the factors equals 0) cannot be rejected. Thus, the sixth hypothesis of the research (H6: experienced and inexperienced users take into account different ranking factors) is rejected.
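The unpaired t-test can be sketched as follows. The samples are hypothetical, and since the thesis does not state which variant was used, a Welch (unequal-variance) statistic is shown; converting t to a p-value would normally use the t distribution (e.g. via scipy.stats.ttest_ind).

```python
# Sketch of an unpaired (Welch) t-test comparing two groups of visitors.
# The sample data are hypothetical, not the thesis measurements.
def welch_t(x, y):
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    # Sample variances (n - 1 in the denominator).
    vx = sum((a - mx) ** 2 for a in x) / (nx - 1)
    vy = sum((b - my) ** 2 for b in y) / (ny - 1)
    return (mx - my) / (vx / nx + vy / ny) ** 0.5

experienced = [3.6, 3.8, 3.5, 3.7, 3.9]
inexperienced = [3.2, 3.1, 3.4, 3.3, 3.0]
print(round(welch_t(experienced, inexperienced), 2))  # → 5.0
```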


To check the perception of the pages by the two types of visitors, the same t-test was used to test the null hypothesis that, for each variable connected to the perception of the page, the difference in means between the two samples equals 0. The results of the tests are shown in Table 7.

Table 7. The results of the unpaired t-test for means of different variables

| Variable | p-value |
|---|---|
| Level of trust | 0.009 |
| Curator is expert | 0.029 |
| Links are ranked | 0.058 |

The p-values for all three tests are below the significance level (taken as 0.06 for these tests), which means that the null hypotheses are rejected. Therefore, the seventh hypothesis of the research (H7: there is a significant difference between the perception of the page by experienced and inexperienced users) is confirmed.

It is worth mentioning that 70% of all the curators consider themselves experts in the topic of their page, while only 60% of the visitors believe that the curators have a considerable background. However, visitors of ZEEF do not have enough information to verify the curator's expertise: they only see the curator's photo and a couple of sentences about the curator's background. Thus, if the photo of the curator is of low quality or unprofessional, the visitors may form a negative image regardless of the curator's real expertise and experience. On the other hand, even a curator without much knowledge of the topic may write a single sentence in the description field that convinces the visitor (e.g. "I have been living in this country all my life", or "I am experienced in teaching").

Finally, the visitors were asked whether they would use such pages with curated content and expert-based rankings in the future. This question used a Likert scale ranging from 1 to 10; the average grade was 5.8, the median 6, the mode 5, and the standard deviation 2.34. Taking the theoretical intention to use similar pages in the future as the dependent variable, and the levels of agreement and trust as independent variables, a regression model was built (see Picture 2). The 2-tailed p-values used for testing the null hypothesis that each coefficient (parameter) equals 0 were below 0.05 for both variables; therefore, the null hypothesis can be rejected, and we can state that these variables predict the intention to use curated pages in the future. The adjusted R-squared of this regression is 0.28, which means that the levels of agreement and trust explain only 28% of the variation in intended future use of curated pages; in other words, the strength of the association is moderate. However, the p-value for the whole regression is close to zero, which is less than the significance level of 0.05; therefore, the regression is statistically significant.


Picture 2. Regression analysis of level of agreement, level of trust and future use (STATA)
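The regression behind Picture 2 can be sketched as follows. All data here are hypothetical; the thesis reports an adjusted R-squared of 0.28 on its real sample, whereas this toy sample fits more tightly.

```python
# Sketch of the OLS regression in Picture 2: intention of future use
# regressed on level of agreement and trust. All data are hypothetical.
import numpy as np

agreement = np.array([3, 5, 6, 7, 8, 9, 10, 4, 7, 8])  # 1-10 scale
trust = np.array([0, 1, 1, 1, 1, 1, 1, 0, 0, 1])       # 1 = trusts curator
future = np.array([2, 5, 6, 6, 8, 8, 9, 3, 5, 9])      # intention, 1-10

# Design matrix with an intercept column.
X = np.column_stack([np.ones(len(agreement)), agreement, trust])
beta, *_ = np.linalg.lstsq(X, future, rcond=None)

pred = X @ beta
ss_res = float(np.sum((future - pred) ** 2))
ss_tot = float(np.sum((future - future.mean()) ** 2))
r2 = 1 - ss_res / ss_tot
n, k = len(future), 2  # observations, predictors (excluding intercept)
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
print(adj_r2 < r2 < 1)  # adjusted R^2 penalizes the extra predictors
```

Significance testing (the 2-tailed p-values per coefficient and the overall F-test mentioned above) would normally come from a statistics package such as STATA, as in the thesis, or statsmodels in Python.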

In order to check the first hypothesis ("there is a positive correlation between the level of disagreement of the visitor and the level of discrepancy between the curators' and visitors' ranking factors"), the correlation coefficient between the visitors' level of disagreement and the difference between both sides' perceptions of the importance of the ranking factors was calculated. First, the absolute average difference was calculated between the rankings of all the factors by each visitor and by the curator whose page was shown to that visitor. Then the correlation coefficient was computed for the two samples (differences in rankings, and level of agreement). For the general sample the coefficient is 0.035, which means that there is no significant correlation between these two variables. The p-value for a regression model built on these two variables is 0.75, which means that the regression is not significant. Thus, the first hypothesis, of a significant correlation between these two variables, is rejected.

However, the same analysis per topic showed the following results (see Table 8). For the Food page, the correlation coefficient is also very close to zero; for the Business and Shopping pages the correlation is negative, meaning that the larger the difference between the rankings of the factors, the lower the level of agreement. For the Education, Technology and Travelling pages, however, this correlation is positive, and for the Education page it is quite strong (coefficient 0.56). In other words, for these pages the level of agreement is higher when there is more discrepancy between the factor rankings of the curators and the visitors, which contradicts the logic.

Table 8. Correlation coefficients between the level of agreement and the difference in factors for each topic

| Topic | Correlation coefficient |
|---|---|
| Business | -0.24151 |
| Education | 0.561468 |
| Food | 0.028381 |
| Shopping | -0.36125 |
| Technology | 0.251988 |
| Travel | 0.144784 |
| TOTAL | 0.035026 |


The regression model for the level of agreement and the difference between the rankings of the factors for the Education page has a p-value of 0.09, so the model is not significant at the 5% significance level, though it can be accepted at the 10% level. The adjusted coefficient of determination is 0.3, which means that only 30% of the variation in the level of agreement is explained by the difference between the rankings of the ranking factors.

Taking into account the small number of respondents per page (only 10 people chose to evaluate the Education page, and for the other pages this number is below 30) and the many other factors that may influence the level of agreement (e.g. the pictures in the headers of each block, the type of content behind each link, the background of the visitors, etc.), it was decided to treat such a high correlation as a coincidence, and the regression as not significant.

4.4.5 Conclusions

All in all, the process by which curators create pages on ZEEF corresponds with the main idea and concept of ZEEF.com: the analysis of the survey results did not reveal anything unusual. However, there are some differences and correlations between the points of view of the visitors and the curators of ZEEF pages. In particular, the curators generally update their pages less often than the visitors think they do. Moreover, the majority of the curators do not rank their lists, while the majority of the visitors think the lists are ranked. It is therefore important to make both parties perceive the lists in a similar way; for example, ZEEF may teach the curators that the lists should be ranked based on the quality of the websites they lead to.

As for the visitors' level of disagreement, it is not related to how prone the curators are to being bribed or how much effort they put into their rankings (there is no significant correlation between the visitors' level of disagreement and variables such as the time needed to create the page or the reasons for making it). Thus, the 4th and 5th hypotheses ("rankings of curators who are prone to being bribed, or who do not put enough effort into their rankings, have a higher level of disagreement [or a lower level of trust] among the visitors") are refuted. However, the second and third hypotheses (H2: "The visitor who disagrees with the ranking will not use it for future search" and H3: "The visitor who does not trust the curator will not use his page for future search") were confirmed, as the regression model with the intention of future use as the dependent variable, and the level of agreement with the ranking and trust in the curator as independent variables, is statistically significant. Finally, the first hypothesis is also rejected, so the level of agreement with the rankings does not depend on the factors taken into account by the curator and the visitors. Instead, this variable may be influenced by other factors, such as the background and knowledge of the visitors, the topic of the page, the type of websites in the lists, and even the names of the links in the blocks. These factors were not covered by this research, so they provide ample opportunities for future work.

5. Interface experiment

5.1 Design

In addition to the main questions, the second survey for the visitors contained an experimental part: an A/B test of the design (interface) of ZEEF pages. The respondents were asked to compare two screenshots of pages with minor differences in interface, such as the presence or absence of the curator's personal information, pictures in the headers of each list of links, the paid publisher sign, etc. Based on these screenshots, the visitors were asked to choose which page they would rather use. An example of such a question is presented in Picture 3.

A third option, "No difference", could be chosen by those who did not see a difference between the two screenshots. This test helped to understand whether particular design features of content curation pages influence how visitors perceive the quality of these pages. If the respondents choose the "No difference" option, the interface aspect being tested does not influence the perception of the quality of the page or its possible future use.

Picture 3. An example of a question about the interface

5.2 Results

The majority of the respondents would rather use the page with the curator's personal info (62.5%), with pictures in the headers of the lists (92.5%), and recently updated (76.3%). They see no difference between pages with or without the "paid publisher" sign (60% chose "No difference"), and would rather use pages with popularity ranking, although this preference is not very strong (43% chose popularity ranking, 36% the curator's ranking, and 21% saw no difference). Finally, the statistics of the page (number of views, clicks, likes, shares, etc.) also do not play much of a role in the perception: 46% saw no difference between two otherwise identical pages with different statistics, although 39% chose the page with large figures (as opposed to 15% who chose the page with small statistics).
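Whether such preference shares differ from pure chance can be checked with a simple test. The sketch below uses a normal approximation to the binomial, treating the 62.5% who preferred the page with curator info as a share of all 80 respondents and, for simplicity, ignoring the "No difference" option (a simplifying assumption; the thesis itself does not report such a test).

```python
# Sketch: normal-approximation z-test of whether a preference share
# differs from the 50% expected under indifference. The treatment of
# "No difference" answers is simplified away here.
import math

def z_for_share(share, n, p0=0.5):
    # z = (observed share - null share) / standard error under the null.
    se = math.sqrt(p0 * (1 - p0) / n)
    return (share - p0) / se

z = z_for_share(0.625, 80)
print(round(z, 2))  # → 2.24, beyond the 1.96 cutoff for 5% significance
```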

5.3 Discussion

The results of the design experiment support the results obtained in the main part of the second survey. In particular, the visitors confirmed the importance of the frequency of updates of the page, which means they are interested in up-to-date information. This is consistent with the fact that the ranking factor “Up-to-datedness of the information on a web site” is the fourth most important one.

In addition, the curator’s personal information and contacts play an important role in shaping the visitor’s perception of the quality of the page. The majority of the visitors would rather use personalized pages than pages without any information about the curator. Moreover, both the visitors and the curators stated that the credibility of the source and the contacts of the author are the third most important factor influencing the quality of any website. Finally, this aspect influences the visitor’s level of trust, because without information about the person who curates the content on a page it is hard to assess the validity and authorship of that content.

The sign “Paid publisher” on the page relates to the visitors’ attitude towards the official commission paid by ZEEF to the curators. In particular, 33% of the visitors stated that they do not care whether the curator receives the commission or not, and 40% of them believe that such a commission positively affects the quality of the pages in general. Still, the majority of the visitors see no difference between two pages with and without the “paid publisher” sign. This could be explained by the sign being quite unobtrusive, so that people simply did not notice it; however, each screenshot in the test had a header (comment) describing the difference between the variants, precisely in order to capture the visitors’ general attitude towards the idea behind a design feature. Therefore, money received by the curator as an official commission does not influence the visitor’s perception of the page.

Finally, the fact that the visitors see no difference between the types of ranking and the statistics of the page means that they tend not to rely on other people’s opinions when forming their judgment of the page. In other words, they do not care how many people use the page, share it, or follow its updates: they build their perception on the basis of their own opinion, not on the popularity of the page or the opinions of other users.

6. Overall discussion

6.1. General discussion and limitations

This research made it possible to check which factors influence the perception of the quality of a page with curated content (a human-based ranking of websites). In particular, it was shown that a visitor’s trust in the curator and his page, as well as agreement with the expert-based ranking of the websites, influence the visitor’s intention to use the page in the future.

Ranking factors play a crucial role in establishing the ranking. The results of our investigation showed that there is no correlation between the visitor’s level of agreement and the level of discrepancy between the ranking factors. This can be explained by the fact that the visitors do not know anything about the topic in which they are searching for information and cannot say whether there are websites with content of better quality; in order to evaluate the ranking, they need some background in the topic. However, taking into account the main purpose of human-based search engines - to help people find information - the assumption is that the visitors use the website to fill a gap in their background. Thus, they cannot say for sure whether they agree with the ranking or not, because they do not have enough expertise in the topic to judge.
