Foreword

About a year ago, I had two wishes for the final project concluding my studies in Artificial Intelligence: I wanted to do something with multi-agent systems and I wanted to do this abroad. The first wish made me ask Rineke Verbrugge to be my internal advisor. She brought me into contact with Carles Sierra, which provided the opportunity to carry out the project in Barcelona and so fulfil my second wish. After that, the subject of the project was easily picked. My research would be about trust and reputation in multi-agent systems, and in September 2005 I was ready to go.

Spanish people have a reputation for being good cooks, having a lot of parties and enjoying life. This reputation gave me a lot of confidence that I would have a good time in Spain. On the other hand, however, Spaniards are also known for not being too fast and efficient when something has to be arranged. Despite some small doubts about this mañana mañana culture, I really looked forward to going to Spain and starting the project. Full of trust I took a plane to Barcelona...

In the meantime I can say that my trust was not groundless. I did an interesting project, enjoyed working at the research institute, met a lot of nice people, and Barcelona was great. As will become clear in this thesis, an old opinion can be updated after new experiences or information. My experiences can only confirm a positive view of Spanish people and of working and living in Spain.

I want to thank the people that contributed to this thesis. First of all, my gratitude goes to Carles Sierra and Rineke Verbrugge. Carles, thanks for your hospitality and enthusiasm; Rineke, thanks for your conscientiousness and involvement; and both of you, thanks for the useful comments and the pleasant cooperation. I would like to thank Jordi Sabater for his help (and patience!) with the ART test-bed. Further, I owe thanks to the Artificial Intelligence Research Institute (IIIA) in Barcelona for providing me with the facilities to perform the project. Finally, I want to thank my parents, my boyfriend Joost and many other friends for listening to me and giving support.


Contents

Foreword
Contents
1 Introduction
   1.1 Motivation
   1.2 Research question
   1.3 Structure of the thesis
2 Computational models of trust and reputation
   2.1 What is a model of trust and reputation?
   2.2 The relation between trust and reputation
   2.3 Three examples of trust and reputation models
3 An information-based model for trust
   3.1 Information Theory
   3.2 A negotiation language
   3.3 Information-based negotiation
   3.4 The trust model
   3.5 Information theory compared with game theory
4 Reputation in the information-based model
   4.1 Trust and reputation in the model
   4.2 Updating trust from reputation
   4.3 Combining trust and reputation
   4.4 Conclusions
5 Social information in the model
   5.1 Social information
   5.2 Social constraints in an agent's knowledge base
   5.3 The presentation of new social information
   5.4 Updating from social information
6 The ART test-bed
   6.1 The choice of a test-bed
   6.2 Overview of the ART test-bed
   6.3 Rules of the competition game
   6.4 Use of the ART test-bed in this project
7 The test-bed agents
   7.1 Building a test-bed agent
   7.2 Application of the information-based model to an agent
   7.3 Behaviour of the information-based agent
   7.4 Variations on the information-based agent
8 The experiments
   8.1 Hypotheses
   8.2 Methods
   8.3 Results
9 Discussion
   9.1 Results of the experiments
   9.2 The design of the experiments
   9.3 The information-based agent
   9.4 Testing with the ART test-bed
   9.5 The information-based model of trust
   9.6 The use of information theory
10 Conclusions and further research
   10.1 Conclusions
   10.2 Further research
Appendix
Summary
Bibliography


1 Introduction

This chapter will introduce the subject of this project. The first section will motivate the research, in the second section the research question will be stated and explained, and finally an overview of the way the thesis is structured will be given.

1.1 Motivation

Negotiation is a process in which a group of negotiation partners tries to reach a mutually acceptable agreement on some matter by communication. It constantly takes place: people negotiate about big deals of millions of dollars, but also about smaller matters like what to eat for dinner. Besides humans, software agents and robots also negotiate.

Negotiation plays an important role in multi-agent systems, in which it might even be the most fundamental and powerful form of interaction between different agents. Agents in a multi-agent system are autonomous, so they have no direct control over other agents and must negotiate in order to control their interdependencies.

In negotiations, one tries to obtain a profitable outcome. But what is a profitable outcome: paying little money for many goods of high quality? Although that seems to be a good deal, it might not always be the most profitable outcome. If negotiation partners will meet again in the future, it could be more rational to focus on the relationship with them, to make them trust you and to build up a good reputation.

If we take the future into account, another question arises: how will the opponent behave in the future? In the context of negotiations, agents have to make decisions about the acceptability of a deal. One of the determining factors in these considerations is the agent's opinion on the probability that the bargains made in a deal will really be carried out after accepting the deal. Will the other agent deliver products of good quality? Will they be delivered on time, too late or maybe not at all? Beforehand, an agent cannot know for sure whether the negotiation partner will fulfil his promises or not, so the agent has to deal with uncertain information. The modelling of trust and reputation could help to make good predictions about the future.

This thesis will discuss the computational modelling of trust and reputation, a research topic that has lately received a lot of attention in the field of distributed artificial intelligence. The thesis will especially focus on a new way to deal with these topics, based on the information-based model for trust introduced by Sierra and Debenham (2005). In this thesis, their information-based approach will be discussed and tested. Further, their model of trust will be extended with algorithms for dealing with reputation information and social information.

1.2 Research question

The main question of this project will be the following:

Is the information-based approach a good way to deal with trust and reputation in multi-agent systems?


In order to answer this question, the project is divided into two main parts. The first part will be a theoretical discussion of Sierra and Debenham's information-based model for trust, in which extra attention will be paid to the modelling of reputation and the role of social information. The second part will be more practical: the model will be tested by implementing an information-based agent and performing experiments with it.

Concretely, the graduate project will consist of the following two tasks:

1. Investigate how Sierra and Debenham's information-based model for trust could be extended with a more sophisticated way to deal with the influence of reputation and social information.

2. Implement a negotiation agent making use of Sierra and Debenham's model of trust and test it with the Agent Reputation and Trust (ART) test-bed.

By executing these tasks, the model is examined in both a theoretical and a practical way. The results of the two parts together should help in giving a well-founded answer to the research question of the project.

1.3 Structure of the thesis

The thesis starts with a theoretical discussion of Sierra and Debenham's information-based model of trust. First, a general overview of the research in computational trust and reputation models is given (chapter 2), then the information-based model itself will be introduced (chapter 3). In the following two chapters, possible ways to extend the model with more sophisticated ways to deal with reputation (chapter 4) and social information (chapter 5) will be proposed.

The description of the practical part starts with the introduction of the ART test-bed (chapter 6), the test-bed that will be used for the experiments. Then the translation of the information-based model into a test-bed agent will be discussed (chapter 7). The next chapter (chapter 8) will describe the experiments with the test-bed, followed by a discussion (chapter 9), the conclusions and some suggestions for further research (chapter 10).


2 Computational models of trust and reputation

In this chapter an introduction will be given to computational models of trust and reputation. Several possible design choices will be discussed. Extra attention will be paid to the meaning of trust and reputation and the relation between these two concepts. Finally, three examples of existing models will be given.

2.1 What is a model of trust and reputation?

In computer science, and especially in the area of distributed artificial intelligence, many models of trust and reputation have been developed in recent years. This relatively young field of research is still rapidly growing and gaining popularity. What exactly is a computational model of trust and reputation? And why are these models getting so much attention lately?

In multi-agent systems information is distributed among different parts of the system, and the different entities of the system, the agents, interact with each other. From the point of view of a single agent, it has to interact with other agents in a constantly changing environment. The agent has to make all kinds of decisions, for example about the agents with which it will interact and the way to treat them. The agent does not know how other agents will behave in the future, so it has to make these choices based on uncertain information. The aim of trust and reputation models in these kinds of systems is to support decision making in these kinds of uncertain situations. A computational model of trust or reputation derives trust or reputation values from the agent's past interactions with its environment and possibly extra information. These trust or reputation values influence the agent's decision making process, in order to facilitate dealing with uncertain information. For example, if two agents offer the product the agent needs for the same price, the agent could choose to commit itself to the one with the highest reputation for delivering products of good quality.
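As a minimal illustration of such a reputation-guided choice (the seller names and scores below are invented for the example and are not part of any particular model), the tie-breaking rule from the last sentence could look as follows in Python:

    # Two hypothetical sellers offer the needed product for the same price;
    # the agent commits to the one with the highest reputation score.
    offers = {
        "seller_a": {"price": 10.0, "reputation": 0.62},
        "seller_b": {"price": 10.0, "reputation": 0.91},
    }

    best_price = min(offer["price"] for offer in offers.values())
    cheapest = {name: o for name, o in offers.items() if o["price"] == best_price}
    chosen = max(cheapest, key=lambda name: cheapest[name]["reputation"])
    print(chosen)  # -> seller_b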

Applications of computational trust and reputation systems are mainly found in electronic markets. In comparison to face-to-face negotiation, trading partners in electronic markets often have less information about each other's reliability or the product quality during the transaction. A trust or reputation system gives different parties the opportunity to rate each other, can derive a trust or reputation score from the aggregated ratings and can provide this score to possible future trading partners. The trust or reputation score can assist agents in selecting negotiation partners, but it also promotes good behaviour (Jøsang et al. 2005). This is how a trust or reputation system could increase the efficiency and quality of a market as a whole. Several research reports have found that seller reputation has a significant influence on on-line auction prices, especially for high-valued items (Mui et al. 2002). Besides electronic markets, the notions of trust and reputation play important roles in distributed systems in general.

A trust or reputation model has to be based on a theory or conceptual model of reference. Many present models of trust and reputation make use of game-theoretical concepts (Sabater and Sierra 2005). The trust and reputation values in these models are the result of utility functions and numerical aggregation of past interactions. Some other approaches use a cognitive model of reference, in which trust and reputation are made up of underlying beliefs. The trust and reputation values in these models are a function of the degree of these beliefs.

Trust and reputation of an individual can either be seen as a global property or as a subjective property¹. In the first case, the trust or reputation of an individual is calculated from the opinions of the individuals that interacted with it. The value is publicly available and the trust or reputation of an individual is a property shared by all the other agents in the community. Trust or reputation is a subjective property when each agent assigns its own trust or reputation value to each member of the community, based on its own experiences.

¹ Sabater and Sierra call this 'different visibility types' (Sabater and Sierra 2005).

Trust and reputation values can be based on different kinds of information sources. Sabater and Sierra (2005) distinguished four different sources: direct experiences, witness information, sociological information and prejudice. Information from direct experiences is the most relevant and reliable information for a trust or reputation model. Experiences based on direct interactions are used by almost all trust and reputation models. A less common form of direct experience is experience based on the observed interaction of other members of the community. Witness information is information obtained from other members of the community. The information can be based on their direct experiences or on information they gathered from other sources. Witness information is difficult for models to deal with, because information-providing agents might hide information, change information or even tell complete lies.

Sociological information is information provided by the society and might consist of social relations between agents or the roles that agents play in the society. The power of a particular individual, for example, might influence its reputation or the trust we have in that individual. Currently, only a few models take this kind of information into account. The last information source is prejudice, which is not very common in present trust and reputation models either. Prejudice is the mechanism of assigning properties to an individual based on signs that identify the individual as a member of a given group.

Besides decisions about the use of different sources of information, more choices about the presentation of information in the model have to be made. Is the exchanged information Boolean or of a more sophisticated type? Does the model allow agents to hide information or to provide false information? Are trust and reputation values accompanied by a reliability measure, indicating the probability of the information being true? Section 2.3 will provide some examples of computational trust and reputation models to make the ideas more concrete, but first the difference between trust and reputation will be discussed.

2.2 The relation between trust and reputation

The words trust and reputation are widely used by many people in many situations, but it is difficult to define the exact meanings of these concepts. A look in a dictionary shows that both terms have more than just one meaning. Also in the context of distributed artificial intelligence, several different meanings of reputation (Mui et al., 2002) and trust (McKnight et al., 1996) have been discerned. The complexity of the terms makes it difficult to describe the relation between trust and reputation, but we at least know that trust and reputation are two different things. In some cases one can trust someone with a bad reputation, for example in a very close relationship. Sometimes it is better to distrust someone with a good reputation, for example because that person once cheated on you.

In the context of computational models, the meanings of trust and reputation are determined by the way they are derived from a set of values. Therefore, this section will not provide the definitions of trust and reputation, but it will remark on some of the important elements. To start with reputation: according to Jøsang et al. (2005), reputation is what is generally said or believed about a person's or thing's character or standing. The word 'generally' is important here; reputation is usually not based on the opinion of one individual. The examination of some important models of trust and/or reputation (Sabater and Sierra 2005) indeed shows that all models using the word reputation at least make use of witness information, information provided by other agents. So reputation values are mostly determined by the opinions of a whole set of agents.

Jøsang et al. (2005) define trust as the extent to which one party is willing to depend on something or somebody in a given situation with a feeling of relative security, even though negative consequences are possible. In contrast to reputation, trust is something personal: the amount of trust one has in a given agent is a specific property of each individual. Sabater and Sierra's overview (2005) shows that models called models of trust rely on several information sources, but all of them at least use direct experiences to determine levels of trust. So trust values are in general based on the information gathered by one individual.

Conte and Paolucci (2002) give an extensive analysis of the relation between trust and reputation and some other related concepts. In their cognitive approach they make a distinction between image and reputation, where image is the direct evaluation of others and reputation is indirectly acquired. Image is based on an agent's direct experiences with other agents and reputation is based on information received from other agents. This information tells about other agents' direct experiences and reputation is the output of image spreading. According to Conte and Paolucci, image and reputation both contribute to trust.

Most existing models of trust and reputation do not differentiate between trust and reputation and only use one of the two concepts, and if they do differentiate, the relation between trust and reputation is often not explicit (Sabater and Sierra 2005; Mui et al. 2002). The model proposed by Yu and Singh (2001) does distinguish between trust and reputation. Direct information is used to determine the trust in the target agent and witness information to determine the reputation of the target agent. However, the two information sources are not combined; the model only appeals to witness information when direct information is not available. The ReGreT system (Sabater 2002) is one of the few models that does combine trust and reputation. The reputation information (purely based on witness information) in this model is used to improve the calculation of trust values, which are also determined by other types of information. Mui et al. (2002) also proposed a model of trust and reputation in which both concepts are related to each other. According to them, an increase in an agent α's reputation in its embedded social network A should also increase the trust from the other agents in α; a decrease should lead to the reverse effect.


The few approaches that distinguish between trust and reputation and combine the two concepts (Conte and Paolucci 2002; Sabater 2002; Mui et al. 2002) seem to agree on a relation between trust and reputation in which reputation is (one of) the factor(s) that determine(s) trust. The strength of the influence of reputation depends on the specific context. This point of view on the relation between trust and reputation will also be taken in this thesis. In chapter 4, Sierra and Debenham's trust model will be evaluated according to this criterion.

2.3 Three examples of trust and reputation models

eBay is one of the world's largest online marketplaces, with a community of over 50 million registered users (Jøsang et al. 2005). It allows sellers to list items for sale, and buyers to bid for those items. eBay uses a reputation mechanism that is based on the ratings users give after the completion of a transaction. The user can choose between the three values positive (1), negative (-1) and neutral (0). The reputation value is calculated as the sum of the ratings over the past six months, the past month and the past seven days. Reputation is thus considered a global property. Studies of eBay's reputation system report that buyers rate sellers 51.7% of the time and that the observed ratings are very positive: about 99% of them are positive (Jøsang et al. 2005). Although the system is quite primitive and can be misleading, the reputation system seems to have a strong positive impact on eBay as a marketplace.
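A rough sketch of this kind of time-windowed rating aggregation is given below. The ratings, dates and window lengths are invented for the example; they are not eBay's actual data or implementation.

    from datetime import datetime, timedelta

    # Hypothetical ratings for one seller: (date, value) with value in {+1, 0, -1}.
    now = datetime(2006, 6, 1)
    ratings = [
        (now - timedelta(days=3), 1),
        (now - timedelta(days=20), 1),
        (now - timedelta(days=100), -1),
        (now - timedelta(days=200), 1),
    ]

    def rating_sum(ratings, now, days):
        """Sum of the ratings received within the last `days` days."""
        cutoff = now - timedelta(days=days)
        return sum(value for when, value in ratings if when >= cutoff)

    # Reputation profile over the three windows mentioned in the text.
    profile = {
        "past_seven_days": rating_sum(ratings, now, 7),
        "past_month": rating_sum(ratings, now, 30),
        "past_six_months": rating_sum(ratings, now, 182),
    }
    print(profile)  # {'past_seven_days': 1, 'past_month': 2, 'past_six_months': 1}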

A second example is Castelfranchi and Falcone's (1998) cognitive model of trust. According to them, the decision of agent α to delegate a task to agent β is based on a specific set of beliefs and goals, and this mental state is what we call trust. To build a mental state of trust the agent needs the following basic beliefs: competence belief (agent β can do the task), dependence belief (it is necessary, or better, that β performs the task), disposition belief (β will actually do the task), willingness belief (β has decided and intends to do the right actions) and persistence belief (β is stable in its intentions of doing these actions). The first two beliefs compound 'core trust' and, together with the third belief, also 'reliance'. If agent α has all these beliefs, it trusts agent β on performing the task, and it could decide to delegate the task to that agent.

The last model of trust and reputation discussed here is proposed by Sabater (2002) and is called ReGreT. This system takes three different sources of information into account: direct experiences, information from third-party agents and social structures. The direct trust module in the system deals with direct experiences and how these experiences can contribute to the trust in other agents. The reputation module of the system is divided into three types of reputation: witness reputation (calculated from information from other agents), neighbourhood reputation (calculated from information about social relations between partners) and system reputation (calculated from roles and general properties). A third module of credibility measures the reliability of witnesses and the information they provide. All these modules can work together to calculate trust. Because of the modular design it is also possible to use only some of the parts.

The three examples above are all computational models of trust and reputation, and the big differences among them give an indication of the broadness of the research area. The usefulness of trust and reputation seems obvious and the literature around it is rapidly growing. Several articles providing an overview of the field conclude, however, that the research activity is not very coherent and needs to be more unified (Sabater and Sierra 2005; Jøsang et al. 2005; Mui et al. 2002; Fullam et al. 2004). In order to achieve that, test-beds and frameworks to evaluate and compare the models are needed.


3 An information-based model for trust

This chapter introduces the information-based model for trust that will be examined in this thesis. The language, the methods and the trust model of this approach will be discussed. In the last section a comparison between the information-based approach and a game-theoretical approach will be given.

3.1 Information Theory

Computational models of trust and reputation are always based on a certain theory or conceptual model. As mentioned in the preceding chapter, most present models of trust and reputation make use of game-theoretical concepts. The model of trust that will be introduced in this chapter and that will be central in this thesis has another frame of reference. The model proposed by Sierra and Debenham (2005) is the first trust and reputation model based on information theory. Before discussing Sierra and Debenham's model specifically, this section provides a short introduction to the most important concepts of information theory.

Flipping a coin, throwing a die and picking a blind card from a pile are all actions of which the outcome is uncertain beforehand. If the probability of one possible outcome is known, information theory provides a way to derive the information content² of that particular event. The information content h(x) of an outcome x is defined to be:

h(x) = \log_2 \frac{1}{P(x)}

² Information content is also called Shannon information content (MacKay 2003).

According to this definition, infrequent events give more information (have a bigger information content) than frequent events. If the probabilities of all possible events are known, another information-theoretic quantity can be calculated: the entropy of all possible outcomes. Entropy H is a measure of the uncertainty in a probability distribution of a discrete random variable X. The entropy of X, H(X), is the average information content of all possible events:

H(X) = \sum_i p(x_i) \log_2 \frac{1}{p(x_i)}, \quad \text{where } p(x_i) = P(X = x_i)

In the following example, the probability distribution (p_i) of each letter (a_i) being randomly selected from an English document is provided. The columns h(p_i) give the corresponding information contents.

i    a_i   p_i     h(p_i)      i    a_i   p_i     h(p_i)
1    a     .0575    4.1        15   o     .0689    3.9
2    b     .0128    6.3        16   p     .0192    5.7
3    c     .0263    5.2        17   q     .0008   10.3
4    d     .0285    5.1        18   r     .0508    4.3
5    e     .0913    3.5        19   s     .0567    4.1
6    f     .0173    5.9        20   t     .0706    3.8
7    g     .0133    6.2        21   u     .0334    4.9
8    h     .0313    5.0        22   v     .0069    7.2
9    i     .0599    4.1        23   w     .0119    6.4
10   j     .0006   10.7        24   x     .0073    7.1
11   k     .0084    6.9        25   y     .0164    5.9
12   l     .0335    4.9        26   z     .0007   10.4
13   m     .0235    5.4        27   -     .1928    2.4
14   n     .0596    4.1

(MacKay 2003, p. 32)

Letters that are not used very often, like 'x' and 'q', have a low probability of being selected and thus a high information content. An often used letter like 'e', in contrast, gives less information according to information theory. Averaging all the information contents in the example gives the following entropy:

H(X) = \sum_i p_i \log_2 \frac{1}{p_i} = 4.1

In a probability distribution with many low probabilities the average information content will be higher, and this explains why another name for the entropy of X is the uncertainty of X.
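The two quantities above are simple to compute. The following sketch (a minimal illustration in Python, using made-up small distributions rather than the full letter table) shows how information content and entropy behave:

    import math

    def information_content(p):
        """Shannon information content h(x) = log2(1/p) of an outcome with probability p."""
        return math.log2(1.0 / p)

    def entropy(distribution):
        """Entropy H(X) = sum_i p_i * log2(1/p_i); zero-probability outcomes are skipped."""
        return sum(p * math.log2(1.0 / p) for p in distribution if p > 0)

    # A rare letter such as 'q' (p = .0008) carries far more information
    # than a frequent one such as 'e' (p = .0913).
    print(information_content(0.0008))  # about 10.3
    print(information_content(0.0913))  # about 3.5

    # The flatter the distribution, the higher the entropy (uncertainty).
    print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
    print(entropy([0.7, 0.1, 0.1, 0.1]))      # about 1.36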

In the example, the probabilities of all possible outcomes of randomly selecting a letter are known. There are, however, many situations in which these data are not available. Without any information about the probabilities of the possible outcomes, the best option is to take the uniform probability distribution, in which the probabilities of all possible outcomes are equal (P(x_i) = 1/n). Another possibility is that only a part of the information about the possible outcomes is available. The exact probability distribution is unknown, but information about some constraints on this distribution is available. In these cases, the maximum entropy principle offers a rule for choosing a distribution that satisfies all constraints posed on the distribution. According to this rule one should select the distribution p that maximizes the entropy. This constructs the "maximally noncommittal" probability distribution (Sierra and Debenham, 2005).

3.2 A negotiation language

In Sierra and Debenham's model (Sierra and Debenham 2005), agent α can negotiate with agent β and together they aim to strike a deal δ. In the expression δ = (a, b), a represents agent α's commitments and b represents β's commitments in deal δ. A is the set of all possible commitments by α and B the set of all possible commitments by β. All agents have two languages: language C for communication and language L for internal representation. The language for communication consists of five illocutionary acts, which are actions that can succeed or fail. The illocution particle set ι = {Offer, Accept, Reject, Withdraw, Inform} has the following syntax and informal meaning.

Offer(α, β, δ): Agent α offers agent β a deal δ = (a, b) with action commitments a for α and b for β.

Accept(α, β, δ): Agent α accepts agent β's previously offered deal δ.

Reject(α, β, δ, [info]): Agent α rejects agent β's previously offered deal δ. Optionally, information explaining the reason for the rejection can be given.

Withdraw(α, β, [info]): Agent α breaks off the negotiation with agent β. Extra info justifying the withdrawal may be given.

Inform(α, β, info): Agent α informs agent β about info.
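Purely as an illustration of how such illocutionary acts could be represented inside an agent (the class and field names below are illustrative assumptions, not part of Sierra and Debenham's specification), a message could be modelled as a small tagged record:

    from dataclasses import dataclass
    from enum import Enum
    from typing import Any, Optional, Tuple

    class Illocution(Enum):
        OFFER = "Offer"
        ACCEPT = "Accept"
        REJECT = "Reject"
        WITHDRAW = "Withdraw"
        INFORM = "Inform"

    @dataclass
    class Utterance:
        """One communicative act from `sender` to `receiver`.

        `deal` holds the pair (a, b) of commitments for Offer/Accept/Reject;
        `info` carries the optional explanatory content allowed by the language."""
        act: Illocution
        sender: str
        receiver: str
        deal: Optional[Tuple[str, str]] = None
        info: Optional[Any] = None

    # Agent alpha offers beta a deal; beta rejects it and explains why.
    offer = Utterance(Illocution.OFFER, "alpha", "beta", deal=("pay 200", "deliver 10 books"))
    reject = Utterance(Illocution.REJECT, "beta", "alpha", deal=offer.deal, info="price too high")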

Sierra and Debenham use info to refer to: (1) the process used by an agent to solve a problem, or (2) an agent's data, including preferences. For this, they propose the following content language (info ∈ L) in Backus-Naur form:

info        ::= unit [and info]
unit        ::= K | B | soft | qual | cond
K           ::= K(WFF)
B           ::= B(WFF)
soft        ::= soft(f, {V})
qual        ::= V=D [> V=D]
cond        ::= If DNF Then qual
WFF         ::= any wff over subsets of variables {V}
DNF         ::= conjunction [or DNF]
conjunction ::= qual [and conjunction]
V           ::= ...
D           ::= a | a' | b ...
f           ::= any function from the domain of subsets of V to a set A, for instance a fuzzy set membership function f: A → [0, 1]

K and B refer to the agent's knowledge and beliefs. A WFF is a well-formed formula and DNF refers to the Disjunctive Normal Form. Soft and qual are used to express quantitative and qualitative preferences, respectively. A soft constraint associates each instantiation of its variables with a value from a partially ordered set. For example: "The probability I will choose a red book is 30% and the probability I will choose a blue book is 20%". A qualitative constraint expresses a preference relation between variable assignments. For example: "I prefer red books to blue books". The other expressions in the list make it possible to express sophisticated preferences. Some concrete examples of expressions are:

"I prefer slippers to boots when it is summer"

Inform (a, /3, [Season=summer then Shoeslipper> Shoe=boot)

(15)

"1 prefer more shoes to less shoes"

Inform (a, fi,soft(tanh, (Shoes)))

"I prefer black shoes to green shoes"

Inform (a, fi, f Ihing=shoe

then colourblack> colourgreen)

"I reject your offer since I cannot pay more than 200"

Reject (a, /1, Money =200, hard(Money < 200, (Money))) This section should give a basic idea of the language that is used in Sierra and

Debenham's model of trust. The language is especially rich in expressing preferences.

However, this thesis will not focus on the effect of information about preferences, so a deep understanding of the language will not be necessary to understand the thesis. For further details of the language is referred to Sierra and Debenham' s article (2005).

3.3 Information-based negotiation

With an agent's internal language L, many different worlds can be constructed. A possible world represents, for example, a specific deal for a specific price with a specific agent. To be able to make grounded decisions in a negotiation under conditions of uncertainty, the information-theoretic method defines a probability distribution over all these worlds. If an agent did not have any beliefs or knowledge, all worlds would have the same probability of being the actual world. Often, however, agents do have knowledge and beliefs, which put constraints on the probability distribution. The agent's knowledge restricts 'all worlds' to all possible worlds: the agent knows that some worlds are not possible. A possible world v_i, element of the set of all possible worlds V, is consistent with the agent's knowledge. Worlds inconsistent with the agent's knowledge are believed to be false and do not have to be considered any further. The notation for the set of all possible worlds consistent with an agent's knowledge is V|K = {v_i}. An agent's set of beliefs B determines its opinion on the probability of the possible worlds: according to its beliefs, some worlds are more probable to be the actual world than others. A random world, W|K = {p_i}, is a probability distribution over all possible worlds, where p_i expresses the degree of belief the agent attaches to each possible world v_i being the actual world.

From the probability distribution over all possible worlds, the probability of a certain sentence or expression in language L can be derived. For example, the probability P(executed δ | accepted δ) of whether a deal, once accepted, is going to be executed or not can be calculated. This derived sentence probability is always a probability with respect to a random world, a particular probability distribution over all possible worlds. A sentence α's probability is calculated by taking the sum of the probabilities of the possible worlds in which the sentence is true. For all sentences α that can be constructed in language L:

P_{W|K}(\alpha) = \sum_{n \,:\, \alpha \text{ is true in } v_n} p_n

An agent with a set of beliefs has attached sentence probabilities to all statements φ in its set of beliefs B. A random world is consistent with the agent's beliefs if, for all statements in the set of beliefs, the attached probabilities are the same as the derived sentence probabilities. Expressed in a formula, for all beliefs φ element of B:

B(\varphi) = P_{W|K}(\varphi)
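The following minimal sketch shows how a derived sentence probability follows from a random world; the three possible worlds and the numbers are invented purely for illustration:

    # Each possible world records which statements hold in it; the probabilities
    # over the worlds form a random world W|K (they must sum to 1).
    possible_worlds = [
        {"executed": True,  "on_time": True},
        {"executed": True,  "on_time": False},
        {"executed": False, "on_time": False},
    ]
    probabilities = [0.5, 0.3, 0.2]

    def sentence_probability(sentence, worlds, probs):
        """P_{W|K}(sentence): sum of the probabilities of the worlds where it holds."""
        return sum(p for world, p in zip(worlds, probs) if sentence(world))

    p_executed = sentence_probability(lambda w: w["executed"], possible_worlds, probabilities)
    print(p_executed)  # 0.8

    # This random world is consistent with a belief that attaches probability 0.8
    # to "executed", because the derived sentence probability equals the attached one.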

So the beliefs of the agent impose linear constraints on the probability distribution. To find the best probability distribution consistent with the knowledge and beliefs of the agent, maximum entropy inference states that the entropy of the probability distribution has to be maximized. The resulting probability distribution has maximum entropy while still being consistent with the knowledge and beliefs. This distribution is used for further processing when a decision has to be made.

When the agent obtains new beliefs, the probability distribution has to be updated. This happens according to the principle of minimum relative entropy, which searches for a probability distribution that satisfies the new constraints and has the least relative entropy with respect to the prior one. The relative entropy between probability distributions p and q is calculated as follows:

D(\vec{p} \,\|\, \vec{q}) = \sum_i p_i \log_2 \frac{p_i}{q_i}

The principle of maximum entropy is equivalent to the principle of minimum relative entropy with a uniform prior distribution.
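A small numerical sketch of the relative entropy computation (the two distributions are arbitrary examples):

    import math

    def relative_entropy(p, q):
        """D(p || q) = sum_i p_i * log2(p_i / q_i); assumes q_i > 0 wherever p_i > 0."""
        return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

    prior = [0.25, 0.25, 0.25, 0.25]   # uniform (maximum-entropy) prior
    posterior = [0.4, 0.3, 0.2, 0.1]   # some distribution satisfying new constraints

    print(relative_entropy(posterior, prior))  # about 0.15 bits

    # Minimum relative entropy inference would choose, among all distributions that
    # satisfy the new constraints, the one that minimises this quantity with respect
    # to the prior; with a uniform prior this coincides with maximum entropy inference.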

While an agent is interacting with other agents, it obtains new information. Sierra and Debenham (2005) mention the following types of information from which the probability distribution can be updated.

Updating from decay and experience. This type of updating takes place when the agent has derived information from the direct experiences it had with other agents. When such an update takes place, the evaporation of beliefs as time goes by is taken into account: negotiating people or agents forget about the behaviour of a past negotiation partner.

Updating from preferences. This updating is based on past utterances of a negotiation partner. If agent β prefers a deal with property Q1 to a deal with property Q2, it will be more likely to accept deals with property Q1 than deals with property Q2.

Updating from social information. Social relationships between agents, and social roles and positions held by agents, influence the probability of accepting a deal. Two ways to model the updating from social information are the modelling of power and the modelling of reputation.

3.4 The trust model

Once the probability distribution is constructed and up to date, it can be used to derive trust values, which can be used in the decision process. From an actual probability distribution, the trust of agent α in deal δ with agent β at the current time, or the trust in agent β in general at the current time, can be calculated. Sierra and Debenham (2005) propose two ways to calculate trust values. The first way to model trust is trust as conditional entropy. In this case the trust value, a value between 0 and 1, represents the dispersion of the expected observations: the closer the value of trust is to 1, the less dispersion of the expected observations. This formulation of trust is useful when any variation from the agreed contract is undesirable. The trust that an agent α has in agent β with respect to the fulfilment of a contract (a, b) is calculated as:

T(\alpha, \beta, b) = 1 + \frac{1}{B^*} \sum_{b' \in B(b)} P^t(b'|b) \log P^t(b'|b)

where B(b) is the set of contract executions that agent α prefers to b, and B^* = 1 if |B(b)| = 1 and B^* = \log |B(b)| otherwise. The trust of α in β in general is the average of α's trust in β in all possible situations:

T(\alpha, \beta) = 1 + \frac{1}{B^*} \sum_{b} P^t(b) \left[ \sum_{b' \in B(b)} P^t(b'|b) \log P^t(b'|b) \right]

The other way of modelling trust is trust as relative entropy. This models the idea that the more the actual executions of a contract go in the direction of the agent's preferences, the higher the level of trust. Therefore the relative entropy between the probability distribution of acceptance and the distribution of the observation of contract execution is taken:

T(\alpha, \beta, b) = 1 - \sum_{b' \in B(b)} P^t(b') \log \frac{P^t(b')}{P^t(b'|b)}

Similarly to the previous trust calculation, the trust of α in β in general is the average over all possible situations:

T(\alpha, \beta) = 1 - \sum_{b} P^t(b) \sum_{b' \in B(b)} P^t(b') \log \frac{P^t(b')}{P^t(b'|b)}

After making observations, updating the probability distribution and calculating the trust, P(Accept(α, β, δ)) can be derived from the trust and an agent can decide about the acceptance of a deal.
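A minimal numerical sketch of the conditional-entropy form of trust is given below. The probability values are invented, base-2 logarithms are used, and the probabilities over B(b) are treated as a full distribution for simplicity; this is an illustration of the idea, not a reference implementation of Sierra and Debenham's model.

    import math

    def trust_conditional_entropy(p_exec_given_b, base=2):
        """Trust as conditional entropy over the preferred executions B(b).

        p_exec_given_b: probabilities P(b'|b) for each execution b' in B(b).
        Returns 1 + (1/B*) * sum_b' P(b'|b) log P(b'|b), which is close to 1
        when the expected observations show little dispersion."""
        n = len(p_exec_given_b)
        b_star = 1.0 if n == 1 else math.log(n, base)
        s = sum(p * math.log(p, base) for p in p_exec_given_b if p > 0)
        return 1.0 + s / b_star

    # Concentrated expectations -> high trust; dispersed expectations -> low trust.
    print(trust_conditional_entropy([0.97, 0.02, 0.01]))  # about 0.86
    print(trust_conditional_entropy([0.34, 0.33, 0.33]))  # about 0.0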

3.5 Information theory compared with game theory

Instead of using information theory, trust and reputation could also be modelled with game theory. An important concept in game theory is utility, the amount of satisfaction an agent derives from an object or an event. In game-theoretical models, the goal is often to maximize utility. In the context of negotiations, an agent should accept a proposal if the utility u of the deal is higher than a particular margin value m. The utility can be calculated by taking the profits of a deal minus its costs. So the basic idea of game-theoretical negotiation is that if u > m in a given situation, the agent accepts the deal.

However, when the utility of accepting a deal is unknown or uncertain, this method will not work. Game theory solves this problem by using a random variable S, assigning probabilities to all possible outcomes after accepting the deal. The higher S's standard deviation, the higher the uncertainty in the process will be. Now the agent can calculate P(S > m), the probability that the utility of the outcome will be higher than the margin value. Taking its willingness to take risks into account, the agent is able to calculate P(accept δ), the probability that the agent accepts a deal.
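As a small numerical illustration of P(S > m) — assuming, purely for this example, that the uncertain utility S is modelled as a normally distributed random variable, which is one common choice but not something the text prescribes:

    import math

    def prob_utility_exceeds(margin, mean, std):
        """P(S > m) for S ~ Normal(mean, std), via the complementary error function."""
        return 0.5 * math.erfc((margin - mean) / (std * math.sqrt(2.0)))

    # Expected utility 5 with margin 3: the more uncertain the outcome (larger std),
    # the less confident the agent can be that the margin will actually be exceeded.
    print(prob_utility_exceeds(margin=3.0, mean=5.0, std=1.0))  # about 0.98
    print(prob_utility_exceeds(margin=3.0, mean=5.0, std=5.0))  # about 0.66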

In contrast to game-theoretical approaches, Debenham and Sierra's information-based approach does not make use of the concept of utility, and information-based agents are not 'utility aware'. The probability of acceptance, P(accept δ), is not an indication of how good deal δ is in the information-based method. Instead, P(accept δ) is a combination of properties of the deal and of the integrity of the information against which δ has been evaluated. So P(accept(α, β, δ1)) > P(accept(α, β, δ2)) does not mean that δ1 is a better deal than δ2; it means that agent α is more certain that δ1 is acceptable than that δ2 is acceptable.

Game theory and information theory both have some restrictions on the kinds of information they can process. In order to calculate a utility, the game-theoretical agent has to know the certainty of an event exactly. This might be a problem: in the real world, people are not always sure about uncertainties. In the information-based approach this is not required; without certainty about the uncertainty, probability distributions can still be calculated. However, the information-based approach has to deal with other problems. When an agent's language is restricted it is no problem to calculate probabilities for all possible worlds, but when the number of possible worlds grows this can become a problem. Moreover, the information-based approach cannot deal with infinite domains and it can only deal with continuous values by representing the domain as a finite set of intervals. As long as the probabilities of the different possible worlds are known, game theory does not have this problem.

Game theory has been successfully applied in many different models. The concept of utility is intuitively very appealing and easy to understand. The game-theoretical approach does, however, suppose that agents are totally rational, which is not always the case. And when little information is available, its methods become less appealing. For example, if uncertainties are high and an agent is willing to take great risks, the calculated utilities do not really make sense. In situations with little information, the information-based approach might be a better option as a guide for making decisions. Information-based approaches do not calculate utilities, but look directly at the information with which the decision is made. Even with only very few beliefs, a probability distribution can be calculated.


4 Reputation in the information-based model

Suggestions to extend the part about reputation in the information-based trust model will be given in this chapter. After an analysis of the role of reputation in the model, two possible approaches to deal with reputation will be worked out. The last section discusses the results of this work.

4.1 Trust and reputation in the model

Sierra and Debenham's information-based model of trust does not yet provide a fully developed way to deal with reputation; it only offers some ideas. In section 5.3 of their article, Sierra and Debenham (2005) propose to update trust from reputation. The probability distribution from which trust values are calculated is updated from reputation information, and the result is a new probability distribution and thus new trust values. This relation between trust and reputation is found in some other models of trust and reputation, as concluded in chapter 2, for example in the ReGreT model (Sabater 2002). In section 7 of Sierra and Debenham's article reputation is mentioned again, this time in the calculation of the probability that a deal will be accepted. Sierra and Debenham do not give a definitive calculation of the probability of acceptance, but they "can imagine the probability of acceptance of a deal as a composed measure" (Sierra and Debenham 2005). Here they propose to add the weighted values of trust and reputation, which together determine the value of P(Accept(α, β, δ)).

Combining both proposals about the role of reputation (section 5.3 and section 7, Sierra and Debenham 2005), the relation between the different concepts can be represented with the following figure.

    Trust(α,β,δ)    Reputation(α,β,δ)
            |  reputation update
            v
    Trust_new(α,β,δ)
            |
            v
    P(Accept(α,β,δ))

Studying this figure, the following question comes up: why is reputation information used to determine P(Accept(α, β, δ)) if the information has already been processed in the calculation of Trust_new? It seems redundant to use the same reputation information twice in the calculation of P(Accept(α, β, δ)). Sierra and Debenham do not discuss this issue in their article and do not provide a clear way to deal with it.

Because of the seeming redundancy, both ways to handle reputation in the model of trust are examined separately in this chapter. The two options that will be discussed are represented in the figure below.


[Figure: the two options examined in this chapter. Option 1: Trust(α,β,δ) is updated from reputation into Trust_new(α,β,δ), which determines P(Accept(α,β,δ)). Option 2: Trust(α,β,δ) and Reputation(α,β,δ) together determine P(Accept(α,β,δ)).]

4.2 Updating trust from reputation

The first proposal to deal with reputation information is to consider it as one of the factors determining the level of trust. Reputation information here is information an agent receives from other agents containing their opinions of other agents. So besides an agent's own experiences with, for example, agent β, witness information could also influence the agent's opinion about agent β's behaviour. A lot of positive stories about agent β might increase its trust in agent β.

By the illocution Inform(γ, α, info), agent α receives information from agent γ. In the case that the information content is an opinion of γ about another agent, the received information is reputation information. Sierra and Debenham (2005) represent this type of information θ with Reputation(Φ, β), where β represents the agent the information is about and Φ the institution or domain the information applies to. An extension to the expression could be a variable r, to express the reliability of the provided information. This results in a more sophisticated type of information θ, Reputation(Φ, β, r). After receiving Reputation(Φ, β) or Reputation(Φ, β, r), agent α will update p(b'|b), which represents the prior probability that the contract execution b' will be preferred by α to β's commitments b. The new p(b'|b), given the reputation information, can be calculated with the following formula (Sierra and Debenham 2005):

p(b'|b, \text{Reputation}(\Phi, \beta, r)) = p(b'|b) + g_3(b'|b, \text{Reputation}(\Phi, \beta, r)) \cdot (1 - p(b'|b))

In the formula, g_3(b'|b, Reputation(Φ, β, r)) represents the strength of agent α's belief that the probability that the execution of contract b' at time t + 1 will be preferred to b should change, given that Reputation(Φ, β, r) was received at time t.

Sierra and Debenham do not specify in their article how to calculate g_3(b'|b, Reputation(Φ, β, r)), the strength of belief. Some factors that could influence the strength of belief are the following.

- The content of the reputation information. Very positive or very negative information will have more effect than slightly positive or slightly negative information. The content of reputation information is a value between -1 and 1. The bigger the absolute value, the more effect the information will have.

- Possibly provided reliability information. The informing agent might provide a value between 0 and 1, indicating the reliability of the reputation information given. Reliability is an estimate of the extent to which information is correct. The higher the reliability, the more it will affect the trust value.

- Persuasive power of the source agent. This is a value between 0 and 1 stored in the agent's belief set. Initially this value will be 1, but when an agent has had negative experiences with the informing agent this value will decrease. Negative experiences could be the hiding of information or the provision of false information. The higher the persuasive power of the source agent, the more effect its provided information will have.

- Similarity with other information. Information that agrees with knowledge or beliefs about the agent's performance on similar domains will have more effect than information that does not. If agent β is a good singer, it will probably also have a feeling for rhythm. This similarity could also be represented by a value between 0 and 1.

The effect of the first aspect, the content of the reputation information Reputation(Φ, β, r), on a new probability distribution seems clear. As mentioned before, very positive or very negative information will have more effect than slightly positive or slightly negative information. If the reputation information is neutral, a value of 0, it will not have any effect at all. However, the value of this information can decrease for several reasons, reasons mentioned in the second, third and fourth factor. If the information is not 100% reliable, that is, the value of the reliability r is not 1, the information loses influence on the probability distribution. If the agent for some reason has lost persuasive power, for example because it provided bad information in the past, the effect of the reputation information will also decrease. The last factor, in the case that stored information on a highly similar domain is totally different from the provided information, can also cause a decrease of influence. The desired effects of increase and decrease of influence on the new probability distribution are achieved by calculating g_3(b'|b, Reputation(Φ, β, r)) as the product of the four factors. The result will be a value between -1 and 1.
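A small sketch of this multiplicative combination and the resulting probability update is given below. The factor values are invented for the example, and the simple product rule is the one proposed in this thesis, not Sierra and Debenham's own specification of g_3:

    def g3(content, reliability, persuasive_power, similarity):
        """Strength of belief that the probability should change: the product of the
        four factors. content is in [-1, 1]; the other three are in [0, 1]."""
        return content * reliability * persuasive_power * similarity

    def update_probability(p_prior, strength):
        """p(b'|b, Reputation) = p(b'|b) + g3 * (1 - p(b'|b))."""
        return p_prior + strength * (1.0 - p_prior)

    # Very positive, fairly reliable reputation information from a trusted source
    # about a familiar domain nudges the prior probability upwards.
    strength = g3(content=0.9, reliability=0.8, persuasive_power=1.0, similarity=0.7)
    print(update_probability(p_prior=0.5, strength=strength))  # 0.5 + 0.504 * 0.5 = 0.752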

A remark has to be made on the second factor, the reliability information. This information could be false and could, in an unjustified way, decrease or increase the influence of the reputation information on the probability distribution. A way to solve this problem is to ignore the reliability information and not use it. However, throwing away information is usually not the way to make better decisions, and there are arguments that using the information will not lead to worse results. An agent could provide false reliability information r in two possible ways: the provided value is too high or too low. When the value is too low, correct reputation information could unjustly be ignored. This situation, however, is highly improbable, because it does not bring any advantage to the other agent. The other possibility is that bad information with a high reliability value is provided, which could be to the advantage of other agents and thus is quite probable. During the first interactions, the calculation of g_3(b'|b, Reputation(Φ, β, r)) would deliver the same answers as when the reliability information is ignored. But when an agent keeps attaching high reliability measures to bad opinions, the receiving agent will start to 'learn' about the information-providing agent. Because it provides bad information, the persuasive power of the information-providing agent will decrease. Then a high reliability value does not matter any more, because the low value of persuasive power will already decrease the influence of the information.

So g_3(b'|b, Reputation(Φ, β, r)) can be calculated by multiplying the factors of content, reliability, persuasive power and similarity. Then, as proposed by Sierra and Debenham (2005), agent α revises its estimate of p(b'|b) by using the principle of minimum relative entropy:

\left( p^{t+1}(b_j|b) \right)_j = \arg\min_{\vec{p}} \sum_j p_j \log \frac{p_j}{p^t(b_j|b)}

This revision is subject to the constraint:

\sum_{b' \in B(b)} p(b'|b) = p(b'|b, \text{Reputation}(\Phi, \beta, r)),

where B(b) is the set of contract executions that agent α prefers to b.

Here again a remark has to be made, a more fundamental one and more difficult to solve than the problem with the reliability information. Imagine the case that agent α receives very positive reputation information from agent β about agent γ. The provided reliability information is maximal, the persuasive power of agent β is maximal and the provided information is highly similar to α's other beliefs about γ. In this case the reputation information has a maximal effect on the probability distribution. But how much effect should it have; as much as a direct experience? Although the circumstances of receiving the reputation information might be fine, it remains second-hand information. First-hand information, obtained from direct experiences, should have more influence than reputation information, even when the circumstances are perfect.

More generally, the problem is that each change of the probability distribution means a loss of information. With every update of the probabilities, stored old values are replaced by new ones. So it is very important to consider carefully whether a particular update really improves the predictive power of the probability distribution, instead of throwing away valuable information that was in the model. In the case of an update from reputation information, this update should not replace all information obtained from direct experiences. Instead, with some slight changes it should refine the probabilities obtained.

Some ways to achieve such a proportional contribution of reputation information to the calculation of trust are the following.

- Reputation information is only used for updating if the strength of agent α's belief that the probabilities should change, g_3(b'|b, Reputation(Φ, β, r)), is above a certain threshold.

- When updating from reputation information, the strength of agent α's belief that the probabilities should change, g_3(b'|b, Reputation(Φ, β, r)), is multiplied by a factor between 0 and 1. This factor indicates the importance of updating from reputation information in comparison to updating from direct experiences (with an importance of 1).

- Reputation information is only used for updating if agent α has received a certain number of opinions from different agents about the same agent. In this case a way to aggregate the different pieces of reputation information is needed.

- Reputation information is stored, and when the probability distribution is updated from reputation, it is updated from all reputation information received in a specific period of time. In this case a way to aggregate the different pieces of received information is needed.

One can apply one of the four points mentioned here, or use some combination of them.

A proposal to aggregate reputation information will be given below. A first reason for aggregating is that the aggregation of different opinions about reputation before updating prevents too large a change of the probabilities ascribed to all possible worlds. A second reason is that aggregating the opinions of different agents about reputation conceptually makes a lot of sense. In section 2.2 it was stressed that reputation is usually not based on the opinion of one individual. So by aggregating reputation information received from different agents, a reputation value for a particular agent can be derived. Reputation updates will no longer be updates from 'reputation information', but updates from 'a reputation'.

Before different pieces of information can be aggregated, they have to obey certain conditions. The first condition is that the different pieces have to contain information about the same agent. If one agent provided more than one opinion about another agent, the most recent opinion should be taken. Furthermore, only opinions with a reliability value higher than a given x, from a providing agent with a certain minimum trustworthiness value of y and with a similarity value of at least z should be taken into account. The values of x, y and z are variable and can be set according to the user's wishes. Finally, the number of contributing agents relative to the whole population has to be chosen. One has to decide which percentage of agents has to provide opinions about an agent before one can speak of the reputation of that particular agent. When all these parameters are set and an agent has enough valuable information according to the parameters, the different opinions can be aggregated. The reputation of an agent can now be calculated by taking the average of the opinions about that agent. The standard deviation of all the opinions indicates how reliable this reputation value is: the smaller the standard deviation, the more the different agents agree with each other, the higher the probability that the reputation value is useful, and the more it should influence the probability distribution in an update.
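The following sketch shows one possible reading of this aggregation procedure. The filtering thresholds, the opinion values and the way the standard deviation is turned into a weight are all illustrative assumptions, not a specification from this thesis or from Sierra and Debenham:

    from statistics import mean, stdev

    def aggregate_reputation(opinions, min_reliability=0.5, min_trustworthiness=0.5,
                             min_similarity=0.5, min_fraction=0.3, population_size=10):
        """Aggregate opinions about one target agent into a single reputation value.

        opinions: list of dicts with keys 'source', 'value' (in [-1, 1]),
        'reliability', 'trustworthiness' and 'similarity' (all in [0, 1]).
        Returns (reputation, agreement) or None if there is not enough usable information."""
        # Keep only the most recent opinion per source (the list is assumed oldest-first).
        latest = {op["source"]: op for op in opinions}
        usable = [op for op in latest.values()
                  if op["reliability"] >= min_reliability
                  and op["trustworthiness"] >= min_trustworthiness
                  and op["similarity"] >= min_similarity]
        if len(usable) < min_fraction * population_size:
            return None  # not enough opinions to speak of a reputation
        values = [op["value"] for op in usable]
        spread = stdev(values) if len(values) > 1 else 0.0
        # The smaller the spread, the more the agents agree and the more weight
        # the aggregated value should get in a later update.
        return mean(values), 1.0 / (1.0 + spread)

    opinions = [
        {"source": "b", "value": 0.8, "reliability": 0.9, "trustworthiness": 0.8, "similarity": 0.7},
        {"source": "c", "value": 0.6, "reliability": 0.7, "trustworthiness": 0.9, "similarity": 0.8},
        {"source": "d", "value": 0.7, "reliability": 0.8, "trustworthiness": 0.6, "similarity": 0.9},
    ]
    print(aggregate_reputation(opinions))  # roughly (0.7, 0.91)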

Sierra and Debenham (2005) want to model the idea that beliefs evaporate as time goes by. In their proposal, the natural decay of belief is offset by new observations. One could choose to also update from decay when an agent receives other types of information, for example reputation information. If the agent takes the evaporation of beliefs into account every time it receives new information, it will always use the most up-to-date probability distributions for deriving its trust values.
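One simple way to picture such decay — an illustrative assumption only; neither this thesis nor Sierra and Debenham's article commits to this particular form — is to let the distribution drift back towards a maximally uncertain (uniform) distribution at some rate each time step:

    def decay_towards_uniform(distribution, decay_rate=0.1):
        """Move each probability a fraction `decay_rate` of the way towards the uniform
        distribution, modelling the gradual evaporation of what has been learned."""
        n = len(distribution)
        uniform = 1.0 / n
        return [(1.0 - decay_rate) * p + decay_rate * uniform for p in distribution]

    p = [0.7, 0.2, 0.1]
    for _ in range(3):          # with no new observations, beliefs slowly fade
        p = decay_towards_uniform(p)
    print([round(x, 3) for x in p])  # [0.601, 0.236, 0.163] -> drifting towards [1/3, 1/3, 1/3]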

4.3 Combining trust and reputation

Instead of reputation being one of the aspects updating trust, the two concepts could also be seen as two factors that together determine the probability that a deal will be accepted. In this case, reputation information still refers to information an agent receives from other agents. The meaning of trust changes slightly: in this case trust is only determined by an agent's direct experiences or observations. This picture highly resembles Conte and Paolucci's (2002) approach to trust and reputation. They would use the word 'image' for what is called trust here. According to them, reputation and image (information based on direct experiences) together determine the level of trust. In the terminology of this section, reputation and trust together determine the probability of acceptance.

This second method keeps witness information and information from direct experiences separate until the probability P^t(Accept(α, β, δ)) is calculated. This is achieved by calculating two separate probability distributions. One of them determines the level of trust and is only updated from direct experiences. The second one deals with reputation information, R, the set of all information received in the form Reputation(D, β, r). Whereas the constraints on the first probability distribution are given by the agent's beliefs B derived from direct experiences, the constraints on the second probability distribution are only given by the reputation information R. The probability distribution of trust is updated from direct experiences as described by Sierra and Debenham (2005), and the probability distribution of reputation is updated in one of the ways described in the previous section.

After trust and reputation have been calculated from their probability distributions (reputation being calculated from its distribution in the same way as trust is calculated from its distribution), the two probabilities are combined to determine the probability of accepting a deal. Sierra and Debenham (2005) propose the following formula to combine the two values:

P^t(Accept(α, β, δ)) = κ₁ T(α, β, δ) + κ₂ R(α, β, δ),

where κ₁ + κ₂ = 1, and κ₁ and κ₂ are constants or the result of a function depending on the environment. κ₁ and κ₂ represent the importance an agent gives to each of the two aspects. In the case of trust and reputation, the values of κ₁ and κ₂ could depend on the amount of experience of an agent. The more experience agent α has with agent β on deals like deal δ, the more its decision to accept the deal depends on trust, its own opinion. When agent α has no experience at all in this field, its decision is purely based on reputation information, the opinions of others.
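A minimal sketch of this combination is given below, with an experience-dependent weight; the saturation constant k and the specific weighting function are illustrative choices, not prescribed by the model.

```python
def p_accept(trust, reputation, n_experiences, k=5.0):
    """Combine trust (from direct experience) and reputation (from witness
    information) into a probability of accepting a deal.

    kappa1 grows with the number of direct experiences: with no experience
    the decision rests entirely on reputation, with much experience almost
    entirely on trust. By construction kappa1 + kappa2 = 1.
    """
    kappa1 = n_experiences / (n_experiences + k)
    kappa2 = 1.0 - kappa1
    return kappa1 * trust + kappa2 * reputation

# No direct experience yet: the decision is based purely on reputation.
print(p_accept(trust=0.9, reputation=0.4, n_experiences=0))   # 0.4
# After many comparable deals the agent relies mostly on its own opinion.
print(p_accept(trust=0.9, reputation=0.4, n_experiences=50))  # about 0.85
```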

The formula could easily be extended with other dimensions influencing the probability P^t(Accept(α, β, δ)). An extension would take the following form:

P^t(Accept(α, β, δ)) = κ₁ T(α, β, δ) + κ₂ R(α, β, δ) + ... + κₙ X(α, β, δ),

in which the condition κ₁ + κ₂ + ... + κₙ = 1 has to be satisfied. An extra dimension determining the probability of acceptance could for example be social information, about the power or social relationships between agents. Any new dimension should satisfy the condition that its information is not already processed in another dimension.
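The extension could be sketched as a weighted sum over an arbitrary set of dimensions, with the weights normalised so that they sum to one; the dimension names used here are just examples.

```python
def p_accept_multi(values, weights):
    """Weighted combination of any number of dimensions (trust, reputation,
    social information, ...); weights are normalised to sum to 1."""
    total = sum(weights[d] for d in values)
    return sum(values[d] * weights[d] / total for d in values)

print(p_accept_multi(
    values={"trust": 0.8, "reputation": 0.5, "social": 0.6},
    weights={"trust": 2.0, "reputation": 1.0, "social": 1.0}))  # 0.675
```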

4.4 Conclusions

Sierra and Debenham (2005) define trust as a measure of how uncertain the outcome of a contract is. So according to them, trust should incorporate the overall opinion of an agent about another agent or about a certain deal. This overall opinion should be based on all the information an agent has. In this sense, the approach in section 4.2 is preferable to the approach in section 4.3. In section 4.2 trust values are indeed updated from all the information sources available. In section 4.3 trust is only based on the direct experiences of an agent with other agents. However, if Conte and Paolucci's (2002) terms were used for the approach in section 4.3, trust would also contain all available information sources.

The approach of updating trust from reputation has some other advantages compared to combining trust and reputation. The first method only uses one probability distribution, which is simpler to handle than the two or more probability distributions of the second approach. The reputation update of this single probability distribution runs similarly to updating from other information sources, which also keeps the model simple. Finally, if all information is already processed in the probability distribution determining trust, then the trust value for a specific deal is automatically its probability of acceptance in Sierra and Debenham's model. Because all other dimensions are already integrated in the value of trust, there are no other factors left to determine P^t(Accept(α, β, δ)). So the method saves a calculation step.

A difficulty in the first approach, however, is to control the contributions of direct experiences and witness information to P^t(Accept(α, β, δ)). By keeping trust and reputation separate until the end of the calculation, it is easier to see the effects of both aspects on the final decision. The second method provides a way to isolate the influence of reputation and to investigate its role in the final decision more closely. Influences of other aspects, for example social information, could also be investigated this way. In most situations, however, these benefits of separating trust and reputation will not outweigh the conceptual arguments for the first approach.


5 Social information in the model

This chapter proposes to incorporate social information in the information-based trust model. After the discussion of social aspects that might play a role, a possible way to deal with social information is explained.

5.1 Social information

The updating of trust from an agent's own experiences and from preferences is worked out well in Sierra and Debenham's information-based model. The way they treat updating from reputation has been discussed in the previous chapter. Besides these factors, another information source could influence the level of trust. Lately, in the research field of computational models of trust and reputation, the role of social information has been stressed (Sabater and Sierra 2005, Ashri et al. 2005, Mui et al. 2002) and is becoming more and more important. Although Sierra and Debenham count reputation information as updating from social information, social information is more than just that. This chapter discusses social information that is not directly based on an agent's own experiences, nor on information based on the experiences of other agents.

Social information could, for example, tell something about the relationship between two negotiation partners. A negotiation about the division of rooms in an office between two employees with the same status would change if one of the two becomes the other's boss. An agent would prefer negotiating with an agent who needs the products he sells to negotiating with an agent without this dependency on his products. Ashri et al. (2005) identify two important aspects in the emergence of social relationships: interactions and organisational structures. In their article, they provide tools for identifying and characterizing relationships between agents. They identify the following relationships or interaction types that are relevant with regard to trust (a small sketch of how these relations could be detected follows the list).

Trade: Agent α is able to buy a product from agent β within the same market.

Dependency: Agent α is selling goods in a market that agent β can view, and at the same time β has the goal to buy the goods α is selling in that market.

Competition: Agents α and β are selling the same goods in the same market, or α and β have the same goal: they want to buy the same products.

Collaboration: Agent α is selling goods to agent β and at the same time, β is selling different goods to α.

Tripartite relationships: Relationships between two agents that arise when at least one more agent is added to the analysis.
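A small sketch of how these interaction types could be detected from market information is given below. The Agent structure and the detection rules are simplified assumptions based on the descriptions above, with dependency oriented as in Dep(α, β), meaning the first agent depends on the second; tripartite relationships, which require a third agent, are left out of this pairwise sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    sells: set = field(default_factory=set)         # goods offered on the market
    wants_to_buy: set = field(default_factory=set)  # goods the agent aims to buy

def relationships(a, b):
    """Return the interaction types holding between agents a and b
    (simplified to a single shared market; goods identified by name)."""
    types = []
    if b.sells:
        types.append("trade")          # a is able to buy something b offers
    if a.wants_to_buy & b.sells:
        types.append("dependency")     # a wants goods that b is selling
    if (a.sells & b.sells) or (a.wants_to_buy & b.wants_to_buy):
        types.append("competition")    # same goods offered, or the same buying goal
    if (a.sells & b.wants_to_buy) and (b.sells & a.wants_to_buy) \
            and not (a.sells & b.sells):
        types.append("collaboration")  # each sells different goods the other needs
    return types

alpha = Agent("alpha", sells={"paper"}, wants_to_buy={"ink"})
beta = Agent("beta", sells={"ink"}, wants_to_buy={"paper"})
print(relationships(alpha, beta))  # ['trade', 'dependency', 'collaboration']
```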

According to Ashri et al. (2005), an agent should distrust its counterpart whenever the latter has an opportunity to defect. In a situation where agent α is dependent on agent β, for example, β may have an opportunity to exploit α because α has no other choice than β as an interaction partner. Agent α's trust in β should then be as low as possible. According to Ashri et al. (2005), the different types of relation patterns, together with the context in which the relationship is developing, determine the intensity of the relationship between two agents. The context of a relation is determined by issues such as the abundance of a product, the number of sellers of the product and the amount being bought. The instantiation of an intensity calculation function will depend on the type of application (Ashri et al., 2005).

Other kinds of social information could inform the agent about the position of an agent in an organisation or institution, for example information about the power relationships between different agents. A system could also provide information about agents' reputations. This information differs from the reputation information discussed in chapter 4 in the sense that this social reputation information is not directly based on other agents' experiences (witness information), but on objective reputation measures used by the system.

5.2 Social constraints in an agent's knowledge base

Sierra and Debenham (2005) give some initial ideas about how to deal with social information. Besides the influence of reputation information, they mention the influence of power on trust. According to them, the power of a negotiation partner influences the probability its opponent will accept a deal. In the model they accomplish this effect by adding the following constraint to an agent's knowledge K:

Power(β) > Power(γ) → P(Accept(α, β, δ)) > P(Accept(α, γ, δ))

Here the assumption is made that power can be modelled as a function from agents to real values (Sierra and Debenham 2005). This assumption thus presupposes a linearly ordered set of agents. There may be situations, however, in which the order of agents according to power is tree-like, or in which an agent only has a lot of power over one specific agent and not over others. Therefore a partial order seems more appropriate than a linear one to express differences in power between agents. This can be achieved by representing power by a value between -1 and 1 attached to the predicate Power(α, β), indicating the strength of the power one agent has over another. The expressions Power(α, β) = 1 and Power(β, α) = -1 are equivalent and mean that agent α has absolute power over agent β. A small absolute power value means that neither of the two agents has much power over the other. With this representation, the power constraint in an agent's knowledge base becomes the following.

Power(β, α) > Power(γ, α) → P(Accept(α, β, δ)) > P(Accept(α, γ, δ))
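A small sketch of this partial-order representation is given below, assuming power values are stored per ordered pair of agents; the dictionary layout and helper name are hypothetical.

```python
# Power is a value in [-1, 1] attached to an ordered pair of agents:
# power[("alpha", "beta")] = 1 would mean alpha has absolute power over beta.
power = {
    ("beta", "alpha"): 0.8,   # beta has considerable power over alpha
    ("gamma", "alpha"): 0.1,  # gamma has hardly any power over alpha
}

def power_over(x, y):
    """Power of agent x over agent y; antisymmetric by construction,
    and 0 for pairs about which nothing is known (incomparable agents)."""
    if (x, y) in power:
        return power[(x, y)]
    if (y, x) in power:
        return -power[(y, x)]
    return 0.0

# The constraint then orders acceptance probabilities per opponent:
# since power_over("beta", "alpha") > power_over("gamma", "alpha"),
# P(Accept(alpha, beta, delta)) should exceed P(Accept(alpha, gamma, delta)).
assert power_over("beta", "alpha") > power_over("gamma", "alpha")
assert power_over("alpha", "beta") == -0.8
```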

Following this method, constraints modelling Ashri et al.'s interaction types discussed in the previous section could easily be added to the knowledge base of an agent. To add these constraints to an agent's knowledge base K, the operators Dep(α, β), Comp(α, β) and Coll(α, β) have to be introduced. The first one, Dep(α, β), is a dependency relation in which agent β is selling goods in a market and agent α has the goal to buy those goods.
