
Feature Label Ordering in Learning the Meaning of the Polysemous Verb: ‘Run’ / ‘Lopen’

Jacob Verdegaal 5677688

Bachelor thesis, credits: 18 EC

Bachelor Opleiding Kunstmatige Intelligentie (Bachelor's programme in Artificial Intelligence)
University of Amsterdam, Faculty of Science
Science Park 904, 1098 XH Amsterdam

Supervisor: dr. H. W. Zeevat
Institute for Logic, Language and Computation
Faculty of Science, University of Amsterdam
Science Park 904, 1098 XH Amsterdam


Abstract

Feature Label Ordering in Learning the Meaning of a Polysemous Verb by Jacob Verdegaal

Feature-label ordering was introduced as a learning regime that performs better at predicting symbols from features. This research explores whether it is also useful for finding relevant semantic properties in the lexicalization of a polysemous verb. The cue-competition effect of feature-label ordering is hypothesized, and shown, to make it a good candidate when data must be classified on the basis of overlapping feature sets. Cue competition performs well because, if features depend on each other, the predictive value of a feature (with respect to predicting a symbol from features) must also depend on the predictive values of the other features. Label-feature-order learning does not account for these dependencies as well as feature-label-order learning does.


Contents

Abstract
1 Introduction
  1.1 Related work
    1.1.1 Semantic analysis
    1.1.2 Feature-label-ordering
  1.2 Cue competition and disambiguation
2 Method
  2.1 Used features
  2.2 Learning associations
    2.2.1 Prediction
3 Results, Conclusion and Discussion
  3.1 Results
  3.2 Conclusion
  3.3 Discussion
A featureStructure
B Table of features
C Extracted feature sets
Bibliography


Chapter 1

Introduction

Word meaning is important for Artificial Intelligence (AI), because the Turing test requires artificial agents to master language skills in order to pass and to be perceived (by humans) as intelligent. The meaning of lexical items varies depending on the context in which they are used, but it is a non-trivial question how to define the exact meaning of a word in a certain context. When one opens a dictionary one finds whole lists of possible meanings of words. But the elements of these lists, the possible meanings of a word, are defined by examples, and thus not usable for AI. A lexicon would be optimally usable for AI when a logical meaning is defined for each possible use.

Much work has been done by linguists on the formalization of meaning [1], in which both typological and cognitive approaches have been taken. The cognitive approach can serve as an inspiration for AI, but was not per se developed to be formalized. Typological work is mostly done to compare languages and to research the origin and relatedness of languages. Within AI, distributional semantics [2] has been developed in order to formalize semantics, but it is based on the idea that meaning can be defined fully in terms of the company words keep.

This paper is part of an ongoing project investigating the possible feature-sets defining the correct meaning of a polysemous verb in its context. Furthermore, the feature-sets must be unique and usable for both production and interpretation of language by AI. The challenge is to find feature-sets that work equally well for Dutch, English, German or, in principle, for any language whatsoever. This restriction necessitates that the feature-sets be independent of syntactic rules and thus can only result from semantic analysis. This project is new in its approach in that disambiguation, the prediction of local meaning, should be the result of the semantic definition (determined by a lexicon) of a single word, not of context restrictions.

This paper focuses on ‘run’ and ‘lopen’, which have similar meaning. A direct translation of ‘John runs to catch his train’ to Dutch is ‘Jan rent om zijn trein te halen’. However, a direct translation is often not correct: for example, ‘the engine is running’ corresponds to ‘de motor loopt’, where the meaning of running and loopt is exactly the same, whereas ‘de motor rent’ is meaningless in Dutch.


1.1 Related work

1.1.1 Semantic analysis

Perhaps the best known work on the meaning of words is WordNet. This is a collection of 117,000 sets of synonyms, linked by basic relations such as hypo- and hypernymy. However, knowing a word's place in the ‘hierarchy of things’ is not sufficient to represent its local meaning. As noted earlier, meaning depends on context in many cases, especially for polysemous words.

Currently a huge effort is being devoted to distributional semantics, in which the semantic representation of words is captured in models known as vector spaces, semantic spaces, word spaces or distributional semantic models [2]. These models can be used to capture the semantics of words by defining relations expressing shared attribute values. If two words share many attribute values, for example as dog and puppy share barking, having four legs and a tail, their meaning is similar. Attribute values are extracted from surrounding words, WordNet and the syntactic structure of the sentence in which the word to be defined is encountered. However, syntactic definitions do not have (enough) logical meaning to be used for AI or disambiguation.

1. Jan loopt al 40 jaar (‘Jan has been walking/running for 40 years’)
2. Jan loopt al in de 40 (‘Jan is already in his forties’)

Example 1: Different uses sharing attributes

Polysemous words need extra attention in this approach to define any useful meaning. For example, sentences 1 and 2 share many attribute values, but the actual meaning of lopen is very different in each. Disambiguation requires a definition of meaning with which AI could reason (i.e. logical meaning).

Disambiguation depends on deeper semantic analysis of the possible contexts (i.e. the sentence in which run is encountered). Syntactic roles are not sufficient [3]. Semantic roles of contextual elements, as described by Grimm in ‘Semantics of case’ [4], can provide some of these semantic properties. In the models described above, disambiguation depends on knowledge of the syntactic structure and the meaning of the other words. However, disambiguation of loopt in this example is determined by a qualitative change of the property age of Jan (he is getting older). A qualitative change is what Grimm calls a persistence type. He forms a fine-grained case grammar by construction of a lattice of agentivity and persistence properties. Semantic properties (combinations of agentive and persistence types) emerging from this grammar can then be used in semantic models. And because systems are being developed to automatically extract these properties from corpora [3], they are candidates for use in automatic disambiguation by a lexicon.


1.1.2 Feature-label-ordering

So, to construct a lexicon with which disambiguation can be done automatically, feature-sets must be found. Each of these feature-sets will represent a specific meaning of the polysemous verb. The elements of a specific set are hypothesized to be dependent on the other elements in the set. Because of these dependencies, Feature Label Order learning will be tested to learn which features form the sets.

Feature Label Order (FLO) learning, as opposed to Label Feature Order (LFO) learning, was introduced by Ramscar et al. [5]. They claim that humans are only able to learn symbols properly in this order. This claim breaks with the current paradigm, because symbol learning has always been approached with the goal of predicting a set of properties (that an object has) given a symbol. So the predictor has always been the symbol, and the outcome a set of features. But the use of symbols demands the prediction of a symbol given a set of properties. Hence learning of the relations is more fruitful when the set of properties is presented followed by the symbol [5]. The properties are termed features, the symbols labels.

FLO learning has the advantage of cue competition. Prediction is the process of feature-sets, or the ‘cue (predictor)’ [5, pp. 913], voting for an ‘outcome (thing to be predicted)’ [5, pp. 913]. When learned with LFO, the predictor is a symbol and the outcome a set of features. Thereby information about the dependencies between features is disregarded: the predictive ‘strength’ of the label is increased or decreased for the entire set. However, when learning in FLO, the cue consists of features whose individual predictiveness can change. For example, when a strongly predictive feature is present but the outcome is other than predicted (by this individual feature), its predictive strength (association value) is decreased. Furthermore, this decrease makes it possible for other features to increase their association strength, since associations between cues and outcomes have a maximal value they can reach.¹
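As a made-up numerical illustration of this mechanism (using the Rescorla-Wagner update rule introduced in section 2.2, with $\alpha = 1$ and an exaggerated $\beta = 0.1$ for readability): suppose cues $f_1$ and $f_2$ are both present, with current associations $V_{f_1,A} = 0.8$ and $V_{f_2,A} = 0.1$ to label $A$, but the outcome of the trial is not $A$, so $\lambda_A = 0$. Then $\Delta V = 0.1\,(0 - 0.9) = -0.09$ for both cues, and the strongly predictive $f_1$ loses strength on $A$. On a later trial where the outcome is $A$, the gap $\lambda_A - V_{\text{total}}$ is larger than it would otherwise have been, leaving room for $f_2$ to gain association with $A$.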

1.2 Cue competition and disambiguation

Disambiguation of a polysemous word by means of a lexicon requires definitions of meaning in terms of combinations of features which determine the local meaning. Since the features in a specific set are assumed to depend on the other features of that set (more specifically, when one feature is present in the context, certain others are likely to be present as well, while yet others are not), the effect of cue competition is a precondition for learning these sets, at least according to the FLO learning hypothesis.

Cue competition as a precondition is tested by comparing the performance of FLO-learned associations with that of LFO-learned associations on two prediction tasks:²

• Given a label and its context, predict the features defining the local meaning.
• Given a set of features, i.e. the semantic interpretation of a specific example, predict the correct label.

¹ A maximal association between a cue and an outcome cannot increase further, because it expresses predictiveness, which cannot increase once the cue predicts the outcome with 100% certainty.

² From here on, the polysemous words run and lopen are labels and their local (disambiguated) meanings are feature sets.


The first prediction task requires a local meaning to be defined before learning takes place. Section 2.2.1 explains how this is done. In the second prediction task a correct label is to be predicted. Though a polysemous word is only one label, the data used provides the possibility of discriminating between run and lopen. Furthermore, one of the meanings of lopen, viz. walk, is added as a label to guarantee discreteness of the labels.

The Rescorla-Wagner model is used to learn associations. Section 2.2 explains the technical difference between FLO and LFO learning. Section 3.1 presents the results of the prediction tasks, and section 3.2 gives their interpretation and the conclusion.


Chapter 2

Method

2.1 Used features

The project requires the use of features which are independent of a specific language, hence the most basic principles in language are used. In any language, an utterance is about something: there is a main topic of discussion, commonly known as the theme. The theme can be an agent (an entity that at least has sentience, i.e. a human), or not.

An agent itself can have further properties. Grimm [4] constructed an agentivity lattice in which possible degrees of agentivity are ordered. The lattice also contains degrees of persistence, expressing change of existence (where an event causes a theme to start or end existence) or change of qualities of a theme (for example the quality of place, which is especially useful if the meaning of run is movement). Whereas Grimm assigns these properties only to agents, in this project entities in the example sentences are also allowed to be assigned persistence values. Furthermore, example sentences can be annotated with a second entity (ENT 2), which can also be assigned properties. Hereby the exact working of a process on entities can be specified.

Other features are obtained with the help of linguistic resources. A list of all possible uses of run/lopen was constructed and analyzed. The most important result of this analysis is the declaration of any use of run/lopen as a process. All encountered processes are listed and grouped into higher categories, which resulted in 13 process features to use. Two more process categories are concrete and abstract, in which all 13 are grouped. One of these process types, transformation, is listed in both, representing the same meaning for an engine and a computer running. See appendix B for a full list of the used features with a description of their meaning.

Examples of the use of run, walk and lopen were extracted from the British National Corpus and the Alpino Treebank (for Dutch). Examples of rennen were also analyzed, but 99% of these were uses with the meaning ‘currere’, which is not polysemous use, so they were not included in testing.

All examples were annotated with a specially designed program. This program assists annotation by means of a feature structure: some features imply others, or reduce the remaining set of possible features. The structure that is used for annotation can be found in appendix A. An important note to go with the structure is that it should not drive the categorization, but only help in faster annotation.


2.2 Learning associations

The Rescorla-Wagner model is used to test association learning in LFO and FLO, identical to the research of Ramscar et al. [5]. This model learns association values between cues and outcomes. The association values are the product of an update rule:

$$ V_{ij}^{n+1} = V_{ij}^{n} + \Delta V_{ij}^{n}, \qquad \Delta V_{ij}^{n} = \alpha_i \beta_j \left( \lambda_j - V_{\text{total}} \right) $$

where:

• $\Delta V_{ij}^{n}$ is the change in associative strength between a set of cues $i$ and an outcome $j$ on trial $n$.
• $\alpha_i$ is a parameter that allows individual cues to be marked as more or less salient.
• $\beta_j$ is the parameter that determines the learning rate; in all experiments it was set to 0.001.
• $\lambda_j$ denotes the maximum amount of associative value (total cue value) that an outcome $j$ can support. In all experiments, $\lambda_j$ was set to 1 (when the outcome $j$ was present in a trial) or 0 (when it was not present).
• $V_{\text{total}}$ is the sum of all current cue values on a given trial.

In the FLO tests, $i$ ranges over the features and $j$ over the labels; in the LFO tests the roles are switched. This means that in the FLO tests the association values of several cues (features) are updated jointly for one outcome with a specific $\lambda$ value, while in the LFO tests, at each trial, the association values for one feature are updated, relative to each other, for each of the labels. The resulting association values can then be represented in a matrix.
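As an illustration of this difference, the following Python sketch applies the update rule above in both orders. It assumes binary (present/absent) annotations and a uniform salience ($\alpha = 1$); all names and the data layout are illustrative and not taken from the thesis implementation.

    BETA = 0.001    # learning rate beta, as in the experiments
    LAMBDA = 1.0    # maximal associative value an outcome can support

    def rw_trial(assoc, cues, present_outcomes, all_outcomes):
        """One Rescorla-Wagner trial: update assoc[cue][outcome] for every
        present cue and every possible outcome."""
        for outcome in all_outcomes:
            lam = LAMBDA if outcome in present_outcomes else 0.0
            # V_total: summed association of the present cues with this outcome
            v_total = sum(assoc[c][outcome] for c in cues)
            delta = BETA * (lam - v_total)          # alpha = 1 for all cues
            for c in cues:
                assoc[c][outcome] += delta

    def train(examples, order, features, labels):
        """examples: list of (feature_set, label) pairs.
        order='FLO': cues are the features, outcomes are the labels.
        order='LFO': cues are the labels, outcomes are the features."""
        cues_all, outcomes_all = (features, labels) if order == 'FLO' else (labels, features)
        assoc = {c: {o: 0.0 for o in outcomes_all} for c in cues_all}
        for feats, label in examples:
            if order == 'FLO':
                rw_trial(assoc, feats, {label}, outcomes_all)
            else:
                rw_trial(assoc, {label}, feats, outcomes_all)
        return assoc

With FLO, several feature cues share the same $V_{\text{total}}$ for a label and therefore compete; with LFO, the single label cue is updated against each feature outcome separately.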

2.2.1 Prediction

In the first prediction task, where context and label are given, the local meaning must be predicted. The local meaning is chosen to be a process, because any use of run/lopen implies a process, so one of the processes must represent the local meaning. For the second prediction task the labels run, lopen and walk are associated with features. The examples of run differ from those of lopen in the frequencies of the distinct local meanings, so while the meaning is formally equal, the local meaning in the data is different.

All tests used a random 90% of the data to learn associations; the remaining 10% was used for testing. The resulting numbers are averages over 10 tests for each of the 4 prediction tasks (two learning orders times two tasks). This method was chosen instead of a pre-selected test set because the total data-set contained only 591 examples. Repeated training and testing on different sets reduces the chance of coincidental results and the influence of outliers (on training or testing).
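A minimal sketch of this protocol, assuming a generic evaluate(train_set, test_set) function (a hypothetical placeholder that learns the associations on the 90% and returns the prediction accuracy on the 10%):

    import random

    def repeated_split_accuracy(examples, evaluate, runs=10, train_fraction=0.9):
        """Average accuracy over several random train/test splits."""
        scores = []
        for _ in range(runs):
            shuffled = examples[:]
            random.shuffle(shuffled)
            cut = int(len(shuffled) * train_fraction)
            scores.append(evaluate(shuffled[:cut], shuffled[cut:]))
        return sum(scores) / len(scores)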

A prediction is the result of the activation of each feature a test example is annotated with. These activations are multiplied by the value the feature has in the association matrix. A summation of these values then yields a prediction value per label or process. The highest prediction value indicates the most likely class to which an example belongs.
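A sketch of this prediction step, assuming a learned association matrix assoc[feature][class] (as produced by the training sketch above, with a class being a label or a process) and binary activations; the names are illustrative:

    def predict(assoc, feature_set, classes):
        """Sum the association values of the present features per class
        (label or process) and return the highest-scoring class."""
        scores = {cls: sum(assoc[f][cls] for f in feature_set if f in assoc)
                  for cls in classes}
        return max(scores, key=scores.get)

Combining predict with the training and splitting sketches above yields the evaluate placeholder used earlier.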


Chapter 3

Results, Conclusion and Discussion

3.1 Results

The results listed in the table below show that the FLO-learned associations perform better at prediction than the LFO-learned associations on both tasks. The percentages are averages; the deviation was never more than 10%.

order   labels   processes
FLO     71%      78%
LFO     50%      70%

Below, some association matrices (those closest to the average) are shown. A red colored box means that this particular feature has a negative association value for the corresponding label, and green means a positive association. The brightness of a color reflects the associative strength: non-transparent green must be interpreted as maximal positive association and non-transparent red as maximal negative association.

Figure 3.1: FLO on labels

Figure 3.2: LFO on labels


Figure 3.3: FLO on processes

Figure 3.4: LFO on processes

3.2 Conclusion

The matrices show a strong cue competition effect. Some features in the FLO-learned association matrices do not have any value, while in the LFO-learned association matrices every feature has at least one green square. This is the direct result of association values being learned for a set of features per label (in FLO), independent of the features' association values on other labels. The association values of features learned in FLO change relative to other features for one outcome only. The values of features in the LFO-learned matrices change relative to the values of the same feature on other labels, hence a strong association is learned even when a feature is only present with one label in a few examples.

3.3 Discussion

It appears that when features have much overlap across the labels, FLO is a much better approach for predicting which label follows a set of features. However, the data-set was small, containing only 591 examples. Hence the performance of FLO must be tested further on bigger data-sets to exclude the possibility that FLO only performs better on small data sets.

These results indicate that the project goal of finding feature sets can benefit from the cue competition effect of FLO. Since the associations in the FLO-learned matrices are sparse, there is a possibility of extracting feature sets which are unique and thus can be used to define local meaning. Appendix C lists, for each process, all features that have an association value greater than 0.2 (THEME=... is excluded). It is from this point a task for linguists to investigate whether the found sets can sensibly define local meaning.


Appendix A

featureStructure

% feature structure to be used for fast annotation of data
% keys can imply whole sets, or can be further specified
% 1 line may start with '>>', meaning this is the root layer
% keys are the first list (not in any brackets)
% sets embraced by '{' and '}' mean that the key implies the following set
% sets embraced by '[' and ']' mean that the key implies one of the following set
% sets embraced by '<' and '>' mean that the key implies a subset of the following set

>> THEME=AGENT THEME=PROCESS

THEME=AGENT { [ movement PROCESS=concrete PROCESS=abstract ] ENT_1 ENT_1=animate < ENT_2 SOURCE GOAL partofword > }
THEME=PROCESS { [ PROCESS=concrete PROCESS=abstract ] ENT_1 < ENT_2 SOURCE GOAL partofword > }

PROCESS=concrete [ movement distance liquid transformation volume body instrument ]
PROCESS=abstract [ temporal rational organisation social money communication transformation descriptive ]

ENT_1 [ total(ENT_1) qualitative(ENT_1) existentialB(ENT_1) existentialE(ENT_1) nonExistence(ENT_1) ]
ENT_1=animate < instigation(ENT_1) motion(ENT_1) volition(ENT_1) sentience(ENT_1) >
ENT_2 { [ total(ENT_2) qualitative(ENT_2) existentialB(ENT_2) existentialE(ENT_2) nonExistence(ENT_2) ] ENT_1 }
ENT_2=animate < instigation(ENT_2) motion(ENT_2) volition(ENT_2) sentience(ENT_2) >

movement THEME=agent { MANNER PROCESS=concrete < MANNER ENT_2 > }
movement ENT_1=animate { MANNER motion(ENT_1) }

volition(ENT_1) { sentience(ENT_1) }
volition(ENT_2) { sentience(ENT_2) }
motion(ENT_1) { qualitative(ENT_1) }
motion(ENT_2) { qualitative(ENT_2) }
partofword [ noun adjective verb other ]
MANNER { [ ambulare currere ] < ENT_2=animate > }
movement { cause(AGENT,start(PROCESS)) }
organisation { cause(AGENT,start(PROCESS)) }
rational { cause(AGENT,start(PROCESS)) }
instrument { cause(AGENT,start(PROCESS)) }
communication { cause(AGENT,start(PROCESS)) }
rational { cause(AGENT,start(PROCESS)) }

0 run lopen walk 0


Appendix B

Table of features

Figure B.1: Table of used features with their meaning in a lexicon of run, walk, lopen

THEME=AGENT                    true if the theme is a human or an explicitly mentioned group of people
THEME=PROCESS                  true if not THEME=AGENT
movement                       true if the theme moves from one place to another
PROCESS=concrete               used for fast annotation
PROCESS=abstract               used for fast annotation
ENT_1                          always true; in every example a theme is present
ENT_1=animate                  true if the theme is animate
ENT_2                          true if there is another entity or object mentioned which is not the theme
ENT_2=animate                  true if ENT_2 is animate
SOURCE                         true if an explicit source is mentioned, generally only applicable when movement is true
GOAL                           true if an explicit goal is mentioned, generally only applicable when movement is true
partofword                     true if the label is not a verb or is part of a word
distance                       true if the process describes a physical path
liquid                         true if the process is, or describes properties of, a liquid substance
transformation                 true if the process alters input to output
volume                         true if the process can be linked with an amount of something, or with a physical mass
body                           true if the process can be linked with the human body
instrument                     true if an agent uses something or ‘runs’ a hand over something
temporal                       true if the process exclusively describes a specific time or the passing of time
rational                       true if the process can be linked with a product of the human mind
organisation                   true if the process can be linked with an organization
social                         true if the process describes an event involving two or more agents
money                          true if the process can be linked with money
communication                  true if the process can be linked with information which is not part of the theme
descriptive                    true if the label has no meaning, no real process can be identified and attributes are assigned to the theme
total(ENT_X)                   true if ENT_X does not change
qualitative(ENT_X)             true if ENT_X changes in the course of the process
existentialB(ENT_X)            true if ENT_X ceases to exist in the course of the process
existentialE(ENT_X)            true if ENT_X exists at the end of the process and did not at the start
nonExistence(ENT_X)            true if ENT_X does not exist (agentive property)
instigation(ENT_X)             true if ENT_X causes an event (agentive property)
motion(ENT_X)                  true if ENT_X undergoes physical movement (agentive property)
volition(ENT_X)                true if ENT_X intends a process to happen (agentive property)
sentience(ENT_X)               true if ENT_X is consciously involved in the process (agentive property)
MANNER                         true if the process is movement and THEME=AGENT; used for faster annotation. Implies currere or ambulare
ambulare                       true if the movement is a process of walking
currere                        true if the movement is a process of running
noun                           true if partofword is true and the word is a noun
adjective                      true if partofword is true and the word is an adjective
verb                           true if partofword is true and the word is a verb
other (word)                   true if partofword is true and the word is another kind of word
cause(AGENT,start(PROCESS))    true if a human is necessary to the process
run                            label
lopen                          label


Appendix C

Extracted feature sets

Figure C.1: Important features in defining the processes

movement        THEME=AGENT, ENT_1=animate, qualitative(ENT_1), instigation(ENT_1), motion(ENT_1), volition(ENT_1), sentience(ENT_1), MANNER, ambulare, currere, cause(AGENT,start(PROCESS))
distance        THEME=PROCESS, ENT_1, partofword, total(ENT_1), noun, run, walk
liquid          THEME=PROCESS, ENT_1, ENT_2, SOURCE, partofword, total(ENT_1), adjective, run
transformation  THEME=PROCESS, ENT_1, descriptive, total(ENT_1), run
volume          THEME=PROCESS, ENT_1, ENT_2, descriptive, existentialB(ENT_1), existentialB(ENT_2), run, lopen
temporal        THEME=PROCESS, ENT_1, partofword, existentialB(ENT_1), noun, lopen
rational        THEME=PROCESS, ENT_1, ENT_2, total(ENT_1), total(ENT_2), cause(AGENT,start(PROCESS)), run
organisation    THEME=PROCESS, ENT_2, total(ENT_1), qualitative(ENT_2), cause(AGENT,start(PROCESS)), run
money           THEME=PROCESS, ENT_1, ENT_2, qualitative(ENT_1), total(ENT_2), lopen
social          THEME=AGENT, ENT_1, ENT_1=animate, ENT_2, total(ENT_1), sentience(ENT_1), total(ENT_2), ENT_2=animate, run
body            THEME=AGENT, ENT_1=animate, ENT_2, qualitative(ENT_1), existentialE(ENT_1), sentience(ENT_1), total(ENT_2), lopen
instrument      ENT_2, total(ENT_1), instigation(ENT_1), volition(ENT_1), total(ENT_2), adjective, cause(AGENT,start(PROCESS)), run


Bibliography

[1] James Pustejovsky. The generative lexicon. Computational Linguistics, 17(4):409–441, 1991.

[2] Marco Baroni and Alessandro Lenci. Distributional memory: A general framework for corpus-based semantics. Computational Linguistics, 36(4):673–721, December 2010.

[3] N. Kambhatla and I. Zitouni. Systems and methods for automatic semantic role labeling of high morphological text for natural language processing applications. US Patent 8,527,262, September 3, 2013. URL http://www.google.com/patents/US8527262.

[4] Scott Grimm. Semantics of case. Morphology, 21(3-4):515–544, October 2011.

[5] Michael Ramscar, Daniel Yarlett, Melody Dye, Katie Denney, and Kirsten Thorpe. The effects of feature-label-order and their implications for symbolic learning. Cognitive Science, 34:909–957, November 2010.

[6] Hwee T. Ng and John Zelle. Corpus-based approaches to semantic interpretation in natural language processing. AI Magazine, Winter 1997:45–64, 1997.
