
Computation or storage in syntax:

An eye-tracking study of the syntactic processing of raising and control verbs

Amsterdam Center for Language and Communication

University of Amsterdam

Abstract

This study presents an eye-tracking design that can disentangle the processing predictions of two fundamental approaches to syntax: the computational approach (e.g. generative grammar) and the storage approach (e.g. construction grammar). One of the most important differences between these two approaches is that the computational approach assumes algorithmic transformations of a phrase structure (“movement”), whereas the storage approach assumes stored constructions that are not related in derivational terms. Although these two models are usually not regarded as models of language processing, the current study tries to illustrate their potential within this empirical domain. Two syntactic structures were compared in Dutch: sentences with raising verbs (such as lijken “to appear”) and sentences with control verbs (such as proberen “to try”). The computational approach claims that the subject of verbs such as lijken is moved from a later position in the clause to the front and that sentences with raising verbs hence require more syntactic prediction. In contrast, the storage approach claims that there is an extra thematic relation between proberen “to try” and its subject and that sentences with control verbs therefore require more syntactic prediction. These two predictions were evaluated with an eye-tracking paradigm developed by Konopka, Meyer & Forest (2018). Forty-three participants heard sentences with either a raising or a control verb while looking at pictures that correctly or incorrectly depicted the information in the sentence. The number of looks to the agent was determined, and it was found that more linear processing, that is, more word-by-word processing without predicting syntactic structure ahead, was used for sentences with control verbs. This result confirms the prediction of the computational approach, because the structures that require fewer movement operations indeed allow for less planning and syntactic prediction. The study therefore concludes in favor of the computational approach and calls for more investigation of the real-time predictions of theoretical models of syntax.

1. Introduction

One of the most fundamental questions in cognitive science is whether the human mind processes information mainly with algorithmic computations on symbols or mainly with stored associations (for example, in neural networks). This debate is also relevant to theories of sentence structure. Computational approaches to syntax assume that sentences are built online by algorithmic derivations (Chomsky 1995) that operate on stored lexical items. Storage approaches to syntax entail that, apart from words, sentence structure itself is stored (Fillmore 1987; Goldberg 2003) and therefore assign a smaller role to algorithms. The current study presents an experiment that contrasts the processing predictions of these two approaches to sentence formation. The results support the computational approach.

Both approaches to syntax have counterparts in theories of general cognition. The computational approach in cognitive science can be compared with Simon and Newell’s (1975) Physical Symbol System Hypothesis. According to this hypothesis, all intelligent behavior and reasoning can be achieved by combining symbols into symbolic structures and applying transformations to such structures. As a result, even something as complicated as “meaning” can be derived from a computational mechanism that operates on symbols. An example of a task that was claimed to be performed with transformations on symbols is mental rotation (Shepard & Metzler, 1971). Participants were asked to judge whether two pictures of three-dimensional figures showed the same figure rotated in space or two different figures. An example is shown in Figure 1. The results of this study show that the time participants take to make this judgment is linearly correlated with the angle through which the figures must be rotated in order to match. Shepard & Metzler (1971) interpret this linear relation as the application of transformations (“mental rotation”) to the represented figures in the mind (the symbols): participants mentally rotate the symbols to see if they match. The longer they need to rotate the figures, the longer it takes to make the judgment.


Figure 1: Figures rotated in space (Shepard & Metzler, 1971).

The storage approach to cognition has its prime exemplars in neural networks and statistical learning. Both types of models assume that learners store statistical regularities and use this information to process new information. The statistical regularities are stored either as weights on the connections between nodes in a neural network or as co-occurrence probabilities. A well-known example of a neural network is the mine/rock detector by Gorman and Sejnowski (1988). This network analyzes echo sounds to determine whether an underwater object is a mine or a rock. The network analyzes a large amount of data to figure out which combinations of echo sounds point to a mine and which point to a rock. Subsequently, the model uses the associations that it has formed to make a guess about the identity of a new object. In contrast to computational approaches, the knowledge on which the guess is based is not created online but is hardwired in the network in the form of weights between nodes. The network is not creative; it uses the association information that it has stored on the basis of previous experiences.
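
Purely as an illustration of this storage principle, the toy sketch below (a minimal perceptron of my own, not Gorman and Sejnowski’s actual network) shows how such knowledge can consist entirely of stored weights: classifying a new echo is nothing more than taking a weighted sum over the stored associations.

```python
# A minimal sketch of the storage idea (a toy perceptron, not Gorman &
# Sejnowski's actual network): after training, the "knowledge" consists
# entirely of stored weights, and classifying a new echo is just a weighted
# sum over those stored associations.

def train(examples, labels, epochs=100, lr=0.1):
    """examples: lists of echo-energy features; labels: 1 = mine, 0 = rock."""
    w = [0.0] * len(examples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            pred = 1.0 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0.0
            err = y - pred
            # learning = adjusting stored weights, not composing anything online
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def classify(w, b, x):
    return "mine" if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else "rock"

# Toy data: two made-up echo profiles
w, b = train([[0.9, 0.1], [0.2, 0.8]], [1, 0])
print(classify(w, b, [0.85, 0.15]))   # expected: "mine"
```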

The current study addresses the debate between computational and storage approaches in cognitive science by investigating whether syntactic structure is processed by means of a computational mechanism or by means of stored constructions. To answer this question, we developed an eye-tracking experiment that can distinguish between hierarchically incremental and linearly incremental processing of sentences. When participants process a sentence in a hierarchically incremental way, they think ahead about the syntactic structure and the message that they want to convey. When participants process a sentence in a linearly incremental way, they process the sentence word by word and decide after every word what the next word might be, the options being restricted by all previous words. We use this paradigm to evaluate the contrasting predictions that the two approaches make about sentence processing. The computational approach predicts that a sentence that requires a transformation of a phrase from a later position in the sentence to an earlier position requires more thinking ahead and hence will be processed in a more hierarchically incremental way. Such a result would parallel the finding that mental objects that require a more extensive transformation in order to be matched to another object take more time to process. In contrast, the storage approach predicts that a sentence with more (linearly organized) syntactic or semantic relations between words needs more syntactic prediction and therefore more hierarchically incremental processing.

The results show that the structures that are derived with movement transformations indeed require the participants to think ahead. This study therefore concludes in favor of the computational approach.

First, the stance of this paper on the role of syntactic theory in cognition will be given. After that, the two approaches to syntax will be explained in more detail. Subsequently, the linguistic structure that is used in our experiment will be analyzed from both the computational approach and the storage approach. Then, an earlier and canonical operationalization of the core difference between the two approaches to syntax is explained. After that, the two approaches to syntax will be operationalized in a new way, using the linear versus hierarchical incrementality opposition described above. This new operationalization is then applied to our central linguistic constructions, followed by the description of the method of this study. Lastly, the results of this method are reported and discussed and a conclusion connects the findings to the general computational/storage debate in cognitive science.

2. Syntactic theory and Psycholinguistics

Before we gain more in-depth knowledge about the two approaches to syntactic structure, it is important to define how syntactic theory should be interpreted within the cognitive system. Below, it will be explained that many linguists regard syntactic theory as no more than a model of the knowledge that a speaker has about his language. However, it will be argued that it can be a valuable enterprise to interpret syntactic theory as the actual algorithm that executes the function of language in the mind; in other words, as a model of the language process instead of (only) the language outcome.

Originally, syntactic theory was interpreted as a description of the speaker’s knowledge of a language. The models were a description of the speaker’s competence (Chomsky 1965), in contrast to their performance: the actual processes of language production and perception. The models aimed at giving an adequate and efficient description of possible and impossible sentences: they were successful if they could predict with as few means as possible which sentences were possible and which were not. In terms of Marr’s (1982) model of cognition, syntactic theory describes language at the computational level: it only accounts for the knowledge a speaker needs to have for us to claim that s/he knows a language. How this knowledge is used in actual speaking and hearing was left to psycholinguistics, which developed its own models of the processing of this linguistic knowledge.

Many linguists restrict the scope of their models to the linguistic outcome of the cognitive system. Typically, theoretical linguists are uncomfortable discussing how their models relate to the algorithm that would be responsible for processing this outcome. Phillips & Lewis (2013: 2) come to this conclusion:

“In the adult state, the knowledge in question is whatever-it-is that underlies the speaker’s ability to reliably classify sentences as acceptable and unacceptable. But this mission statement leaves us with little guidance on how to interpret the specific components of the theories that we encounter. When we are told, for example, that the wh-word what is initially merged with a verb and subsequently moved to a left peripheral position in the clause, what claim is this making about the human language system? When this is described as part of a ‘computational system’, does this mean that there is a mental system that explicitly follows this sequence of operations? In our experience, this is not a question that grammarians are typically eager to discuss.”

As Phillips & Lewis (2013) sketch, it is unclear how syntactic theory should be interpreted with respect to cognitive processes. Moreover, many psycholinguists are not interested in the models of syntactic theory either and usually develop their own models of syntactic processing. As Marantz (2005) puts it: “However, sometime in the 1970’s it became legitimate for the study of language in psychology and computer science departments not to explore the ongoing discoveries of the generative linguistic tradition.” Consequently, the processing predictions of theoretical syntactic models are rarely evaluated, since theoretical linguists generally disavow such predictions and psycholinguists are often not interested in them.

I believe that this leaves much unused potential: using theoretical models of syntax as psycholinguistic models of actual language production and perception can be a successful enterprise and can teach us much about the process of speaking and listening. I will present three arguments for this stance.

The first argument is that there seems to be no clear reason to consider the theoretical models as different from psycholinguistic models: why would there be a difference? Assuming that the human mind functions as efficiently as possible and, similarly, the theoretical models of language are built to explain the possible sentences of a language in a maximally efficient way, the models should offer a likely hypothesis about how language


of the storage approaches, so why would these models have no real-time value? If they work so well on paper, they may well work in the human mind too. In other words, the models, with their efficient descriptive power, are the most promising hypotheses about processing that we have.

Because we assume that the human mind works efficiently and the models work efficiently as well, they have a good chance of being a truthful representation of the language process in the human mind; however, this is not to say that we no longer need to test them. The human mind does not always work efficiently and can leave things to coincidence. Although there may be more likely (more efficient) models and less likely ones, there are always countless possible algorithms for executing a given function in the mind. For example, vestigial residues of evolution might exist, or our understanding of efficiency might simply differ from what is efficient in the brain. For example, the rod cells in human eyes are directed away from the light, although it would be simpler to direct them toward the light; this is just the result of a coincidental development in evolution. Similarly, it might be the case that a process in the human mind works in an inefficient way because of some residue of evolutionary development. Moreover, our understanding of complexity and efficiency may itself be flawed. For example, adults have the intuition that a complicated case system, such as that of Russian, is complex to learn and to process, but the brain of a young child has few problems with it. In short, an efficient and elegant model is worth considering but is not necessarily a truthful model of the brain. Therefore, the theoretical models outlined above may be the most reasonable models that we have at this moment, but we still need to verify their predictions.

Lewis & Phillips (2013) formulate a different argument for the cognitive reality of theoretical models of language. As a null hypothesis, they assume that the syntactic structures exist somewhere in the brain but are not directly integrated in the process of speaking and listening. Lewis & Phillips (2013) argue that such a “box full of abstract structures” would be reasonable if humans used several distinct procedures for language, that is, if the structures were implementation-independent. For example, if humans had different systems for reading, listening and speaking, it would be efficient to have one central blueprint that all the different processes can access in their own way. Because the abstract structures exist separately from the different processes, these processes can all operate on them with their own procedures. If the abstract structures were integrated directly in the process, they would have to be integrated in all the different processes, which is a rather inefficient organization, as can be seen in Figure 2.


Figure 2: Integration and separation in an implementation-independent situation.

However, Lewis & Phillips (2013) argue that the current psycholinguistic evidence suggests that language is implementation-dependent: there exists only one algorithm in the mind for all kinds of language processing. This algorithm applies to speaking, listening, writing and reading. In this case, it is highly inefficient to separate the abstract structures from the operating system, because there is only one interface: between the process and the structures. It is then more efficient to integrate the abstract structures directly in the process, as can be seen in Figure 3.

Figure 3: Integration and separation in an implementation-dependent situation.

Because language is implementation-dependent, it is more efficient if the abstract structures (the speaker’s “competence”) and the language process are not unnecessarily separated but directly integrated.

Marantz (2005: 438) offers a third argument:


language particular (for well-studied languages) and universal level, about the inventories and distribution of sounds and phonological features, about morphemes and words and about phrases and sentences. Linguistic issues now frequently arise that are difficult to think about and to settle with distributional data and with judgments of well-formedness and meaning; the competing theories involved all account for the numerous known generalizations about these data and do not obviously differ in their predictions about similar data.

Marantz (2005) argues that the traditional data that are usually used to test syntactic theories are running out. As a result, current research often considers sentences that are just on the edge of grammaticality and that are often interpreted as evidence for several competing models. Consequently, additional data are necessary to further support syntactic theory. Because linguistics is part of cognitive science, Marantz (2005: 444) wonders why we should not use methods, such as reaction times and brain imaging, that have proven so useful in other cognitive domains:

The well-developed representational and computational hypotheses of linguistics may be used to learn about how the brain stores and generates symbolic representations (this is of course true about any well-developed and empirically well supported linguistic theory). In return, cognitive neuroscience will help us flesh out our linguistic theories and provide additional rich sources of data to supplement what is cheaply available through standard work with informants.

In other words, the original data has been exhausted and linguistic theory can only survive by embracing recently developed methods that can shed new light on the cognitive reality of the theory.

To conclude, it is both legitimate and potentially fruitful to assess the real-time value of theoretical approaches to syntax for syntactic processing. Fortunately, there are many recent examples of successful experimental studies that try to verify the predictions of syntactic theory (e.g. Stockall & Marantz, 2006 and several studies by the team of Colin Phillips). Many scholars have started to embrace the processing predictions of theoretical constructs (Townsend & Bever, 2001; Marantz, 2005; Phillips & Lewis, 2013; Lewis & Phillips, 2015) and have started to build experiments to test them. The current study aims to contribute to this movement.

3. Theoretical background

3.1 The computational approach to language structure

In the previous section, it was argued that syntactic theory should be interpreted as a theory of real-time language processing. Now, we will discuss the technical assumptions of two approaches in more detail: the computational approach and the storage approach.

The computational approach to sentence formation assumes that all sentence structures are built online and that different structures are formed by applying different transformations (called “movement operations”) to a similar basic structure. The approach is “compositional” because both structure and meaning are composed out of smaller elements, and “derivational” because different structures can be derived from the same basic structure. Its assumptions can be found in theories such as Generative Grammar (Chomsky, 1957; Chomsky, 1965; Chomsky, 1981; Chomsky, 1995; Cinque, 2002), Lexical Functional Grammar (Kaplan & Bresnan, 1982) and Head-driven Phrase Structure Grammar (Pollard & Sag, 1994).

For purposes of illustration, let us see how sentence (1) is constructed in a computational approach.

(1) John loves the girl.

The analysis will be given following the mechanism of Minimalist Syntax (Chomsky, 1995) and glossing over a lot of detail, focusing on the concept of movement operations, since these are crucial in this study. It is worth noting that these transformations work similarly in other computational approaches. A more detailed description of the same phenomenon sketched below in the minimalist framework can be found in Hornstein, Nunes & Grohmann (2005).

Let us now build the sentence in (1). Initially, the stored atomic elements of the sentence are listed: girl, John, love, the, -s, (abstract) T. This list is called the “numeration”. The syntactic structure is built from these atomic elements on the basis of a fixed algorithm. This algorithm consists of the operation “Merge”. Merge combines two terminals, or a terminal and a previously built constituent, into a new constituent; in this way, the terminals are put in place one by one. In our example, shown in (2), the derivation starts with the merging of the two atomic elements the and girl.

(2) [DP the girl]

One of these terminals, in this case the determiner the, becomes the head of the pair (a “constituent”) and “projects” its properties upwards. The constituent then behaves as having the properties of the head. After forming a determiner phrase, the girl is merged with love. In this configuration, the verb becomes the head and the three words form a verb phrase, as shown in (3).

(3) [VP love [DP the girl]]

As can be seen, the verb merges with the object before it merges with the subject. This procedure accounts for the behavior of love the girl as a coherent unit to the exclusion of the subject; for example, you can substitute the verb phrase with the dummy verb do (John loves the girl and Bill does too), but you cannot substitute the subject and the verb to the exclusion of the object (*John loves the girl and does too the boy, meaning that John loves both the girl and the boy).

After this, love the girl is merged with John, providing the verb phrase with a subject, as shown in (4).

(4) [VP John [love [DP the girl]]]

This order of merging operations, first the object and then the subject, holds for every sentence in English. Consequently, it is the order of the merging operations that determines the syntactic function of the atomic elements. The algorithm that controls this order is, therefore, responsible for the sentence structure.

However, the numeration is not yet empty, because it still contains the abstract category T. This head merges with the structure in (4) and provides the verb love with the tense and agreement suffix -s. This operation finalizes the derivation of the sentence in (1).
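
The following toy sketch (my own illustrative representation, not an implementation of Minimalist syntax) makes the stepwise character of this derivation concrete: Merge combines two syntactic objects into a labelled constituent, and the derivation of (1) is simply a fixed sequence of Merge applications.

```python
# A minimal sketch (my own toy representation, not an implementation of
# Minimalist syntax): Merge combines two syntactic objects into a labelled
# constituent, and the derivation of (1) applies Merge step by step.

def leaf(word, label):
    """An atomic element from the numeration."""
    return {"label": label, "word": word}

def merge(a, b, label):
    """Combine two syntactic objects into one constituent; in the theory,
    the head of the pair 'projects' and supplies the label."""
    return {"label": label, "daughters": (a, b)}

# Numeration for "John loves the girl"
the, girl = leaf("the", "D"), leaf("girl", "N")
john, love, T = leaf("John", "D"), leaf("love", "V"), leaf("-s", "T")

dp = merge(the, girl, "DP")    # (2): [DP the girl]
vp = merge(love, dp, "VP")     # (3): the verb merges with the object first
vp = merge(john, vp, "VP")     # (4): then the subject is merged
tp = merge(T, vp, "TP")        # abstract T adds the tense/agreement suffix -s
```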

So far, we have seen that the computational approach assumes an algorithm that merges atomic elements one by one to form a phrase structure. This procedure neatly fits the assumptions of the Physical Symbol System Hypothesis, since both approaches take a sophisticated symbol structure to be central to human cognition. These symbol structures can subsequently be manipulated with transformations, as the Physical Symbol System Hypothesis entails. Such transformations also play an important role in the computational approach to syntactic structure, as so-called “movement operations”: procedures that move atomic elements, or sets of atomic elements, to a new place in the structure. These movement operations make the framework essentially different from other approaches to syntax, because different sentence structures are often merely the result of different movement operations applying to the same basic structure. For example, we have seen that the sentence John loves the girl is formed from the basic structure in (5), which has all words in their original position (before movement):

(5) [T [John [love [the girl]]]]

As we will see, it is possible to derive the related structure who does John love? (with the intended meaning: who is it that John is loving?) starting from the same kind of basic structure, shown in (6):

(6) [T [John [love who]]]

Importantly, the object position of the sentence, previously taken by the girl, is now taken by who. The structure is represented this way because who and the girl both function as the object of love. So, although who occurs at the front of the sentence in the surface order, it originally emerges in the same position as the girl, because it has the same function. In this way, the computational approach can explain why elements in a different surface position can have the same function: both elements originally emerge in the same position.

The who-sentence has an additional abstract category in its numeration: C. As a result, the algorithm yields a different surface structure, although the basic structure is the same, as shown in (6).

The who-sentence is formed as follows: T, the tense information, is spelled out as a word of its own instead of as a conjugation morpheme. Therefore, an auxiliary verb appears in T. Subsequently, the abstract category C is merged with the basic phrase structure, resulting in the structure in (7):

(7) [C [T does [John [love who]]]]

This abstract category C has a certain feature (+WH) that attracts who, which “moves” from its original position to the topmost position, as shown in (8):

(8) [who [C [T does [John [love who]]]]]

Although we refer to this operation as “movement”, it actually involves something more complicated. In fact, the moved element (here, the word who) is merged again on top of the structure. Consequently, it appears twice, namely in its original position and in its new position. Subsequently, the original position is deleted, which makes the word appear to have moved to a new position.
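
The copy-and-delete character of movement can be illustrated with another toy sketch (again a hypothetical representation of my own, here using plain nested tuples): the moved element is re-merged on top of the structure and its original occurrence is marked as silent.

```python
# A hypothetical sketch of "movement" as copy, re-merge and deletion of the
# lower copy, using plain nested tuples for constituents (independent of the
# Merge sketch above).

def move(structure, target):
    """Re-merge `target` on top of `structure` and mark its original
    occurrence as silent (unpronounced)."""
    def silence(node):
        if node == target:
            return ("SILENT", node)
        if isinstance(node, tuple):
            return tuple(silence(daughter) for daughter in node)
        return node
    return (target, silence(structure))

# (7): [C [does [John [love who]]]]
base = ("C", ("does", ("John", ("love", "who"))))
# (8): who re-merges at the top; its original position is left unpronounced
print(move(base, "who"))
# ('who', ('C', ('does', ('John', ('love', ('SILENT', 'who'))))))
```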

Abstracting away from the technical details, the core concept of a computational model is that a sentence such as who does John love? is derivationally related to the sentence John loves the girl. Different types of sentences in a language are the result of different transformations that apply to a similar basic symbol structure. And although who and the girl occur in a different place in the sentence, their similar function in the sentence is explained by their similar original position, before movement operations apply.

Moreover, relating different sentence structures to each other by means of transformations enables the computational approach to explain much of the behavior of sentences: why are some sentences grammatical and others not? For example, the sentence *who did Bill meet the man who likes who? is ungrammatical, because wh-words cannot move out of a relative clause: relative clauses are adjuncts, meaning that they are “extra” and not necessary for the sentence structure. Adjuncts take a special place in the structure, because their position is one step lower than that of other constituents, as can be seen in (9), where the CP of who likes who is not the direct sister of the man but hangs from another D’.

This structural position makes extraction impossible. All these details set aside, it is important to understand that such an explanation is only possible by assuming a relation between a wh-word and its original position. The computational approach uses movement relations to analyze linguistic variation with as much explanatory power as possible.

As a consequence of having transformations in the model, some structures require more, or more complex, transformations than others, making them more complex in derivational terms. This is one of the main predictions that the approach makes, and it will be relevant in the following discussion.

3.2 The storage approach to language structure

The second approach assumes that different sentences are the result of different stored constructions. This approach corresponds to general models that place storage and memory at the center of cognition. Its assumptions can be found in linguistic theories such as Construction Grammar (Fillmore, 1988; Goldberg, 1995; Croft, 2001), Cognitive Grammar (Langacker, 1987) and Functional Grammar (Dik, 1991; Van Valin, 1993; Givón, 2001; Hengeveld & Mackenzie, 2008; Matthiessen & Halliday, 2009).

In the storage approach, grammatical constructions are stored in memory as templates. Structures are thus not composed online; the combination of slots is ready-made. The same holds for the grammatical meaning of structures, which does not arise from a combination of the meanings of the components but is directly associated with the form of the complete construction. An example is the transitive main clause construction, underlying the same sentence discussed in the previous section: John loves the girl. The construction, shown in (10), specifies three positions: one for the subject, one for the inflected verb and one for the object.

(10) Transitive main clause construction. Meaning: X carries out action V that affects Y.
    Noun phrase X (Agent: carries out the action)
    Verb V (expresses an action or change of state; -s for 3rd person singular, -ed for past)
    Noun phrase Y (Patient: is affected by the action)

This construction can be filled in with the lexemes john, love, the and girl to form the sentence John loves the girl, as can be seen in (11):

(11) Transitive main clause construction, filled in. Meaning: X carries out action V that affects Y.
    Noun phrase X: John (Agent: carries out the action)
    Verb V: love + -s (expresses an action or change of state)
    Noun phrase Y: the girl (Patient: is affected by the action)

As can be seen, the constructions directly show the linear surface order of structures and do not assume underlying structure: what you see is what you get. What is often considered a benefit of this approach is that it is simpler and, therefore, more likely to be an actual representation of the human mind. Moreover, because of its emphasis on memory and association, the approach is similar to more domain-general theories of cognition and therefore tries to explain human language capabilities without having to assume a different faculty for language.

If we now want to build the related structure who does John love?, we simply retrieve another construction from memory: the wh-construction for asking about a patient or recipient, shown in (12). A different construction would be used for a sentence that asks about the subject, because such a question has a different form, namely without an auxiliary (who likes soccer?), and hence is a different construction.

(12) Wh-construction: the wh-word asks for a missing verb participant.
    Wh-word
    Auxiliary
    Noun phrase X
    Verb (not inflected)


This construction has a wh-position in the first slot and an auxiliary in the second slot. Importantly, these positions are not connected to other places in the sentence, for example the place where the object is. This construction can then be filled in with the lexemes who, does, John and love, resulting in the structure in (13):

(13) Wh-construction, filled in.
    Wh-word: who
    Auxiliary: do + -es
    Noun phrase X: John
    Verb (not inflected): love

Crucially, the structure of sentences is stored in memory as a whole and not composed online. Different sentence structures are therefore not derivationally related to each other; every structure has its own form-meaning association. Since a structure is stored, it can be counted and thus comes with a frequency, in contrast to the computational approaches, which only store atomic elements rather than complex units.
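
As an illustration of this storage principle (my own toy format, not an existing construction-grammar formalism), the sketch below stores two constructions as ready-made templates and fills their slots directly, without any derivational relation between them.

```python
# Illustrative sketch (my own toy format): constructions stored as ready-made
# templates whose slots are filled directly; no derivational relation holds
# between different constructions.

TRANSITIVE = {
    "name": "transitive main clause",
    "slots": ["AGENT_NP", "VERB+s", "PATIENT_NP"],          # linear surface order
    "meaning": "X carries out action V that affects Y",
}

WH_OBJECT_QUESTION = {
    "name": "wh-question for patient/recipient",
    "slots": ["WH", "AUX", "AGENT_NP", "VERB"],
    "meaning": "which Y is such that X carries out action V on Y?",
}

def instantiate(construction, fillers):
    """Fill the stored template slot by slot; nothing is composed online."""
    return " ".join(fillers[slot] for slot in construction["slots"])

print(instantiate(TRANSITIVE,
                  {"AGENT_NP": "John", "VERB+s": "loves", "PATIENT_NP": "the girl"}))
print(instantiate(WH_OBJECT_QUESTION,
                  {"WH": "who", "AUX": "does", "AGENT_NP": "John", "VERB": "love"}))
```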

4. Raising versus control

In the previous sections, we explained the difference between computational approaches and storage approaches to language with the example of a declarative sentence and a wh-question. Here, we will illustrate the difference between the two approaches with another example, the structure that is central to the following experiment. The example is given in Dutch, since that is the language in which our experiment is designed, but the structure works similarly in English. Consider the example in (14):

(14) a. De hond lijkt de postbode te pakken.

The dog appears the mailman to get

“The dog appears to get the mailman.”

b. De hond probeert de postbode te pakken.

The dog tries the mailman to get

“The dog tries to get the mailman.”


Superficially, the sentences in (14a) and (14b) are very similar. They both have a finite verb (lijkt and probeert, respectively) and an infinitival phrase de postbode te pakken “to get the mailman”.

However, there is also an important difference between the two sentences, which has to do with the relation between the finite verbs and their subject (de hond “the dog”). The dog plays the role of agent in the event of proberen “to try”, and this agent role is absent when it occurs with lijken “to appear”. This can be seen in (15):

(15) a. *De hond is iets aan het lijken.

the dog is something PROG it appear.

“*The dog is appearing something.”

b. De hond is iets aan het proberen.

the dog is something PROG it try

“The dog is trying something”

(15a) is ungrammatical because the verb lijken “to appear” does not denote something that can be carried out, i.e. it is not an action verb. So, although the dog participates in the event of proberen in (14b) and (15b), the verb lijken “to appear” does not require such a relation and simply indicates the predication of a certain characteristic (such as chasing a mailman). This difference in relatedness between the subject and the verb is also reflected in syntactic differences, such as the one in (16):

(16) a. Het lijkt dat de hond de postbode pakt.

It appears that the dog the mailman gets

“It appears that the dog gets the mailman.”

b. *Het probeert dat de hond de postbode pakt.

It tries that the dog the mailman gets.

“*It tries that the dog gets the mailman.”

The verb lijken “to appear” can have an expletive (het “it”) as its subject, but the verb proberen “to try” cannot. The reason for this is also the absence of an agent role with lijken “to appear”: an expletive cannot carry out an agent function, and the verb proberen “to try” must have an agentive subject.

Verbs such as proberen “to try” are called control verbs, whereas verbs such as lijken “to appear” are called raising verbs. The computational and the storage approaches have different explanations for the difference between verbs such as to try and to appear (Davies & Dubinsky, 2004). According to the computational approach, the sentence in (14a) is the result of a transformation (“movement”) of de hond “the dog” from the subject position of the infinitival phrase te pakken “to get” to the subject position of the matrix verb, indicated in (17):

(17) De hond lijkt de hond de postbode te pakken.

The dog appears the dog the mailman to get

“The dog appears the dog to get the mailman.”

De hond “the dog” does not get its agent role from the verb lijken “to appear”, because lijken cannot assign such a role. However, de hond “the dog” does get an agent role from the verb pakken “to get”. Because the agent role must be assigned by pakken “to get” to an adjacent position, de hond “the dog” originally emerges within the phrase of pakken “to get”. Subsequently, it moves from its original position to the subject position of the clause, because Dutch requires a subject. Alternatively, the expletive het “it” could be used, as in (16a), to satisfy the same criterion and to leave de hond “the dog” in place.

Because control verbs such as proberen “to try” do assign an agent role, the computational approach analyzes these structures in a different manner. The subject of proberen “to try” does receive an agent role and is therefore generated in the subject position of proberen; it does not need to move: its original position is identical to its surface position.

However, with this explanation it remains unclear why pakken “to get” is also interpreted as having de hond “the dog” as its agent. According to some computational approaches, this is because there is an empty subject before to get (called “PRO”), which is controlled by (gets the same reference as) the subject of proberen “to try”, as shown in (18):

(18) De hond_i probeert de postbode PRO_i te pakken.

The dog_i tries the mailman PRO_i to get

Summarizing, in sentences with verbs such as proberen “to try”, the subject does not move from the end to the beginning of the sentence, as it does for verbs such as lijken “to appear”; instead, it transfers its properties from the beginning to the end, giving the infinitival verb the same agent as the finite verb. This analysis explains the names of the two verb classes: control verbs have a controlling subject, whereas raising verbs have a subject that “raises” from the bottom of the tree to a higher position.

Storage approaches also assume that the sentences in (14a) and (14b) differ from each other, but for a different reason. Storage approaches do not have transformations, so there is no difference in terms of movement. According to Fillmore (2013), both sentences are the result of the same construction. The slot for the agent of the verb to get is null-instantiated and gets the same interpretation as the overt subject of the matrix verb, as can be seen in (19):

(19) Null-subject construction
    Subject
    Finite verb
    Null-subject (same as the subject)
    Object
    Infinitival verb

The storage approach explains the difference between raising and control verbs with a difference in the assigned agent role: “The traditional distinctions between Equi (control, JW) and Raising are recognized in terms of the presence or absence of semantic roles in the higher valent” (Fillmore, 2013). In other words, although (14a) and (14b) are the result of the same underlying construction, the control verb in (14b) has one more thematic role to assign, namely the agentive role to the subject. This agentive role is simply lacking in structures with raising verbs such as lijken “to appear”. Consequently, control verbs bring a more elaborate structure, because they assign an extra thematic role. It is not a problem in the storage approach that de hond “the dog” does not receive any semantic role from lijken “to appear”; it can simply remain role-less. This stance is surprising from a generative perspective, because noun phrases always need to receive a role in that framework.
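
To make this contrast concrete, the sketch below (a hypothetical rendering of my own, only loosely based on Fillmore’s description) represents the two verb types as stored entries that share the construction in (19) but differ in whether the subject slot is assigned an agent role.

```python
# Hypothetical sketch (my own notation, loosely based on Fillmore's
# description): both verbs use the same null-subject construction, but only
# the control verb assigns an agent role to its subject slot.

NULL_SUBJECT_CONSTRUCTION = ["SUBJECT", "FINITE_VERB", "OBJECT",
                             "NULL_SUBJECT", "INFINITIVE"]

VERBS = {
    "lijken":   {"construction": NULL_SUBJECT_CONSTRUCTION,
                 "subject_role": None},      # raising: subject stays role-less
    "proberen": {"construction": NULL_SUBJECT_CONSTRUCTION,
                 "subject_role": "AGENT"},   # control: extra thematic relation
}

def thematic_relations(verb):
    """Return the thematic relations the finite verb itself establishes."""
    role = VERBS[verb]["subject_role"]
    return [("SUBJECT", role)] if role else []

print(thematic_relations("proberen"))  # [('SUBJECT', 'AGENT')]
print(thematic_relations("lijken"))    # []
```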

The examples outlined in this section will play a central role in our experimental design. According to the computational approach, the raising verbs come with a movement operation that is absent with the control verbs; therefore, the raising verbs introduce more complexity. In contrast, the storage approach claims that the control verbs have an extra thematic role with respect to the raising verbs; therefore, the control verbs introduce more complexity. This pair of constructions therefore enables us to create an experimental setting that can disentangle the predictions of both approaches. These predictions will be explained in more detail below, but first we turn to an earlier attempt to test the processing predictions that a model with transformations (movement) makes.

6. Derivational Theory of Complexity

This study tries to collect behavioral evidence for the real-time status of movement operations or of linguistic constructions. In the 1960s, a similar attempt was undertaken (for a summary, see: Townsend & Bever 2001, 11-45; Pfau 2011, 45-48). In their canonical study, Miller & McKean (1964) compared active and passive sentences, as exemplified in (20), to measure behavioral effects of movement operations.

(20) a. The robot shoots the ghost.

b. The ghost is shot by the robot.

In the theoretical framework of that time, passive sentences were claimed to be derived from active sentences by means of movement operations. Miller & McKean (1964) deduced predictions about syntactic processing from the computational approach: a sentence that is formed with more movement operations should be more complex than a sentence that is formed with fewer movement operations and hence be more difficult to process. This prediction was named the Derivational Theory of Complexity (DTC). Applied to the example in (20), the DTC predicts that passives take longer to process than actives, because the derivation of the passive requires an extra operation in comparison to the derivation of the active: one needs to move the object (of the active sentence) to subject position for passives. This prediction was verified by the results of Miller & McKean (1964). However, Slobin (1966) and Fodor & Garrett (1967) later criticized these results, which led linguists and psycholinguists to abandon the DTC.

At the time of their study, the storage approaches had not yet been developed, so it was not necessary for Miller & McKean (1964) to find a result that falsifies the alternative approach. However, as far as we can see, the storage approach makes predictions similar to those of the DTC regarding the active and passive constructions, albeit for different reasons. In the storage approach, constructions are more difficult to process if they are less frequent. Since constructions are stored, retrieving them is easier when they are highly frequent. The passive construction is less frequent than the active construction and is therefore predicted to be processed more slowly. Consequently, the results of Miller & McKean (1964) can be explained by both approaches: the computational approach explains the delay in processing passive sentences by their higher derivational complexity, the storage approaches by their lower frequency.

Unfortunately, this problem of predictions going in the same direction, although for different reasons, is not restricted to passives and actives alone but is likely to occur for many structural pairs (Wolterbeek, 2018). This is due to the linguistic principles of iconicity and economy (Haspelmath, 2008): it is economical to use derivationally simple structures for frequent expressions, because this saves the energy that is needed to perform the derivations. Moreover, a structure with a complex form usually has a complex meaning and, as a result, is less frequent. Therefore, the computational and storage approaches will often make the same predictions.

Wolterbeek (2018) carried out an experiment with very similar aims, using a psycholinguistic method to test the predictions of a derivational system (i.e. movement operations) in argument structure (Müller & Wechsler, 2014). He investigated a pair of structures that makes it possible to distinguish between derivational complexity and frequency: the Dutch present participle and the adjective derived from it. The present participle is derivationally simplex and therefore predicted to be easier to process by a computational approach. However, it is less frequent than the adjective, and the storage approach thus predicts it to be more difficult to process. Thirty participants did a self-paced reading task that measured the processing time of the present participles and the adjectives. The results show that the adjectives were processed more slowly than the present participles, although they are more frequent. This finding confirms the DTC and therefore supports the computational approach to the processing of argument structure.

Wolterbeek (2018) used processing speed to operationalize the complexity predictions that movement operations bring. Indeed, if some information is processed more slowly than other information, it seems reasonable to conclude that there is more of it, or that it is more complex; however, the mind can often process information in parallel (Townsend, 1990). Consequently, it might be the case that a structure is processed relatively quickly although it is more complex, simply because its parts are processed in parallel. A difference in processing speed, if found, could therefore be the result of serial versus parallel processing instead of a difference in structural complexity. Another disadvantage of processing speed is that it does not allow for a measurement of what information is being processed at what time.

A method that does allow for a measurement of what is processed is the visual world paradigm using eye-tracking (Huettig, Rommers & Meyer, 2011). In this paradigm, the eye movements and fixations of participants are measured while they process visual information on the screen. The basic assumption of this method is that people look at the information that they are processing (Richardson & Spivey, 2007). For a linguistic application of the paradigm, this entails that participants look at elements of a visual stimulus while they are processing the linguistic information relevant for that element. For example, Griffin (2001) has shown that eye movements towards a picture that participants must describe are indicative of the lexical activation of the words used to describe the parts of the picture. Concretely, during the process of retrieving the lexeme “tree”, participants also look at the tree on the screen. Eye-tracking can therefore not only indicate complexity but may also indicate what is being processed at what time. Eye-tracking thus appears to be a suitable method to compare the processing complexity of different structures and, therefore, to assess the predictions of the computational and storage approaches to language.

7. Linear and hierarchical incrementality

Kuchinsky (2009), Konopka & Meyer (2014), Konopka & Kuchinsky (2015) and Konopka, Meyer & Forest (2018) developed an eye-tracking design that makes it possible to distinguish between two types of linguistic processing: linear incrementality (Gleitman, January, Nappa & Trueswell, 2007) and hierarchical incrementality (Bock, Irwin, Davidson & Levelt, 2003). Konopka & Meyer (2014) show that these two strategies are two ends of a sliding scale and that people tend to use both strategies under different conditions. This method is highly suitable for testing the predictions of the computational and storage approaches to language and will therefore be outlined here.

In the experiments cited above, participants sat in front of a computer screen and were presented with drawings of transitive events, such as the one shown in Figure 4. They were asked to describe the picture, mentioning all the participants depicted in it. The eye fixations of the participants were measured during speech production. During this task, the two different strategies can be distinguished by the eye movements.


Figure 4: The cowboy catches the cow.

The two processing strategies can be described as follows. When processing with linear incrementality, participants start lexically encoding the first word of their sentence, usually corresponding to the agent in the picture, before they have finished the complete message and syntactic structure. They then build the rest of the message and structure as they go while speaking; linear incrementality therefore requires relatively little planning and thinking ahead. In terms of eye movements, linear incrementality is characterized by a quick fixation on one of the characters in the picture, usually the agent. During this fixation, the lexeme for the first word of the sentence is retrieved, before the rest of the structure is considered.

When processing in a hierarchically incremental way, participants first complete the message-level encoding and subsequently predict the syntactic structure. Only after finalizing these two levels are the lexemes encoded to fit the structure.

The hierarchically incremental processing strategy is characterized by a longer period of switching fixations between the two objects on the picture. It is assumed that during this period, the message level is encoded and the syntactic structure is built, before the agent lexeme is retrieved.

The paradigm outlined above has resulted in several insights about when speakers tend to process sentences more hierarchically. Konopka & Meyer (2014) show that participants exhibit more hierarchical incrementality when they are structurally primed, that is, when there is a need to adhere to a specific syntactic structure. When speakers are adhering to a certain syntactic structure, they need to think ahead about the rest of the sentence before speaking. For example, if a speaker is, subconsciously, adhering to the passive sentence structure, he needs to direct his attention to the patient initially, although his attention is drawn to the agent by default. Speakers also differ in how much hierarchical incrementality they use depending on the circumstances: for example, when they speak in a second language, they use more hierarchical incrementality, because they need to think ahead more. And participants use less hierarchical incrementality at the beginning of the test, when they are less used to the task and it is still more cognitively demanding. Summarizing, hierarchical incrementality is an indicator of general complexity and of the need to predict syntactic structure.

8. Current predictions

The method of Kuchinsky (2009), Konopka & Meyer (2014), Konopka & Kuchinsky (2015) and Konopka, Meyer & Forest (2018) is a suitable way to assess the structural complexity of sentences.

Let us now return to the difference between control and raising structures; the example in (14) is repeated here as (21):

(21) a. De hond lijkt de postbode te pakken.

The dog appears the mailman to get

“The dog appears to get the mailman.”

b. De hond probeert de postbode te pakken.

The dog tries the mailman to get

“The dog tries to get the mailman.”

(21a) illustrates a raising structure, whereas (21b) shows a control structure. As explained in section 4, the computational and storage approaches make different predictions regarding the structural depth of these two structures, summarized in Table 1:

                      Computational approach               Storage approach
Control structure     No movement:                         Extra thematic role of subject:
                      linearly incremental processing      hierarchically incremental processing
Raising structure     Movement:                            No thematic role of subject:
                      hierarchically incremental           linearly incremental processing
                      processing

Table 1: Predictions of the computational and storage approaches for control and raising structures

The computational approach predicts that the subject of raising structures needs to be extracted from the verb phrase of the embedded clause. Therefore, participants would need more hierarchical incrementality for raising structures, because they must build the structure of the embedded clause before they can utter the subject of the main clause. Moreover, because the structure of raising sentences is more complex overall, participants should rely on the safer hierarchical strategy. Control sentences do not depend on a later sentence position and therefore allow for more linear processing.

In contrast, the storage approach predicts that the subject of control structures needs to receive an extra thematic role from the finite verb and that these structures therefore require more hierarchical processing. It should take more time to establish this syntactic relation before participants can continue with the lexeme encoding for control sentences. This look-ahead is not necessary for the raising structures, which simply lack the thematic role.

As outlined above, we can use the method of Konopka & Meyer (2014) to assess the structural complexity of raising and control structures in a way that disentangles the predictions of the computational and storage approaches. This approach adds to the findings of Wolterbeek (2018) by testing the contrasting predictions of the two approaches to sentence structure with a more advanced behavioral method. Moreover, it suffers less from the shortcomings of the DTC, because it operationalizes complexity not just as processing time but also makes predictions about what is being processed: the subject lexeme (indicated by a focus on the agent) or the structure (indicated by switching between the agent and the patient).

9. Method

As outlined above, the current study uses the method of Kuchinsky (2009), Konopka & Meyer (2014), Konopka & Kuchinsky (2015) and Konopka, Meyer & Forest (2018) to disentangle the predictions of the computational and storage approaches to syntax. The same pictures that were used in the experiment of Konopka, Meyer & Forest (2018) were used in the current experiment. However, some adjustments to the design, explained below, were necessary to fit the needs of the syntactic structures. The task was programmed in E-Prime (Psychology Software Tools, 2012) and executed with a Tobii Pro X3-120 eye-tracker.

9.1 Production versus perception

The experiments cited above used a production task. However, raising and control verbs are unlikely to be produced spontaneously when participants describe the pictures. A possible solution to this problem would be to prime participants with other raising and control sentences for some pictures and then see whether they produce raising and control sentences for the following pictures. However, the results of Konopka & Meyer (2014) show that structural priming makes participants process a picture in a more hierarchically incremental way. Consequently, the priming solution interacts with the main outcome variable.

We therefore chose to change the design from a production task to a perception task. We made participants process spoken linguistic stimuli (i.e. raising and control sentences) while they saw comparable information in the pictures. The participants were then asked to judge whether the sentence that they heard correctly described the picture that they were seeing. Consequently, the processing of the picture would be influenced by the linguistic stimuli, because participants need to integrate the two types of information. We could then measure whether the fixations of participants on the screen differed when they heard raising sentences compared to when they heard control sentences.

9.2 Word order and cover task

However, there is a problem with the word order of the sentences. Consider the sentence in (22):

(22) De cowboy probeert de koe te vangen.

The cowboy tries the cow to catch.

“The cowboy tries to catch the cow.”

An effect in the eye-fixations would be expected at the moment of the finite verb, because the verb is associated with the syntactic structure under investigation. Therefore, we are interested in the eye gazes of participants towards the agent (the indicator of linear versus hierarchical incrementality) at this point. However, when participants hear the finite verb, the agent has already been mentioned earlier in the sentence, so why would people still fixate on it?

This problem was solved by changing the declaratives into yes/no questions, such as the one in (23):

(23) Probeert de cowboy de koe te vangen?

Tries the cowboy the cow to catch

“Does the cowboy try to catch the cow?”

The yes/no-question order in Dutch has the finite verb in first position. This order enabled us to measure an effect during the pause between the verb of interest (the raising or control verb) and the rest of the sentence. This has two advantages: we could measure the proportion of looks to the agent and the patient when the words for those entities had not been mentioned yet, and, because only the verb of interest had been heard at this point, there was no contamination from other words.
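
As an illustration of the intended measure (with a hypothetical data format and window boundaries, not the actual analysis script of this study), the sketch below computes the proportion of fixation samples on the agent region during this pause.

```python
# Hypothetical sketch of the intended measure (not the study's analysis code):
# the proportion of fixation samples on the agent region during the pause
# between the finite verb and the rest of the sentence.

def proportion_agent_looks(samples, window_start, window_end):
    """samples: list of (timestamp_ms, region) tuples, where region is
    'agent', 'patient' or 'elsewhere'; the window bounds are relative to
    sentence onset and assumed to cover the post-verbal pause."""
    in_window = [region for t, region in samples if window_start <= t < window_end]
    if not in_window:
        return None
    return sum(region == "agent" for region in in_window) / len(in_window)

# Example trial: mostly agent looks right after the raising/control verb
trial = [(480, "agent"), (500, "agent"), (520, "patient"), (540, "agent")]
print(proportion_agent_looks(trial, window_start=450, window_end=600))  # 0.75
```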

The question form of the sentences has another advantage. In the previous section, it was explained that participants were asked to judge whether the information in the picture matched the information that they heard in the sentence. This task makes the question form very natural: participants simply answered the questions with either “yes” or “no”. For example, in (23), if the cowboy is indeed catching the cow, participants responded with yes, and if he is releasing the cow instead of catching it, participants would say no.

A potential disadvantage of the question word order is that the subject of raising sentences does not seem to “raise” anymore; it remains in its original position, as can be seen in (24).

(24) Lijkt de cowboy een koe te vangen?

seems the cowboy a cow to catch

“Does the cowboy seem to catch a cow?”

However, in the question form of raising sentences, there is still movement of the subject to an earlier position. (25) shows the basic structure before the movement operations apply: lijkt “appears” stands in V and the subject de cowboy “the cowboy” stands in the projection of V.

(25) [CP C [TP T [VP de cowboy [lijkt [een koe te vangen]]]]]

In contrast, (26) shows the same structure after the movement operations have applied: lijkt “appears” has moved to C and the subject to the projection of T. Consequently, there is still “raising” of the subject to a higher position, although this is not visible in the linear order.

(26) [CP lijkt [TP de cowboy [VP de cowboy [lijkt [een koe te vangen]]]]]


The prediction of the computational approach that raising sentences need more syntactic planning therefore still stands: a position that is hierarchically further away needs to be created in order to extract the subject from its original position.

In order to have a task that requires the participants to maintain their attention, it is of course necessary that participants also encounter non-matching sentences. Therefore, half of the sentences did not match the pictures. In all these cases, the mismatch was located in the content verb (which, in our sentence structures, is in final position), as is the case in (27):

(27) Probeert de cowboy de koe vrij te laten?

Tries the cowboy the cow free to release?

“Does the cowboy try to release the cow?”

Consequently, the clash in information between the sentence and the picture was always as far away from the verb of interest (the control or raising verb) as possible. Moreover, because the information clash was always in the content verb, participants' attention was not drawn further towards one of the verb participants (agent or patient): they would soon find out that the verb participants always matched the pictures (the sentence never described, for example, the cowboy catching a pig instead of a cow).

9.3 Materials and counterbalancing factors

The test contained 25 sentences with a control verb and 25 sentences with a raising verb. In addition, there were 25 filler sentences with the auxiliary zijn “to be” or hebben “to have”, such as the sentence in (28):

(28) Heeft de cowboy de koe gevangen?

Has the cowboy the cow caught?

“Has the cowboy caught the cow?”

The purpose of these fillers was to have more types of verbs in the first position, so that it would be less obvious to participants that the task was about raising and control verbs. Table 2 shows the verbs that were used in the experiment and their frequency.

Control verbs                     Raising verbs                   Filler verbs
Proberen “to try”          10     Lijken “to appear”       10     Hebben “to have”   12
Helpen “to help”            2     Blijken “to turn out”     6     Zijn “to be”       13
Beginnen “to begin”         6     Schijnen “to seem”        9
Betreuren “to regret”       1
Hopen “to hope”             2
Verplichten “to oblige”     1
Dwingen “to force”          2
Bevelen “to order”          1

Table 2: verbs in the test and their frequency

In the previous experiments that used the same pictures, it was shown how strongly responses were influenced by the factor codability (Kuchinsky, 2009; Konopka & Meyer, 2014; Konopka & Kuchinsky, 2015; Konopka, Meyer & Forest, 2018). Codability is a measure of how consistently participants choose a lexeme to describe an element of the picture: the agent, the patient or the event. For example, some pictures have a highly codable event that almost everyone describes with the same lexeme, such as vangen “to catch”. In contrast, there are agents with low codability, such as punk “punk”, which is also described as man “man” or jongen “boy”. Every picture comes with three codability scores: one for the agent, one for the patient and one for the event. The measure is calculated as Shannon's entropy, as explained in Kuchinsky (2009). The scores of Konopka, Meyer & Forest (2018), aggregated from the production data of that experiment, were sent to us. Subsequently, we divided the 75 pictures over the three conditions (raising, control, filler) in such a way that the three score averages were as equal as possible across the conditions.
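As an illustration, the codability score for one picture element can be computed from a list of naming responses. The sketch below (in R, with made-up responses; not the original scoring script) shows the Shannon-entropy calculation, where 0 corresponds to perfectly consistent naming and higher values to lower codability:

```r
# Shannon-entropy codability for one picture element (agent, patient or event),
# given the names that different speakers produced for it.
codability_entropy <- function(responses) {
  p <- table(responses) / length(responses)  # relative frequency of each name
  -sum(p * log2(p))                          # entropy in bits; 0 = fully consistent
}

# A highly codable event: everyone says "vangen".
codability_entropy(c("vangen", "vangen", "vangen", "vangen"))  # 0

# A less codable agent: "punk" is also described as "man" or "jongen".
codability_entropy(c("punk", "punk", "man", "jongen"))         # 1.5
```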

Moreover, some other factors were counterbalanced among the three conditions. Both the control and the raising condition had 13 items with the agent on the right and 12 items with the agent on the left; the fillers had 13 items with the agent on the left and 12 with the agent on the right. Accordingly, both the control and the raising condition had 13 matching and 12 non-matching picture-sentence pairs, and the filler condition had 13 non-matching and 12 matching pairs. The order of the items was randomized for each participant, so every item occurred in many different positions within the item order. Lastly, for every item it was recorded whether an instrument was involved (for example, a man with a fishing rod) and what the size of the agent was in the picture. Both factors could thus be controlled for in the statistical analysis.
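One simple way to approximate the codability-balanced split described above, and to check that the condition averages came out as intended, is sketched below in R. The round-robin assignment over a sorted list, the toy scores and the column names are our own illustration, not necessarily the exact procedure that was used:

```r
# Toy stand-in for the 75 pictures with their three codability scores
# (the real scores were aggregated from Konopka, Meyer & Forest, 2018).
set.seed(1)
pictures <- data.frame(
  id          = 1:75,
  agent_cod   = runif(75, 0, 2.6),
  event_cod   = runif(75, 0, 3.1),
  patient_cod = runif(75, 0, 2.4)
)

# Sort by summed codability and deal the pictures round-robin over the conditions,
# so that the condition means stay close to each other.
pictures <- pictures[order(pictures$agent_cod + pictures$event_cod + pictures$patient_cod), ]
pictures$condition <- rep(c("raising", "control", "filler"), length.out = 75)

# Check that the three codability averages are comparable across conditions.
aggregate(cbind(agent_cod, event_cod, patient_cod) ~ condition, data = pictures, FUN = mean)
```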

9.4 Participants and procedure

Forty-three native speakers of Dutch participated in this study. All participants were students and were below 40 years old. Three participants did not complete the test because of technical problems. No students of Linguistics participated in this study. All participants were paid 5 euros.

Participants read a brochure that informed them about the experiment. They were told that the experiment was about the processing of visual information in combination with language. They were instructed to look at the pictures, to listen to the sentences and to judge whether the sentences matched the pictures. By pressing a green or red button, they could give their judgement (red = non-matching; green = matching). The participants were given an example of an item with a matching and a non-matching sentence, and were told not to think too deeply about the items but to choose intuitively. After the calibration procedure of the Tobii eye-tracker, all items were presented in a single run, without breaks. After the test, participants were asked to guess what the experiment was about. Finally, they were told the purpose of the study.

9.5 Combination of auditory and visual stimuli

All sentences were recorded. The verb of interest, the first word of the sentence, was recorded separately from the rest of the sentence. Every item was built up as follows: the participant sees a cross at the center of the screen. Once they fixate on this cross at the center of the screen, the item starts. First, they hear the verb of interest as the first word of the sentence. While the audio for the verb is playing, the cross remains visible on screen and the picture has not yet appeared. After the verb is completed, a pause of 1200 milliseconds occurs in the sentence, during which the picture appears on the screen and the eye movements of the participant are recorded. After 1200 milliseconds, the sentence continues and participants decide whether to press the green button (yes, correct) or the red button (no, incorrect). A summary of the chronological build-up of an item can be found in Table 3.

         Phase 1             Phase 2 (moment of measurement)    Phase 3
Audio    Verb of interest    1200 ms pause                      Rest of the sentence
Screen   Cross               Picture                            Picture

Table 3: the chronological build-up of an item; Phase 2 is the moment of measurement.

9.6 Analysis

To measure the looks at the agent, a rectangular area of interest was constructed around the agent in each picture. Looks inside this area were counted as looks to the agent and looks outside it as looks to something else. These areas of interest always contained the face of the agent but excluded the hands and any instrument, since these usually carry out the event and are therefore associated more with the event than with the agent.
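A minimal sketch of this classification step is given below, assuming gaze samples with x/y screen coordinates and one rectangular agent area of interest per picture; the coordinates and names are hypothetical:

```r
# Returns TRUE for gaze samples that fall inside the rectangular agent AOI.
in_agent_aoi <- function(x, y, aoi) {
  x >= aoi$x_min & x <= aoi$x_max & y >= aoi$y_min & y <= aoi$y_max
}

# Example AOI around an agent's face and two gaze samples (pixel coordinates).
aoi <- list(x_min = 120, x_max = 340, y_min = 200, y_max = 460)
in_agent_aoi(x = c(150, 600), y = c(250, 300), aoi)  # TRUE FALSE
```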

The location of the eyes was measured between picture onset (directly after the verb of interest) and the end of the 1200-millisecond audio pause. Within this phase, the proportion of looks towards the agent was computed in four time windows. Recall that more looks to the agent in this early processing stage are associated with linear incrementality, while fewer looks to the agent are associated with hierarchical incrementality. The time windows that we measured correspond to the windows in which earlier studies found an effect, namely 0-400ms (effect reported in Konopka & Meyer, 2014), 400-600ms (Konopka, Meyer & Forest, 2018), 400-1000ms (Konopka & Meyer, 2014) and 1000-1200ms (Konopka & Meyer, 2014).
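The dependent measure can then be computed per time window roughly as follows; the toy data frame and column names are our own, assuming one row per gaze sample with the time since picture onset and the AOI classification from the sketch above:

```r
# Toy gaze samples for one trial: time since picture onset (ms) and whether the
# sample fell inside the agent AOI.
samples <- data.frame(
  time_ms  = seq(50, 1150, by = 100),
  on_agent = c(TRUE, TRUE, FALSE, TRUE, FALSE, FALSE,
               TRUE, FALSE, FALSE, FALSE, TRUE, TRUE)
)

# Proportion of samples on the agent in each of the four analysis windows.
windows <- list(c(0, 400), c(400, 600), c(400, 1000), c(1000, 1200))
prop_per_window <- sapply(windows, function(w) {
  in_w <- samples$time_ms >= w[1] & samples$time_ms < w[2]
  mean(samples$on_agent[in_w])
})
names(prop_per_window) <- c("0-400", "400-600", "400-1000", "1000-1200")
prop_per_window
```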

The results were analyzed with a linear mixed-effects model from the lme4 package (Bates, Maechler, Bolker & Walker 2015) using R (R Core Team 2016). Initially, all recorded factors were included in this model, because there was a priori interest in their contribution to the effect (Gelman & Hill 2007). If the model did not converge, the random slopes were removed first; if it still did not converge, predictors were removed one by one, starting with the predictor that made the smallest contribution to the effect.

(33)

Because we analyzed the proportion of looks towards the agent in four different time windows, the chance of finding a significant effect by chance alone was four times higher (Simmons, 2011). Therefore, we divided the standard α-level of 0.05 by 4, resulting in an α-level of 0.0125.

10. Results

The optimizer was allowed a maximum of 100,000 function evaluations to maximize the model's chances of converging. Unfortunately, the model did not converge with any random slopes, so these were left out. What remained was the most complex model that would converge, which still included all predictors.

This model had the proportion of looks to the area of interest of the agent (range: 0.00-1.00) as the outcome variable. The predictor of interest was condition (raising, control or filler). Counterbalancing predictors (all within-subjects and between-items) were: the location of the agent (left or right), the presence of an instrument (present or absent), the size of the area of interest of the agent (numeric, range after rescaling: 0.17-10.3), agent codability (numeric, range: 0-2.58), event codability (numeric, range: 0-3.13) and patient codability (numeric, range: 0-2.42). The last predictor was order, indicating the position of an item in the test (within-subjects and within-items). In addition, the model had a random factor for participant. Picture was a random factor nested within the random factor finite verb (one of the verbs in Table 2), since each verb occurred with several pictures but every picture occurred with only one verb.
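A sketch of this model specification in lme4 is given below. The data frame `fixations` and its column names are hypothetical, and this is a reconstruction of the structure described above rather than the original analysis script; the commented-out formula indicates the by-participant random slope for condition that did not converge:

```r
library(lme4)

# Full random-effects structure that was attempted first (did not converge here):
# prop_agent_looks ~ ... + (1 + condition | participant) + (1 | verb/picture)

# Simplified model with random intercepts only, fitted once per time window.
m <- lmer(
  prop_agent_looks ~ condition + agent_side + instrument + agent_size_rescaled +
    agent_codability + event_codability + patient_codability + trial_order +
    (1 | participant) + (1 | verb/picture),   # picture nested within finite verb
  data    = fixations,
  control = lmerControl(optimizer = "bobyqa", optCtrl = list(maxfun = 1e5))
)

summary(m)                    # fixed-effect estimates, standard errors, t-values
confint(m, method = "Wald")   # 95% confidence intervals
```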

The values of the predictor size initially had a much larger range than the other numeric predictors. Therefore, this predictor had to be rescaled by dividing all values by 10,000. As a result, the effect sizes of most predictors change in proportion, making their absolute size difficult to interpret; their direction, however, can still be interpreted.

The model described above was fitted separately for the four time windows: 0-400 milliseconds, 400-600 milliseconds, 400-1000 milliseconds and 1000-1200 milliseconds.

The model revealed a significant effect of the predictor of interest, condition (raising to control), in all time windows. In the time window of 0-400 milliseconds, there were more looks to the agent for sentences with a control verb than for sentences with a raising verb (β = 36.39, t = 12.45, p < 2.2 × 10⁻¹⁶, 95% confidence interval: 30.54…42.24). In the time window of 400-600 milliseconds, there were more looks to the agent for sentences with a control verb than for sentences with a raising verb (β = 35.65, t = 11.36, p < 2.2 × 10⁻¹⁶, 95% confidence interval: 29.38…41.92). In the time window of 400-1000 milliseconds, there were more looks to the agent for sentences with a control verb than for sentences with a raising verb (β = 35.34, t = 11.87, p < 2.2 × 10⁻¹⁶, 95% confidence interval: 29.35…41.33). In the time window of 1000-1200 milliseconds, there were more looks to the agent for sentences with a control verb than for sentences with a raising verb (β = 30.30, t = 8.026, p < 2.2 × 10⁻¹⁶, 95% confidence interval: 22.75…37.85). These results are summarized in Table 4.

Raising to Control per time window   Effect (β)   Standard Error   t       p from 0         95% C.I.
0-400 ms                             36.39        2.927            12.45   < 2.2 × 10⁻¹⁶    30.54…42.24
400-600 ms                           35.65        3.137            11.36   < 2.2 × 10⁻¹⁶    29.38…41.92
400-1000 ms                          35.34        2.997            11.87   < 2.2 × 10⁻¹⁶    29.35…41.33
1000-1200 ms                         30.30        3.775            8.026   < 2.2 × 10⁻¹⁶    22.75…37.85

Table 4: Model results for the factor raising versus control in the four time windows.

Generalizing to the population level, it can be said that people look more towards the agent when processing a picture in combination with a sentence that has a control structure than when processing a sentence that has a raising structure. This result is in line with the prediction of computational approaches to language and falsifies the prediction of storage approaches to language. More looks to the agent for sentences with control verbs indicate that these sentences are processed in a more linearly incremental way, whereas sentences with raising verbs are processed in a more hierarchically incremental way. Interestingly, the fillers exhibited a lower proportion of looks towards the agent than the raising and control sentences together (0-400ms: β = -37.08, t = -13.36; 400-600ms: β = -36.24, t = -12.15; 400-1000ms: β = -35.71, t = -12.65; 1000-1200ms: β = -30.36, t = -8.39). This result will be discussed in more detail in the discussion section.

Table 5 summarizes the mean and standard deviation of the proportion of looks to the agent in raising and control sentences. As can be seen, the difference in these raw means lies in the opposite direction from the difference that the model provides. In the raw means, the raising sentences show more looks towards the agent, whereas the model indicates more looks to the agent with control sentences. This discrepancy has two potential causes. It is possible
