
Compatibility Effects Evoked by Pictures of Graspable Objects

by

Maria H.J. van Noordenne

B.Sc., Psychology, University of Victoria, 2016

A Thesis Submitted in Partial Fulfillment of the
Requirements for the Degree of

MASTER OF SCIENCE

in the Department of Psychology

© Maria H.J. van Noordenne, 2017
University of Victoria

All rights reserved. This thesis may not be reproduced in whole or in part, by photocopy or other means, without the permission of the author.


Compatibility Effects Evoked by Pictures of Graspable Objects

by

Maria H.J. van Noordenne

B.Sc., Psychology, University of Victoria, 2016

Supervisory Committee

Dr. Daniel Bub, Department of Psychology Supervisor

Dr. Michael Masson, Department of Psychology Co-Supervisor


Abstract

It has been claimed that action representations can be evoked by the image of a handled object (Tucker & Ellis, 1998). Contrary to this view, it may instead be the location of the object’s handle in visual space that generates a spatial code that in turn interacts with selection of response location. For example, an object with its handle extending into right visual space may bias attention to the right, resulting in a faster right- versus left-sided response (Cho & Proctor, 2010).

In the current experiments I present evidence that under certain task conditions, images of objects evoke their corresponding action representations. When subjects engaged in laterality judgments to images of hands presented after or in conjunction with an image of a handled object, motor representations associated with that object were evoked. Although the location of the handle was irrelevant to the task, subjects were faster at responding when the depicted handle location and hand of response were aligned (i.e., right-handed key press to a right-handled frying pan) rather than misaligned. The effect of alignment remained constant across the response time distribution. When subjects made a crossed-hand response, the alignment effect was driven by a correspondence between the location of the object’s handle and the response hand, not the response location. These results contrast with what was found when observers responded to directional arrow cues in place of pictures of hands. With arrow cues, the observed alignment effect appeared to be driven by spatial correspondence between the location of the object’s body and the location of the response button. Moreover, in this case the alignment effect decreased across the response time distribution, in keeping with other cases of spatial compatibility effects (Proctor, Miles, & Baroni, 2011). I conclude that attention to an image of a hand can induce observers to activate motor affordances associated with pictured objects.


Table of Contents

Supervisory Committee
Abstract
Table of Contents
List of Figures
Acknowledgments
1 Introduction
2 Experiments
2.1 Experiment I
2.1.1 Method
2.1.2 Results and Discussion
2.2 Experiment II
2.2.1 Method
2.2.2 Results and Discussion
2.3 Experiment III
2.3.1 Method
2.3.2 Results and Discussion
3 General Discussion
4 References
5 Appendices
5.1 Appendix A: Additional delta plots for Experiments I and II
5.2 Appendix B: Supplementary object analysis results


List of Figures

Figure 2.1 Top: Object primes used in Experiments I, II, and III. Bottom: Hand cues used in Experiments I and III.

Figure 2.2 Left: Response box used in Experiments I, II, and III and hand position used in Experiments I and II. Right: Laboratory setup for Experiments I, II, and III.

Figure 2.3 Trial procedure for 0-ms SOA and 250-ms SOA conditions in Experiment I.

Figure 2.4 Analysis of reaction time in Experiment I. Mean RT for aligned/misaligned stimuli and congruent/incongruent stimuli across 0-ms SOA and 250-ms SOA conditions. Error bars are 95% confidence intervals.

Figure 2.5 Delta plot of alignment effects in Experiment I across the RT distribution. Alignment effect size by quintile. X-axis represents mean RT of each quintile. Error bars are 95% confidence intervals.

Figure 3.1 Sample stimuli used in Experiment II.

Figure 3.2 Analysis of reaction time in Experiment II. Mean RT for aligned/misaligned stimuli across SOA conditions. Error bars are 95% confidence intervals.

Figure 3.3 Delta plot of alignment effects in Experiment II across the RT distribution. Alignment effect size by quintile. X-axis represents mean RT of each quintile. Error bars are 95% confidence intervals.

Figure 4.1 Crossed-hand posture used in Experiment III.

Figure 4.2 Analysis of reaction time in Experiment III. Mean RT for aligned/misaligned stimuli and congruent/incongruent stimuli. Error bars are 95% confidence intervals.

Figure 4.3 Delta plot of alignment effects in Experiment III across the RT distribution. Alignment effect size by quintile. Top: 0-ms SOA condition. Bottom: 250-ms SOA condition. X-axis represents mean RT of each quintile. Error bars are 95% confidence intervals.


Acknowledgments

I would like to thank:

Dr. Daniel Bub and Dr. Michael Masson for always having answers to my endless stream of questions and making me strive to always do better.

Morgan Teskey and Hannah van Mook for being the best two people to “threesis thesis” and pomodoro with.

Katlin Aarma, Rebecca Haegedorn, Fraser Hamilton, Katherine Harris, and Meghan Thompson for being the most supportive group of friends anyone could have.


Introduction

It has been widely claimed that images of graspable objects, even when passively viewed, automatically trigger affordances. On this view, action representations induced by the visual features of an object serve to activate motor cortical regions, even in the absence of any intention to act on the object (Chao & Martin, 2000). Tucker & Ellis (1998) required subjects to make a key-press response using either their left or right hand to determine whether an image of a frying pan was upright or inverted. If an image of an object activates a region of motor cortex that corresponds to one or the other hand, then motor representations could influence selection of a left- or right-handed key-press response. Although the location of the handle was irrelevant to the task, subjects were faster at responding when the handle location and hand of response were aligned (i.e., right-handed key press to a right-handled frying pan) rather than misaligned.

Although it has been widely assumed that these and other similar effects are due to the automatic evocation of motor affordances, another explanation is possible.

Alignment effects involving correspondence between the location of an object handle and the response hand may be due to abstract spatial codes (Proctor, Miles, & Baroni, 2011). An abstract spatial code is a representation of spatial location usually in reference to an anchor point such as the body midline. An interesting phenomenon that illustrates how these abstract spatial codes link the external environment to internal representations is the Simon effect. The Simon effect occurs when an irrelevant spatial dimension of a stimulus interacts with the manual response location. For example, in a task where subjects

classified the identity of a letter that was positioned either to the left or right side of the visual field, stimulus location, although irrelevant to the task, affected response times.


Subjects who were instructed to make a right-handed key press to the letter “E”, for instance, were faster at responding when the letter appeared on the right side of the screen rather than the left (Wascher, Kuder, Schatz, & Verleger, 2001). The encoding of the spatial location of the target stimulus and of the corresponding location of the response button does not depend on which hand executes the action, but rather on the location of the response.

The same logic could be used to explain the Tucker & Ellis (1998) results. The handle in the image of a graspable object might generate an abstract spatial code that does or does not correspond with the response location and this potential correspondence affects the speed of responses (Cho & Proctor, 2010). Attention is being drawn to the side of space occupied by the handle, depending on how the object is presented to the observer. If the body of the object is centered in the visual field and the handle extends out into space, attention may be drawn to the location of the protruding handle. Cho and Proctor (2010) offered this possibility as an explanation for the alignment effects

demonstrated by Tucker and Ellis (1998). On their account, visual attention was drawn to one side of space or another due to the handle. The authors put forward evidence in support of their claim. Subjects were to make an upright/upside-down judgment like the one used by Tucker & Ellis (1998) by making a key-press response to images of objects. The objects were body centered so that the handle protruded into the left or right side of space. Subjects responded faster when the location of visual asymmetry (due to the handle) matched the side of response. If the correspondence effect induced were due to an object affordance, then the effect should be limb-specific. For instance, if an object is generating motor representations of a particular hand, a response to a handled object using a left versus right hand would induce an alignment effect. No such effect would be found when responding with two fingers of one hand. Cho & Proctor (2010) found that responding to a depicted object with the fingers of just one hand yielded spatial alignment effects that were indistinguishable from effects obtained when responses were carried out with left versus right hands. This result is comparable to what has been found with cross-handed responses in the Simon effect, where the left-right spatial location of the response matters, not how that response is being made (i.e., between- or within-hands or crossed hands). As a result, Cho and Proctor concluded that handled objects do not trigger affordances. The correspondence effects between the location of the handle of a depicted object and side of response are simply “object-based Simon effects”. This effect, unlike the Simon effect, is not induced by the relative spatial location of a stimulus as a whole but by a part of the object (the handle) in relation to the whole. The handle protrudes to one or the other side of a central body, inducing an object-based effect. Like the Simon effect, the object-based Simon effect is due to the influence of abstract spatial codes rather than representations of limb-specific grasp actions.

In contrast to Cho & Proctor (2010), when Tucker and Ellis (1998) tested subjects by having them respond with two fingers of the same hand, no alignment effect was found. A potential challenge to this finding is that although the alignment effect on mean reaction times did not reach significance for within-hand responses, analysis of percentage of error and median response time showed correspondence effects. A second potential challenge to the Cho and Proctor proposal was reported by Pappas (2014), who had subjects make an upright/upside-down judgment to a photograph of a frying pan with its handle pointed left or right. Unlike Cho and Proctor, an effect of alignment was observed under conditions in which subjects made between- but not within-hand responses. However, a critical confound lies in the separation between response keys that Pappas used for his between- and within-hand conditions. The response keys used for between-hand responses were 35.5 cm apart but the response keys for within-hand responses were only 2 cm apart. This is important because evidence suggests that the size of a spatial correspondence effect is sensitive to response-key location (Chen & Proctor, 2014; Proctor & Chen, 2012). Recall that when Cho and Proctor made the distance between response keys consistent for both response modes, the alignment effects were the same in both cases. This result implies that the Pappas result is questionable and explains why other studies have not been able to replicate Tucker and Ellis (e.g., Philips & Ward, 2002).

To reiterate, the claim that objects automatically trigger action representations regardless of the intentions of the observer entails that a key-press response in the

between-hand condition shows an effect that is not present when responses are made with two fingers of just one hand. To our knowledge there is no convincing evidence in favour of such a result.

When Do Images Trigger Affordances?

Through experience, humans learn that images of objects are conceptually different than real objects; the action a photograph “affords” is pointing to a two-dimensional surface (DeLoache, Pierroutsakos, Uttal, Rosengren, & Gottlieb, 1998). However, in certain action contexts, depicted objects can evoke a motor affordance. Motor representations associated with an object are likely to be evoked if subjects are in the context of performing a reach-and-grasp action, but not if they are in the context of pointing (Bub, Masson & Cree, 2008; Girardi, Lindemann, & Bekkering, 2010). When instructed to complete a reach-and-grasp action that corresponded to an image of a graspable object, subjects were faster than when the action did not correspond with the object. When the task context required subjects to reach-and-point to an image of an object, relevant motor representations were not evoked.

Bub, Masson & Kumar (2017) proposed that “engaging in a reach-and-grasp action itself plays a crucial role in triggering constituents from depicted objects” (p.1). This result was obtained when subjects were to perform an action cued by an image of a hand. These actions were performed by grasping a vertical or horizontal response element placed within arm’s reach. The egocentrically viewed hand was presented in conjunction with a photograph of a graspable object. A significant alignment effect was found whereby responses were faster when the object's handle was aligned with the response hand, relative to when it was aligned with the other hand. To better understand the underlying mechanisms of such correspondence effects, the temporal dynamics of the effects can be evaluated. One method of doing this consists of measuring correspondence effects across the full range of the response time distribution.

De Jong, Liang, and Lauber (1994) evaluated Simon effects in this way. The method for doing this involves arranging response times in rank order for each condition for each subject. Each set of ordered response times is then separated into successive bins containing equal numbers of observations. For example, if the rank ordered list of response times is to be arranged into 5 bins, then the shortest 20% of observations will be in the first bin, the next shortest 20% in the second bin, and so on across the response time distribution. An average correspondence effect can then be computed within each bin and the size of the effect can be plotted across the bins, producing a delta plot. This plot provides a visualization of the effect of interest across the response time distribution. The mean response time within each bin across the two conditions being compared is plotted on the x axis and the mean effect size for that bin is plotted on the y axis (see Ridderinkhof, 2002). The standard Simon effect tends to decrease across the response time distribution, such that as response time gets longer, the Simon effect dissipates (Hommel, 1994b; Ridderinkhof, 2002a). Under some circumstances, such as performing a Simon task with hands crossed, the correspondence effect can increase across the response time distribution (see Wascher, Kuder, Schatz, & Verleger, 2001).
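To make the binning procedure concrete, here is a minimal sketch in R (the language used for the analyses reported in the Experiments section), assuming a long-format data frame rt_data with one row per trial and illustrative column names subject, condition, and rt (in ms), and condition levels named "aligned" and "misaligned". It illustrates the De Jong et al. logic rather than reproducing any actual analysis script.

    # Minimal sketch of delta-plot construction (after De Jong, Liang, & Lauber, 1994).
    # Column and level names are illustrative assumptions, not the thesis's variables.
    delta_plot <- function(rt_data, compatible = "aligned", incompatible = "misaligned",
                           n_bins = 5) {
      per_subject <- lapply(split(rt_data, rt_data$subject), function(d) {
        # Rank-order RTs within each condition and divide them into equal-sized bins.
        d$bin <- ave(d$rt, d$condition, FUN = function(x) {
          ceiling(rank(x, ties.method = "first") / (length(x) / n_bins))
        })
        m <- tapply(d$rt, list(d$bin, d$condition), mean)  # mean RT per bin x condition
        data.frame(bin     = as.integer(rownames(m)),
                   mean_rt = rowMeans(m),                          # x-axis value for each bin
                   effect  = m[, incompatible] - m[, compatible])  # correspondence effect
      })
      # Average the per-subject bin values across subjects.
      aggregate(cbind(mean_rt, effect) ~ bin, do.call(rbind, per_subject), mean)
    }

    # Example usage (hypothetical data frame rt_data):
    # dp <- delta_plot(rt_data)
    # plot(dp$mean_rt, dp$effect, type = "b", xlab = "Mean RT (ms)", ylab = "Effect (ms)")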

The object-based Simon effect has also demonstrated an increasing effect of alignment across the response time distribution. When subjects made upright/upside-down judgments to images of graspable kitchen objects, key presses were faster when the response hand corresponded with the location of the object's handle than when it did not correspond (Iani, Baroni, Pellicano & Nicoletti, 2011; Riggio, Iani, Gherri, Benatti, Rubichi & Nicoletti, 2008). An increasing effect across the response time distribution suggests that there was a slow rate of activation in the representation induced by the irrelevant feature of the stimulus (see Proctor, Miles & Baroni, 2011).

In contrast to these patterns of results across the response time distribution, in a case where motor representations are responsible for a correspondence effect, a flat delta plot has been observed. This result suggests that the correspondence effect is not the product of spatial correspondence, as it is in the Simon effect (Bub et al., 2017). Alignment effects that remain constant across the RT distribution and occur at short SOAs would not be consistent with the view that images of objects trigger abstract spatial codes which affect speeded left/right decisions.

Key-press Responses and Features of Grasp Actions

The evidence reviewed so far does not support the claim that images of graspable objects automatically elicit their associated limb-specific motor representations.

However, we know that pictures of certain objects, in particular the image of a hand presented from an egocentric perspective, trigger specialized regions in the brain that are intimately involved in the preparation and execution of reach-and-grasp actions (Mahon, 2013). When subjects see a static image of a hand that implies movement, a representation of the action is automatically evoked (Urgesi, Moro, Candidi, & Aglioti, 2006). Using single-pulse transcranial magnetic stimulation, increased corticospinal excitability was found when subjects viewed photographs of hands that implied movement. This activation was comparable to that seen during overt performance of an action. Photographs of other types of hands, such as resting or relaxed hands, or images demonstrating completed actions did not show increased corticospinal excitability. Regions of the ventral and lateral temporal-occipital cortex were also shown to be activated during processing of hand cues that implied movement (Gallivan, Chapman, McLean, Flanagan, & Culham, 2013). Critically, this region lies in close proximity to a region of the brain that responds to images of tools (Gallivan et al.). To further investigate whether this region of the brain codes selectively for grasping objects specifically with the hand, subjects were to manipulate an object with either their hands or reverse tongs (tongs close when fingers open and open when fingers close). Indeed, the lateral occipital cortex showed activation for upcoming movements with the hand and not with tongs (Gallivan, McLean, Valyear & Culham, 2013). These results provide further evidence for the close link between cortical responses to images of hands and to images of tools. Studies using fMRI have demonstrated that when subjects view images of body effectors, other body parts, object effectors (objects that require body effectors to be used, such as a hammer), and other objects, a similar region in the lateral occipito-temporal cortex and parietal cortex was activated specifically by body and object effectors (Bracci & Peelen, 2013). Bracci and Peelen concluded “that the organization of object representations in high-level visual cortex partly reflects how objects relate to the body” (p. 18258).

Behavioural evidence indicates that responses to images of hands are indeed primed by actions associated with pictured objects. Bläsing and colleagues (2014) instructed a group of climbers and non-climbers to classify images of hand grasps as either a crimp or sideways pull by making a key-press response. Prior to the hand cue, an image of a climbing hold irrelevant to the task was presented for 100 ms. The climber group but not the non-climber group was faster at responding when the hand grasp matched the grasp associated with the climbing-hold prime (Bläsing, Güldenpenning, Koester, & Schack, 2014). This result suggests that for climbers, perceiving climbing holds may lead to the activation of the representation of the grasp action associated with the hold. Of importance, this effect was shown only with the expert climbers and not with novices. The climbing holds have meaning to the group of expert climbers, so they have corresponding action representations associated with them. These representations do not exist for non-climbers. This dissociation suggests that motor representations associated with graspable objects are based on knowledge of how these objects interact with the hand.


Regions in the lateral occipital cortex selectively respond to images of hands and images of tools. Therefore tasks that demand attention to visual images of hands may lead to the evocation of motor features of concurrently attended graspable objects. For example, Bub et al. (2017) showed that when subjects made reach-and-grasp responses cued by images of hands, those responses produced correspondence effects between the cued hand action and a prime consisting of a handled object. To my knowledge, literature using key-press responses to objects (e.g., left for an upright object, right for an upside-down object) shows evidence only for object-based Simon effects. I conjecture that implementing a task procedure similar to that used by Bub et al., but within a key-press experiment, will similarly evoke representations of real-world actions.

In the experiments reported here, subjects made laterality judgments in response to photographs of hands by making a left- or right-handed key press. The hand cue was presented in the context of photographs of handled objects that served as primes. The timing of the presentation of the prime and cue was arranged so that the stimuli were either presented together (0-ms stimulus-onset asynchrony [SOA]) or asynchronously, such that a 250-ms delay occurred after the onset of the object prime and prior to the appearance of the hand cue. The location of the object’s handle (to the left or right) did not predict the required response but the handle’s position could be either aligned with the response hand or not aligned with it. The depicted hand cue could also be positioned so that the wrist orientation was congruent or incongruent with the vertical/horizontal orientation of the object’s handle.

I anticipated that a hand cue shown from an egocentric perspective would evoke action representations relevant to the hand (Gallivan et al., 2013). This task context could also influence the way in which the object primes are encoded. In particular, rather than encoding the object’s handle as simply a location in space it might now be coded as a potential target for a hand action. In such a case, it might be expected that effects would arise that are driven by activation of a motor representation rather than by spatial compatibility. Therefore, alignment effects evoked when using a hand-classification task may look qualitatively different than the effects observed in previous reports of key-press responses during object-orientation judgments (Cho & Proctor, 2010; Pappas, 2014; Tucker & Ellis, 1998). The qualitative differences would be demonstrated in the size of the alignment effect across the response time distribution. For spatial compatibility effects such as the object-based and standard Simon effects, it has often been observed that the delta plot shows a positive or negative slope, respectively (Proctor et al., 2011). In a task of classifying pictures of hands, I anticipate that the delta plot will be constant across the response time distribution, similar to what has been observed in studies where subjects make reach-and-grasp responses (Bub et al., 2017).

Previous support for the claim that pictures of objects do indeed evoke motor representations of actions has depended on cued reach-and-grasp actions. When subjects make reach-and-grasp responses on a vertical or horizontal response element, correspondence effects (between hand cue and object) arise (Bub et al., 2017). The intention to act, along with the response elements, may jointly contribute to the observed effects. For example, the image of a beer mug with a vertical handle may direct attention to the grasp element matching this visual feature of the depicted object. The grasp element itself now becomes the real-world object responsible for the motor priming effects. When the primed element matches the cued grasp response, actions are produced more efficiently than actions involving an incongruent grasp. In contrast, making responses using a key press removes the intention to produce a reach-and-grasp action and eliminates the need for response elements that match or mismatch the action afforded by the object prime. Any evidence that a depicted object continues to evoke representations of actions under conditions of key-press responding would provide further and more general support for the idea that “…motor affordances have an explanatory role to play in spatial correspondence effects” (Bub et al., 2017, p. 18).


2 Experiments

2.1 Experiment I

The goal of Experiment I was to evaluate whether action representations would be evoked when subjects make laterality judgments to hands following the presentation of an object prime. I hypothesized that the task context of performing laterality judgments to hand cues would lead to the activation of action representations when object primes are presented. To ensure that subjects were given adequate time to process the pictured object, a manipulation of SOA was included in this experiment. The hand cue could be presented concurrently with the object prime (0-ms SOA condition) or 250 ms after the prime (250-ms SOA condition). Using two stimulus onset asynchronies allows me to evaluate whether priming effects emerge immediately or build over time. Subjects were also prompted at the end of some trials to report the identity of the prime object to ensure that they were paying attention to it.

If a limb-specific affordance is evoked by an object prime, an alignment effect will be found. Response time on trials in which the object’s handle is aligned with the response hand should be shorter than on trials where handle and hand are misaligned. Correspondence between object prime and hand cue exists on another dimension as well. Although subjects did not explicitly respond to the wrist posture (vertical or horizontal) of the hand cue, a congruency effect can be evaluated. If motor representations are evoked by an image of an object, response time on trials in which the horizontal orientation of the object’s handle is congruent with the wrist orientation of the hand cue should be shorter than on trials where the orientation of the handle and hand are incongruent. Evaluating both types of correspondence effects can provide additional evidence for activation of motor representations evoked by an image of an object. An effect found for both alignment and congruency dimensions would provide evidence against correspondence effects being due to abstract coding of spatial location.

I anticipated that the effect of alignment would remain constant across the response time distribution, as has been observed in reach-and-grasp experiments (Bub et al., 2017). For both types of response, I assume that alignment effects are a result of activation of motor representations rather than spatial correspondence.

2.1.1 Method

Subjects: Thirty-seven undergraduate students (three left-handed, ten male; mean age = 20.32 years, SD = 2.45) at the University of Victoria participated to earn extra credit in a psychology course. All the experiments reported below were approved by the University of Victoria Human Research Ethics Board.

Materials: Four objects were used in this experiment; two were associated with a vertical grasp (beer mug and teapot), and two with a horizontal grasp (saucepan and frypan). These objects could be presented with their handles pointed either left or right (see Figure 2.1 top). The object was positioned in the visual display by placing an imaginary rectangle around the object and then positioning the object so that the rectangle was centered horizontally and vertically in the visual display. Four flesh-coloured images of hands were used in this experiment. The hands had either a vertical or horizontal wrist orientation, allowing them to match or mismatch the grasp associated with each object (see Figure 2.1 bottom).


Figure 2.1: Top: Four objects used (Experiments I, II, and III); vertical objects (beer mug and teapot) on the left, horizontal objects (saucepan and frypan) on the right. Bottom: Four hand stimuli used (Experiments I and III); vertical grasp posture on the left, horizontal on the right.

Design: A total of thirty-two images was made using each possible combination of a hand image superimposed on an object (4 hands x 4 objects x 2 handle locations = 32). Additionally, two SOA conditions were used: 0 ms or 250 ms. Either the image of a hand superimposed on an object was presented right away (0-ms SOA condition), or the object was presented alone followed by the addition of a hand 250 ms later (250-ms SOA condition). This resulted in a total of 64 unique combinations of trials that constituted a block of trials. Subjects were to make key-press responses on a response box placed in front of them (see Figure 2.2 left). Subjects were given 16 practice trials that transitioned straight into the first critical block. In total there were five blocks of 64 trials, resulting in 320 critical trials (2 SOAs x 4 objects x 4 hands x 2 object orientations x 5 blocks = 320), with self-paced rest breaks between blocks. The order of trials in a block was randomized for each subject.
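To illustrate this factorial structure, a trial list of this form could be generated in R roughly as follows; the variable names and stimulus labels are my own shorthand, not taken from the original experiment script.

    # Illustrative reconstruction of the Experiment I trial structure.
    conditions <- expand.grid(
      hand   = c("left-vertical", "left-horizontal", "right-vertical", "right-horizontal"),
      object = c("beer mug", "teapot", "saucepan", "frypan"),
      handle = c("left", "right"),
      soa_ms = c(0, 250)
    )
    nrow(conditions)  # 4 hands x 4 objects x 2 handle locations x 2 SOAs = 64

    n_blocks <- 5
    trial_list <- do.call(rbind, lapply(seq_len(n_blocks), function(b) {
      block <- conditions[sample(nrow(conditions)), ]  # randomize order within a block
      block$block <- b
      block
    }))
    nrow(trial_list)  # 64 x 5 = 320 critical trials per subject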

Figure 2.2: Left: Response box used in all experiments and hand position used in Experiments I and II. Right: Laboratory setup for all experiments.

Procedure: Subjects were seated about 50 cm away from a computer monitor in a quiet room. Subjects sat comfortably with their index fingers resting on a response box which was placed directly in front of them (see Figure 2.2 left and right). Each trial began with a fixation cross that remained on the screen for 250 ms. The cross was then replaced either by an image of an object and hand appearing in unison (0-ms SOA) or by the object and hand separated by a delay (250-ms SOA; see Figure 2.3). The image remained on the screen until the subject made a laterality judgment with a key-press response.

Although on any given trial the hand and object prime could match with respect to the handle alignment and/or orientation congruency, these dimensions were irrelevant to the laterality judgment. These judgments were made by pressing the right or left key on the button box for right- or left-handed images of hands, respectively. On 25% of the trials, subjects were instructed to verbally report what the object prime was on that trial. These responses were scored by the experimenter using a Macintosh computer keyboard.


Figure 2.3: Trial procedure; 0-ms SOA condition (top), 250-ms SOA condition (bottom).

2.1.2 Results and Discussion

On average, accuracy on the hand classification task was 97%. The mean accuracy of object identification was 76%. Data were analysed using the R statistical language (R Core Team, 2016). Response time was measured from the onset of the hand cue to the completion of a key-press response. Response times shorter than 100 ms were excluded as probable anticipatory responses. Response times longer than 1,400 ms were excluded as outliers. This upper bound was set so that no more than 0.5% of correct responses were excluded, in keeping with a recommendation by Ulrich and Miller (1994).
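A minimal sketch of this screening rule in R, assuming a hypothetical trial-level data frame trials with columns rt (in ms) and correct (logical); the thesis reports the resulting fixed cutoff rather than code, so the quantile step below simply illustrates the 0.5% criterion.

    # Drop probable anticipations.
    trials <- subset(trials, rt >= 100)

    # Choose an upper bound such that no more than 0.5% of correct responses
    # exceed it (for Experiment I this bound was 1,400 ms).
    upper_bound <- quantile(trials$rt[trials$correct], probs = 0.995)
    trials <- subset(trials, rt <= upper_bound)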


The mean response times as a function of SOA, alignment, and congruency are presented in Figure 2.4. An analysis of variance (ANOVA) was computed with SOA, alignment, and congruency as repeated-measures factors. This analysis indicated that responses on aligned trials were faster than on misaligned trials (F [1, 36] = 100.90, MSE = 1,451, p < 0.0001). Responses were also faster on congruent trials than on incongruent trials (F [1, 36] = 8.89, MSE = 825, p < 0.01).

Responses in the 250-ms SOA condition were faster than in the 0-ms SOA condition (F [1, 36] = 208.30, MSE = 5,148, p < 0.0001). An interaction between SOA and alignment was found (F [1, 36] = 7.30, MSE = 1,016, p = 0.01), indicating that the alignment effect was bigger at the longer SOA. No other interactions were significant, Fs < 1.74.
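A minimal base-R sketch of a repeated-measures ANOVA of this form is given below, computed on per-subject condition means; the data frame and column names are illustrative assumptions rather than the thesis's actual analysis code.

    # One mean RT per subject per cell of the design (subject, soa, alignment,
    # and congruency should all be factors).
    cell_means <- aggregate(rt ~ subject + soa + alignment + congruency,
                            data = trials, FUN = mean)

    # Repeated-measures ANOVA with SOA, alignment, and congruency as
    # within-subject factors and subject as the error stratum.
    fit <- aov(rt ~ soa * alignment * congruency +
                 Error(subject / (soa * alignment * congruency)),
               data = cell_means)
    summary(fit)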

Figure 2.4: Mean response time in Experiment I as a function of prime alignment and prime congruency across 0-ms and 250-ms SOA. Error Bars represent 95% within-subjects confidence intervals appropriate for evaluating the alignment effect and the congruency effect within each SOA condition (Loftus & Masson, 1994; Masson & Loftus, 2003).
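For reference, the within-subjects confidence intervals referred to in the figure captions follow Loftus and Masson (1994): the half-width is the critical t value multiplied by the square root of the relevant error mean square divided by the number of subjects. A small R helper illustrating that computation (a sketch; the example values are taken from the ANOVA reported above):

    # 95% within-subjects CI half-width (Loftus & Masson, 1994).
    # mse: error mean square for the effect of interest; n: number of subjects;
    # df_error: degrees of freedom associated with that error term.
    ci_halfwidth <- function(mse, n, df_error) {
      qt(0.975, df_error) * sqrt(mse / n)
    }
    # Example for the alignment effect in Experiment I (MSE = 1,451, n = 37, df = 36):
    # ci_halfwidth(1451, 37, 36)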


An ANOVA was computed for the error rates with SOA, alignment, and congruency as factors. The mean percent error in laterality judgments was higher on misaligned trials (3.0%) compared to aligned trials (1.3%) (F [1, 36] = 20.8, MSE = 20.50, p < 0.0001). There appears to be no evidence of a speed-accuracy trade-off, as misaligned trials had longer RTs as well as higher error rates. Congruency (F [1, 36] = 0.46, MSE = 9.21, p = 0.50) and SOA (F [1, 36] = 2.04, MSE = 18.68, p = 0.162) showed no effects on accuracy. None of the interactions were significant.

To evaluate the effect of alignment across the full response time distribution, separate delta plots were constructed for the 0-ms and 250-ms SOA conditions, collapsing over the factor of congruency. Both plots produced the same pattern (see Appendix A), so I will present here the effect collapsed across SOA as well as congruency. The plots were constructed by ranking each individual’s response times within an alignment condition from shortest to longest and dividing them into five equal-sized bins (quintiles). In each quintile, a mean value for aligned and misaligned trials was calculated; these means were then averaged across subjects and compared to produce a measure of the alignment effect at each quintile (see De Jong, Liang & Lauber, 1994). Alignment effect size was calculated by subtracting the mean value for the aligned condition from the mean of the misaligned trials. An ANOVA was computed with quintile and alignment as repeated-measures factors, averaging across SOA and congruency. Importantly, there was no interaction between quintile and alignment in an ANOVA with these two factors (F [4, 144] = 2.30, MSE = 289.00, p = 0.0617). The lack of an interaction means that the alignment effect remained consistent across the full range of response times (see Figure 2.5).
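A short sketch of this quintile-by-alignment test in base R, assuming a data frame bin_means holding one mean RT per subject, alignment condition, and quintile (illustrative names; such bin means can be obtained with a procedure like the delta-plot sketch given in the Introduction):

    # Does the alignment effect change across the RT distribution?
    # subject, quintile, and alignment should be factors.
    fit_q <- aov(rt ~ quintile * alignment +
                   Error(subject / (quintile * alignment)),
                 data = bin_means)
    summary(fit_q)  # the quintile:alignment term indexes the delta-plot slope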


An additional object analysis was conducted to evaluate the alignment effect size across each object (see Appendix B).

The effect of alignment found in Experiment I was present at the shortest response times and remained relatively constant across the response time distribution. Object-based Simon effects typically grow across the response distribution, showing little or no effect among the shortest response times (Proctor et al., 2011). A departure from this pattern suggests that the underlying mechanism for the alignment effect may involve action representations associated with the object primes. Importantly, these results were produced by making a key press on a response box rather than a reach-and-grasp action on a grasping element. Therefore, the presence of grasp elements could not be responsible for the alignment and congruency effects observed here. The image of an object itself evoked representations of relevant actions.

Figure 2.5: Delta plot of effect size across quintile in Experiment I, collapsed across SOA and congruency.


3.1 Experiment II

I have proposed that the alignment and congruency effects found in Experiment I are a result of the laterality task context evoking action representations associated with the object primes. If this proposal is correct, then using a different task context, one that is unlikely to invite action representations, should lead to a change in the nature of the alignment effect relative to what was found in Experiment I. To accomplish this, the task in Experiment II was altered so that subjects were to classify the direction in which a horizontal arrow pointed (left or right). Using an arrow cue removed the manipulation of congruency between the orientation of the object's handle and the action cue, but retained the ability to examine alignment effects. I propose that using a spatial symbol such as an arrow cue will produce an alignment effect based on spatial correspondence. As a result, the alignment effect found in Experiment II should be qualitatively different than the one found in Experiment I. Yu, Abrams, and Zacks (2014) showed that spatial correspondence effects in an attempted replication of Tucker & Ellis (1998) were reversed, in the sense that responses were faster when the response side was aligned with the object's body, not its handle. A potentially important difference between the Tucker and Ellis and the Yu et al. studies was that in the latter case the objects were centered on the visual display using the same method as I used in Experiment I. In contrast, Tucker and Ellis placed their objects so that the body was centered in the display and the handle protruded into one or the other side of space.

I anticipate that the current experiment will similarly result in a reverse alignment effect or no effect. Arrows will draw attention to the spatial features of the depicted object and induce an object-based Simon effect. This will be apparent in the direction of the effect as well as the temporal dynamics of the effect as evident in a delta plot: the alignment effect size will no longer remain constant across the response time distribution. Without the aid of a hand cue, the object should not induce affordance-like effects.

3.1.1. Method

Subjects: Thirty undergraduate students (four left-handed, four male; mean age = 20.03 years, SD = 1.19) at the University of Victoria participated to earn extra credit in a psychology course.

Materials: The same objects were used as in Experiment I, but now images of arrows that could point either left or right were used as the response cues. A total of sixteen images were made using each possible combination of arrow images superimposed on objects (2 arrows x 4 objects x 2 handle locations = 16). Arrows were white with a black outline and pointed either to the left or the right (see Figure 3.1). The four objects and response box were identical to the first experiment and were positioned in the visual display in the same way.

Figure 3.1: Sample stimuli used in Experiment II.

Design: Experiment II was identical to Experiment I except arrow cues were used in place of a hand cue. Due to this change, no manipulation of congruency with the orientation of the prime object's handle was possible. Subjects were to determine whether the arrow was pointed left or right by making a corresponding key press on the response box.

Procedure: The procedure was identical to the first experiment.

3.1.2 Results and Discussion

On average, accuracy on the arrow classification task was 96%. The mean accuracy of object identification was 79%. Response time was measured from the onset of the arrow cue to the execution of a key press. Response times shorter than 100 ms and longer than 1,200 ms were excluded as outliers; the upper bound was set such that no more than 0.5% of correct responses were excluded.

An ANOVA was computed with SOA and alignment as repeated-measures factors and indicated that responses on handle-aligned trials were slower than on misaligned trials (F [1, 29] = 22.23, MSE = 499, p < 0.0001). Although this effect is reliable, it is in the opposite direction to the effect found in Experiment I. A main effect of SOA was also found, with responses in the 250-ms condition faster than in the 0-ms SOA condition (F [1, 29] = 324.80, MSE = 1,378, p < 0.0001). An interaction was found between alignment and SOA (F [1, 29] = 4.51, MSE = 308.90, p = 0.042) (see Figure 3.2), indicating that the alignment effect was larger at the longer SOA.

The mean percent error in arrow direction judgments was higher for aligned trials (1.7%) than misaligned trials (0.8%) (F [1, 29] = 10.99, MSE = 4.67, p = 0.0025). Mean percent error was also higher in the 250-ms SOA condition (2.0%) compared to the 0-ms SOA condition (0.6%) (F [1, 29] = 19.53, MSE = 6.22, p < 0.001). These effects on error rates indicate that there was no evidence of a speed-accuracy trade-off associated with alignment; however, there may be one for the effect of SOA. Increased errors in the 250-ms SOA condition could indicate that increasing the time spent viewing an object allowed for more competition between the spatial asymmetries of the handle and the body of the object.

Figure 3.2: Mean reaction time for aligned and misaligned trials as a function of SOA conditions in Experiment II.

Separate delta plots were computed for both the 0-ms SOA and 250-ms SOA conditions. Similar to Experiment I, both plots produced the same pattern (see Appendix A), so I will present here the effect collapsed across SOA. An ANOVA was computed with alignment and quintile as repeated-measures factors, collapsing across SOA. A significant interaction was found between quintile and alignment (F [4, 116] = 8.37, MSE = 134, p < 0.0001). The delta plot no longer revealed an alignment effect that remained flat across the response time distribution, as was seen in Experiment I. The effect now increased across quintiles (see Figure 3.3).

An additional object analysis was conducted to evaluate the alignment effect size across each object (see Appendix B).

Experiment II differed from Experiment I only in the nature of the task cue. However, the effects look qualitatively different between the two experiments. Motor representations associated with the object no longer appear to have been evoked. Alignment was defined as correspondence of object handle and response side; therefore, a reverse alignment effect suggests that attention was drawn to the location with the higher density of pixels (or visual information), namely, to the body of the object. It is presumably this influence on attention that produced an object-based Simon effect. Visual attention was drawn to one side of space and, as a result, a response on that same side of space was made faster. This effect of alignment dissipated across the response time distribution. A similar pattern has been shown in the standard Simon effect (Hommel, 1994b; Ridderinkhof, 2002a).


Figure 3.3: Delta plot of alignment effect size across quintiles in Experiment II, collapsed across SOA

4.1 Experiment III

To provide additional evidence for the claim that the effects found in Experiment I were limb-specific and involved motor affordances, the same task was repeated in Experiment III with subjects making crossed-hand responses. Subjects positioned their right hand on the left side of the response box and vice versa for the left hand. Subjects were told to respond with the hand that corresponded with the hand depicted by the cue. A left hand, for instance, was responded to by making a key press with the left hand applied to the right-side response button. The task was arranged in this way to demonstrate that the correspondence effects were not contingent on the side of space in which a response was made, but rather contingent on the limb making the response. It would not matter where the response was made, only that the response was executed by a specific limb. If the effect in Experiment I were due to motor affordances, a limb-specific effect should be found when subjects respond using a crossed-hand arrangement. Conversely, if the effect were due to spatial correspondence, then the alignment effect would remain tied to the response button. For instance, object-based Simon effects depend only on correspondence between the location of an object's handle in space and the left/right spatial location of the response position (Cho & Proctor, 2010; Philips & Ward, 2002; Yu et al., 2014). I anticipate that the alignment effect in Experiment III will be limb-specific.

However, a potential complication that may arise when using crossed-hand responses is that an unnatural body posture is in play. It has been shown that such novel arrangements can affect response time. Verbal laterality judgments to hands were slowed when subjects placed their hands behind their back compared to placing them in their laps (Ionta, Fourkas, Fiorio, & Aglioti, 2007). When subjects' own hand posture did not correspond with a depicted hand, increased activity in the intraparietal sulcus, a region associated with perceptual-motor coordination, has been found (de Lange, Helmich & Toni, 2006). Due to this tendency, it is likely that the response time and temporal dynamics of the alignment effect will be different in the third experiment compared to the first (de Lange et al., 2006; Ionta et al., 2007). Of importance, the correspondence effect should be mapped to the hand making a response and not to the location of the response button. Responses should be faster if the handle is on the side of space that corresponds to the response hand, not to the hand's location on the response button.


4.1.1. Method

Subjects: Thirty undergraduate students (four left-handed, eight male; mean age = 21.20 years, SD = 5.64) at the University of Victoria participated to earn extra credit in a psychology course.

Procedure: The materials and design were identical to the first experiment. The procedure was also identical to the first experiment, differing only in hand position. Subjects sat with their left hand crossed over their right, resting their index fingers on opposite response keys (Figure 4.1). An image of a right hand was responded to with the right hand pressing the left key. An image of a left hand was responded to with the left hand pressing the right key.

Figure 4.1: Crossed-hand posture used in Experiment III.

4.1.2. Results and Discussion

On average, accuracy on the hand classification task was 93%. The mean accuracy of object identification was 92%. Response times shorter than 100 ms and longer than 2,400 ms were also excluded as outliers; the upper bound was set such that no more than 0.5% of correct responses were excluded.


An ANOVA was computed with SOA, alignment, and congruency as repeated-measures factors. Alignment was defined as the correspondence between handle location and the hand making the response (e.g., a right-handled object and a right-hand response on the left key constitutes an example of the aligned condition). Congruency was defined by the orientation of an object's handle relative to the wrist orientation of the hand cue. The ANOVA indicated that responses on aligned trials were faster than on misaligned trials (F [1, 29] = 100.90, MSE = 2,973, p = 0.002). The alignment effect was limb specific in the sense that responses were faster when the handle location was aligned with the response hand rather than with the location of the response key. Unlike Experiment I, there was no evidence of a main effect of congruency (F [1, 29] = 0.75, MSE = 1,522, p = 0.747). Responses in the 250-ms SOA condition were faster than in the 0-ms SOA condition (F [1, 29] = 160.20, MSE = 5,508, p < 0.0001). There was an interaction between alignment and SOA (F [1, 29] = 7.82, MSE = 2,079, p = 0.009). The alignment effect was larger in the 250-ms SOA condition (see Figure 4.2).

An ANOVA applied to error rates indicated that there were no effects, Fs < 3. The overall percent error was 2.1%.


Figure 4.2: Mean response time in Experiment III as a function of prime alignment and prime congruency across 0-ms and 250-ms SOA. Error Bars represent 95% within-subjects confidence intervals appropriate for evaluating the alignment effect and the congruency effect within each SOA condition

The alignment effect across the response time distribution was examined for both the 0-ms and 250-ms SOA conditions. An ANOVA for each SOA condition was computed separately, collapsing across congruency, with quintile and alignment as repeated-measures factors. The two SOA conditions yielded different results, so delta plots are reported separately for the two cases. The ANOVAs revealed no evidence of an alignment effect at 0-ms SOA (F [1, 29] = 1.46, MSE = 2,076, p = 0.237) and no quintile by alignment interaction (F [4, 116] = 0.23, MSE = 1,039, p = 0.92). The alignment effect reached significance in the 250-ms SOA condition (F [1, 29] = 14.15, MSE = 4,457, p < 0.001) and there was an interaction between quintile and alignment (F [4, 116] = 4.34, MSE = 1,041, p < 0.01) (see Figure 4.3).


An additional object analysis was conducted to evaluate the alignment effect size across each object (see Appendix B).


Figure 4.3: Delta plot of alignment effect size across quintiles in Experiment III. Top: 0-ms SOA condition. Bottom: 250-ms SOA condition.

Experiment III demonstrated a significant alignment effect that mapped to the hand of response, not the response side. An effect reliant on abstract spatial codes should code for relative spatial location even when hands are crossed (Roswaarski & Proctor, 2000; Wascher, Kuder, Schatz, & Verleger, 2001). Similarly, object-based Simon effects have been found regardless of response modality (between or within hands, or foot responses; Cho & Proctor, 2010; Philips & Ward, 2002). The limb-specific nature of the alignment effect found in this experiment provides support for the proposal that classifying images of hands led to the evocation of motor affordances when viewing the object primes.

The mean response time in Experiment III was longer (635 ms) than in Experiment I (547 ms). This difference is consistent with behavioural and fMRI research that has demonstrated slowed response times when subjects' proprioceptive hand position differed from the implied hand posture of a task (Ionta et al., 2007; Matsumoto, Misaki & Miyauchi, 2004). Measures using fMRI have also demonstrated diminished stimulus-response compatibility effects with similar perturbations of hand position (Matsumoto et al., 2004). The overall effects found in Experiment III for both alignment and congruency were diminished relative to Experiment I. The alignment effect was effectively lost in the 0-ms SOA condition and the effect of congruency did not reach significance at either SOA.

The mechanisms at work changed when the task context was changed from Experiment I to III, as indicated by differences in the delta plots. The alignment effect in Experiment III reached significance only in the 250-ms SOA condition, in which the effect was present for the first three quintiles and then dissipated. This pattern contrasts with the constant effect across quintiles in both the 0-ms and 250-ms SOA conditions in Experiment I. The standard Simon effect demonstrates a decreasing function across the response time distribution, but I argue the current finding is not an example of a spatial coding effect (Iani et al., 2011; Riggio et al., 2008). At 0-ms SOA there was not sufficient processing time for an effect to be evoked, consistent with the slowed response times previously demonstrated with the crossed-hand arrangement (Ionta et al., 2007; Matsumoto et al., 2004). In the 250-ms SOA condition a limb-specific alignment effect was produced, but because responses were made with crossed hands, the limb-specific task had to be translated into a spatial key-press response. Consider, for example, a left-hand response cue: subjects were instructed to respond with the corresponding hand, so they should translate the response cue into a left-hand response. But then that left hand must perform a response on the response button that is on the opposite side of the body. The dissipating alignment effect seen in the delta plot for the 250-ms SOA condition suggests that when subjects take a relatively long time to generate a response, they may experience a conflict between the required response hand and the spatial coding of that hand's location. To the extent that the response hand's location can influence response execution, the correspondence between the chosen response hand and the handle location may be disturbed.


Discussion

The current study demonstrated in a laterality judgment task that an alignment effect (between a depicted object handle and hand cue) can be produced, and that this effect remains constant across the response time distribution. In addition, this effect is limb-specific. These features of the alignment effect run counter to the possibility that the effect is a product of abstract spatial codes. Instead, they suggest that in the context of performing laterality judgments, motor affordances associated with the object primes were activated. As further evidence, in Experiment I not only was there an alignment effect but subjects also responded faster when the vertical/horizontal orientation of an object’s handle was congruent with the wrist orientation of the hand cue.

In contrast to these results, when the task required subjects to classify the direction of a pointed arrow, qualitatively different effects were generated. First, the alignment effect reversed, such that subjects were faster at responding when the arrow cue pointed in the direction of the body of the object rather than the handle. This result suggests that the arrow classification task led observers to shift spatial attention to where there was highest pixel density or visual information, namely, the body of the object. With the centering technique used in the current study, the majority of the object’s body was positioned on one side of the midline and the handle was on the opposite side. Second, the alignment effect was no longer constant across the response time distribution. In keeping with what is often found with Simon effects, the alignment effect dissipated across the response time distribution (Iani et al., 2011; Riggio et al., 2008).


The contrast between results found in Experiments I and II is consistent with my initial hypotheses. Images of objects alone do not evoke motor representations. However, the temporal-occipital cortex has been shown to be activated when processing an image of a hand action and this region is in close proximity to a region in the brain that responds to images of tools. This close link between cortical responses to images of hands and tools indicates high-level visual processing of images that depends in part on how the object relates to human use (Bracci & Peelen, 2013; Gallivan et al., 2013). This

observation provided a basis for the assumption that the task context of classifying hand cues would allow motor representations associated with a prime object to be evoked. Images of arrows, on the other hand, would invite sensitivity to the correspondence between the visual-spatial properties of a depicted object and the response side.

The finding of a congruency effect in Experiment I added support to the claim that motor representations associated with a prime object were evoked when subjects made laterality judgments to pictures of hands. Not only did it matter which hand would execute the action relative to the location of an object’s handle (as indicated by the alignment effect), but also the congruency between wrist orientation of the hand cue and the orientation of the object’s handle significantly affected response time. This

congruency effect was a strong indication that Experiment I was not demonstrating a spatial correspondence effect between the handle location and the response button location. A correspondence effect of that kind should not be sensitive to the vertical/horizontal correspondence between the object's handle and the hand cue.

The controversy as to whether images of handled objects elicit motor representations was reviewed in the Introduction. Tucker and Ellis (1998) and Pappas (2014) suggested that limb-specific affordances are evoked by depicted objects. These authors put forward evidence in support of their claim by demonstrating that responding to a depicted object with the fingers of just one hand yields no alignment effect, contrary to what is found when responses are made with the left versus right hand. Of importance, a correspondence effect due to an affordance should be defined by faster responses when the handle is on the side of space that corresponds to the response hand, not to the location of the response button. Both these studies, however, have fundamental problems. The distance between response keys was not held constant when response mode (two hands versus two fingers) was manipulated, and this confound could be responsible for the observed difference in alignment effects. Additionally, the percent error and median response time analyses in the Tucker and Ellis study found alignment effects regardless of response mode. A more probable claim, put forward by Cho and Proctor (2010), is that correspondence between the spatial location of a depicted object handle and the spatial location of a response produces an object-based Simon effect. The arrow classification task in Experiment II was a surrogate for the upright/upside-down task used in the Tucker and Ellis, Pappas, and Cho and Proctor studies. When subjects responded to an arrow cue, an alignment effect between the object’s body (to which attention was apparently drawn) and the location of the response button was found. This kind of alignment effect was similar to alignment effects reported by Yu et al. (2014).

Experiment III demonstrated that when subjects made laterality judgments with their hands crossed, so that each hand was on the opposite side of space, the alignment effect was tied to the response hand and not the response location. Bub et al. (2017) previously reported a limb-specific correspondence effect when subjects made reach-and-grasp responses on a grasping element. However, the intent to make a reach-and-grasp response, along with the use of a grasp element that matched the visual features of the depicted object, may have contributed to the observed effects. I have presented a novel finding of an object affordance effect elicited in a key-press response task: motor representations were evoked by pictured objects even when subjects were not engaging in reach-and-grasp responses applied to three-dimensional response elements.

Understanding how humans respond to images of objects allows us to gain a deeper understanding of the mechanisms that underlie our actions. Early in life, humans have little ability to differentiate between images of objects and real objects (DeLoache, Pierroutsakos, & Uttal, 2003). Perceptually, photographs and real objects share similar properties, but conceptually they are very different. An object can be manipulated and grasped to perform an action, whereas an image is a flat surface that can only be picked up or pointed at. This conceptual understanding of dimensionality is learned through experience. As infants develop, the manner in which they manually explore images of objects changes: pointing behaviours become more frequent, and attempted grasping behaviours decline (DeLoache, Pierroutsakos, Uttal, Rosengren, & Gottlieb, 1998). Over time we learn that images of objects do not afford the same actions as the real objects they depict. The current study provided evidence that, under certain task conditions, humans nevertheless associate photographs of objects with the motor representations that correspond to the real object depicted by the image.

Understanding the link between the motor and visual systems in planning and executing movements can also deepen our understanding of neurological disorders such as ideomotor apraxia (IM). This disorder impairs patients' ability to plan and execute actions on highly familiar objects (Mutha, Stapp, Sainburg, & Haaland, 2017). Previous research has shown that IM is partly due to damage in a region of the brain involved in motor imagery (Buxbaum, Johnson-Frey, & Bartlett-Williams, 2005). As a result, patients are unable to access an internal model of the motor program associated with an object. Although early learning appears to be intact in IM, previous research has shown that patients cannot learn new internal representations to overcome this deficit (Mutha et al., 2017). My work has demonstrated activation of motor representations in healthy adults despite no intention to act. A similar experiment could therefore be used to evaluate whether the same task context can facilitate motor representations in patients with IM.


References

Bläsing, B.E., Güldenpenning, I., Koester, D., & Schack, T. (2014). Expertise affects representation structure and categorical activation of grasp postures in climbing. Frontiers in Psychology, 5, 1-11.

Bracci, S., & Peelen, M.V. (2013). Body and object effectors: The organization of object representations in high-level visual cortex reflects body-object interactions. Journal of Neuroscience, 33(46), 18247-18258.

Bub, D.N., Masson, M.E.J., & Kumar, R. (2017). Time course of motor affordance evoked by pictured objects and words. Journal of Experimental Psychology: Human Perception and Performance (in press).

Buxbaum, L.J., Johnson-Frey, S.H., & Bartlett-Williams, M. (2005). Deficient internal models for planning hand-object interactions in apraxia. Neuropsychologia, 43(6), 917-929.

Chao, L.L., & Martin, A. (2000). Representation of manipulable man-made objects in the dorsal stream. NeuroImage, 12, 478-484.

Chen, J., & Proctor, R. W. (2014). Conceptual response distance distinguishes action goals in the Stroop task. Psychonomic Bulletin & Review, 21, 1238-1243.

Cho, D. T., & Proctor, R. W. (2010). The object-based Simon effect: Grasping affordance or relative location of the graspable part? Journal of Experimental Psychology: Human Perception and Performance, 36, 853-861.

De Jong, R., Liang, C.-C., & Lauber, E. (1994). Conditional and unconditional automaticity: A dual-process model of effects of spatial stimulus-response correspondence. Journal of Experimental Psychology: Human Perception and Performance, 20, 731-750.

De Lange, F.P., Helmich, R.C., & Toni, I. (2006). Posture influences motor imagery: An fMRI study. NeuroImage, 33, 609-617.

DeLoache, J.S., Pierroutsakos, S.L., & Uttal, D.H. (2003). The origins of pictorial competence. Current Directions in Psychological Science, 12(4), 114-118.

Gallivan, J.P., Chapman, C.S., McLean, D.A., Flanagan, J.R., & Culham, J.C. (2013a). Activity patterns in the category-selective occipitotemporal cortex predict upcoming motor actions. European Journal of Neuroscience, 38, 2308-2424.

Gallivan, J.P., McLean, D.A., Valyear, K.F., & Culham, J.C. (2013). Decoding the neural mechanisms of human tool use. eLife, 2, 1-29.

Girardi, G., Lindemann, O., & Bekkering, H. (2010). Context effects on the processing of action-relevant object features. Journal of Experimental Psychology: Human Perception and Performance, 36, 330-340.

Hommel, B. (1994b). Spontaneous decay of response-code activation. Psychological Research, 56, 261-268.

Iani, C., Baroni, G., Pellicano, A., & Nicoletti, R. (2011). On the relationship between affordance and Simon effects: Are the effects really independent? Journal of Cognitive Psychology, 23(1), 121-131.

Ionta, S., Fourkas, A.D., Fiorio, M., & Aglioti, S.M. (2007). The influence of hands posture on mental rotation of hands and feet. Experimental Brain Research, 183, 1-7.


Loftus, G. R., & Masson, M. E. J. (1994). Using confidence intervals in within-subject designs. Psychonomic Bulletin & Review, 1, 476-490.

Mahon, B.Z. (2013). Watching the brain in action. eLife, 1-3.

Masson, M. E. J., & Loftus, G. R. (2003). Using confidence intervals for graphically based data interpretation. Canadian Journal of Experimental Psychology, 57, 203-220.

Matsumoto, E., Misaki, M., & Miyauchi, S. (2004). Neural mechanisms of spatial stimulus-response compatibility: The effect of crossed-hand position. Experimental Brain Research, 158(1), 9-17.

Mutha, P.K., Stapp, L.H., Sainburg, R.L., & Haaland, K.Y. (2017). Motor adaptation deficits in ideomotor apraxia. Journal of the International Neuropsychological Society, 23(2), 139-149.

Pappas, Z. (2014). Dissociating Simon and affordance compatibility effects: Silhouettes and photographs. Cognition, 133, 716-728.

Phillips, J. C., & Ward, R. (2002). S-R correspondence effects of irrelevant visual affordance: Time course and specificity of response activation. Visual Cognition, 9, 540-558.

Proctor, R.W., Miles, J.D., & Baroni, G. (2011). Reaction time distribution analysis of spatial correspondence effects. Psychonomic Bulletin & Review, 18, 242-266.

Proctor, R. W., & Chen, J. (2012). Dissociating influences of key and hand separation on the Stroop color-identification effect. Acta Psychologica, 141, 39-47.

R Core Team (2016). R: A language and environment for statistical computing. R foundation for Statistical Computing, Vienna, Austria. URL http://www.R-project.org/.


Ridderinkhof, K.R. (2002). Micro- and macro-adjustments of task set: Activation and suppression in conflict tasks. Psychological Research, 66(4), 312-323.

Ridderinkhof, K. R. (2002a). Activation and suppression in conflict tasks: Empirical clarification through distributional analyses. In W. Prinz & B. Hommel (Eds.), Common mechanisms in perception and action: Attention and performance XIX (pp. 494-519). New York: Oxford University Press.

Riggio, L., Iani, C., Gherri, E., Benatti, F., Rubichi, S., & Nicoletti, R. (2008). The role of attention in the occurrence of the affordance effect. Acta Psychologica, 127(2), 449-458.

Roswarski, T.E., & Proctor, R.W. (2000). Auditory stimulus-response compatibility: Is there a contribution of stimulus-hand correspondence? Psychological Research, 63, 148-158.

Tucker, M., & Ellis, R. (1998). On the relations between seen objects and components of potential actions. Journal of Experimental Psychology: Human Perception and Performance, 24, 830–846.

Ulrich, R., & Miller, J. (1994). Effects of truncation on reaction time analysis. Journal of Experimental Psychology: General, 123, 34-80.

Urgesi, C., Moro, V., Candidi, M., & Aglioti, S.M. (2006). Mapping implied body actions in the human motor system. The Journal of Neuroscience, 26(30), 7942-7949.

de la Vega, I., Dudschig, C., De Filippis, M., Lachmair, M., & Kaup, B. (2013). Keep your hands crossed: The valence-by-left/right interaction is related to hand, not side, in an incongruent hand-response key assignment. Acta Psychologica, 142, 273-277.


Wascher, E., Kuder, T., Schatz, U., & Verleger, R. (2001). Validity and boundary conditions of automatic activation in the Simon task. Journal of Experimental Psychology: Human Perception and Performance, 27(3), 731-751.


Yu, A.B., Abrams, R.A., & Zacks, J.M. (2014). Limits on action priming by pictures of objects. Journal of Experimental Psychology: Human Perception and Performance, 40(5), 1861-1873.


Appendix A

Delta plots separated by SOA for Experiments I and II

Figure a1. Delta plot of alignment effect size across quintiles in Experiment I. Top: 0-ms SOA condition. Bottom: 250-ms SOA condition.


Figure a2. Delta plot of alignment effect size across quintiles in Experiment II. Top: 0-ms SOA condition. Bottom: 250-ms SOA condition.
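For readers who wish to construct delta plots of this kind, a minimal sketch in base R (R Core Team, 2016) is given below. It is not the analysis code used for the experiments reported in this thesis; it assumes a hypothetical data frame named trials with one row per correct trial and columns subject, alignment (coded "aligned" or "misaligned"), and rt in milliseconds.

# Minimal sketch, not the thesis's analysis code. Assumes a hypothetical data
# frame 'trials' with columns subject, alignment ("aligned"/"misaligned"),
# and rt (ms) for correct trials only.
probs <- c(.1, .3, .5, .7, .9)  # percentile points commonly used for quintile bins

# Response time at each quintile point, computed separately for every
# subject x alignment cell; aggregate() stores the five values as a matrix column.
q_rt <- aggregate(rt ~ subject + alignment, data = trials,
                  FUN = quantile, probs = probs)

aligned    <- q_rt[q_rt$alignment == "aligned",    "rt"]
misaligned <- q_rt[q_rt$alignment == "misaligned", "rt"]

effect <- misaligned - aligned        # alignment effect per subject and quintile
bin_rt <- (misaligned + aligned) / 2  # x-coordinate: mean RT of each quintile

# Average over subjects and draw the delta plot.
plot(colMeans(bin_rt), colMeans(effect), type = "b",
     xlab = "Mean response time (ms)", ylab = "Alignment effect (ms)")
abline(h = 0, lty = 2)

Plotting the mean effect against the mean response time of each quintile bin yields a delta plot of the kind shown above; a flat line indicates a constant effect, whereas a downward slope indicates an effect that dissipates at slower response times.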


Appendix B

Alignment effect size across objects in Experiments I, II, and III

Figure b1. Alignment effect size across objects, collapsed across the congruency and SOA conditions, in Experiment I. Error bars represent 95% within-subjects confidence intervals appropriate for evaluating the alignment effect for each object individually.


Figure b2. Alignment effect size across objects, collapsed across the SOA condition, in Experiment II. Error bars represent 95% within-subjects confidence intervals appropriate for evaluating the alignment effect for each object individually.

Figure b3. Alignment effect size across objects, collapsed across the congruency condition, in the 250-ms SOA condition of Experiment III. Error bars represent 95% within-subjects confidence intervals appropriate for evaluating the alignment effect for each object individually. The 0-ms SOA condition is not included because the alignment effect did not reach significance.
