
Effects of Motivation on Early Stages of Visual Perception.

N. Godeschalk

University of Amsterdam

Project leader: Timo Stein

Colleagues: Dorien Melman, Anna van Vree and Manon Vollenberg

Word count: 6029


Abstract

Is visual information detected better when it is coupled to a motivational value? In a value-learning experiment, face stimuli were coupled to an Expected Value (EV) by associating the faces with financial reward, punishment or no financial consequence. Detection of face stimuli was then measured in two adjusted versions of the Attentional Blink (AB) task. The first experiment measured the influence of EV on face recognition. In the second experiment, a new version of the AB task was introduced that measured an earlier stage of visual perception than face recognition, namely face localization. Participants had to detect the location of either a new face that they had not seen in the value-learning experiment, or a face with a certain expected value. In both experiments, all faces were detected equally well, independent of their novelty and EV. Correct face detection differed only as a consequence of the time that participants had between the judgements of the two stimuli in the AB task. When there was more time between two stimuli, the second face stimulus was generally detected better, which is an indication of the AB effect itself. In conclusion, visual information with a motivational value is not detected better than neutral information.


1. Introduction

The influence of top-down functions on visual perception is a widely discussed topic. Top-down functions refer to the influence of more complex, higher-order brain functions, like memory, cognition, emotion or motivation, on more basal functions like sensory perception (Sarter, Givens & Bruno, 2001). Research has focused particularly on visual perception, probably because this is the sensory modality about which most is known (Kinchla & Wolfe, 1979; Gilbert & Li, 2013).

Although the influence of top-down functions on perception is widely discussed, the answer remains unclear and opinions are divided (Firestone & Scholl, 2016). To get insight into the influence of top-down functions on visual perception, it is important to have some understanding of how a visual percept is constructed. The building blocks of a visual percept are bottom-up features. These are the most primary visual features, like contrasts, colour intensity, orientation and movement, and differences between them can attract our attention in the earliest stages of visual processing (Milanese, Gil & Pun, 1995). It is thus clear that bottom-up features play a role in visual perception, but what we see is not just separate bottom-up features alone. They are also combined into a whole before something meaningful is actually perceived.

There are different theories on how these bottom-up features are activated and combined to create an image, and on the possible role of top-down functions in this process. One theory about the construction of a visual percept is the Feature Integration Theory (Treisman, 1988). It states that at the very beginning of building a visual percept, bottom-up features are identified throughout the visual field, without attention being directed to anything in particular yet. According to the Biased Competition Hypothesis (Moran & Desimone, 1985; Deco & Rolls, 2005), these bottom-up features activate corresponding neurons that are tuned to these specific features. Several different neurons are thus active at the same time. What determines towards which features attention will then be directed? The biased competition hypothesis states that the competition between the active neurons is biased in favour of specific neurons by top-down functions. Whether or not attention will be directed towards certain features is thus biased by the value that top-down functions give to the corresponding neurons. But how do top-down functions add a value to certain neurons?

It is known that through value learning, the neural activity that follows a certain combination of bottom-up features can change (Schultz, Dayan & Montague, 1997; Raymond & O’Brien, 2009). Value learning is a mechanism that couples an expected value to (a combination of) certain features, based on the gain or loss that these features led to in the past. In classical conditioning (Pavlov & Anrep, 2003), for example, an expected value is coupled to a stimulus that is normally neutral. When the expected value of a stimulus changes, the neural activity connected to the corresponding features also changes. By changing the neural activity following certain features, top-down functions bias the competition between neurons and might play a role in the selection of bottom-up features at an early stage. Including knowledge about previous experience with features in the selection of visual information would have a logical evolutionary advantage (Milders, Sahraie, Logan & Donnellon, 2006). In the most efficient case, visual features associated with a high reward or punishment will be selected before neutral information.

To test the possible influence of these value-learning codes, this paper will look at their effect on early visual perception. The hypothesis that visual features with an EV are more likely to be perceived than features without an EV will be tested. To do this, the constructs of value learning and perception first need to be translated into measurable variables. To start with, there are different ways to measure perception. In this paper the Attentional Blink (AB) task will be used (Raymond, Shapiro & Arnell, 1992). An example of the attentional blink is shown in Figure 1. The AB refers to the inability to detect a stimulus that is presented a very short time after another stimulus has been perceived. Attention, so to speak, blinks at the moment the second stimulus is presented. The first stimulus is called T1 and must be perceived for the AB to occur. The second stimulus is called T2. T2 is likely not perceived, because attention is still occupied by the processing of T1. The time between T1 and T2 is called the lag. The shorter the lag, the more attention is still occupied by the processing of T1 when T2 is presented. The AB paradigm is used for measuring stimulus detection (T2) under a high perceptual workload (T1). The two stimuli are (as in the biased competition hypothesis) competing for attention and, presumably, the more significant T2 is, the more likely it is to win the competition with T1 and be perceived.

Figure 1. An example of an AB test used in an experiment by Milders et al. (2006). A sequence of images is shown, and participants indicate whether or not they have perceived T1 and T2.


The AB paradigm has been used before as a measurement of the top-down influence of emotion on visual perception (Milders et al., 2006). In this experiment, participants were shown faces in an AB test that were either coupled to an emotional meaning (through conditioning) or had no emotional meaning. The authors then measured how well participants could judge in the AB test whether or not they had seen a face at T2. They found that faces with emotional value were noticed more often than faces without emotional value. From this they concluded that the emotional value of faces modulates people’s awareness of a face. A recurring critique of this kind of approach is that it might not be the emotional meaning per se that causes the difference in correct responses, but instead the level of arousal. Although arousal is also linked to the recognition of emotion, arousal is not a top-down function (Firestone & Scholl, 2016).

The arousal problem can be circumvented by using a different operationalization of top-down functions than emotion. One possible alternative top-down function that logically follows from the value-learning theory is motivation. In an experiment by Raymond and O’Brien (2009), the influence of motivation on visual perception was investigated. Participants were first motivated to detect (face) stimuli by coupling a reward or punishment to them (Figure 2). The reward or punishment that was coupled to a face was called the expected value (EV) of this stimulus. If faces with an EV were recognised better than neutral stimuli, this would be evidence of a top-down influence of motivation on perception. Recognition of a face was also measured with the AB test.

Figure 2. Participants were instructed to pick the most profitable face of a pair. The same two faces were always presented together (pairs are shown vertically in the picture). One of the faces would lead to gain in 80% of the cases and the other face only in 20% (probability). The same chances applied to the loss pairs. The probability of gain and loss coupled to the faces is also called the expected value (EV) of a face. The most profitable choices are marked with an asterisk. The neutral faces were not coupled to any gain or loss.

Raymond and O’Brien found that with a long lag between T1 and T2, high-probability faces were recognised better than other faces, regardless of valence (win/loss). They also found that with the short lag, win-associated faces were recognised better than loss-associated faces, regardless of probability. From this, they drew the conclusion that under a high perceptual load, win-associated faces are recognised better than loss-associated faces, and thus that motivation influences visual perception.


One possible flaw of this experiment is that the ‘old’ and ‘new’ options at T2 can cause a response bias. Some participants might want to be absolutely sure that they saw the face before when they answer ‘old’. One face might then seem more familiar than another, but they will still answer ‘new’. When a face with high motivational salience is perceived better than a neutral face, but still not fully, the participant might judge both faces as ‘new’. The degree to which he or she perceived the face differs, but this would show neither in their response nor in the results. In this case, a participant’s tendency to respond in a certain way would be measured, instead of how well they perceived a face. Another possible pitfall of Raymond and O’Brien’s study is the influence that memory could have on face recognition. Since at T2 participants are asked whether or not they had seen a face before, this measurement of perception is actually dependent on memory retrieval. The influence of EV on memory retrieval is then measured, instead of visual perception. It is therefore important to use a measurement that actually tests visual perception instead of memory.

In the current study, these flaws will be addressed by adjusting the AB task. First, Raymond and O’Brien’s experiment will be partially replicated by executing their experiment with some slight adjustments. This is done to see if their results can be replicated before moving to the measurement of an earlier stage of visual perception, namely face localization. In this adjusted version of the AB task, participants have to choose at T2 whether a face was presented on the left or the right of the screen, which removes the response bias. Another important benefit of using this version of the AB task is that it tests only whether participants perceived a face on a specific side. Unlike Raymond and O’Brien’s experiment, it does not test how well participants can retrieve a face from memory, and it should therefore be a more valid test of perception.

1.2 Method

1.2.1 Participants

Fifty-one subjects in total participated in the experiments (13 men, 37 women and one other, Mage = 25.2 years, age range: 18-63, SD = 9.8). The participants were randomly assigned to experiment 1, AB localization (N = 26), or experiment 2, AB recognition (N = 25). Participants were rewarded with 1.5 research credits or the amount of money they made in the face recognition task, varying between €1 and €4. All participants were Dutch and had corrected-to-normal vision. The experiments were approved by the Commissie Ethiek of the University of Amsterdam and all participants signed the informed consent. A power analysis revealed that there was 67% power to detect an effect size of d = 0.5 with the sample size of 25 participants in each experiment.
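As an illustration, the reported power value can be reproduced with a minimal sketch in Python, assuming the power analysis was based on a one-sample/paired t-test family (that assumption, and the use of statsmodels, are ours and not stated in the text):

    # Sketch: power to detect d = 0.5 with n = 25 per experiment at alpha = .05.
    # Assumes a one-sample/paired t-test family (our assumption).
    from statsmodels.stats.power import TTestPower

    power = TTestPower().solve_power(effect_size=0.5, nobs=25, alpha=0.05,
                                     alternative='two-sided')
    print(f"power = {power:.2f}")  # approximately 0.67, matching the reported value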


1.2.2 Stimuli

In both experiments, 22 pictures of young adult faces, with equal numbers of male and female faces, were used as T2 stimuli. The inner oval of the pictures was cut out in such a way that the faces showed no hair, teeth or neck. The pictures were all equated for mean luminance and standard deviation of luminance. For the mask stimuli, these same pictures were cut into a number of squares varying between a minimum of 15 and a maximum of 40. These squares were randomly put together into a new, scrambled image of a face. Depending on the number of squares that were used, there were 7 different ways to rearrange the faces into a new image. The 22 faces were thus scrambled in 7 different ways, which led to a total of 154 mask stimuli. The T1 stimulus was an oval of the same size as the pictures, filled with either small green circles or rectangles.
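A minimal sketch of how such scrambled masks could be generated is given below; the grid dimensions, file names and the use of NumPy/Pillow are illustrative assumptions rather than the procedure actually used:

    # Sketch: scramble a face picture into a grid of squares to create a mask
    # stimulus. Grid sizes are chosen so the number of squares lies between 15
    # and 40; exact grids and file names are assumptions.
    import random
    import numpy as np
    from PIL import Image

    def make_mask(face_path, rows, cols, rng):
        img = np.asarray(Image.open(face_path))
        h, w = (img.shape[0] // rows) * rows, (img.shape[1] // cols) * cols
        img = img[:h, :w]                      # crop so the grid divides evenly
        bh, bw = h // rows, w // cols
        blocks = [img[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
                  for r in range(rows) for c in range(cols)]
        rng.shuffle(blocks)                    # random rearrangement of the squares
        rows_of_blocks = [np.hstack(blocks[i * cols:(i + 1) * cols])
                          for i in range(rows)]
        return Image.fromarray(np.vstack(rows_of_blocks))

    rng = random.Random(0)
    mask = make_mask("face01.png", rows=5, cols=4, rng=rng)   # 20 squares
    mask.save("mask_face01_v1.png")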

1.2.3 Procedure

All participants first read the general information letter about the goal of the study. Before beginning each experiment, participants were asked to read the corresponding instruction sheet. The experiments were performed at a computer in an otherwise empty room with only dimmed artificial light. All participants started with the value learning experiment, which took about 45 minutes. After this, they had a short break, followed by the next experiment of about 45 minutes. After finishing both experiments, participants were asked to write down their answers to three questions: what they thought the goal of the experiment was, and what their strategy was in each of the two experiments. The answers were meant to give extra insight into the results. After that, they were handed the money they had earned in the value learning task.

1.3.1 Value learning

The stimuli that were used in the value learning task were 12 different faces, presented as 3 pairs of female and 3 pairs of male faces. The value learning task consisted of 360 trials. After every 120 trials there was the possibility to take a short break, and the number of remaining trials was shown. One trial consisted of the presentation of two different screens (Figure 3). Every screen (referring to what participants see on the whole computer screen at one moment) had a grey background. The first screen had one face on each side of a fixation point and lasted until the participant chose a face. Participants were instructed to choose the face that they thought would lead to the most profitable outcome. After choosing a face, the gain or loss resulting from their choice was displayed, which was always an amount of €0.05. Subsequently, they were shown how much money they had made in total up to that point. Two of the face pairs were coupled to winning money, two to losing money and two to nothing (neutral pairs). A win pair had one face with a 0.8 chance and one face with a 0.2 chance of leading to a win (Figure 2). The same chances applied to the loss pairs. The neutral pairs did not lead to any profit or loss. The assignment of EVs to faces was stable for each participant over the whole task, so they learned to couple the EV to a specific face. Between participants, the EVs were counterbalanced. This means that the faces had different EVs for different subjects, so the scores on recognisability and detectability should be the same between the different face exemplars. After completing the 360 trials, participants saw how much money they had made.
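A minimal sketch of how the pair structure, the counterbalanced EV assignment and the probabilistic outcomes could be set up is shown below; the face labels, the rotation-by-participant scheme and the exact outcome rule are illustrative assumptions:

    # Sketch: assign EVs to the 12 value-learning faces, counterbalanced across
    # participants, and sample a trial outcome. Labels and the rotation scheme
    # are assumptions for illustration only.
    import random

    FACES = [f"face{i:02d}" for i in range(12)]            # 12 faces = 6 pairs
    PAIR_EVS = [(0.8, 0.2), (0.8, 0.2),                    # two win pairs
                (-0.8, -0.2), (-0.8, -0.2),                # two loss pairs
                (0.0, 0.0), (0.0, 0.0)]                    # two neutral pairs

    def assign_evs(participant_id):
        """Rotate the face list per participant so EVs are counterbalanced."""
        shift = participant_id % len(FACES)
        rotated = FACES[shift:] + FACES[:shift]
        return {rotated[2 * i + j]: pair[j]
                for i, pair in enumerate(PAIR_EVS) for j in (0, 1)}

    def sample_outcome(ev, rng):
        """Outcome in euros for choosing a face with this EV.
        Assumption: win faces pay +0.05 with probability |EV| (else 0),
        loss faces cost -0.05 with probability |EV| (else 0)."""
        if ev == 0.0:
            return 0.0
        hit = rng.random() < abs(ev)
        return (0.05 if ev > 0 else -0.05) if hit else 0.0

    evs = assign_evs(participant_id=7)
    rng = random.Random(7)
    outcome = sample_outcome(evs["face03"], rng)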

Figure 3.

A trial with two screens. Participants first choose one of the two faces with the corresponding arrow keys. In the next screen, the gain or loss of their choice is displayed.

1.3.2 AB recognition

In the AB recognition task, the stimuli described in the stimuli paragraph above were used. There were 440 trials. After every 110 trials there was a short break that lasted until participants pressed a key to continue the experiment. One trial consisted of the successive presentation of ten different screens, each presented for 100 ms (see Figure 4). The first screen was T1, where participants were instructed to press 1 on a keyboard if they saw circles and 2 if they saw rectangles. The second screen was a mask stimulus. After this, T2 appeared after either 200 ms (short lag) on the third screen or 800 ms (long lag) on the ninth screen, followed by a mask. T2 was either an old face that was seen in the value learning task, or a new face that participants had not seen before. Six of the faces presented were new. Participants were asked to indicate whether the face was old (left arrow key) or new (right arrow key). After answering T1, participants got feedback about the correctness of their response: a small cross appeared in the middle of the screen, coloured green if their response was correct and red if it was not. Participants did not receive feedback on the correctness of T2.


Figure 4. A trial in the recognition task. At T1 participants respond with key 1 on the keyboard for circles and key 2 for rectangles. After a short or a long lag T2 follows, where participants are asked to indicate whether they have (left arrow key) or have not (right arrow key) seen this face before. After that, one more mask follows.

1.3.3 AB localization

Figure 5 illustrates the AB localization task; the only difference is that the number of images in the figure is smaller than in the actual task (to keep the figure clear). The stimuli used in this task were almost the same as described in the stimuli paragraph above, except that three different mask stimuli were now presented next to each other instead of one. The localization task consisted of 384 trials. After every 128 trials, there was the possibility to take a short break. In the localization task, one trial consisted of a sequence of 24 screens, each appearing on the screen for 90 ms. All screens from the first until the seventh in the sequence were mask stimuli. T1 appeared, randomly per trial, between the 8th and the 12th screen in the sequence. At T1, three images were presented next to each other: the middle image was the T1 stimulus as described above, the other two were mask stimuli. When the sequence was done, participants were also instructed to press 1 on the keyboard if they saw circles and 2 if they saw rectangles at T1. After T1, T2 would appear somewhere between the tenth and the twentieth screen in the sequence, depending on the lag. In the short lag, T2 appeared two screens after T1 and in the long lag, T2 appeared eight screens after T1. At T2, three images were presented next to each other: either the left or the right image was a face, and the other two images were mask stimuli. At T2 participants pressed the left arrow key for left and the right arrow key for right.


Figure 5. A trial in the AB localization task. At T1 participants reported whether they saw circles or rectangles. At T2 participants reported whether they had seen a face on the left or the right side of the screen.
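A minimal sketch of how one localization trial's screen sequence could be constructed is shown below; representing screens as text labels and the particular random draws are illustrative assumptions, and timing/display code is omitted:

    # Sketch: build the 24-screen sequence of one AB localization trial.
    # Screens 1-7 are masks, T1 falls randomly on screens 8-12, and T2 follows
    # two screens later (short lag) or eight screens later (long lag).
    import random

    def build_localization_trial(rng, lag):
        assert lag in ("short", "long")
        t1_pos = rng.randint(8, 12)                      # 1-based index of T1
        t2_pos = t1_pos + (2 if lag == "short" else 8)   # T2 lands on screens 10-20
        face_side = rng.choice(["left", "right"])        # side of the face at T2
        screens = []
        for i in range(1, 25):
            if i == t1_pos:
                screens.append("T1: mask | circles-or-rectangles | mask")
            elif i == t2_pos:
                screens.append(f"T2: face on the {face_side}, masks elsewhere")
            else:
                screens.append("mask | mask | mask")
        return screens, t1_pos, t2_pos, face_side

    rng = random.Random(1)
    screens, t1, t2, side = build_localization_trial(rng, lag="long")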

2. Results

The significance of all results was determined with an alpha level of α = .05. Furthermore, only trials in which participants correctly answered T1 were included. No participants were excluded.
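A minimal sketch of this trial selection and of computing T2 accuracy per participant, lag and EV is shown below, assuming trial-level data in a pandas DataFrame with hypothetical column names (subject, lag, ev, t1_correct, t2_correct):

    # Sketch: keep only T1-correct trials and compute mean T2 accuracy per
    # participant, lag and EV. File and column names are hypothetical.
    import pandas as pd

    trials = pd.read_csv("ab_trials.csv")                 # hypothetical trial-level data
    t1_ok = trials[trials["t1_correct"] == 1]             # analyse T2 only given correct T1
    t2_acc = (t1_ok
              .groupby(["subject", "lag", "ev"], as_index=False)["t2_correct"]
              .mean()
              .rename(columns={"t2_correct": "p_t2_correct"}))
    print(t2_acc.head())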

2.1 Value learning

In the value learning task, participants were expected to start picking, over time, the face of each pair that had the most beneficial outcome for them. This was tested by looking at the influence of condition and bin on the probability of an optimal choice in the value learning task, using a Repeated Measures ANOVA (see Figure 6). Condition consisted of two levels (reward and punishment) and bin of six levels (1 through 6). The neutral condition was left out of the analysis because there is no optimal choice in this condition and the score stays around 0.5. It is mainly relevant whether the scores for the rewarded and the punished faces differ significantly. According to Mauchly’s test, the assumption of sphericity was violated for the main effect of bin, χ²(14) = 45.53, p < .001, and for the interaction effect between bin and condition, χ²(14) = 44.39, p < .001. The degrees of freedom for bin were therefore corrected with Huynh-Feldt’s estimate of sphericity, and for the interaction effect with Greenhouse-Geisser’s estimate of sphericity. There was a significant effect of bin, F(4.11, 205.70) = 32.32, p < .001, on probability of optimal choice (Figure 6). There was no significant effect of condition, F(1, 50) = 0.20, p = .653, and no interaction effect between bin and condition, F(3.53, 176.35) = 1.31, p = .270, on probability of optimal choice. This means that bin did, but reward versus punishment did not, have a significant effect on the probability of an optimal choice.

Figure 6. The effect of bin and condition on probability of optimal choice. In both conditions there was a significant increase in probability of optimal choice between the first and the fourth bin. After the fourth bin the probability of optimal choice stays more or less constant. As can be seen from the similar trend of both conditions over the bins, there is no significant difference between the reward and punishment conditions.
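A minimal sketch of such a two-way repeated-measures ANOVA with a sphericity correction, using the pingouin package on per-participant scores in long format, is given below (the file and column names and the use of pingouin are assumptions; pingouin reports the Greenhouse-Geisser correction, whereas the analysis above also used Huynh-Feldt for the bin effect):

    # Sketch: bin x condition repeated-measures ANOVA on probability of optimal
    # choice, with sphericity correction. File and column names are assumptions.
    import pandas as pd
    import pingouin as pg

    scores = pd.read_csv("value_learning_scores.csv")  # one row per subject x bin x condition
    aov = pg.rm_anova(data=scores, dv="p_optimal",
                      within=["bin", "condition"], subject="subject",
                      correction=True)
    print(aov.round(3))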

2.2 AB Recognition

To find out whether there was a difference in face recognition between reward- and punishment-related faces, and whether recognition differed between high and low motivational salience, a Repeated Measures ANOVA was performed. Lag was included in the ANOVA to check whether the AB effect had occurred. Valence consisted of two levels (reward, punishment), probability consisted of two levels (high = 0.8/-0.8, low = 0.2/-0.2) and lag consisted of two levels (short, long).


No significant effect of valence, F(1, 24) < 0.01, p = .990, or probability, F(1, 24) = 2.403, p = .134, on T2 performance was found. There was a significant effect of lag, F(1, 24) = 11.460, p < .05, on T2 performance. Correct face recognition thus differed only between the short and the long lag, not between the reward and punishment conditions, nor as a result of the motivational salience of a face. Furthermore, the interaction effects between valence and probability, F(1, 24) = 1.779, p = .195, between valence and lag, F(1, 24) < 0.01, p = .990, between probability and lag, F(1, 24) < 0.01, p = .989, and between valence × probability × lag, F(1, 24) = 0.05, p = .828, were all non-significant.

Although neither reward and punishment nor probability had an effect on correct T2 recognition, faces coupled to any motivational salience might still be recognized better than faces with no motivational salience at all. To test this, a Repeated Measures ANOVA was performed in which valence and probability were combined into one factor, called Expected Value (EV), which refers to the probability of reward or punishment that was coupled to a face. EV consisted of five levels (-0.8, -0.2, 0.2, 0.8, neutral) and lag of two levels (short, long).

A significant main effect of lag on percentage T2 correct was found, F(1, 24) = 19.36, p < .001. Correct face recognition thus differed significantly between the short and the long lag. Because the assumption of sphericity was violated for both EV and the interaction between EV and lag, the degrees of freedom were calculated using Greenhouse-Geisser’s estimates of sphericity. No significant effect of EV, F(2.74, 65.87) = 0.61, p = .594, and no interaction effect between EV and lag, F(2.65, 63.71) = 0.45, p = .692, on percentage T2 correct was found. This means that correct face recognition did not differ between motivationally salient and neutral faces. The results are depicted in Figure 7.


Figure 7. Percentage of correctly recognized faces at T2 for the different expected values in the short and the long lag.

In line with the above results, no significant correlation was found between the individual mean value learning scores for the rewarded and punished faces and the mean T2 score for faces with any EV in the short lag of the recognition task (r = .069, p = .744). Another correlation, also computed by Raymond and O’Brien (2009), was performed between the individual value learning scores for win pairs and the recognition scores for high-probability win stimuli. In contrast with the results of Raymond and O’Brien, no significant correlation was found (r = .016, p = .938).
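A minimal sketch of such a between-participants correlation is shown below, assuming a per-subject summary table with hypothetical file and column names:

    # Sketch: Pearson correlation between per-participant value-learning scores
    # for win pairs and T2 recognition accuracy for high-probability win faces.
    # The summary file and column names are hypothetical placeholders.
    import pandas as pd
    from scipy.stats import pearsonr

    subject_means = pd.read_csv("subject_means.csv")
    r, p = pearsonr(subject_means["p_optimal_win_pairs"],
                    subject_means["p_t2_high_win"])
    print(f"r = {r:.3f}, p = {p:.3f}")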

2.2.1 Face exemplar control

To check whether some face exemplars were overall better recognized than others, a Repeated Measures ANOVA was performed. Ideally, all faces would be recognized equally well, because the EV was counterbalanced between the faces. There was one factor, face exemplar, with 16 levels. Because the assumption of sphericity was violated for face exemplar, the degrees of freedom were determined using Greenhouse-Geisser’s estimates of sphericity. A main effect of face exemplar on T2 performance was found, F(6.47, 155.26) = 2.67, p < .05.

This means that percentage T2 correct differed significantly between the different faces, indicating that certain faces were recognized better than others. Since some faces were on average judged correctly more often than others, there were initial differences in how recognizable the faces were.


Figure 8. Percentage T2 correct for the different face exemplars. An example of the difference in T2 scores between faces can be seen by comparing the scores of face exemplars six and eight.

2.3 AB Localization

For AB localization, the influence of valence, probability and lag on T2 score in the localization task was tested using a Repeated Measures ANOVA. As in AB recognition, valence consisted of two levels (reward, punishment), probability consisted of two levels (high = 0.8/-0.8, low = 0.2/-0.2) and lag consisted of two levels (short, long).

The main effects of valence, F(1, 25) = 1.59, p = .219, and probability, F(1, 25) = 0.40, p = .533, on T2 performance were not significant. The main effect of lag, F(1, 25) = 36.60, p < .05, on T2 performance was significant. Furthermore, the interaction effects between valence and probability, F(1, 25) = 0.94, p = .342, between valence and lag, F(1, 25) < 0.01, p = .941, between probability and lag, F(1, 25) = 0.02, p = .880, and between valence × probability × lag, F(1, 25) = 0.10, p = .749, were not significant.

Although T2 score did not differ between rewarded and punished faces, or between faces with a high or a low probability, correct T2 detection might still differ between faces with and faces without motivational salience. This was tested using a Repeated Measures ANOVA (Figure 9). EV consisted of five levels (-0.8, -0.2, 0.2, 0.8, neutral) and lag consisted of two levels (short, long).

The main effect of EV, F(4, 100) = 0.827, p = .511, on T2 performance was not significant. Nevertheless, there was a significant effect of lag, F(1, 25) = 35.163, p < .05, on T2 performance. Because the assumption of sphericity was violated for the interaction effect between EV and lag, the degrees of freedom were corrected with Greenhouse-Geisser’s estimates of sphericity. The interaction effect between EV and lag, F(2.77, 69.25) = 0.07, p = .972, was not significant.

Figure 9. Influence of lag and EV on T2 performance. The distance between the lines shows that lag has a clear effect on percentage T2 correct. Over the different EVs the lines stay quite stable, and the differences between the five points are not significant.

To check whether the lag and novelty of the faces had an effect on percentage T2 correct, a Repeated Measures ANOVA was performed. This tested the influence of memory on T2 performance. Lag consisted of two levels (short, long) and novelty consisted of two levels (old, new). For the old level of the novelty factor, only the neutral faces were used. Hereby, there should be no other differences between the faces (like EV) besides whether they had been seen before or not. If there was a difference in T2 score, it could only be attributed to the novelty of the faces. No significant effect of novelty on T2 score was found, F(1, 25) = 0.79, p = .383. There was a significant effect of lag on T2 score, F(1, 25) = 27.26, p < .001. No significant interaction effect between lag and novelty was found, F(1, 25) = 0.01, p = .913. This means that no effect of memory was found.

In line with these results, no significant correlation was found between the individual mean scores for rewarded and punished faces on the value learning task and the mean T2 accuracy for reward- and punishment-related faces on the localization task (r = .108, p = .600).

2.3.1 Face exemplar control

To see whether all faces were perceived equally well, a face exemplar control was performed: a Repeated Measures ANOVA tested the effect of face exemplar on T2 score in the AB localization task. There was one factor, face exemplar, with 16 levels. No main effect of face exemplar on T2 score was found, F(15, 75) = 0.954, p = .504. This means that there were no initial differences between the faces that influenced T2 performance on the localization task.

3. Discussion

The goal of the two experiments was to test the influence of motivation on visual perception. Participants were motivated to detect certain stimuli by coupling them to an EV in a value learning task. In this task, they learned over time to pick the most profitable face of a pair of two faces. The first, recognition experiment measured how well participants could judge whether or not they had seen a face before. In this experiment there was a possible influence of memory on the recognition of a face, which might confound the effect of perception. The second experiment was designed to rule out the possible effect of memory by looking at earlier stages of perception. This was done by asking participants whether they had seen a face on the left or the right side of the screen, instead of asking them to retrieve the face from memory. Correct face recognition in the first experiment did not differ significantly between faces with a different EV or without an EV. Correct face recognition did differ as an effect of the time between the two stimuli: face recognition was better when there was a long time between the presentation of the two stimuli (long lag) than with a short time (short lag), which reflects the AB effect. In the second experiment, correct face localization did not differ significantly between novel and old faces, nor did it differ between faces with a different EV. The only significant difference in correct detection of the faces was due to a difference in lag. When there was more time between the first and the second stimulus (long lag), the second stimulus was detected correctly more often, which is the AB effect.

We did not expect this lack of an effect of EV on face recognition. Raymond and O’Brien (2009) did find an effect of EV in a very similar experiment. Possibly this could be ascribed to a difference in the face stimuli that were used: Raymond and O’Brien used computer-generated faces, while this study used pictures of real faces. Some subjects reported after the experiment that they had recognised faces with “emotional value” better than neutral faces. Since the faces were supposed to show no emotion, this could have confounded the results, especially since former research has pointed out that the emotional value of stimuli might improve how well they are remembered and therefore retrieved from memory (Taylor et al., 1998). Possible initial differences in the recognisability of the faces were tested with a face exemplar control. From this analysis, some faces turned out to be generally better recognised than others, despite the counterbalancing. In contrast, no general differences in correct face localization between the face exemplars were found in the second task. This means that the tasks were measuring something different. Since the recognition task also measured memory, the faces probably differed in how memorable they were, but not in how detectable they were.

In the second, localization experiment, the lack of an effect of EV on correct face localization could thus not be explained by initial differences in the face exemplars. Possibly there was no connection between EV and face localization, meaning that motivation does not influence perception. This is supported by the fact that we did not find a correlation between EV and correct face recognition or correct face localization. The EV coupled to a stimulus thus did not relate to its recognisability or detectability, and thus not to how well it was perceived.

This is remarkable, since Raymond and O’Brien (2009) did find a correlation between the value learning scores and correct face recognition. Possibly our experiment did not motivate participants sufficiently to detect certain faces to actually influence their recognisability. It might also be due to the fact that the faces differed in recognisability beforehand. If this effect was larger than the effect of EV, it might mask the possible influence of EV. This does not apply to the localization experiment, though, since there was no effect of face exemplars on correct localization. This means that EV might influence (according to Raymond and O’Brien) the recognisability of a stimulus, but EV has no effect on localization and thus on perception.

The theory about the influence of value learning on the selection and integration of bottom-up features (Schultz, Dayan & Montague, 1997; Raymond & O’Brien, 2009) is not proved wrong by these results, though. The theory assumes, after all, that the neural activity coupled to visual features changes, thereby making those features more likely to be attended to. Our value learning task possibly did not provide enough time and practice to actually change the neural codes connected to the face exemplars. To test the influence of EV on visual perception more validly, it might be expedient in follow-up studies to couple a higher reward or punishment to the stimuli, to increase the strength of participants’ associations with them. It would then also be useful to include a measurement of the motivation coupled to the stimuli. It would also be recommendable to use computer-generated faces to avoid possible confounds. To make sure that the faces do indeed not differ in detectability beforehand, it is important to check that there are no differences in correct face detection between the different face exemplars.

A last possible explanation for the lack of influence of EV on face localization is that in the localization task some participants reported using a strategy that might interfere with the influence of EV. When judging whether the face at T2 was presented on the right or the left side of the screen, participants could simply look at one side and, if the stimulus did not appear there, conclude that it must have appeared on the other side. In this case, the detection of the face might have been simplified, undermining the possible effect that the EV of a face could have had. In follow-up research it could therefore be useful to include a third possibility, in which no face appears. In this way participants could no longer rely on the above strategy.

Our results do not support the theory that the EV coupled to a stimulus influences the visual perception of this stimulus. Our localization task substantiates that the EV of stimuli does not influence their likelihood of being detected. EV does not seem to influence the earliest stages of visual perception. Our experiment added another measurement of visual perception to the research area. The AB localization task is a new way to measure the visual detection of stimuli without the confound of memory. With some refinement, it could provide a useful measurement of visual perception in future experiments on the influence of motivation on visual perception.


References

Treisman, A. (1988). Features and objects: The fourteenth Bartlett memorial lecture. Quarterly Journal of Experimental Psychology, 40A, 201-237.

Deco, G., & Rolls, E. T. (2005). Attention, short-term memory, and action selection: a unifying theory. Progress in neurobiology, 76(4), 236-256.

Desimone, R. (1998). Visual attention mediated by biased competition in extrastriate visual cortex. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 353(1373), 1245-1255.

Firestone, C., & Scholl, B. J. (2016). Cognition does not affect perception: Evaluating the evidence for "top-down" effects. Behavioral and brain sciences, 39.

Hansen, C., & Hansen, R. (1988). Finding the face in the crowd: An anger superiority effect. Journal of Personality and Social Psychology, 54, 917–924.

Kane, M. J., & Engle, R. W. (2002). The role of prefrontal cortex in working-memory capacity, executive attention, and general fluid intelligence: An individual-differences perspective. Psychonomic Bulletin & Review, 9(4), 637-671.

Kinchla, R. A., & Wolfe, J. M. (1979). The order of visual processing: "Top-down," "bottom-up," or "middle-out." Perception & Psychophysics, 25(3), 225-231.

Pavlov, I. P., & Anrep, G. V. (2003). Conditioned reflexes. Courier Corporation.

Raymond, J. E., Shapiro, K. L., & Arnell, K. M. (1992). Temporary suppression of visual processing in an RSVP task: An attentional blink? Journal of Experimental Psychology: Human Perception and Performance, 18(3), 849.

Milanese, R., Gil, S., & Pun, T. (1995). Attentive mechanisms for dynamic and static scene analysis. Optical Engineering, 34(8), 2428-2434.

Milders, M., Sahraie, A., Logan, S., & Donnellon, N. (2006). Awareness of faces is modulated by their emotional meaning. Emotion, 6(1), 10.

Moran, J., & Desimone, R. (1985). Selective attention gates visual processing in the extrastriate cortex. Frontiers in cognitive neuroscience, 229, 342-345.

Müsch, K., Engel, A. K., & Schneider, T. R. (2012). On the blink: The importance of target-distractor similarity in eliciting an attentional blink with faces. PloS one, 7(7), e41257.

Raymond, J. E., & O'Brien, J. L. (2009). Selective visual attention and motivation: The consequences of value learning in an attentional blink task. Psychological Science, 20(8), 981-988.

Sarter, M., Givens, B., & Bruno, J. P. (2001). The cognitive neuroscience of sustained attention: where top-down meets bottom-up. Brain research reviews, 35(2), 146-160.

Schultz, W., Dayan, P., & Montague, P. R. (1997). A neural substrate of prediction and reward. Science, 275(5306), 1593-1599.

Taylor, S. F., Liberzon, I., Fig, L. M., Decker, L. R., Minoshima, S., & Koeppe, R. A. (1998). The effect of emotional content on visual recognition memory: A PET activation study.
