No Effect of Value Associations on the Attentional Blink

Dorien Melman

Supervisor: Timo Stein

Date: 06-01-2018

Student number: 10595880


Abstract

Recently, a wave of research has suggested top-down effects on visual perception. The current study focuses specifically on the potential top-down effects of motivation. Raymond and O’Brien (2009) demonstrated that recognition is enhanced for motivationally salient stimuli and that visual processing is biased in favour of reward-associated stimuli. Although various other studies support such an effect of affective information on perception, the literature is inconsistent, which suggests that the reported top-down effects could be explained alternatively. Therefore, this study aimed to examine whether the effect of value prediction on recognition could be confirmed and whether this effect also occurs at initial perception. To test this, we replicated Raymond and O’Brien (2009) and extended their design with a localization task. Participants completed a value learning task followed by an attentional blink (AB) recognition or localization task. In both tasks, the AB was not affected by the motivational values that were assigned to the stimuli. These findings indicate that the impact of motivation on perception is more limited than previously thought.


Introduction

What determines what we see? This question has been discussed for decades in cognitive science. According to the traditional understanding of perception, visual processing is encapsulated from higher-level cognition. Recently, however, a wave of research has argued that top-down influences on perception do exist (Firestone & Scholl, 2016). Hundreds of studies suggest that states such as beliefs, desires, emotions, motivations, intentions, and linguistic representations influence basic perception. On this view, higher-level cognitive states routinely ‘penetrate’ perception, such that what we see is a mixture of both bottom-up and top-down factors.

One factor that may influence perception, and the focus of this study, is motivation. People are able to differentiate between ‘good’ and ‘bad’ items, based on value codes that the brain assigns to them (Rothkirch, Schmack, Schlagenhauf, & Sterzer, 2012). According to Thorndike’s (1927) theory, stimuli leading to pleasant outcomes upon their selection are more likely to be selected again in the future. By contrast, aversive outcomes lower the probability of a reselection of that same stimulus. These value-prediction codes provide a common ‘currency’ for the brain that allows comparison of diverse options with diverse outcomes (Raymond & O’Brien, 2009). An important question concerning the influence of motivation on perception is whether stimuli that carry such motivational or emotional significance are prioritized in perception, compared with stimuli that have no affective significance.

There is increasing evidence that affective information can influence visual processing. This evidence stems mainly from two strands of research: the attentional blink (AB) paradigm and the literature using a technique called binocular rivalry (BR). The AB refers to the phenomenon that detection of a visual target presented in a rapid stream of visual stimuli is impaired when it appears about 100-400 ms after another target (Milders, Sahraie, Logan, & Donnellon, 2006; Müsch, Engel, & Schneider, 2012). Varying the duration of the lag between the two stimuli in a rapid serial visual presentation (RSVP) makes it possible to modulate the availability of attention. Some AB studies found that selection of stimuli for access to awareness is influenced by emotional meaning. For example, preferential detection of emotional words (Anderson & Phelps, 2001; Keil & Ihssen, 2004) and schematic objects (Mack, Pappas, Silverman, & Gay, 2002) has been reported. These results were extended by Milders et al. (2006), who found preferential detection of fearful faces compared with happy faces. Additionally, it has been found that motivation can modulate recognition in an AB task. Raymond and O’Brien (2009) measured recognition of briefly presented faces that had previously been seen in a value-learning task involving monetary wins and losses. Because the recognition task was performed with both a short and a long lag, Raymond and O’Brien could show that in the short lag condition, visual processing is biased in favour of reward-associated stimuli. This effect of reward association has been found not only on recognition, but also on rapid visual orienting (Rutherford, O’Brien, & Raymond, 2010) and on the latencies of saccades, rapid eye movements towards the target of attention (Rothkirch, Ostendorf, Sax, & Sterzer, 2013).

In the literature using the BR technique, on the other hand, studies also suggest a top-down influence of motivation on visual processing. In BR, different images are shown to the two eyes and the images compete for visual awareness (Stein, Grubb, Bertrand, Suh, & Verosky, 2017). Conscious perception alternates back and forth between the stimuli; the one that is consciously perceived is called dominant, while the other stimulus is called suppressed. The influence of emotion or motivation can, for instance, be investigated by showing an emotional or motivational stimulus to one eye and a neutral stimulus to the other and asking participants to report their perception over time, or by giving reward or punishment during a BR task. Using the latter technique, it has been found that perceptual inference is influenced by reward and punishment in opposite directions (Wilbertz, van Slooten, & Sterzer, 2014). Dominance durations of the rewarded percept were longer than those of the non-rewarded percept, while dominance durations of the punished percept were shorter than those of the non-punished percept. Anderson, Siegel, Bliss-Moreau, and Barrett (2011) investigated the influence of affective information on vision in the form of gossip. They showed that information about who is friend and who is foe not only influences how a face is evaluated, it also affects whether a face is seen in the first place. Faces paired with negative gossip dominated longer in visual consciousness, compared to positive or neutral gossip. However, a replication of this study did not find any differences in the visual awareness of faces associated with negative, neutral or positive behaviours (Stein et al., 2017). Similarly, Rabovsky, Stein, and Abdel Rahman (2016) did not obtain evidence for genuine influences of affective information on access to visual awareness for faces either.


So far, some studies have failed to find an influence of motivation or emotion, but most of the literature suggests that the motivational or emotional value with which a stimulus is associated influences its visual perception. However, there might be alternative explanations for these findings.

In the first place, it is possible that some cognitive effort is required to allow for an influence of motivation on perception. To illustrate, Raymond and O’Brien (2009) used a recognition task to investigate how motivational value might modulate a simple visual perceptual decision. Recognition was substantially enhanced for motivationally salient stimuli. However, this task measures not only perception; memory is involved as well. This means that the effect found may not reflect an effect on perception, but on memory. Therefore, whether a study finds an effect or not could depend on which stage of visual processing is investigated. More cognitive processes, such as memory, may become involved in later stages of visual processing. Studies that focus on access to visual awareness, an initial stage in visual processing, show less convincing results. The finding that gossip affects whether a stimulus is seen or not (Anderson et al., 2011) could not be replicated (Stein et al., 2017), and Rabovsky et al. (2016) did not obtain influences of affective information either. If an effect is often found at a relatively “late” stage in visual processing, but less often at an “early” stage, then it is likely that motivation affects some cognitive processes but not perception itself. This thought is supported by the observation that only voluntary saccades, but not reactive saccades, are modulated by motivational value (Rothkirch et al., 2013). Voluntary saccades require object-specific visual processing, while reactive saccades are initiated with only a minimal amount of cognitive effort.

Secondly, possible low-level confounds should be considered. This refers to the difficulty of matching stimuli across conditions: it is possible that the intended top-down manipulation is confounded with changes in the low-level visual features of the stimuli (Firestone & Scholl, 2016). For instance, Milders et al. (2006) investigated the impact of emotional meaning on the detection of faces and reported a difference between the detection of fearful faces compared with neutral and happy faces. The fearful faces showed their teeth and had their eyes wide open, while the happy and neutral faces had other specific facial features. These low-level differences might be responsible for the finding of preferential detection of fearful faces; image artifacts have troubled previous studies of selective attention to facial expressions (Firestone & Scholl, 2016; Hansen & Hansen, 1988).

The third possible confound involves demand characteristics. Vision experiments occur in controlled environments, and thus inevitably in a social environment, which raises the possibility that social biases may intrude on perceptual reports (Firestone & Scholl, 2016). This nature of vision experiments can lead to reports being contaminated by task demands, wherein certain features of experiments lead subjects to adjust their responses in accordance with their assumptions about the experiment’s purpose or the experimenters’ desires. This problem is prevalent in all studies where subjects rate stimuli, including binocular rivalry studies. To illustrate, in the BR study of Wilbertz et al. (2014), subjects were instructed to report which stimulus they perceived. The sound of a falling coin indicated that some amount of money was added to or subtracted from the balance. It is possible that the participants figured out the goal of the study and responded, consciously or unconsciously, in a way that they thought was aligned with it. Here, there is no objective benchmark against which one could compare their responses. This is different in performance-based paradigms; even if subjects figure out the goal of the research, it is hard to respond in a way that matches the researcher’s expectations.

The final confound that should be mentioned, in addition to demand characteristics, is response bias. Especially in yes-no tasks, decision strategies might contaminate the measurement of cognitive processes. Misinterpretations can arise when comparing individuals who have adopted different strategies (Kroll, Yonelinas, Dobbins, & Frederick, 2002); thus, some tasks might have measured response strategies rather than perception. Deffenbacher, Leu, and Brown (1981) suggest that recognition performance is better in forced-choice designs than in yes-no designs.

Overall, it seems likely that motivation affects later stages of visual processing. However, it remains unclear whether this effect of motivation is limited to later stages or whether there is an effect on initial perception as well. Previous studies that measured initial perception - using detection tasks or binocular rivalry - were possibly contaminated by low-level confounds, demand characteristics or response bias.


Aiming to fill this gap in the literature, this study has two goals: firstly, to examine whether the effect of value prediction on recognition can be confirmed, and secondly, to examine whether this effect also occurs at initial perception. To test this, we set out to replicate Raymond and O’Brien’s (2009) finding and to extend it with a localization task to study initial perception.

Adopting Raymond and O’Brien’s protocol, the experiment consists of two stages. First, participants engage in a conventional value-learning task using face stimuli. This learning session involves playing a simple choice game for modest monetary wins and losses, so that participants acquire different value-prediction codes for different faces. These codes incorporate information about valence (predicting wins or losses) and motivational salience (the probability of the monetary outcome). In the second stage, half of the participants complete an AB recognition task and the other half an AB localization task. In both tasks, no reward is given any more and the associated prediction value is now irrelevant. In the recognition task, the value-laden faces and some new faces are presented within an AB task with a short and a long lag condition. The participants view a rapid sequence of images in which two targets, an abstract object (T1) and a face (T2), are embedded. The task is to discriminate the texture of T1 and then to decide whether the face is one seen in the prior value-learning task or a new one. The localization task uses the AB paradigm as well. During this task, participants see three streams of scrambled faces. The appearance of an abstract object (T1) in the central stream, the same as in the recognition task, is followed by a face stimulus from the value-learning task (T2) that appears in the left or right stream. The participants have to discriminate the texture of T1 and determine on which side the face appeared. This design with two different tasks makes it possible to determine whether motivational value and salience affect both recognition, a relatively late stage in visual processing, and localization, an early stage. The AB paradigm allows modulation of the availability of attention (Raymond, Shapiro & Arnell, 1992).

Raymond and O’Brien (2009) found that regardless of available attention (i.e. in both the short and long lag condition), recognition was substantially enhanced for motivationally salient stimuli. In the short lag condition, the AB disappeared only for the win-associated stimuli; all other stimuli showed large ABs. Thus, motivational salience acted on recognition independently of attention, but when attention was limited, recognition was biased in favour of reward-associated stimuli. Based on their findings, an effect of motivation on recognition is predicted in this study. Assuming that the results of Raymond and O’Brien (2009) will be replicated, it is expected that stimuli highly predictive of outcomes (wins or losses) lead to enhanced recognition, compared with stimuli that have weak or no motivational salience, in both the short and long lag condition. Secondly, it is expected that valence determines recognition in the short lag condition; the AB effect will be absent for reward-associated stimuli only.

If motivation affects not only later stages of visual processing but initial perception as well, reflecting a true top-down effect, then an effect would be expected also on localization. Assuming that the effect would be similar to the effect on recognition, it would be expected that the T2 performance in the AB localization task will be enhanced for motivationally salient stimuli both in the short and long lag condition. Also, the AB would disappear for only the win-associated stimuli in the short lag condition.


Method

Participants

Fifty-one experimentally naïve, healthy adults (37 females, 13 males, 1 undefined; mean age = 25.24 years) participated in exchange for the money they won during the value learning task. Psychology students of the University of Amsterdam received, in addition to the money, 1.5 participation credits. All of them started the experiment with the value learning task. Twenty-five of them continued the experiment with the recognition task and the other twenty-six participants did the localization task in the second phase. All participants completed the rating task afterwards and answered a written question at the end about the strategy they used during the tasks. Informed consent was obtained. The study was approved by the local Ethics committee.

Apparatus

An Optiplex 9010 computer running Matlab R2017B recorded data and presented stimuli on a 61-cm monitor (100-Hz refresh, 1920 x 1080 resolution). The viewing distance was circa 70 cm.

Stimuli

The 22 face stimuli were gray-scale faces of young adults. Hair, teeth and neck were not visible. Examples of the face stimuli are shown in Figures 1, 2, and 3. The T1 stimuli were 88 computer-generated, green, abstract elliptical patterns, composed of either small circles (44) or rectangles (44). Scrambled faces were used as masks in both AB tasks. These mask stimuli were created by dividing the inner oval of a face stimulus into a grid and then randomly rearranging the pieces, creating a non-face-like image while preserving the outer shape of the face. Each of the 22 face stimuli was scrambled in 7 different ways, yielding 154 different masks. The 7 scramble variants differed in how many squares the grid comprised horizontally (between 3 and 5) and vertically (between 5 and 8). As a result, the minimum number of squares per face was 15 and the maximum was 40.
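The grid-scrambling idea can be sketched in a few lines of Python. This is an illustrative reconstruction, not the actual stimulus-generation code: the function name and pixel format are assumptions, and the real masks additionally preserved the face's outer oval.

```python
import random

def scramble_face(pixels, n_cols, n_rows):
    """Cut an image into an n_cols x n_rows grid and randomly rearrange
    the pieces, as in the mask-generation procedure described above.
    (The real masks were also constrained to preserve the face's outer
    oval, which this toy version omits.)"""
    h, w = len(pixels), len(pixels[0])
    tile_h, tile_w = h // n_rows, w // n_cols
    # Cut the image into n_cols * n_rows tiles.
    tiles = []
    for r in range(n_rows):
        for c in range(n_cols):
            tiles.append([row[c * tile_w:(c + 1) * tile_w]
                          for row in pixels[r * tile_h:(r + 1) * tile_h]])
    random.shuffle(tiles)
    # Reassemble the shuffled tiles into a new image of the same size.
    out = []
    for r in range(n_rows):
        row_tiles = tiles[r * n_cols:(r + 1) * n_cols]
        for y in range(tile_h):
            out.append([v for tile in row_tiles for v in tile[y]])
    return out

# The 7 scramble variants used 3-5 columns and 5-8 rows, i.e. between
# 3 * 5 = 15 and 5 * 8 = 40 pieces per face.
```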

Procedure

Value learning. Figure 1 presents the outline of one trial of this task. In each trial of the value learning task, a face pair was presented with one face on the left and one face on the right side of the screen. The participants were instructed to choose one face, by pressing one of two designated keys, in order to maximize their gain at the end of the task. Immediately after the participant chose, the screen displayed for 1300 ms the message “WIN” (in green), “LOSS” (in red), or “NOTHING” (in black), depending on the face pair just presented and the probability governing the outcome. The running total of earnings appeared at the same time. Each face always appeared with its mate, but the location of the two faces was randomized across trials. The task involved six different face pairs, 3 male and 3 female. Two pairs produced only gains, two pairs produced only losses, and two never produced any outcome (serving as controls). In the win and loss pairs, one face was assigned an outcome probability of 80%, a high probability of winning or losing money. The other face of the corresponding pair was associated with a low outcome probability of 20%. Wins and losses were always equal to 5 cents. Each pair was presented 60 times in a random order. The total of 360 trials was divided into six blocks, which took approximately half an hour. Short breaks occurred after 120 and 240 trials. To eliminate image effects, assignment of each face pair to outcome type was counterbalanced across participants.
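The outcome contingencies described above can be summarized in a small sketch; the amounts and probabilities come from the text, while the labels and function name are illustrative, not the experiment code.

```python
import random

# Outcome contingencies of the value-learning task: each win/loss face
# predicts a 5-cent outcome with probability .8 or .2; neutral faces
# never produce an outcome. The dictionary keys are illustrative labels.
OUTCOMES = {
    "high_win":  (+5, 0.8),
    "low_win":   (+5, 0.2),
    "high_loss": (-5, 0.8),
    "low_loss":  (-5, 0.2),
    "neutral":   (0, 0.0),
}

def trial_outcome(face, rng=random):
    """Monetary outcome (in cents) of choosing `face` on one trial."""
    amount, p = OUTCOMES[face]
    return amount if rng.random() < p else 0
```

Over the 60 presentations of a pair, a high-probability win face therefore yields roughly 4 cents per selection on average, a low-probability win face roughly 1 cent.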

Figure 1. Outline of one trial for the value learning task. Two faces were presented, one on either side of the screen. The selection of one face by a button press was followed by feedback, informing the participant about the outcome of the current trial and the running total of earnings.

AB recognition task. The recognition task started with 5 practice trials. In each experimental trial, four elliptical stimuli were presented in rapid succession at the center of the screen: T1, mask, T2, mask (Figure 2). The stimuli matched the faces of the choice game in size and were all presented for 100 ms. Each trial started with a central fixation point. Then, T1 and its mask were presented successively (without inter-stimulus interval). The T1 image, randomly selected on each trial from a set of 88 possible images, was green and composed of circles or rectangles. The mask was randomly selected on each trial. Then, either T2 or a blank screen was displayed, which created the short-lag (200 ms) and long-lag (800 ms) conditions. In the long-lag condition, T2 followed the blank screen after 600 ms. The presentation of T2 was followed by its mask. T2 was a face stimulus, selected on half of the trials from the 12 value-laden faces used in the choice game and on the remaining trials from 10 novel faces.

At the end of a stimulus sequence, participants were required to answer two questions without time pressure. First, they had to identify the T1 pattern as ‘circles’ or ‘rectangles’ by pressing one of two keys using the left hand. Then, they were asked to report their recognition decision (old/new) regarding the T2 face by pressing two other keys using the right hand. Feedback was provided for the identification of T1 via the color of the central fixation point after the first question: it turned green for the correct answer and red for the incorrect one. Monetary outcomes were not provided during this task. T1 type, T2 type, and lag combinations were randomly presented in a fully crossed design. There were 440 trials in total, in 220 of which T2 was a face from the previous value learning task. Short breaks occurred after 110, 220 and 330 trials.
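As a check on the timing description, the short- and long-lag trial structure can be written out as a small event timeline. This is a sketch of the timings reported above, not the experiment code, and the function name is an assumption.

```python
def recognition_trial_events(lag_ms):
    """Event timeline of one AB recognition trial, as (label, onset, offset)
    tuples in ms from T1 onset. Timings follow the Method: every stimulus
    lasts 100 ms, and T2 onset follows T1 onset by 200 ms (short lag) or
    800 ms (long lag, with a 600-ms blank after the T1 mask)."""
    events = [("T1", 0, 100), ("T1 mask", 100, 200)]
    if lag_ms == 800:
        events.append(("blank", 200, 800))
    events.append(("T2", lag_ms, lag_ms + 100))
    events.append(("T2 mask", lag_ms + 100, lag_ms + 200))
    return events
```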


Figure 2. Illustration of the sequence of events in the AB recognition task. Each stimulus was displayed for 100 ms. Participants judged whether T1 comprised circles or rectangles and then decided whether or not T2 was a face that had been previously presented in the choice game.

AB localization task. The rapid serial visual presentation (RSVP) started with 5 practice trials. In each trial, 24 stimuli were presented for 90 ms each in three streams positioned horizontally next to each other. In rapid sequence, distractors, T1, distractors, T2 and distractors followed each other (Figure 3). The distractors were the scrambled face stimuli. A trial began with the presentation of 7 to 11 distractors in the left, central and right stream, all randomly selected. Then, in position 8, 9, 10, 11 or 12 (randomly selected per trial), T1 was shown in the central stream, while the left and right stream still presented a distractor. The T1 image, again green and composed of circles or rectangles, was selected randomly from the set of 88 possible images. After the presentation of T1, T2 followed at a lag of 2 or 8 distractors in the left or right stream. T2 was a complete face stimulus, randomly selected from the set of 16 counterbalanced face stimuli (of which 12 had already been used in the value learning task). All subsequent stimuli were distractors, displayed in the three streams.

After the stimulus sequence, participants pressed one of two keys using the left hand to identify the T1 pattern as ‘circles’ or ‘rectangles’ and then pressed one of two other keys using the right hand to report the location of the face (left/right). There was no time pressure. Feedback for the identification of T1 was provided through the color of the central fixation point that appeared after answering the first question: green for a correct and red for an incorrect answer. Monetary outcomes were not provided. In total, there were 384 trials and short breaks occurred after 128 and 256 trials.

Figure 3. Illustration of the sequence of events in the AB localization task. In total, 24 stimuli are presented for 90 ms each. The sequence starts with 7 to 11 distractors, followed by T1 in the central stream. T2 then follows at a lag of 2 (short lag) or 8 (long lag) distractors, thus anywhere between position 10 and position 20.

Rating task. Motivational value reflects the desirability of a stimulus. Therefore, after completion of the main experiment, participants were asked to rate all faces with respect to their perceived likeability. A scale was used ranging from 1 to 7 (1 = very unlikeable, 7 = very likeable).


Results

In total, 51 subjects participated in the experiment. The data of one subject was excluded from the analyses, due to a T1 performance below chance level in the AB recognition task. Fifty subjects were included in the analyses, 24 of whom participated in the AB recognition task and 26 in the AB localization task.

Statistical power

Power analysis for the value learning task indicated that with the final sample size of N=50, we had 93.4% power to detect effect sizes of Cohen’s d ≥ 0.5. For the AB recognition task with N=24, power analysis indicated 65.0% power for detecting effect sizes of Cohen’s d ≥ 0.5. Power analysis for the AB localization task with N=26 revealed 68.8% power to detect effect sizes of Cohen’s d ≥ 0.5.
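The reported power values are consistent with a one-sample (or paired) t-test at alpha = .05, two-sided. A quick Monte Carlo check reproduces them to within simulation error; this is an illustration for sanity-checking, not the analysis actually used, and the critical t values are hardcoded approximations.

```python
import math
import random

def simulated_power(n, d, t_crit, reps=20000, seed=0):
    """Monte Carlo power of a two-sided one-sample (or paired) t-test
    for true effect size Cohen's d, using a hardcoded critical t value
    for df = n - 1 at alpha = .05. Each rep draws n values from
    N(d, 1) and counts how often |t| exceeds the critical value."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        xs = [rng.gauss(d, 1.0) for _ in range(n)]
        m = sum(xs) / n
        sd = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
        if abs(m / (sd / math.sqrt(n))) > t_crit:
            hits += 1
    return hits / reps

power_n50 = simulated_power(50, 0.5, 2.010)  # df = 49; close to the 93.4% reported
power_n24 = simulated_power(24, 0.5, 2.069)  # df = 23; close to the 65.0% reported
```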

The key effect found by Raymond and O’Brien (2009), which will be discussed below, was of special interest; therefore, we also computed the power to detect that key effect. Based on the F-value of 18.72 for the interaction effect between lag and valence, power analysis indicated that with N=24 we had 98.8% power to detect their key effect.

Value learning

To assess learning performance, the relative probability of optimal stimulus choice was computed for each valence. Optimal choices are trials in which the participants chose the stimulus associated with a high probability of obtaining a reward or the stimulus associated with a low probability of losing money. For neutral stimulus pairs, optimal choices were randomly assigned to one stimulus of the neutral pair. A repeated measures ANOVA with factors time (block 1 to block 6) and valence (reward and punishment) showed that participants successfully learned to choose the optimal stimuli over time (Figure 4). Mauchly’s test indicated that the assumption of sphericity had been violated for the main effect of time, χ2 (14) = 45.43, p < .001, and the interaction effect between time and valence, χ2 (14) = 44.39, p < .001. Therefore, degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (ε = .75 for the main effect of time and .71 for the interaction effect between time and valence). There was a significant main effect of time on performance for reward and punishment pairs in the value learning task, F(3.77, 188.43) = 32.33, p < .001. For reward pairs, the high-probability-win face was chosen on average on 58% (SE = 2%) of the trials in the first block and 76% (SE = 4%) of the trials in the last block. For punishment pairs, the low-probability-loss face was chosen on 54% (SE = 2%) of the trials in the first block and 76% (SE = 3%) of the trials in the last block. The results show that performance was not significantly affected by the type of valence, F(1, 50) = .21, p = .653. This indicates that similar levels of learning performance were achieved for both reward and punishment stimuli. There was also no significant interaction effect between time and valence, F(3.53, 176.35) = 1.31, p = .270.

Figure 4. Optimal choice rates over the six blocks of the value learning task. Error bars indicate between-subject standard errors.

AB recognition task

A repeated measures ANOVA with the factors lag (short or long), valence (reward or punishment), and motivational salience (high or low probability) showed a significant main effect of lag on T2 performance in the recognition task, F(1, 23) = 10.10, p < .01. The T2 hit rate was higher in the long lag condition (M = .74, SE = .04) than in the short lag condition (M = .62, SE = .04). T2 performance did not depend on the type of valence, F(1, 23) < 0.01, p = .973, or on the level of probability that was associated with the stimuli, F(1, 23) = 1.84, p = .189. The interaction effects were all non-significant (lag × valence F(1, 23) = .11, p = .748; lag × salience F(1, 23) = .25, p = .622; valence × salience F(1, 23) = 1.06, p = .313; lag × valence × salience F(1, 23) = .01, p = .911).


The expected value of a stimulus reflects the product of value (reward or punishment) and probability (.8, .2 or 0). To explore the effect of expected value on T2 recognition, a repeated measures ANOVA with factors lag (short or long) and expected value (high and low reward, high and low punishment, neutral) was performed. This analysis showed a significant main effect of lag on T2 performance, F(1, 23) = 17.87, p < .001. As expected, the participants performed better in the long lag condition (M = .73, SE = .03) than in the short lag condition (M = .62, SE = .03). Mauchly’s test indicated that the assumption of sphericity had been violated for the main effect of expected value, χ2 (9) = 30.47, p < .001, and the interaction effect between lag and expected value, χ2 (9) = 32.80, p < .001. Therefore, degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (ε = .68 for the main effect of expected value and .66 for the interaction effect between lag and expected value). As Figure 5 shows, T2 performance was not significantly affected by the type of expected value, F(2.73, 62.85) = .47, p = .676, and there was also no significant interaction effect between lag and expected value, F(2.65, 61.02) = .35, p = .766.
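Concretely, the five expected-value levels entered into this analysis work out as follows; this is just a restatement of value × probability from the definition above, with illustrative labels.

```python
# Expected value per stimulus type: outcome amount (in cents) times its
# outcome probability, matching the five factor levels of the ANOVA.
expected_value = {
    "high reward":     +5 * 0.8,  # +4.0 cents
    "low reward":      +5 * 0.2,  # +1.0 cent
    "neutral":          0 * 0.0,  #  0.0
    "low punishment":  -5 * 0.2,  # -1.0 cent
    "high punishment": -5 * 0.8,  # -4.0 cents
}
```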

Figure 5. T2 hit rates as a function of expected value of the T2 stimulus, separately for the long and short lag condition. Error bars indicate between-subject standard errors.

Stimulus exemplars. To investigate whether some stimuli, independent of their motivational value or salience, were recognized significantly better than others, a repeated measures ANOVA with factors lag (short and long) and exemplar (16 face stimuli) was performed. The main effect of lag was significant again, F(1, 24) = 38.28, p < .001. Mauchly’s test indicated that the assumption of sphericity had been violated for the main effect of exemplar, χ2 (119) = 227.45, p < .001, and the interaction effect between lag and exemplar, χ2 (119) = 241.14, p < .001. Therefore, degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (ε = .47 for the main effect of exemplar and .27 for the interaction effect between lag and exemplar). Surprisingly, the main effect of exemplar on T2 performance was significant, F(7.00, 168.02) = 4.08, p < .001. So, despite the counterbalancing, T2 performance for some face stimuli was significantly better than for other face stimuli.

Replication of the key effect by Raymond and O’Brien (2009). The most interesting effect found by Raymond and O’Brien (2009) is that the AB effect in the short lag condition disappeared only for the reward-associated faces. For reward-associated stimuli, there was no significant difference between T2 performance in the short and long lag condition, while there was a difference between the short and long lag for neutral and punishment stimuli. To further test whether we could replicate the trend of this key effect, we computed the mean difference in T2 recognition of reward-associated stimuli between the short and long lag condition (M = -.1094, SE = .05). The same calculation was done for the other stimuli (M = -.1052, SE = .03). A dependent t-test showed no significant difference between the mean differences for reward and other stimuli (t(23) = -.064, p = .949), so the lag effect was not significantly different for the reward-associated stimuli compared to the other stimuli.
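The dependent t-test used here can be sketched in a few lines; this is an illustration of the computation, not the original analysis script, and the function name is an assumption.

```python
import math

def paired_t(xs, ys):
    """Dependent (paired) t-test statistic and df for H0: mean(x - y) = 0.
    In the comparison above, each x would be one subject's short-minus-long
    lag difference for reward-associated stimuli, and each y the same
    difference for all other stimuli."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    m = sum(diffs) / n
    var = sum((d - m) ** 2 for d in diffs) / (n - 1)
    return m / math.sqrt(var / n), n - 1
```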

Additional correlation analysis. Participants successfully learned to choose the optimal stimuli during the value learning task, yet T2 performance in the recognition task does not appear to have been affected by the learned values. We therefore explored whether learning performance was related to the effect of learned values on performance in the recognition task. The gain achieved in the value learning task served as an index of learning performance; the effect of valence on T2 recognition was computed as the difference between mean T2 recognition for reward/punishment pairs and for neutral pairs (baseline). Gain and the valence effect were not significantly correlated, r(22) = .20, p = .351.
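A correlation of this kind can be sketched with `scipy.stats.pearsonr`; the per-participant values below are simulated placeholders, not the observed gains or valence effects.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 24  # number of participants

# Hypothetical per-participant measures: total gain in the value learning
# task, and the valence effect (mean T2 recognition for reward/punishment
# pairs minus the neutral-pair baseline) in the recognition task.
gain = rng.normal(500, 100, n)
valence_effect = rng.normal(0.0, 0.05, n)

# Pearson correlation between learning performance and the valence effect;
# degrees of freedom for r are n - 2.
r, p = stats.pearsonr(gain, valence_effect)
print(f"r({n - 2}) = {r:.2f}, p = {p:.3f}")
```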


Localization task

Similar to the analysis of the recognition task, a repeated measures ANOVA with factors lag (short or long), valence (reward or punishment), and motivational salience (high or low probability) was performed to investigate whether these factors affected T2 performance in the localization task. The main effect of lag was again significant, F(1, 25) = 36.60, p < .001: the probability of a correct T2 response was higher for the long lag (M = .85, SE = .03) than for the short lag (M = .68, SE = .03). T2 performance was not significantly affected by valence, F(1, 25) = 1.59, p = .219, or salience, F(1, 25) = .40, p = .533. None of the interaction effects were significant (lag × valence, F(1, 25) < .01, p = .941; lag × salience, F(1, 25) = .02, p = .880; valence × salience, F(1, 25) = .94, p = .342; lag × valence × salience, F(1, 25) = .11, p = .749).

A repeated measures ANOVA with factors lag (short or long) and expected value (high and low reward, high and low punishment, neutral) showed a significant main effect of lag only, F(1, 25) = 35.16, p < .001: again, T2 accuracy was higher for the long lag (M = .85, SE = .03) than for the short lag (M = .68, SE = .03). The main effect of expected value was non-significant, F(4, 100) = 0.83, p = .511 (Figure 6). Mauchly's test indicated that the assumption of sphericity had been violated for the interaction between lag and expected value, χ2(9) = 25.83, p < .01, so degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (ε = .69). Lag and expected value did not interact significantly, F(2.77, 69.25) = .07, p = .972.
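The Greenhouse-Geisser correction simply multiplies both ANOVA degrees of freedom by the sphericity estimate ε; the F statistic itself is unchanged, only the reference distribution. The reported corrected degrees of freedom for the lag × expected-value interaction can be reproduced as follows (assuming the unrounded estimate was ε ≈ .6925, which the reported ε = .69 and the corrected dfs imply).

```python
# Greenhouse-Geisser correction of repeated measures ANOVA degrees of freedom.
epsilon = 0.6925          # assumed unrounded sphericity estimate
df_effect, df_error = 4, 100  # uncorrected: (levels - 1), (levels - 1) * (n - 1)

df_effect_corr = epsilon * df_effect
df_error_corr = epsilon * df_error
print(round(df_effect_corr, 2), round(df_error_corr, 2))  # 2.77 69.25
```

This matches the F(2.77, 69.25) reported in the text.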

To explore the effect of exposure, a repeated measures ANOVA was performed with factors lag (short or long) and familiarity (neutral or novel stimuli). Participants had previously been exposed to the neutral pairs during the value learning task; the novel pairs appeared for the first time in the localization task. In accordance with the other results, the lag effect was significant, F(1, 25) = 27.26, p < .001. However, no significant effect was found for familiarity, F(1, 25) = .79, p = .383, so T2 performance was not affected by earlier exposure to the stimuli.


Figure 6. T2 accuracy as a function of the expected value of the T2 stimulus, separately for the long and short lag conditions. Error bars indicate between-subject standard errors.

Stimulus exemplars. A repeated measures ANOVA with factors lag (short or long) and exemplars (16 face stimuli) showed a significant main effect of lag, F(1, 25) = 33.04, p < .001. Because Mauchly's test indicated that the assumption of sphericity had been violated for the main effect of exemplars, χ2(119) = 172.37, p < .01, and for the interaction between lag and exemplars, χ2(119) = 170.35, p < .01, degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (ε = .54 for the main effect of exemplars and ε = .52 for the interaction). T2 performance in the localization task was not affected by the type of exemplar, F(8.13, 203.13) = 1.48, p = .166, and there was no significant interaction between lag and exemplars, F(7.74, 193.60) = .73, p = .664.

Additional correlation analysis. As for the recognition task, we investigated whether there was a correlation between learning performance and the effect of learned values on performance in the localization task. Using the same variables, a bivariate correlation analysis showed no significant correlation between gain and the effect of valence on T2 localization, r(24) = .06, p = .770.


Rating

At the end of the experiment, participants rated the likability of the face stimuli. To investigate whether these ratings were affected by the motivational value assigned in the value learning task, we first examined whether the rating scores differed between the face stimuli. A one-way repeated measures ANOVA with factor exemplars (the set of 16 face stimuli used in the value learning task) showed that likability ratings differed significantly between the face stimuli, F(10.13, 496.52) = 6.85, p < .001. This effect cannot be explained by the motivational value assigned to the stimuli, because a one-way repeated measures ANOVA with factor condition (high and low reward, high and low punishment, neutral, and novel) showed no significant main effect of condition on the ratings, F(5, 245) = .58, p = .717. Because T2 performance appeared unaffected by the assigned values and condition was unrelated to likability, we explored whether the likability of a face stimulus was related to T2 performance. Analyses revealed no significant correlation between likability rating and T2 performance in the AB recognition task, neither for the short lag, r(16) = .272, p = .307, nor for the long lag, r(16) = .118, p = .664. Correlations between likability rating and T2 performance in the AB localization task were also non-significant for the short lag, r(16) = .183, p = .498, and the long lag, r(16) = .127, p = .638.


Discussion

The current study investigated the hypothesis that motivational valence and salience affect recognition, and examined whether any such effect extends to initial perception. This was tested using the attentional blink (AB) paradigm, in which the AB refers to the phenomenon that detection of a visual target presented in a rapid stream of visual stimuli is impaired when it appears shortly after another target (Milders et al., 2006). A value learning task was followed by an AB recognition or AB localization task. Neither recognition nor localization was affected by value associations. Participants successfully learned to choose the optimal stimuli during the value learning task, so the first part of the experiment appears to have established stable representations of motivational values. Performance on the subsequent AB tasks was affected by the duration of the lag, but not by valence, motivational salience, or expected value. In addition, familiarity with the stimuli did not enhance performance in the AB localization task. Lastly, our results do not correspond, even as a trend, with Raymond and O'Brien's (2009) most striking finding, the disappearance of the AB for reward-associated stimuli only.

Our failure to find an influence of value associations in the AB tasks came as a surprise and contradicts studies that reported effects of motivational value on visual perception. Raymond and O'Brien (2009) previously used the AB paradigm to demonstrate that recognition is enhanced for motivationally salient stimuli and that the AB disappears for reward-associated stimuli only. The goal of the current study was to examine whether this finding could be confirmed and whether it would extend to initial perception. However, despite designing the AB recognition task to be similar to Raymond and O'Brien's task, their results were not reproduced. In accordance with the recognition results, localization was not affected by motivation either. These results are in line with the findings of Stein et al. (2017), who also failed to replicate an effect of affective information on visual perception. They also correspond with Rabovsky et al. (2016), who did not obtain evidence for influences of affective significance on access to visual awareness. Furthermore, our findings are broadly consistent with Firestone and Scholl (2016), who argue that compelling evidence for true top-down effects on perception has not yet been provided.

The failure to find an influence of value associations on recognition or localization is unlikely to be due to a failure to learn the values assigned to the faces. Despite the smaller number of trials in the value learning phase, the learning curves for reward, punishment, and neutral stimuli in our study were similar to those in Raymond and O'Brien's (2009) study.

Instead, the differences between Raymond and O'Brien's (2009) study and ours may be due to other factors. First, subtle differences in the experiment could account for the discrepancy. For example, in our experiment the T1 stimuli were green-scale instead of gray-scale, to make T1 more clearly discernible in the AB localization RSVP stream; for consistency, the same T1 stimuli were used in the AB recognition task. This might have made it more difficult to pay attention to T2, although an effect of value associations on the AB could still have occurred. A difference from Raymond and O'Brien's experiment that may well have influenced the results is the use of photographed rather than computer-generated face stimuli for T2. As a consequence, the face stimuli may have been less neutral, and there may have been unintended differences between the faces. Given the finding that T2 recognition differed between the faces, this seems plausible. Support for this interpretation also comes from the interviews afterwards, in which some participants reported recognizing faces by their expression or by facial features such as eyebrows or noses. Future studies should therefore use computer-generated faces to ensure neutrality across the face stimuli.

Furthermore, it is possible that the relatively small number of observations accounts for the failure to find an effect of value associations on perception. However, the power for detecting the effect size found by Raymond and O'Brien (2009) was close to 100%, so it does not seem likely that this caused the failure to replicate their findings.
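A power claim of this kind can be checked with a small computation based on the noncentral t distribution. The sketch below is for a two-sided paired t-test; the effect size used in the example call is illustrative, not the value estimated from Raymond and O'Brien's data.

```python
import numpy as np
from scipy import stats

def paired_t_power(d, n, alpha=0.05):
    """Power of a two-sided paired t-test for Cohen's d with n pairs."""
    df = n - 1
    ncp = d * np.sqrt(n)                      # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)   # two-sided critical value
    # P(reject H0) under the noncentral t distribution
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

# With 24 participants, a large effect (here d = 1.5, chosen for illustration)
# is detected almost surely.
print(round(paired_t_power(1.5, 24), 3))
```

As a sanity check, at d = 0 the function returns the alpha level, since the noncentral t then reduces to the central t distribution.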

Lastly, it should be taken into account that the motivational value of the face stimuli, as defined by the outcome probabilities in the learning task, did not influence the likability ratings of those face stimuli. Phrased differently, participants' subjective experience of the faces was not changed by the learned value associations. By contrast, previous studies did find a correlation between faces' motivational value and face ratings (Rothkirch et al., 2012). This raises the possibility that the manipulation did not induce the intended effect. However, we found no correlation between likability rating and T2 performance either; thus, even the phenomenon that subjective experience of stimuli affects recognition was not observed. It could therefore be interesting to investigate the influence of other types of motivational associations in the future, for example by using sounds or physical manipulations to establish associations with specific stimuli.

In sum, the valence and motivational salience with which a stimulus was associated did not affect recognition of that stimulus; the effect of value prediction on recognition found by Raymond and O'Brien (2009) could therefore not be confirmed. Likewise, localization of a stimulus was not influenced by valence or motivational salience. Our results suggest that the impact of value associations with faces is more limited than previously thought. This contrasts with the wave of studies that found top-down effects on perception, but is consistent with the more general notion that visual processing is encapsulated from higher-level cognition (Firestone & Scholl, 2016). Given that the discovery of a top-down effect of motivation would revolutionize our understanding of how the mind is organized, it is important that future studies further investigate the extent and boundary conditions of motivation effects on perception.


References

Anderson, A. K., & Phelps, E. A. (2001). Lesions of the human amygdala impair enhanced perception of emotionally salient events. Nature, 411, 305-309.

Anderson, E., Siegel, E. H., Bliss-Moreau, E., & Barrett, L. F. (2011). The visual impact of gossip. Science, 332(6036), 1446-1448.

Deffenbacher, K. A., Leu, J. R., & Brown, E. L. (1981). Memory for faces: Testing method, encoding strategy, and confidence. The American Journal of Psychology, 13-26.

Firestone, C., & Scholl, B. J. (2016). Cognition does not affect perception: Evaluating the evidence for "top-down" effects. Behavioral and brain sciences, 39, 1-72.

Hansen, C. H., & Hansen, R. D. (1988). Finding the face in the crowd: An anger superiority effect. Journal of personality and social psychology, 54(6), 917.

Keil, A., & Ihssen, N. (2004). Identification facilitation for emotionally arousing verbs during the attentional blink. Emotion, 4(1), 23-35.

Kroll, N. E., Yonelinas, A. P., Dobbins, I. G., & Frederick, C. M. (2002). Separating sensitivity from response bias: Implications of comparisons of yes-no and forced-choice tests for models and measures of recognition memory. Journal of Experimental Psychology: General, 131(2), 241.

Mack, A., Pappas, Z., Silverman, M., & Gay, R. (2002). What we see: Inattention and the capture of attention by meaning. Consciousness and Cognition, 11(4), 488-506.

Milders, M., Sahraie, A., Logan, S., & Donnellon, N. (2006). Awareness of faces is modulated by their emotional meaning. Emotion, 6(1), 10-17.

Müsch, K., Engel, A. K., & Schneider, T. R. (2012). On the blink: The importance of target-distractor similarity in eliciting an attentional blink with faces. PloS one, 7(7), e41257.

Rabovsky, M., Stein, T., & Rahman, R. A. (2016). Access to awareness for faces during continuous flash suppression is not modulated by affective knowledge. PloS one, 11(4), e0150931.

Raymond, J. E., & O'Brien, J. L. (2009). Selective visual attention and motivation: The consequences of value learning in an attentional blink task. Psychological Science, 20(8), 981-988.

Raymond, J. E., Shapiro, K. L., & Arnell, K. M. (1992). Temporary suppression of visual processing in an RSVP task: An attentional blink? Journal of Experimental Psychology: Human Perception and Performance, 18(3), 849.

Rothkirch, M., Ostendorf, F., Sax, A. L., & Sterzer, P. (2013). The influence of motivational salience on saccade latencies. Experimental brain research, 224(1), 35-47.

Rothkirch, M., Schmack, K., Schlagenhauf, F., & Sterzer, P. (2012). Implicit motivational value and salience are processed in distinct areas of orbitofrontal cortex. Neuroimage, 62(3), 1717-1725.

Rutherford, H. J., O'Brien, J. L., & Raymond, J. E. (2010). Value associations of irrelevant stimuli modify rapid visual orienting. Psychonomic Bulletin & Review, 17(4), 536-542.

Stein, T., Grubb, C., Bertrand, M., Suh, S. M., & Verosky, S. C. (2017). No impact of affective person knowledge on visual awareness: Evidence from binocular rivalry and continuous flash suppression.

Thorndike, E. L. (1927). The law of effect. The American Journal of Psychology, 39(1/4), 212-222.

Wilbertz, G., van Slooten, J., & Sterzer, P. (2014). Reinforcement of perceptual inference: Reward and punishment alter conscious visual perception during binocular rivalry. Frontiers in Psychology.
