Placebo effects on creativity: The impact of expectancy effects on convergent creative thinking

Academic year: 2021



Name: Christoph Hügle Student number: 1923412 Date: 12/02/18

Supervisor: Roberta Sellaro Second reader: Roel van Dooren Word count: 7689

Placebo effects on creativity:
The impact of expectancy effects on convergent creative thinking


Abstract

Being creative is becoming ever more important in today’s world. As a result, organizations and researchers alike are looking for ways to enhance creativity. Meanwhile, expectancy effects have been shown to influence cognitive functioning, but there are conflicting results about their usefulness for cognitive enhancement. In this study, we investigated whether convergent creative thinking can be enhanced via expectancy effects, with a daylight lamp serving as placebo. We instructed 40 participants about the alleged positive effect of daylight lamps on creativity and paired exposure to one with higher success rates in a convergent-thinking task. In the test phase, one group solved items with the lamp turned on, one with the lamp turned off. We expected the lamp-on group to perform better. While the results trended in the expected direction, we found no significant effects of the daylight lamp on creativity. The results hint at the potential of using expectancy effects for enhancing creative thinking, to be investigated in further studies. Until then, it remains unclear if placebos can become a feasible way to improve creativity and other cognitive functions.


1 Introduction

How can you become more creative? This question is gaining importance in research and society. Not only do problems like climate change, growing need for resources, and digitalization require new solutions. We also live in a world of increasingly fast cycles of innovation, making old ways and practices become obsolete ever more rapidly (West, 2017). Furthermore, increased connectivity means that it often does not matter where a job is performed, creating further pressure to come up with unique, novel solutions against worldwide competition. This makes the study of ways to foster creativity more relevant each day.

An unlikely method to influence creative thinking is an effect well-known in medicine: The placebo effect. A placebo is a substance that has no therapeutic effect (Placebo, 2017), but can lead to symptom improvement just by believing that taking it will help. More generally speaking, placebo effects are a form of expectancy effects. An expectation is a ‘strong belief that something will happen or be the case’ (Expectation, 2017), and can influence not only our physical health, but also our cognition in various ways (Schwarz, Pfister, & Büchel, 2016). Expectancy effects are often seen as a nuisance in studies about the effectiveness of a medicine or intervention. However, the existence of expectancy effects across many domains also shows their usefulness to elicit reactions and changes that might be beneficial for the person believing in an effect.

This study combines expectancy effects and creativity. Its aim is to find out if creative thinking can be induced by the expectancy to be more creative. The general framework of this study is the Metacontrol State Model (Hommel, 2015), which we will discuss first. Together with expectancy effects (Schwarz et al., 2016), it forms the theoretical basis of the study.

1.1 Metacontrol State Model

1.1.1 Cognitive control and metacontrol.

The Metacontrol State Model (Hommel, 2015) embraces the common view of cognition as information processing (Neisser, 1967). Environmental stimuli have to undergo several processing steps before an action is undertaken: registration and identification of sensory input, attentional selection, choice of action, and motor execution (see the blue boxes in Figure 1).

However, the processing steps themselves do not automatically lead to achieving an intended goal. For example, think of driving a car to another city. It’s not sufficient to focus your attention on the street and operate the car correctly, i.e. process information and execute the appropriate action. You also have to monitor your progress, avoid speeding, select alternative routes, and decide when you’re close enough to look for a parking spot. In other words, goal-oriented cognition has to be configured and adjusted to changing contexts. Incorrect responses have to be inhibited, goals updated, alternative actions selected or discarded. The ability to adjust cognition to changing contexts is called cognitive control (Goschke, 2003) (see red box in Figure 1). Cognitive control is a second layer on top of the basic cognitive processes, optimizing information processing to achieve the intended goals (Hommel, 2015; Verbruggen, McLaren, & Chambers, 2014).

Figure 1: Schematic representation of the interaction between cognitive control and metacontrol (taken from Hommel & Colzato, 2017).

This optimization process does not yield the same results for all situations. Cognitive control is sometimes more efficient when executed in a persistent manner, sometimes in a flexible manner. Coming back to our car example, a narrow focus on goals is more suitable if you know the way to your destination and just need to get there. If you’re looking for a specific shop or a way around traffic, you need to monitor the surroundings and be open to new alternatives and rapid changes of action, which requires a more flexible approach. How can this be monitored and adjusted? This is where metacontrol comes into play: It determines and configures the ‘style’ of cognitive control (Hommel & Wiers, 2017) according to context and goals.

According to this model, cognitive control emerges out of the interaction between persistence and flexibility, which are called metacontrol states (Hommel, 2015). Metacontrol itself is the balance and adjustment of those states. Persistence and flexibility modulate the strength of the goal representation and the competition between alternatives (see Figure 2). In a persistent control state, the top-down bias of the goal is strong, and alternative representations inhibit each other to a high degree. In this state, irrelevant information is excluded more effectively, and decision-making is more selective (Hommel & Colzato, 2017). On the other hand, a flexible control state is characterized by less top-down bias and higher availability of alternatives. According to this view, being susceptible to bottom-up influences and changing attentional foci and goals are the result of a flexible cognitive control style, rather than a loss of cognitive control.

Figure 2: Schematic representation describing how persistence and flexibility differentially affect decision making processes (taken from Hommel & Colzato, 2017).

While the precise mechanisms underlying cognitive control are unclear, there is evidence that it does not consist of a unified system but rather emerges out of the interaction of different dopaminergic systems (Cools, 2006, 2008). One system includes the prefrontal cortex and seems to originate from the mesofrontal pathway, while another stems from the nigrostriatal pathway. The prefrontal cortex has more dopamine D1 receptors and is thought to create the persistence system, while the striatum contains more D2 receptors and seems to create the flexibility system (Hommel, 2015). Metacontrol policies covary systematically with genetic predispositions of dopaminergic functioning in the mesofrontal and nigrostriatal pathways (Hommel & Colzato, 2017).


1.1.2 Plasticity of control states.

Metacontrol policies, or the balance of metacontrol states, can be influenced both in the short and in the long term. First, there are long-term cultural influences: Bilinguals, whose effort to stay in one language is assumed to bias them toward more persistence, perform worse than monolinguals in a distributed-attention task which requires a flexible control state (Colzato et al., 2008). Religion and sexual orientation are also associated with different preferred metacontrol policies (Colzato, van den Wildenberg, & Hommel, 2008; Colzato, van Hooidonk, van den Wildenberg, Harinck, & Hommel, 2010; Colzato et al., 2010).

Second, metacontrol policies can be influenced by short-term factors, for example by mood: Positive affect leads to and benefits flexibility (Dreisbach & Goschke, 2004), but not a persistent state (Fischer & Hommel, 2012). Positive mood is further associated with increased brainstorming performance (Baas, De Dreu, & Nijstad, 2008) and greater distractibility (Dreisbach & Goschke, 2004), both signs of a flexibility-biased control state. Negative affect, in contrast, is linked to a more persistent control state: There is reduced cross-talk between tasks (Zwosta, Hommel, Goschke, & Fischer, 2013) and more top-down control (van Steenbergen, Booij, Band, Hommel, & van der Does, 2012). Meditation is another short-term factor that can alter metacontrol policies (see next section).

Importantly, the specific metacontrol state established while performing a given task can spill over and influence performance on other (unrelated) tasks. For instance, it has been shown that performing a persistent-heavy task reduces cross-talk in another task (Fischer & Hommel, 2012). In contrast, performing a flexibility-heavy task increases the degree to which people consider others when deciding about their own actions (Colzato, van den Wildenberg, & Hommel, 2013), and interpersonal trust (Sellaro, Hommel, de Kwaadsteniet, & Colzato, 2014) - findings that are consistent with the idea that increased flexibility increases overlap between alternatives (here, it increases self-other overlap). In summary, metacontrol states are affected by different factors, but also have a systematic effect on cognition, generalizing over the task at hand.

1.2 Metacontrol states and creativity

Importantly, persistence and flexibility also have an influence on performance on creativity tasks. Creativity is commonly defined as originality combined with effectiveness (Runco & Jaeger, 2012). However, to date there is no satisfactory explanation of how creativity works. In research, it is often divided into convergent and divergent thinking (Guilford, 1967). Convergent thinking is a process that leads to a single correct answer to a question or problem. It is facilitated by a persistent control state, which inhibits alternatives and creates a goal-centered top-down bias (Colzato, Szapora, Lippelt, & Hommel, 2017). In this state, the search for solutions uses well-defined criteria and focuses on very few solutions (Colzato, Ozturk, & Hommel, 2012). Requiring strong cognitive control, convergent thinking is thought to be more resource-hungry than divergent thinking (Colzato et al., 2012).

Conversely, divergent thinking means generating many possible solutions, which is associated with a flexible control state. In this state, weak top-down control and inhibition between alternatives allow a broad search.

There is no correlation between performance in convergent and divergent thinking tasks (Akbari Chermahini & Hommel, 2010), and the same experimental manipulation affects them differently (Hommel, Akbari Chermahini, van den Wildenberg, & Colzato, 2014). For example, research has shown that tyrosine, a precursor of dopamine found in a variety of foods, promotes convergent, but not divergent thinking (Colzato, de Haan, & Hommel, 2015). However, it also needs to be noted that convergent thinking correlates with fluid intelligence (Akbari Chermahini, Hickendorff, & Hommel, 2012) and with verbal intelligence (Lee, Huggins, & Therriault, 2014).

Interestingly, people can change their metacontrol balance by themselves, as shown in research about meditation: Open-monitoring meditation (not focusing on any thought or object) promotes divergent thinking, but focused-attention meditation (focusing on one object) does not robustly facilitate convergent thinking (Colzato et al., 2012; Colzato et al., 2017). The fact that meditation improves mood, and positive mood supports divergent thinking, may explain why focused-attention meditation has not been found to improve convergent thinking (van Steenbergen, Band, & Hommel, 2010).

Furthermore, performing creativity tasks can also influence mood in the expected directions. Akbari Chermahini and Hommel (2012) found that performing divergent thinking tasks elevates mood, while convergent thinking tasks had the opposite effect.

Summing up, the metacontrol states have been linked to different forms of creativity and, more importantly, can be experimentally manipulated. Mood, food, other tasks, and meditation can bias the control state towards more flexibility or more persistence. The fact that mood and meditation can influence control states shows that creativity can be influenced by factors within a person.

This gives us new opportunities for creativity enhancement: Are there other ways in which people can by themselves become more creative? A prime example of eliciting mental and physiological changes without actually providing an intervention are placebos. Therefore, we can ask ourselves if metacontrol policies are, like many other processes, susceptible to expectancy, or placebo, effects. We will address this possibility in the next section.

1.3 Expectancy effects on cognition

Expectancy effects, i.e. the effects of an expectation itself on a person’s bodily functions, cognition, or behavior, have been studied in a wide range of areas, but not for creativity. The most famous category is the placebo effect, i.e. the expectation of symptom improvement per se leads to the improvement. In medicine, placebos have been shown to be effective in the treatment of pain, depression, and immunological and cardiovascular illnesses (Rief & Petrie, 2016). If a person expects their symptoms to worsen, we speak of a nocebo effect.

The mechanisms of expectancy effects are still being studied, and are probably different for different domains and applications (Schwarz et al., 2016). They are hypothesized to be more than just a belief that was formed at some point in time (Rief & Petrie, 2016; Schwarz et al., 2016). A belief is just the foundation of the expectation. Subsequently, it is developed and strengthened through associative learning. This point is important for the induction of expectancy effects. It was derived from studies on placebo hypoalgesia (reduced pain sensation due to a placebo), where pairing suggestive instructions with instruction-congruent learning experiences maximized possible expectancy effects (Colloca & Benedetti, 2006; Colloca, Sigaudo, & Benedetti, 2008). This learning does not seem to be necessary for the induction of nocebo effects (Colloca et al., 2008). Expectancy effects could prove a cost-effective, nonintrusive way of cognitive enhancement, fostering the full creative potential of people.

In the research about cognitive enhancement, the focus of researchers has been on the confounding effects and on the methodological implications (Boot, Simons, Stothart, & Stutts, 2013; Rabipour, Andringa, Boot, & Davidson, 2017), but some studies used expectancy effects as a means for cognitive enhancement (Schambra, Bikson, Wager, DosSantos, & DaSilva, 2014). In one of them, Schwarz and Büchel (2015) successfully induced, via instructions and conditioning, the expectancy of better performance in a Flanker task paradigm when exposed to a certain tone frequency. The subjects indeed believed that the placebo tone improved their performance, but they did not actually improve in the task. Similarly, Looby and Earleywine (2011) reported that a placebo, which they told participants was a cognitive enhancement drug, affected the subjective experience of participants without influencing performance in several tests of cognitive abilities (such as verbal learning, digit span, or SAT questions).


These findings underscore three points: First, it is important that the subjects not only get an instruction, but actually experience the ‘effect’ of a placebo. Second, objective performance measures need to be obtained. Third, enhancing cognitive performance this way might prove very difficult. Nonetheless, it is established that cognitive performance can be affected by expectancy effects like stereotype threat or self-efficacy beliefs (Spencer, Logel, & Davies, 2016; Stajkovic, 1998). Even more importantly for our purposes, creativity differs from the cognitive abilities studied by Looby and Earleywine (2011) and Schwarz and Büchel (2015). The difference is that ‘more’ creativity can be achieved by changing the balance between persistence and flexibility. As we’ve discussed above, this balance can indeed be changed experimentally. Therefore, enhancing creativity via expectancy effects might be more feasible than enhancing other aspects of cognition that are possibly already operating at an optimum level (Hills & Hertwig, 2011).

In a different vein, there is one study involving both placebos and creativity: Colzato et al. (2015) administered a placebo (powder dissolved in juice) as a control condition for their tyrosine study. However, they did not check for possible placebo effects.

Apart from studies like the ones mentioned above, expectancy effects on cognitive performance are largely uncharted territory. They influence performance in some paradigms but not in others. Above, we discussed how metacontrol policies influence cognition and can be influenced by various factors, including within-person factors like meditation. Furthermore, learning a link between an (arbitrary) intervention and an effect can produce expectancy effects. Linking those two topics, metacontrol states could also be induced via instructions and congruent learning of a placebo-effect association. Hence, this study investigated if expectancy effects can influence metacontrol policies. Concretely, we studied if convergent creative thinking is susceptible to expectancy effects.

1.4 Present study

The aim of this study was to combine two lines of research: The metacontrol-based creativity enhancement research, and studies about expectancy effects on cognition. Metacontrol policies are adaptive, and can be induced or influenced. Expectancy effects are a possible enhancement method, but so far, no one has investigated this combination. The question we tried to answer was then: Is it possible to enhance creativity by creating the expectancy to be more creative?

The focus of our study was on convergent creative thinking, i.e. goal-directed thinking to find one or very few correct solutions. We assessed convergent thinking with the Remote Associates Test or RAT (Mednick & Mednick, 1967; Dutch version after Akbari Chermahini et al., 2012, and Sellaro, Hommel, & Colzato, 2017). In this test, participants are presented with three unrelated words, such as ‘time’, ‘hair’, and ‘stretch’, and are to identify the one common associate (‘long’). The common word can be linked via synonymy, formation of a compound word, or semantic association, but not by applying standard logic, concept formation, or problem solving (such as ‘all words begin with an a, so the common word begins with an a, too’, Akbari Chermahini et al., 2012). Thus, finding a solution can be seen as creative thought. Subjects are presented with several items within a given timespan, and the number of correctly solved items determines the RAT score.
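The scoring rule just described (one point per correctly solved item) can be sketched as follows. The function name and data layout are our own illustration, not part of the original test materials; only the example item itself comes from the text above:

```python
def score_rat(responses, answer_key):
    """Count correctly solved RAT items.

    responses:  maps an item (tuple of three cue words) to the typed answer,
                or None if no answer was submitted in time.
    answer_key: maps each item to its single correct associate.
    Matching is case-insensitive and ignores surrounding whitespace.
    """
    score = 0
    for item, given in responses.items():
        if given is None:
            continue  # unanswered items score zero
        if given.strip().lower() == answer_key[item].lower():
            score += 1
    return score


# Example item from the text: 'time', 'hair', 'stretch' -> 'long'
answer_key = {("time", "hair", "stretch"): "long"}
responses = {("time", "hair", "stretch"): " Long "}
print(score_rat(responses, answer_key))  # -> 1
```

Lenient matching (case and whitespace) reflects only that participants typed free-text answers; how the original study handled near-misses such as misspellings is not stated, so this sketch treats them as incorrect.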

As a placebo for creativity, we used daylight lamps. Daylight lamps are very bright lamps that simulate natural daylight, with an illuminance of more than 2000 lux. They are used in light therapy against depression and seasonal affective disorder (Mårtensson, Petterson, Berglund, & Ekselius, 2015). We are unaware of any effects of the lamps on creative thinking, or metacontrol states in general.

Table 1.

Study Design. Note that the order of sets in the induction phase was reversed for half of the sample, i.e. they started with a difficult set.

Test condition   Induction phase                                         Test phase
                 Easy set   Difficult set   Easy set   Difficult set     Difficult set
Lamp-on          Lamp on    Lamp off        Lamp on    Lamp off          Lamp on
Lamp-off         Lamp on    Lamp off        Lamp on    Lamp off          Lamp off

Our study consisted of two phases (see Table 1): The induction phase and the test phase. The induction phase was identical for all participants. They were instructed that exposure to a daylight lamp enhances creative thinking. In order to experience this ‘effect’, they then completed several trial blocks containing RAT item sets. These item sets were either easy or difficult to solve, and were congruently paired with the lamp being turned on or off: During easy blocks, the lamp was turned on, so that the participants learned to associate the lamp with higher success rates. In the difficult blocks, the lamp was turned off.

In the induction phase, we expected our subjects to solve more RAT items in the easy blocks than in the difficult blocks, just because of the different difficulty of the item sets. At the end of this phase and due to the instruction and the learning experience, the subjects should have experienced a higher success rate when the daylight lamp was turned on.


In the subsequent test phase, the subjects tried to solve another, longer item set, which was of comparable difficulty to the difficult sets in the induction phase. However, half of the participants tried to solve the items with the lamp turned on, half with the lamp turned off. Assuming an expectancy effect, our hypothesis was that the lamp-on group would be more creative, i.e. solve more items in the test phase, than the lamp-off group.

2 Methods

2.1 Participants

Our sample consisted of N = 50 subjects (8 male, 42 female) who were between 17 and 27 years old (M = 19.86). We recruited participants via flyers in the faculty building, posts in the University’s Bachelor psychology Facebook groups, and the University’s online recruiting page SONA. We had the following inclusion/exclusion criteria: Participants had to be native Dutch speakers between 17 and 32 years old. People who had neurological or psychiatric disorders, took medication, or were under the influence of alcohol or other drugs during the experiment could not participate. The subjects received one credit as compensation.

2.2 Design

We tested the placebo effect on creativity with a mixed two-factorial design, with group as a between-subjects factor and item set as a within-subjects factor. The experiment consisted of two phases: The induction phase and the test phase.

The induction phase was identical for all participants: First, they were instructed that daylight lamps have been found to make people more creative. Then they solved RAT blocks with five items each, totaling 20 items. Two of the blocks were easy to solve, two difficult (for more info about the composition of the sets, see section 2.4). As mentioned above, the easy sets were always performed with the daylight lamp turned on, and the difficult sets with the lamp turned off.

The order of the blocks during the induction phase was counterbalanced between participants: Half of the participants started with an easy set and half with a difficult set. The order of the items within each set was randomized.

In the following test phase, the sample was randomly divided into two equal-sized groups: The ‘lamp-on’ group and the ‘lamp-off’ group. Both groups had to solve another set of ten RAT items. The difference was that the lamp-on group was given the placebo (daylight lamp turned on) while solving the RAT, while the control group was not. The test block was of comparable difficulty to the difficult blocks in the induction phase.


As mentioned before, mood affects metacontrol states, and vice versa. For our study, this means that a participant’s mood could influence their performance in the blocks: Subjects in a positive mood, which biases control toward flexibility, should perform less well on the convergent-thinking task. In order to check for differences in mood, participants were asked to complete the Affect Grid (Russell, Weiss, & Mendelsohn, 1989), which assesses mood via a two-dimensional pleasure-arousal grid. Another possible confounding variable was the subjects’ perceived susceptibility to external influences. This was assessed by the Locus of Control questionnaire (Rotter, 1966). Finally, the subjects had to fill out a manipulation check. This questionnaire asked the following things: First, if the participants noticed a difference between the blocks. Second, what they attributed this difference to, and third, which blocks they found more difficult. The manipulation check was meant to find out if the subjects believed in the effect of the daylight lamp, as we wanted them to.

2.3 Procedure

The study took place in a laboratory room within the social science faculty at Leiden University. The study protocol was approved by our local ethics committee and is in line with the Declaration of Helsinki. Recruitment materials, instructions, and tasks were all written in Dutch. The subjects gave informed consent and were asked if they wanted to be included in a database for future studies. They were instructed about the procedure of the experiment and about the Affect Grid, and filled out a standard screening questionnaire. The questionnaire asked about demographic information, native language, medication and drug use (general and in the last 24 hours), history of mental illnesses, medical conditions, spirituality, and leisure activities.

2.3.1 Induction phase.

The subjects then received the instructions about the task. At this point, we tried to create a conscious expectation of the daylight lamp’s beneficial effect on creativity. The instruction read that daylight lamps had a positive influence on creativity, and that the aim of the study was to find out the magnitude of the effect (see Figure 3).


Figure 3: Instruction stating that daylight lamps improve creativity.

Figure 4: Practice items with the solution added in green, shown at the beginning of the experiment.

The subjects then received instructions about the RAT. They practiced it by solving three items, with the correct result shown at the end (see Figure 4). Afterwards, the subjects completed the induction phase with four blocks (two easy with the lamp turned on, two difficult with the lamp turned off; see Table 2).

For each item, they could type in an answer for 25 seconds. If a participant submitted an answer, she was asked how she came to her answer, and could select the options ‘Intuitively’, ‘Analytically’, or ‘I don’t know’. After each set of items, participants filled out the Affect Grid. Afterwards, they had to call the experimenter, who turned the light on or off and initiated the next trial set.

2.3.2 Test phase.

When the induction phase was completed, the participants solved the actual test set, consisting of ten items. In this last phase, the two groups were treated differently: The lamp was either turned on or off. As all participants were solving the same items, we compared the number of correctly answered items between the lamp-on and lamp-off groups. Finally, participants answered the questions in the manipulation check and completed the Locus of Control questionnaire. The debriefing explained the true purpose of the experiment, and participants were asked if they had any more questions or comments. The whole experiment took around 30 minutes.

2.4 Remote Associates Test (RAT)

The 30 items used in our study stem from the Dutch RAT version with 220 items, based on the performance of 150 participants (Akbari Chermahini et al., 2012; Sellaro et al., 2017). The items can be divided into difficulty levels: Easy, medium, and difficult, based on the percentage of participants who could solve a given item. This was the basis for the creation of item sets of varying difficulty (see Table 2).

In the induction phase, the easy sets consisted of two easy, two medium, and one difficult item. Difficult sets contained one easy, two medium, and two hard items (but see section 2.6). The set in the test phase had two easy, four medium, and four difficult items, hence it was of the same difficulty (but double length) as a difficult set in the induction phase.

Table 2

Correct answer rates in the validation sample

Item set     Correct answers in %
Easy         55.26
Difficult    44.87
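The set compositions described in section 2.4 can be assembled programmatically. The following is a minimal sketch, assuming each item carries a difficulty label derived from its validation solve rate; the item identifiers and pool sizes are placeholders, not the actual RAT materials:

```python
import random


def build_set(pools, composition, rng):
    """Draw a RAT item set without replacement from difficulty-labelled pools.

    pools:       dict mapping 'easy'/'medium'/'difficult' to lists of item ids
    composition: e.g. {'easy': 2, 'medium': 2, 'difficult': 1} for an easy
                 induction set (see section 2.4)
    """
    chosen = []
    for level, n in composition.items():
        picked = rng.sample(pools[level], n)
        chosen.extend(picked)
        for item in picked:      # remove so no item repeats across sets
            pools[level].remove(item)
    rng.shuffle(chosen)          # randomize item order within the set
    return chosen


rng = random.Random(1)
pools = {"easy": [f"e{i}" for i in range(6)],
         "medium": [f"m{i}" for i in range(10)],
         "difficult": [f"d{i}" for i in range(9)]}
easy_set = build_set(pools, {"easy": 2, "medium": 2, "difficult": 1}, rng)
test_set = build_set(pools, {"easy": 2, "medium": 4, "difficult": 4}, rng)
print(len(easy_set), len(test_set))  # -> 5 10
```

Drawing without replacement and shuffling within each set mirrors the constraints stated in the text (30 distinct items overall, randomized item order within sets).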


Before a trial started, the experimenter turned the daylight lamp on or off. At the beginning of each set, a blank screen appeared for 15 seconds, to expose the subjects to the daylight lamp. A fixation cross appeared during the last second. The three words of an item appeared next to each other centrally in the upper half of the screen. Below the words was a space where the subjects could type in their answer. The participants had 25 seconds to find the common word and type in the answer. They were then asked how they came to their answer (see section 2.3). A fixation cross appeared in a central location for one second, after which the next item was presented. Completing the induction phase took around 8-10 minutes, and the test phase around 5 minutes.

2.5 Questionnaires

2.5.1 Affect Grid.

The Affect Grid (Russell et al., 1989) measures affect along two dimensions: Pleasure – displeasure and arousal – sleepiness, with values ranging from -4 to 4. Each dimension represents an axis in the grid, and participants mark a square in the grid corresponding to their arousal and pleasure (see Figure 5). Participants completed the Affect Grid on the screen after each block.
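Scoring the grid amounts to converting the marked square into the two values. A sketch, assuming the standard 9 × 9 grid with cells indexed from the top-left; the function itself is our illustration, not part of the original materials:

```python
def grid_to_scores(row, col, size=9):
    """Map a marked Affect Grid cell to (pleasure, arousal) in -4..4.

    row, col are 0-indexed from the top-left square, so the centre cell
    (4, 4) corresponds to neutral affect (0, 0).
    """
    center = size // 2
    pleasure = col - center   # left = displeasure, right = pleasure
    arousal = center - row    # top = high arousal, bottom = sleepiness
    return pleasure, arousal


print(grid_to_scores(4, 4))  # -> (0, 0): neutral
print(grid_to_scores(0, 8))  # -> (4, 4): high pleasure, high arousal
```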


2.5.2 Locus of Control.

The Locus of Control questionnaire (Rotter, 1966) consists of thirty items and measures how much people believe they are in control of events and outcomes in their lives, as opposed to being guided by external forces beyond their influence. Each item consists of a pair of alternative statements about events in a person’s life or society (e.g. education, success in life, justice), and the respondent has to choose one of the alternatives. A high score means that a person has high control beliefs and attributes events to their own actions. Low control beliefs mean that a person tends to externalize the cause of and responsibility for events. The daylight lamp might influence such people more. Therefore, we included the questionnaire in order to ensure that the groups were balanced in terms of people with low control beliefs.

2.5.3 Manipulation check.

The manipulation check consisted of three questions at the end of the experiment. Participants were asked if they found that the blocks (with the lamp on vs. off) they had to solve differed from each other. If so, they were asked what they attributed the difference to: The effect of the lamp, the difficulty of the items, or ‘Other’. If a subject responded ‘Other’, she could explain what she thought was the difference in her own words. In the end, the subjects could indicate if they found the sets with the lamp on or off more difficult.

2.6 Adjustment after first ten participants

Originally, the difference between easy and difficult sets in the induction phase was greater: They were easier and harder to solve, respectively. Easy blocks consisted of three easy, one medium, and one difficult item (correct answer rate 59.34%, compare to Table 2). Difficult sets consisted of one easy, one medium, and three difficult items (37.24%). However, this difference between the sets was too obvious for the first ten participants, making the placebo manipulation less credible. This was shown by their answers in the manipulation check, which we checked after each participant: Eight of the first ten subjects noticed a difference between the blocks, and every one of them attributed it to the stimuli. Four of the eight subjects thought it was harder with the lamp off, and four with the lamp on.

Therefore, the sets were adjusted. We hoped that this would make the manipulation subtler, while still providing a learning effect for the lamp-on sets. Consequently, the first ten subjects were excluded from the sample and analysis. Ten more subjects were recruited in order to arrive at the planned sample size of 40. Thus, N = 40 for the subsequent analysis.


2.7 Apparatus

During the whole experiment, the participants were in a standard lab cubicle (see Figure 6). They filled out the screening questionnaire and completed the RAT sets on a Dell OptiPlex 3040 PC with a Philips 109 B6 screen. The daylight lamp was a Daylight Power with a color temperature of 6500 K. It was placed next to and slightly behind the screen, at a distance of about 50 cm from the user.

Figure 6: Experimental setting with daylight lamp turned on.

2.8 Analyses

For the induction phase, the RAT scores were analyzed with a repeated-measures ANOVA with group (lamp on/off) as between-subjects and block (easy vs. difficult) as within-subjects factor. For the test phase, an independent-samples t-test was used to compare RAT scores between the two groups.

In order to check for effects of, or interactions between, condition and time on mood, pleasure and arousal were separately analyzed with a repeated-measures ANOVA with condition in the test block (lamp on/off) as between-subjects and time (block 1, 2, 3, 4, test block) as within-subjects factor. Locus of control scores were entered as a covariate in a separate repeated-measures ANOVA of the RAT scores. The answers to the manipulation check questions were aggregated. Gender differences were not examined due to the small number of male participants (four). A significance level of p = 0.05 was adopted for all tests.
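The two key tests described above can be sketched in Python with scipy (the data here are simulated for illustration; the thesis does not specify the analysis software, and with only two within-subject levels the repeated-measures main effect of block reduces to a paired t-test with F = t²):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-subject proportion-correct scores for 40 subjects.
easy = rng.normal(0.64, 0.15, 40)       # easy blocks
difficult = rng.normal(0.48, 0.15, 40)  # difficult blocks

# With two within-subject levels, the repeated-measures main effect of
# block is equivalent to a paired t-test (F = t**2).
t_block, p_block = stats.ttest_rel(easy, difficult)
print(f"block effect: F(1, 39) = {t_block**2:.2f}, p = {p_block:.4f}")

# Test phase: independent-samples t-test, 20 hypothetical subjects per group.
lamp_on = rng.normal(0.535, 0.25, 20)
lamp_off = rng.normal(0.410, 0.25, 20)
t_group, p_group = stats.ttest_ind(lamp_on, lamp_off)
print(f"group effect: t(38) = {t_group:.2f}, p = {p_group:.3f}")
```

The full mixed design (within-subject block crossed with between-subject group) additionally requires a mixed-design ANOVA routine; the paired-test equivalence shown here covers only the within-subject main effect.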


3 Results

3.1 Data screening

For the induction phase, the data were not normally distributed for the easy blocks (Shapiro-Wilk test for normality, W = 0.886, p = .00078), where the distribution was skewed and kurtotic. For the difficult blocks, the data were normally distributed (W = 0.952, p = .087). The induction phase as a whole was not normally distributed (W = 0.948, p = .0027) and slightly skewed, but not kurtotic. ANOVAs are robust against violations of the normality assumption if the experimental conditions are of equal size and variances are homogeneous, which is the case for these data (Rudolf & Müller, 2012). Hence, we conducted ANOVAs to analyze the induction phase. For the test phase, the data were normally distributed (W = 0.968, p = .311).
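The normality screening reported above can be reproduced in outline with scipy (the data are simulated here, so W and p will differ from the study's values; the p ≥ .05 decision rule matches the one used in the thesis):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical per-subject scores for one block: proportions correct in [0, 1].
scores = rng.beta(4, 2, size=40)

# Shapiro-Wilk test returns the W statistic and a p-value.
w, p = stats.shapiro(scores)
print(f"Shapiro-Wilk: W = {w:.3f}, p = {p:.4f}")

# Treat the distribution as non-normal when p < .05.
is_normal = p >= 0.05
```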

3.2 Induction phase

In the induction phase, the participants performed in the expected manner: they solved more items in the easy blocks (M = 0.64, SD = 0.48) than in the difficult ones (M = 0.48, SD = 0.50), as shown by the significant main effect of block, F(1, 38) = 33.621, p < .0001, ηp² = 0.469. Note the high standard deviations for both sets, meaning that the participants differed greatly in their correct response rates.

There was no significant effect of group in the induction phase, F(1, 38) = 0.683, p = .414, ηp² = 0.018. This is not surprising, given that the treatment of the two groups did not differ during the induction phase. More importantly, this shows that the baseline levels of both groups were similar, eliminating a possible confounding effect. Furthermore, there was no interaction between block and experimental group, F(1, 38) = 1.906, p = .176, ηp² = 0.048; see Figure 7.


Figure 7: Percentage of correct responses during the induction phase. Vertical bars indicate standard error. 'Group' refers to the manipulation performed later during the test phase.

3.3 Test phase

We expected the lamp-on group to solve more items in the test phase than the lamp-off group. While we saw trends supporting this hypothesis, the results did not reach statistical significance. The subjects in the lamp-on group solved 53.5% (SD = 0.50) of the items, while the subjects in the lamp-off group solved 41% (SD = 0.49; see Figure 8). However, this effect was not significant, t(38) = 1.79, p = .081. The effect size (d = 0.25) is small according to Cohen (1988).
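For reference, the reported effect size can be reconstructed from the item-level means and standard deviations given above. This is a sketch of the standard pooled-SD formula for two equal-sized groups, not the authors' own code:

```python
import math

def cohens_d(m1, sd1, m2, sd2):
    """Cohen's d for two equal-sized groups, using the pooled standard deviation."""
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (m1 - m2) / pooled_sd

# Item-level proportions correct (lamp on vs. lamp off) and their SDs.
d = cohens_d(0.535, 0.50, 0.41, 0.49)
print(f"d = {d:.2f}")  # d ≈ 0.25: a small effect by Cohen's (1988) benchmarks
```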

Figure 8: Percentage of correct responses during the test phase. Vertical bars indicate standard error.


3.4 Post-hoc comparisons

The difficult blocks of the induction phase and the test block were of almost equal difficulty. Therefore, without an expectancy effect, we would not expect differences in performance between the blocks. For the difficult blocks the lamp was always turned off, while it was turned on for half of the participants during the test phase. Thus, it could be argued that the lamp-on group should have expected to perform better during the test phase, and answered more items correctly, than during the difficult blocks of the induction phase. Conversely, the lamp-off group should solve the same number of items or fewer in the test phase compared to the difficult blocks. We conducted post-hoc analyses comparing the two phases. The lamp-on group did not solve significantly more items (48.5% vs. 53.5%, t(38) = -0.9, p = .337). The lamp-off group solved 7% fewer items (48% vs. 41%), which was not significant either (t(38) = 1.67, p = .11).

3.5 Mood and arousal

The ANOVAs performed for mood and arousal did not reveal any significant main effect or interaction between experimental condition and time, Fs < 2.9, ps > .1.

3.6 Locus of Control (LoC)

The effects of block and condition did not change when the LoC scores were entered as a covariate. The mean LoC score was 0.48 (SD = 0.17). There was no significant correlation between LoC score and the belief that the lamp affected performance (r = 0.068).

3.7 Manipulation check

Of the forty participants (after excluding the first ten), 27 noticed a difference between the blocks, while 13 did not (see Figure 9). Of those 27, 18 attributed the difference to the effect of the items, eight to the effect of the lamp, and one participant selected 'Other' (see Figure 10). Finally, only a minority (18 of 40) correctly noticed that the blocks with the lamp off were harder to solve (Figure 11). Seven subjects thought the lamp-on blocks were harder, and 15 indicated that they did not know. All in all, then, the participants noticed that the blocks differed, but there was no consensus on either the cause or the effect of those differences.


Figure 9: First question of the manipulation check.

Figure 10: The second question was asked only if the participant found a difference between the blocks.

Figure 11: Only a minority correctly answered that the lamp-off blocks were harder.



4 Discussion

4.1 Findings

The aim of this study was to investigate the effects of expectancies on metacontrol policies in the domain of creativity. By way of instruction and contingent learning, we wanted participants to link exposure to a daylight lamp with being more creative in a convergent-thinking task. First, the participants were told that daylight lamps made people more creative. Second, through manipulation of the item difficulty, they experienced more success in the RAT when the lamp was turned on. This procedure was supposed to make the daylight lamp a placebo for creative convergent thinking and nudge their metacontrol policy towards a more persistent state when it was turned on.

We hypothesized that, once the expectancy effect was created, RAT performance would be better in the condition with the lamp turned on compared to the lamp-off group. This hypothesis could not be confirmed. There was a trend in the expected direction, with the lamp-on group answering more items correctly than the lamp-off group. However, this result did not reach statistical significance, and we cannot reject the null hypothesis. Since p-values are a gradual indicator of evidence and not a dichotomous one (McShane & Gal, 2015), we can conclude that the obtained results lend some support for the hypothesis that expectancy effects influence metacontrol policies and convergent thinking.

The post-hoc analyses also support this view. The difficult blocks of the induction phase were comparable to the test block, so the participants should, all else being equal, perform equally well in those blocks. While the lamp-on group performed better in the test block than during the difficult blocks, the lamp-off group did worse in the test block. Thus, while the lamp-on group might have shown a placebo effect, the lamp-off group's performance suggests a nocebo effect. However, these results did not reach statistical significance either. As a whole, the results are promising and can be a basis for further studies on the subject.

There are several factors that have possibly impeded clearer results and merit closer attention. First, the sample size was small, limiting the power of the study. Second, the high variance within the groups is striking: the subjects, even though recruited from the relatively homogeneous subpopulation of students, differed starkly in their ability (or motivation) to solve RAT items.

Third, the subjects also differed in their opinions about the effects of the daylight lamp. While most noticed a difference in difficulty between blocks, only a minority of subjects thought that the lamp-on blocks were easier, and few participants attributed the difference to the effect of the daylight lamp. It is not clear whether, and in which circumstances, conscious expectations are necessary for a placebo effect (Schwarz & Büchel, 2015; Benedetti, Pollo, Lopiano, Lanotte, Vighetti, & Rainero, 2003). Did the participants form an unconscious expectation which caused the group differences? Or is a conscious belief in the effect of the lamp necessary for a placebo effect? The absence of a widespread belief in the placebo might be a reason why the differences between the groups were not more pronounced. On the other hand, in Schwarz and Büchel's (2015) and Looby and Earleywine's (2011) studies, the participants did believe in the placebo but there were no changes in performance.

These considerations show that it would be premature to dismiss the possibility of expectancy effects on convergent thinking. Rather, future studies have an opportunity to address these issues: larger sample sizes could reduce the variance; other studies might succeed in creating a conscious expectation for most participants; more items and different designs could allow for within-subjects testing as well as studying possible nocebo effects.

4.2 Comparison with other findings and implications

The studies conducted by Schwarz and Büchel (2015) and Looby and Earleywine (2011) could not find expectancy effects on objective measures of cognitive functions. There are some differences between those studies and the present one that should be emphasized. One difference is that in our study most participants formed no conscious expectancy about the daylight lamp's effect. The participants in the other studies believed that placebos in the form of exposure to a tone (Schwarz & Büchel, 2015) or ingesting what they thought was methylphenidate (Looby & Earleywine, 2011) enhanced their cognitive performance. Moreover, our study tried to influence metacontrol policies, which has previously been achieved by other interventions. These two studies suggest that for cognitive functions related to intelligence, placebo effects might not achieve the desired results. The cognitive abilities under the guidance of metacontrol, like creativity, might be more malleable. However, it has to be said that enhancing cognitive functions via placebo effects might prove an elusive goal.

As for metacontrol, our study adds another facet to the growing research. Several studies found factors that can influence metacontrol policies, for example open-monitoring meditation (Colzato et al., 2012; Colzato et al., 2017) or mood (Dreisbach & Goschke, 2004; van Steenbergen et al., 2012), but there are also more ambiguous findings (van Steenbergen et al., 2010; Colzato, Szapora, Pannekoek, & Hommel, 2013). More research is needed to find out under which circumstances and interventions creativity is enhanced by changes in metacontrol policies.

On a more practical note, our study shows that it is too early to see expectancy effects as an easy way to foster creativity in the real world. In what way, and to what extent, does an improvement in the RAT translate to meaningful improvements in one's job and private life? How long do those effects persist? Are they prone to habituation? Those are questions that should be addressed in future studies about creativity. If, however, it turns out that expectancy effects cannot be demonstrated to improve creative convergent thinking, it means that there might be no simple way of just making people more creative. In other words, their creativity might already be at an optimum state, and improvements could only be achieved through more extensive interventions building creative potential, such as training programs (Scott, Leritz, & Mumford, 2004).

Research on expectancy effects not only serves to study possibilities for cognitive enhancement. Perhaps more importantly, a better understanding of expectations and cognition enables the identification of environmental factors that create negative expectations, which are arguably more easily induced (Colloca et al., 2008; Spencer et al., 2016). In our study, the lamp-off group performed slightly worse after experiencing the 'effect' of the daylight lamp. Could certain stimuli lead people to think they will be less creative? Avoiding such nocebo effects might find more application than insights about placebo effects, since it is easier to deteriorate performance than to improve it. Future experimental studies could try to find other ways of inducing, or preventing the induction of, nocebo effects, while field studies could try to identify possible nocebo effects in occupational and organizational contexts.

4.3 Limitations

One drawback of this study was the small sample size, which limited its power. Moreover, the sample consisted only of university students, which limits generalizability (Henrich, Heine, & Norenzayan, 2010). Most of them were psychology students, who might have been more skeptical towards the placebo manipulation. In the same vein, other placebos, e.g. in the form of pills, might have been more credible. On the other hand, that a daylight lamp enhances creativity does not seem less credible than that a tone enhances cognitive functioning, which was believed by the subjects in Schwarz and Büchel's (2015) study.

Another limitation of our study was that the subjects only solved 20 RAT items in the induction phase, and 30 in total. With more items, the blocks could have been adjusted more subtly (they differed in difficulty by only one item). Moreover, there would have been more trials to learn the association between the lamp and success rates, ideally without noticing that the blocks differed in difficulty.

An important unsolved issue in this area of research is how to define and measure creativity (Scott, Leritz, & Mumford, 2004). Creativity is as important in our society as it is hard to define. Research about it, as well as the application of its results, will be limited by the uncertainties about the concept and facets of creativity.

4.4 Future research

Apart from using different placebos and longer trials, there are promising avenues for future research. First, a different procedure could enhance placebo learning: the subjects could start with blocks that are equally difficult, leading them to think that the blocks do not differ, while the blocks gradually become more unequal. Combined with the instruction that the lamp takes some time to reach its full effect, this might hide the difference between the blocks while making the placebo more credible.

Second, studies could include, or separately investigate, divergent-thinking tasks. Convergent and divergent thinking are uncorrelated, and interventions affect them differently (Akbari Chermahini & Hommel, 2010; Colzato et al., 2015; Colzato et al., 2012). Maybe these differences extend to expectancy effects.

Another possible research avenue is the study of interpersonal differences in susceptibility to expectancy effects. We included the locus of control questionnaire because we hypothesized that people with an external locus of control would be more susceptible. Inducing expectancy effects in these people might be more fruitful.

There is also more research to be conducted on nocebo effects. There are well-known negative expectancy effects on cognitive functioning (Spencer et al., 2016; Stajkovic, 1998), and learning does not seem necessary for nocebo effects (Colloca et al., 2008). Thus, they might be induced via instructions and influence metacontrol policies. We already saw in this study that the lamp-off group performed worse in the test phase compared to the difficult blocks in the induction phase, although the items were of similar difficulty. The reason might have been a nocebo effect due to the lamp being turned off.

Moving away from creativity, there are other ways to investigate the effects of expectancies on metacontrol policies. Metacontrol policies affect other functions besides creativity, and those functions could be influenced by expectancy effects.

4.5 Conclusion

Summing up, our study showed intriguing results and new avenues for future research. While it did not provide all the answers we were looking for in a clear-cut fashion, the trends in the results are promising. The domains of metacontrol and expectancy effects still contain many open questions and opportunities. Placebo effects might be a feasible way to enhance creative performance, while nocebo effects could be identified and addressed.


5 References

Akbari Chermahini, S., & Hommel, B. (2010). The (b)link between creativity and dopamine: spontaneous eye blink rates predict and dissociate divergent and convergent thinking. Cognition, 115, 458-465.

Akbari Chermahini, S., & Hommel, B. (2012). Creative mood swings: divergent and

convergent thinking affect mood in opposite ways. Psychological Research, 76, 634-640.

Akbari Chermahini, S., Hickendorff, M., & Hommel, B. (2012). Development and validity of a Dutch version of the Remote Associates Task: An item-response theory approach. Thinking Skills and Creativity, 7(3), 177-186.

Baas, M., De Dreu, C. K., & Nijstad, B. A. (2008). A meta-analysis of 25 years of mood-creativity research: Hedonic tone, activation, or regulatory focus? Psychological Bulletin, 134(6), 779-806.

Benedetti, F., Pollo, A., Lopiano, L., Lanotte, M., Vighetti, S., & Rainero, I. (2003). Conscious expectation and unconscious conditioning in analgesic, motor, and hormonal placebo/nocebo responses. Journal of Neuroscience, 23(10), 4315-4323.

Boot, W. R., Simons, D. J., Stothart, C., & Stutts, C. (2013). The pervasive problem with placebos in psychology: Why active control groups are not sufficient to rule out placebo effects. Perspectives on Psychological Science, 8(4), 445-454.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Erlbaum Associates.

Colloca, L., & Benedetti, F. (2006). How prior experience shapes placebo analgesia. Pain, 19(1), 126-133.

Colloca, L., Sigaudo, M., & Benedetti, F. (2008). The role of learning in nocebo and placebo effects. Pain, 136(1), 211-218.

Colzato, L. S., Bajo, M. T., van den Wildenberg, W., Paolieri, D., Nieuwenhuis, S., La Heij, W., & Hommel, B. (2008). How does bilingualism improve executive control? A comparison of active and reactive inhibition mechanisms. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34(2), 302-312.

Colzato, L. S., de Haan, A. M., & Hommel, B. (2015). Food for creativity: tyrosine promotes deep thinking. Psychological Research, 79, 709-714.

Colzato, L. S., Ozturk, A., & Hommel, B. (2012). Meditate to create: the impact of focused-attention and open-monitoring training on convergent and divergent thinking. Frontiers in Psychology, 3, 1-5. doi:10.3389/fpsyg.2012.00116


Colzato, L. S., Szapora, A., Lippelt, D., & Hommel, B. (2017). Prior meditation practice modulates performance and strategy use in convergent- and divergent-thinking problems. Mindfulness, 8(1), 10-16.

Colzato, L. S., van Beest, I., van den Wildenberg, W. P., Scorolli, C., Dorchin, S., Meiran, N., . . . Hommel, B. (2010). God: Do I have your attention? Cognition, 117(1), 87-94.

Colzato, L. S., van den Wildenberg, W. P., & Hommel, B. (2008). Losing the big picture: How religion may control visual attention. PLoS ONE, 3(11), 1-3. doi: 10.1371/journal.pone.0003679.t001

Colzato, L. S., van den Wildenberg, W., & Hommel, B. (2013). Increasing self-other integration through divergent thinking. Psychonomic Bulletin & Review, 20, 1011– 1016.

Colzato, L. S., van Hooidonk, L., van den Wildenberg, W. P., Harinck, F., & Hommel, B. (2010). Sexual orientation biases attentional control: a possible gaydar mechanism. Frontiers in psychology, 1, 1-5. doi: 10.3389/fpsyg.2010.00013

Cools, R. (2006). Dopaminergic modulation of cognitive function: Implications for L-DOPA treatment in Parkinson's disease. Neuroscience & Biobehavioral Reviews, 30(1), 1-23.

Cools, R. (2008). Role of dopamine in the motivational and cognitive control of behavior. The Neuroscientist, 14(4), 381-395.

De Dreu, C. K., Baas, M., & Nijstad, B. A. (2008). Hedonic tone and activation level in the mood-creativity link: Toward a dual pathway to creativity model. Journal of Personality and Social Psychology, 94(5), 739-756.

Dreisbach, G., & Goschke, T. (2004). How positive affect modulates cognitive control: Reduced perseveration at the cost of increased distractibility. Journal of Experimental Psychology-Learning Memory and Cognition, 30(2), 343-352.

Expectation. (2017). In OxfordDictionaries.com. Retrieved from https://en.oxforddictionaries.com/definition/expectation

Fischer, R., & Hommel, B. (2012). Deep thinking increases task-set shielding and reduces shifting flexibility in dual-task performance. Cognition, 123, 303-307.

Goschke, T. (2003). Voluntary action and cognitive control from a cognitive neuroscience perspective. In S. Maasen, W. Prinz, & G. Roth (Ed.), Voluntary action: Brains, minds, and sociality (pp. 49-85). Oxford: Oxford University Press.

Guilford, J. (1967). The nature of human intelligence. New York: McGraw-Hill.

Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2-3), 61-83.


Hills, T., & Hertwig, R. (2011). Why aren't we smarter already: Evolutionary trade-offs and cognitive enhancements. Current Directions in Psychological Science, 20(6), 373-377.

Hommel, B. (2015). Between persistence and flexibility: The Yin and Yang of action control. Advances in Motivation Science, 2, 33-67.

Hommel, B., & Colzato, L. S. (2017). Meditation and metacontrol. Journal of Cognitive Enhancement, 1(2), 115-121.

Hommel, B., & Colzato, L. (2017). The social transmission of metacontrol policies: Mechanisms underlying the interpersonal transfer of persistence and flexibility. Neuroscience and Biobehavioral Reviews, 81, 43-58.

Hommel, B., & Wiers, R. W. (2017). Towards a unitary approach to human action control. Trends in Cognitive Sciences, 21(12), 940-949.

Hommel, B., Akbari Chermahini, S., van den Wildenberg, W. P., & Colzato, L. S. (2014). Cognitive control of convergent and divergent thinking: a control-state approach to human creativity. Manuscript submitted.

Lee, C. S., Huggins, A. C., & Therriault, D. J. (2014). A measure of creativity or intelligence? Examining internal and external structure validity evidence of the Remote Associates Test. Psychology of Aesthetics, Creativity, and the Arts, 8(4), 446-460.

Looby, A., & Earleywine, M. (2011). Expectation to receive methylphenidate enhances subjective arousal but not cognitive performance. Experimental and clinical psychopharmacology, 19(6), 433-444.

Mårtensson, B., Petterson, A., Berglund, L., & Ekselius, L. (2015). Bright white light therapy in depression: A critical review of the evidence. Journal of Affective Disorders, 182, 1-7.

McShane, B. B., & Gal, D. (2015). Blinding us to the obvious? The effect of statistical training on the evaluation of evidence. Management Science, 62(6), 1707-1718.

Mednick, S. A., & Mednick, M. T. (1967). Examiner's manual, Remote Associates Test: College and adult forms 1 and 2. Boston, MA: Houghton Mifflin.

Neisser, U. (1967). Cognitive psychology. New York, NY: Appleton-Century-Crofts.

Placebo. (2018). In OxfordDictionaries.com. Retrieved from https://en.oxforddictionaries.com/definition/placebo

Rabipour, S., Andringa, R., Boot, W. R., & Davidson, P. S. (2017). What do people expect of cognitive enhancement? Journal of Cognitive Enhancement, 1-8.

Rief, W., & Petrie, K. J. (2016). Can Psychological Expectation Models Be Adapted for Placebo Research? Frontiers in Psychology, 7, 1-6. doi: 10.3389/fpsyg.2016.01876


Rotter, J. B. (1966). Generalized expectancies for internal versus external control of reinforcement. Psychological monographs: General and applied, 80(1), 1-28.

Rudolf, M., & Müller, J. (2012). Multivariate Verfahren: Eine praxisorientierte Einführung in SPSS (2. Auflage). Göttingen: Hogrefe.

Runco, M. A., & Jaeger, G. J. (2012). The standard definition of creativity. Creativity Research Journal, 24(1), 92-96.

Russell, J. A., Weiss, A., & Mendelsohn, G. A. (1989). Affect grid: A single-item scale of pleasure and arousal. Journal of Personality and Social Psychology, 57(3), 493-502.

Schambra, H. M., Bikson, M., Wager, T. D., DosSantos, M. F., & DaSilva, A. F. (2014). It's all in your head: Reinforcing the placebo response with tDCS. Brain Stimulation, 7(4), 623-624.

Schwarz, K. A., & Büchel, C. (2015). Cognition and the placebo effect–dissociating subjective perception and actual performance. PloS ONE, 10(7), 1-12. doi:10.1371/journal.pone.0130492

Schwarz, K. A., Pfister, R., & Büchel, C. (2016). Rethinking Explicit Expectations: Connecting Placebos, Social Cognition, and Contextual Perception. Trends in Cognitive Sciences, 20(6), 469-480.

Scott, G., Leritz, L. E., & Mumford, M. D. (2004). The effectiveness of creativity training: A quantitative review. Creativity Research Journal, 16(4), 361-388.

Sellaro, R., Hommel, B., & Colzato, L. (2017). In preparation.

Sellaro, R., Hommel, B., de Kwaadsteniet, E. W., & Colzato, L. (2014). Increasing interpersonal trust through divergent thinking. Frontiers in Psychology, 5, 1-4. doi: 10.3389/fpsyg.2014.00561

Spencer, S. J., Logel, C., & Davies, P. G. (2016). Stereotype threat. Annual review of psychology, 67, 415-437.

Stajkovic, A. D. (1998). Self-efficacy and work-related performance: A meta-analysis. Psychological bulletin, 124(2), 240-261.

van Steenbergen, H., Band, G. P., & Hommel, B. (2010). In the mood for adaptation: How affect regulates conflict-driven control. Psychological Science, 21, 1629-1634.

van Steenbergen, H., Booij, L., Band, G. P., Hommel, B., & van der Does, A. W. (2012). Affective regulation of cognitive-control adjustments in remitted depressive patients after acute tryptophan depletion. Cognitive, Affective, & Behavioral Neuroscience, 12(2), 280-286.


Verbruggen, F., McLaren, I. P., & Chambers, C. D. (2014). Banishing the control homunculi in studies of action control and behavior change. Perspectives on Psychological Science, 9(5), 497-524.

West, G. (2017). Scale: The universal laws of life and death in organisms, cities, and companies. London: Weidenfeld & Nicolson.

Zwosta, K., Hommel, B., Goschke, T., & Fischer, R. (2013). Mood states determine the degree of task shielding in dual-task performance. Cognition & Emotion, 27, 1142-1152.
