
Vividness and Neural Distinctiveness of Multimodal Mental Imagery


Academic year: 2021



Vividness and Neural Distinctiveness of Multimodal Mental Imagery Christina Bruckmann


Abstract

To improve upon current imagery decoding protocols and to allow for the extension of imagery decoding methods to a variety of patient populations and research contexts, the goal of this study was to determine differences in imagery vividness across different stimulus categories and to explore the robustness and distinctiveness of the associated fMRI data. To this end, 20 participants rated the vividness of their imagery of 11 different stimulus categories and imagined those same stimuli during several fMRI scanning sessions. The imagery stimuli, among which were the commonly used Spatial Navigation and Playing Tennis, comprised a variety of categories covering a wide range of modalities and complexities. The most vivid imagery stimulus across participants appeared to be Language in the form of a tongue twister, presenting a possible alternative to Spatial Navigation and Playing Tennis in current imagery decoding protocols. However, in contrast to previous studies, no correlation was found between imagery vividness and the distinctiveness of neural activity in a preliminary analysis of seven subjects, and overall differences between neural activity during different imagery conditions seemed low. At the current moment we cannot exclude possible errors in the preprocessing and analysis of the fMRI data; we therefore advise that these results be interpreted with caution.


Vividness and Neural Distinctiveness of Multimodal Mental Imagery

In 2006, Owen and colleagues (Owen et al., 2006) published their seminal paper on decoding motor and spatial imagery from functional magnetic resonance imaging (fMRI) data as a means to communicate with a patient assumed to be in a vegetative state. Since then, this non-verbal, non-motoric response method has been explored in further studies, ranging from using different neuroimaging methods (e.g. Cruse et al., 2011; Cruse et al., 2012; Goldfine et al., 2011; Horki et al., 2014) to applying variations of the decoding paradigm to different patient populations (e.g. Daly et al., 2013).

Originally, this technique was used to determine whether some patients diagnosed as being in a vegetative state exhibit consciousness in the form of command following. The rate of misdiagnosis of vegetative state based on behavioral responses is estimated to be as high as 18-45% (Gill-Thwaites, 2006), illustrating the high demand for an alternative diagnostic method that, in contrast to current approaches, does not rely on behavioral responses.

To this end, Owen and colleagues made use of the generally established overlap in neural activation between imagining an action and carrying out an action (Borst et al., 2016; Koch, 2009; Reddy et al., 2010). This overlap allowed the researchers to distinguish between two different imagery conditions (spatial navigation and motor imagery) based on fMRI data. During the procedure, the presumably vegetative patient is placed in an MRI scanner and asked to imagine either walking through their house or playing tennis. If the participant is responsive, instructing the patient to imagine playing tennis should, just like in healthy controls, result in activity in the motor cortex. On the other hand, walking through the house should predominantly activate areas involved in spatial navigation. Based on the decoding of the resulting neuroimaging data, the researchers were able to observe signs of compliance in some of the patients, indicating that despite their diagnosis they were indeed aware of what was going on around them and could engage in command-following (Monti et al., 2010; Owen et al., 2006). Detecting signs of environmental awareness in patients who were originally thought to be in a vegetative state has profound implications for ethical decision-making regarding treatment and life-sustaining measures (Weijer et al., 2014).

Beyond the mere detection of consciousness, this method can also be used in order to communicate with the patients more extensively once they show signs of awareness (Monti et al., 2010); the researchers’ ability to differentiate between the two imagery conditions provided the patients with a way to answer binary choice questions (e.g. imagine playing tennis if your answer is ‘yes’, imagine walking through your house to say ‘no’). For example, the patients would be asked questions that the researchers did not know the answer to prior to judging the fMRI data, but the answer to which could later be verified, such as “Do you have any brothers?”. The patients could then respond to the question by willfully modulating their brain activity through mental imagery.

Outside of the area of vegetative state, improving this communication method might also find further applications in the field of brain-computer interfaces (e.g. Aflalo et al., 2015; Daly et al., 2013), and could be applied as a research method whenever a form of verbal, non-motoric communication is desirable.

Problems and Difficulties

Despite the notable successes of the method and the large impact it has had on the field of vegetative state research, we believe there are still notable difficulties to be overcome and potential ways to refine the procedure. In the following, we illustrate what we consider the two biggest obstacles to its optimal application as a reliable non-verbal, non-motoric response method.


Decodability and Imagery Vividness. Not all vegetative state patients who are assessed with this method show signs of command following through imagery (Monti et al., 2010). While these null-results might indeed indicate an absence of consciousness, there are other factors that could underlie the apparent lack of conscious awareness. As Owen has emphasized repeatedly (Owen, 2013; Owen et al., 2006), while imagery responses are a rather unambiguous indication of conscious awareness, the absence of such responses cannot be taken as a clear indication of the absence of consciousness. Indeed, even in studies solely carried out with healthy controls, some people's imagery could not be reliably decoded from their brain activity (Harrison et al., 2017). This could be due to non-compliance with the imagery tasks, but also due to differences in imagery vividness across people. Whereas some people report having very vivid mental imagery, other people only have vague images in mind, and some do not experience visual imagery at all (Keogh & Pearson, 2018; Zeman et al., 2015).

This issue is linked to more specific differences in imagery abilities, as conditions such as congenital prosopagnosia have also been shown to negatively affect imagery abilities for specific stimuli (Grüter et al., 2009; Tree & Wilkie, 2010). People without brain damage as well as patients might display natural variations in imagery abilities and vividness between different stimuli categories. In other words, somebody with congenital prosopagnosia might have issues creating a clear mental image of a face but report average vividness for other stimulus categories such as motor movements or language (Tree & Wilkie, 2010).

Therefore, it has to be considered that for people who have trouble creating a vivid image in their mind, the brain activity associated with this process might also be less distinctive. Indeed, Fulford and colleagues (2018) correlated imagery vividness with associated neural activity and found that while high-vividness participants showed distinct activity in regions directly pertaining to the imagery task (e.g. primarily visual cortex activation during a visual imagery task), the subjects who reported low imagery vividness tended to exhibit more diffuse and less specific neural activity patterns. Thus, if the assessed vegetative state patients have low-vividness imagery, this could potentially lead to false negatives: they are aware of the instructions but cannot create a mental image vivid enough to be clearly decodable from fMRI scans.

Cortical Damage and Imagery Abilities. Another hurdle for the decoding approach applied to vegetative state patients is the extent to which imagery abilities are linked to perceptual abilities (Borst et al., 2016; Reddy et al., 2010). The majority of vegetative state cases are the result of traumatic brain injury or oxygen deprivation (Sazbon et al., 1993). Due to this, the patients' brains might have suffered widespread damage, resulting in a variety of functional cognitive deficits that cannot be detected due to the patients' overall lack of responsiveness. Supposing, for example, that a patient suffered severe damage to the motor cortex, it might be impossible for the patient to create clear mental images of motor movement due to the substantial involvement of this brain region in motoric imagery (Borst et al., 2016; Reddy et al., 2010). Thus, decoding motoric imagery activity might not be possible since the patient is impaired in this specific imagery task. Again, this would result in a null-result for the conscious awareness detection, even if the patient's impairment might be of a very different nature unrelated to consciousness.

The Current Study

The aim of the current study was to refine the imagery decoding protocol by addressing the above-mentioned issues regarding modality and vividness, allowing not only for a more optimal application of the procedure to vegetative state patients, but also increasing the applicability of the method in other research contexts. To this end, participants were asked to rate the vividness of their imagery of a variety of stimulus categories and to subsequently imagine these stimuli in the fMRI scanner.

First, the variability of imagery vividness across participants and stimulus categories was investigated to determine whether imagery vividness is a general trait or varies significantly across stimulus categories. Stimulus vividness was studied at group level in an exploratory manner, in order to shed light on which stimulus category appears to be the most vivid one to imagine on average. Of special interest was the comparison of the commonly used stimuli Spatial Navigation and Playing Tennis to other stimuli in order to potentially reveal a more suitable standard stimulus for decoding paradigms. Should a correlation be found between vividness and decodability, determining the overall most vivid stimulus category could help indicate which stimuli should be chosen for decoding procedures in cases where it is not possible to assess individual vividness, most notably in the case of vegetative state patients.

In a second step, the robustness of neural activity associated with each stimulus category was investigated. The current preliminary analysis of the fMRI data was expected to reveal neural activity that correlated more strongly across the same stimulus categories at different points in time than across different stimulus categories, indicating that mental imagery has a unique neural activity pattern corresponding to each type of imagery. Distinct neural activity has been found in previous studies (Boly et al., 2007; Harrison et al., 2017; Pilgramm et al., 2015) and is a prerequisite for future decoding analyses, allowing classifiers to distinguish between different stimulus categories purely based on their associated neural activity.

Lastly, the correlation between imagery vividness and robustness of neural activity was determined. We expected vividness to correlate positively with the distinctiveness of neural activity, replicating and extending previous findings that high-vivid imagery is easier to decode than low-vivid imagery (Fulford et al., 2018). If found, this would simplify the decoding protocol for non-vegetative state patients and research participants; instead of investigating the individual decodability of several stimuli in the scanner, a selection of appropriate imagery stimuli per participant could be made based on a simple and inexpensive imagery questionnaire, which would allow for a more widespread use of the technique in research settings.

In light of potential applications of this procedure to research settings with split-brain patients in the future, we also aimed to determine the decodability of each imagery condition per hemisphere. Although a hemisphere-specific analysis of the data has not yet been conducted, the experimental set-up and imagery stimuli were chosen specifically to allow for future investigation in this direction.

Method

Participants

For this study, 20 healthy participants (6 male) with an average age of 23.45 years (SD = 3.41) were recruited. Exclusion criteria consisted of past or present brain injury and any contraindications for MRI. Every participant completed all five parts of the study and was compensated with either money or study credits.

The study was approved by the Ethics Review Board of the University of Amsterdam, Faculty of Social and Behavioural Sciences (application number: 2018-BC-9432).

Procedure

The study consisted of one training session outside of the scanner and four imagery sessions inside the scanner for each participant. The sessions were scheduled on different days, but within a time span of three weeks.


Screening and Questionnaire. Before the start of the study, all participants were informed about the purpose and procedure of the study. In order to match imagery stimuli to participants (e.g. choose famous faces they were familiar with), every participant filled in a questionnaire prior to the study, indicating which of a selection of stimuli they were most comfortable imagining during the tasks. This was done exclusively for the following conditions: Faces, Fear, Melody, and Language. All other conditions were comprised of the same stimuli for each participant.

Imagery Training. In the first part of the study, participants completed an imagery training session, which lasted about 1.5 hours and was conducted prior to the first MRI scanning session. The purpose of this training session was not only to obtain vividness ratings for each of the stimuli, but also to assure that participants were able to recall imagery stimuli immediately when prompted in the scanner.

The participants completed computer tasks which instructed them to first view and then imagine a variety of stimuli involving different modalities (for a list of stimuli, see Materials). The training session consisted of 11 training blocks; one for each stimulus type. After the participants were asked to imagine the previously shown stimulus, they were prompted to rate the vividness of their imagery on a shortened version of the Betts Imagery Questionnaire, using a scale from 1 to 7 (Sheehan, 1967; see Materials).

All blocks followed this general structure:

1. Free Viewing. The chosen stimuli were presented in succession for the exact duration they should be imagined in the scanner (30 seconds per stimulus type). The participant was instructed to memorize the stimuli in the precise order and for the exact duration.


2. Imagining. The stimuli were again shown for 30 seconds total. Afterward, the participant was instructed to imagine the previously shown stimuli for 30 seconds. Participants were free to close their eyes or leave them open while imagining.

3. Vividness Ratings. After imagining a stimulus, the participant was asked to rate the vividness of their imagery on a scale from 1-7 (see Materials). Vividness ratings were obtained separately for every stimulus category.

Participants were instructed to repeat each part if needed, and to only move on to the next part once they were confident that they could imagine the stimulus sequence.

fMRI Scanning. Each of the four scanning sessions had a duration of 1.5 hours total and followed the same procedure. Of the 1.5 hours, participants spent 40 minutes in the scanner. The rest of the time was used for instructing and screening the participants, as well as setting up the equipment. A session was comprised of four imagery blocks, interrupted half-way through with a structural T1-scan. Each imagery block consisted of 11 trials; one for each stimulus category. During every trial, participants were shown a written prompt on a screen, instructing them to imagine one of the previously practiced stimulus categories for 30 seconds until they heard a short sound. After the sound, they had 15 seconds to relax and clear their mind before the next prompt appeared. A fixation cross was shown during the 15 seconds of rest. Each block contained every stimulus category exactly once, with the order of stimulus categories being randomized across blocks. Participants were given the option to close their eyes while imagining. To ensure that participants were not asleep during the task, the eyes of the participants were monitored with an eye tracker (LiveTrack AV for fMRI, Cambridge Research Systems). Eye movement data was not recorded.
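The block and trial layout described above can be sketched as follows. This is a hypothetical illustration of our own (not the authors' presentation code); the category names and timings are taken from the Method section.

```python
import random

# Session structure described in the Method section: four imagery blocks,
# each containing every one of the 11 stimulus categories exactly once,
# with the order randomized independently per block; each trial consists
# of 30 s of imagery followed by 15 s of rest.
CATEGORIES = [
    "Spatial Navigation", "Playing Tennis", "Fear", "Arithmetic",
    "Melody", "Landscape", "Colored Circles", "Finger Movement",
    "Language", "Faces", "Mental Rotation",
]
IMAGERY_S, REST_S = 30, 15  # seconds per trial

def build_session(n_blocks=4, seed=None):
    rng = random.Random(seed)
    session = []
    for _ in range(n_blocks):
        block = CATEGORIES[:]   # every category exactly once per block
        rng.shuffle(block)      # order randomized across blocks
        session.append([(cat, IMAGERY_S, REST_S) for cat in block])
    return session
```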


Questionnaire. A custom-made questionnaire was used for the participants to indicate their stimulus preferences. It included the categories Faces, Fear, Language, and Melody and was administered through the online survey software Qualtrics (Qualtrics, Provo, UT). Participants were asked to choose three famous faces, three fears, one tongue twister, and one melody from predetermined selection options. Next to choosing one of those options, the participants could give custom answers for their preferred fear and tongue twister to account for individual differences in fears and familiarity with tongue twisters from different languages. The responses were only used to determine the input stimuli for the training session and were not further analyzed. The questionnaire used in this study can be found here:

https://nlpsych.eu.qualtrics.com/jfe/form/SV_eWd4tyYlXmNwnZj

Vividness Scale. Participants indicated their imagery vividness on a 7-point scale adapted from the Betts Imagery Questionnaire (Sheehan, 1967). The scale was comprised of the following response options:

(1) Very clear and vivid, as in the real world.
(2) Vivid and clear, almost as in the real world.
(3) Generally clear and vivid.
(4) Not really clear and vivid, but recognizable.
(5) Vague and not really clear.
(6) Very vague and hardly recognizable.
(7) I can think about it, but I do not have an image of this.

Imagery Stimuli. A total of 11 stimulus categories was selected; 9 of these were chosen based on the expected brain activity associated with each type, the other 2 based on their prominent use in previous studies. For stimulus selection, emphasis was placed on decodability for each stimulus. Hemisphere-based decoding not only has implications for possible follow-up studies with split-brain patients but is also relevant in cases of disorders of consciousness that result from lateralized brain injury. However, despite the stimulus selection being based on lateralization, the current manuscript concerns exclusively whole-brain analyses of the data. In addition to potential lateralization of stimuli, the total collection of stimulus types was intended to cover a wide range of modalities in order to allow for individual differences in imagery abilities.

The two most prominently used imagery stimuli for decoding paradigms are motor imagery in the form of Playing Tennis, as well as Spatial Navigation in the form of walking through one’s house. In order to be able to compare the different stimulus categories used in this study against the two most commonly used imagery stimuli, we included Playing Tennis and Spatial Navigation as stimulus categories.

For several example-stimuli, please refer to Appendix A. A complete database of all stimuli used in this study can be found under:

https://app.box.com/s/yoa13apx7x9n240spk3hzxmw6ix72stu

Fear. Considering the subjectivity of fear, every participant indicated three personal fears before the first session. During the training, video clips associated with each of those three fears were shown to the subject, and the subject was instructed to imagine the most vivid one in the scanner (as long as they felt sufficiently comfortable doing so).

Arithmetic. Participants were instructed to choose a random three-digit number and subsequently keep subtracting 7 from it for 30 seconds. To prevent the participants from learning the sequence by heart, a different starting number had to be chosen each time.

Melody. Each participant chose a melody out of a selection of well-known non-lyrical melodies; during training, participants were presented with the chosen melody and asked to imagine it for the same duration. No participant reported being familiar with less than one melody in the database and thus no additional stimulus options had to be added.

Landscape. During training, three images of natural scenes were shown in succession to the participants, for a total duration of 30 seconds. Participants were prompted to imagine these scenes in the scanner; they were free to either imagine the picture of the scene or imagine themselves surrounded by the scenery.

Colored Circles. Three colored circles (red, green, blue) were shown in succession, one at a time for a total duration of 30 seconds. Each circle moved vertically across the screen in a straight line. Participants were instructed to imagine the moving circles without moving their eyes.

Finger Movement. Participants were instructed to imagine tapping the fingers of both of their hands simultaneously, after first actively carrying out the movement themselves.

Language. For the language imagery condition, participants were instructed to imagine a tongue twister of their choice and repeat it in their head for 30 seconds. A tongue twister was chosen specifically to ensure that participants were actively engaged with language rather than passively imagining it. Participants could choose from a variety of Dutch, English, and Italian tongue twisters or select one in their native language.

Faces. The participants were instructed to imagine three faces in succession; every face for roughly 10 seconds. They could choose three familiar faces from a database of famous faces. No participant reported being familiar with less than three faces in the database and thus no additional stimulus options had to be added.


Mental Rotation. During training, participants were shown a video clip of a rotating chair and later instructed to imagine the same rotating stimulus. The video was meant to give a better idea of the imagery task; the order of rotation directions was not relevant for imagery.

Image acquisition. All imagery data was obtained with a Philips 3T Achieva magnetic resonance scanner and a 32-channel head coil at the Spinoza Centre for Neuroimaging in Amsterdam, The Netherlands. For functional imaging, an echo planar imaging sequence was used with a repetition time of 1600ms, an echo time of 30ms, and a flip angle of 70°. Isotropic voxel size was 2mm and 56 slices were obtained with a thickness of 2mm. For co-registration, a T1-weighted structural image was acquired (number of slices = 220, isotropic voxel size = 1mm).

Analysis

Vividness Ratings. The vividness ratings obtained during the training session were analyzed with IBM SPSS Statistics 23. If multiple ratings were available for the same stimulus by the same participant (in case the participant repeated a task block), only the last vividness rating was used, resulting in exactly one vividness rating per stimulus for each participant. One vividness rating was missing for subject 20, and the subject was consequently excluded from the analysis.

First, to exclude the possibility of outliers (e.g. participants with aphantasia), the average vividness ratings per participant were assessed based on inter-quartile range. No outliers were detected, and no participants were excluded from further analysis. Next, a one-way repeated measures ANOVA was used to determine whether vividness differed significantly between categories at group level. Results were corrected for multiple comparisons with a Bonferroni correction. Further, to establish whether there were clusters of stimulus categories based on vividness ratings, an exploratory factor analysis (EFA) with a varimax rotation was run. It is important to note that the ordinal vividness scale was treated as interval for all analyses and that the adequate sample size for EFA was not reached.
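The interquartile-range screening described above can be sketched as follows. This is an illustrative Python version (the original analysis was run in SPSS), and it assumes Tukey's standard 1.5 × IQR fences, which the text does not specify explicitly; the example ratings are hypothetical.

```python
import numpy as np

def iqr_outliers(values, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's fences)."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]

# Hypothetical mean vividness ratings per participant; the last value is an
# aphantasia-like participant (7 = no image at all on the scale above).
means = [1.8, 2.1, 2.4, 2.0, 2.3, 1.9, 2.2, 6.8]
outliers = iqr_outliers(means)
```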

fMRI Analysis. A preliminary exploratory fMRI analysis was conducted with the data of the first seven participants (mean age 25.4, 2 males) using MATLAB R2017b (MathWorks Inc., Natick, MA, USA) and IBM SPSS Statistics 23. All neural activity data was normalized before analysis.

Preprocessing. Preprocessing of the imaging data included slice timing correction, head motion correction, co-registration and spatial as well as temporal smoothing. Additionally, the brain was sub-divided into 141 regions of interest (ROIs) and data was sectioned based on the different imagery conditions for each block. For further information, please contact Dr. Steven Scholte or Lukas Snoek.

Correlations Across Runs. On whole-brain level, including all 141 ROIs, the average correlation in neural activity between and within conditions was determined for each participant. This was done by running Pearson correlations for each participant, first between each condition and the identical condition during a different run (within-condition), as well as between each condition and all other conditions (between-conditions). To calculate the average correlations between conditions and within conditions per participant, the correlation coefficients for each comparison were transformed into z-values using Fisher's r-to-z transform. The mean of the resulting values was determined and transformed back into a correlation coefficient using Fisher's inverse transform. This way, two correlation coefficients were obtained for each participant: one for the average within-condition correlation and one for the average between-conditions correlation. To then determine whether the difference in correlation coefficients between and within conditions was significant, the difference in z-values was computed (within - between) and the resulting z-value was matched with the corresponding p-value from a z-to-p look-up table, using a right-tailed significance level of 0.05, based on the hypothesis that the correlation within the same condition is higher than the correlation between different conditions.
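The averaging and comparison steps above can be sketched in Python (the original analysis used MATLAB and SPSS). `right_tailed_p` reproduces the z-to-p look-up as described in the text; note that a full test of two correlations would also divide the z-difference by a standard error.

```python
import numpy as np
from scipy import stats

def avg_r(rs):
    """Average correlation coefficients via Fisher's transform:
    r -> z (arctanh), mean of z, then back to r (tanh)."""
    return float(np.tanh(np.mean(np.arctanh(rs))))

def right_tailed_p(r_within, r_between):
    """The difference in z-values (within - between) is looked up in a
    normal z-to-p table, right-tailed, mirroring the procedure as written."""
    dz = np.arctanh(r_within) - np.arctanh(r_between)
    return float(1 - stats.norm.cdf(dz))
```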

Analysis of Most and Least Active Regions of Interest (ROIs). In a second analysis approach, the most and least active ROIs were investigated. First, the consistency of most active ROIs within each condition across runs was investigated. This was done by splitting the complete data set (112 runs) in half and selecting the most active ROIs in each half of the data set (>90th percentile). Then, for each condition, the overlap between the most active ROIs in the first 56 runs with the most active ROIs in the last 56 runs was calculated. The amount of overlap was then contrasted with the amount of difference and resulted in an overlap value (%) for each condition.
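A sketch of this split-half consistency check (illustrative Python, not the authors' MATLAB code): since the text does not spell out how overlap is contrasted with difference, intersection over union is assumed here.

```python
import numpy as np

def top_rois(mean_activity, pct=90):
    """Indices of ROIs whose mean activity exceeds the given percentile."""
    arr = np.asarray(mean_activity)
    return set(np.where(arr > np.percentile(arr, pct))[0])

def overlap_percent(half1, half2, pct=90):
    """Overlap (%) of most-active ROIs between the two halves of the runs.
    Normalization by the union of the two ROI sets is our assumption."""
    a, b = top_rois(half1, pct), top_rois(half2, pct)
    return 100.0 * len(a & b) / len(a | b) if (a | b) else 0.0
```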

Following this, an analysis of the most and least active ROIs of the comparison between different categories was run for each participant, comparing the first 8 runs with the last 8 runs (total runs per participant = 4 blocks x 4 sessions = 16 runs). This was done to determine whether the differences in activity between ROIs were consistent across runs and sessions within subjects. The procedure is exemplified below using the contrast of Condition 1 and Condition 2. The following calculations were done for each category comparison:

Within each half of the runs, the average activity within each ROI for Condition 2 was subtracted from that of Condition 1. Then, the most and least active (>80th percentile and <20th percentile, respectively) ROIs of each half were determined based on the activity in each ROI after subtraction of Condition 2. The amount of overlap and the difference in most and least active ROIs between the first 8 and the last 8 runs was determined and the comparison was classified as “hit” when the overlap outweighed the difference. Otherwise the comparison was considered “miss”.
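These calculations can be sketched as follows (illustrative Python). Reading "difference" as the symmetric difference of the extreme-ROI sets is our assumption; the text only states that overlap must outweigh difference.

```python
import numpy as np

def extreme_rois(diff, lo=20, hi=80):
    """Most (>80th pct) and least (<20th pct) active ROIs of a contrast."""
    lo_t, hi_t = np.percentile(diff, [lo, hi])
    return set(np.where(diff > hi_t)[0]), set(np.where(diff < lo_t)[0])

def hit_or_miss(c1_first, c2_first, c1_last, c2_last):
    """Classify one category comparison as 'hit' when the overlap of
    extreme ROIs between the first and last 8 runs outweighs the
    difference (symmetric difference of the sets, our reading)."""
    top_a, bot_a = extreme_rois(np.asarray(c1_first) - np.asarray(c2_first))
    top_b, bot_b = extreme_rois(np.asarray(c1_last) - np.asarray(c2_last))
    overlap = len(top_a & top_b) + len(bot_a & bot_b)
    difference = len(top_a ^ top_b) + len(bot_a ^ bot_b)
    return "hit" if overlap > difference else "miss"
```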

(17)

These steps were repeated for all possible between-category comparisons (N=110) and the total numbers of "hit" and "miss" comparisons were calculated. This resulted in one "hit" and one "miss" score for each participant, from which the percentage of hit comparisons relative to the total number of comparisons was determined. A paired-samples t-test was conducted with the standardized values of hits and misses for each participant to determine whether the number of overlapping ROIs was significant compared to the differences in ROIs.
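The final significance step might look like this in Python. The hit proportions below are hypothetical, and raw proportions are used instead of the standardized values mentioned above.

```python
import numpy as np
from scipy import stats

# Hypothetical hit proportions for the seven participants (real values
# come from the 110 between-category comparisons per subject). Since
# every comparison is either a hit or a miss, miss = 1 - hit.
hit_pct = np.array([0.75, 0.52, 0.83, 0.61, 0.48, 0.79, 0.77])
miss_pct = 1 - hit_pct

# Paired-samples t-test of hits against misses across participants.
t_stat, p_val = stats.ttest_rel(hit_pct, miss_pct)
```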

Correlation of Vividness and Neural Activity

A very basic investigation into potential correlations between vividness and distinctiveness of neural activity was done by running a Pearson correlation on the percentage of most active ROIs (>90th percentile) found across both parts of the split sample (56 runs - 56 runs) for each condition, and the average vividness rating obtained for each condition. This was done to determine whether categories that are on average more vividly imagined also activate the same ROIs more reliably than categories with low average imagery vividness.

Additionally, a second Pearson correlation analysis was conducted for each participant, correlating the results from the analysis of the consistency of most and least active ROIs when conditions are compared (% found in both first and last 8 runs per participant) with average vividness ratings for each participant across all stimulus categories. This was done to determine whether vividness and neural data are correlated within each subject.
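The condition-level correlation described above can be sketched as follows (the original analysis was run in SPSS). All numbers are hypothetical, purely to illustrate the step.

```python
from scipy import stats

# Hypothetical per-category values for the 11 stimulus categories:
# split-half overlap of most-active ROIs (%) and mean vividness ratings
# (on the scale above, 1 = most vivid).
overlap_pct = [62, 55, 48, 71, 50, 66, 59, 45, 74, 40, 52]
mean_vividness = [2.1, 2.7, 2.4, 1.2, 2.7, 2.2, 2.5, 2.6, 1.9, 2.7, 2.3]

# With vividness coded 1 = most vivid, the hypothesis that more vivid
# categories activate the same ROIs more reliably predicts a negative r.
r, p = stats.pearsonr(overlap_pct, mean_vividness)
```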

Results

Vividness Ratings

Interquartile ranges revealed that no outliers with respect to average vividness ratings were present in the data. A repeated measures ANOVA was conducted to determine whether the vividness of stimulus categories differed at group level. Mauchly's test revealed no violation of the assumption of sphericity (χ2(54) = 51.062, p = .641). The repeated measures ANOVA indicated a significant main effect of stimulus category on imagery vividness (F(10,180) = 5.034, p < .001), and 55 post-hoc pair-wise comparisons with Bonferroni correction for multiple comparisons were run to determine which categories differed significantly. The most vivid category overall was Language (M = 1.21, SD = 0.63), which was significantly more vivid than Faces (M = 2.74, SD = .991; p < .001), Landscapes (M = 2.68, SD = .820; p < .001), and Mental Rotation (M = 2.16, SD = .898; p = .044). The overall least vivid category was Faces, being significantly less vivid than Language (p < .001) and Spatial Navigation (M = 2.68, SD = .820; p = .022). See Figure 1 for a graphical summary of these results.

A full table of significant pair-wise comparisons between categories can be found in Appendix B. The full results table including non-significant comparisons can be found in Appendix C.


Figure 1. Significant Mean Differences in Vividness between Stimuli Categories. In line with the vividness questionnaire, lower ratings indicate higher vividness (see Methods).

Further, to investigate possible clustering of stimulus categories, an exploratory factor analysis (EFA) with varimax rotation was conducted. Principal component analysis resulted in a four-factor solution (eigenvalue > 1), explaining a total of 66.1% of the variance. Factor 1, contributing 24.12% of the variance, was comprised of the stimulus categories Mental Rotation, Spatial Navigation, Finger Tapping, Melody, Moving Circles, and Fear. Factor 2 showed positive factor loadings of the categories Faces, Language, Moving Circles, and Landscapes and explains a total of 17.37% of the variance. The categories Melody, Mental Arithmetic and Language all show positive factor loadings on Factor 3, which explains 13.07% of the variance, and Spatial Navigation and Playing Tennis load positively and Fear negatively on Factor 4, which explains 11.48% of the total variance. A visual summary of factor loadings, the variance explained by each factor and possible interpretations of factors can be found in Figure 2. For tables with eigenvalues and the rotated component matrix, see Appendix D and E respectively.
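The extraction and rotation can be sketched as follows. This is a minimal NumPy implementation of principal components with the Kaiser criterion (eigenvalue > 1) followed by varimax rotation, offered as an illustration of the procedure rather than the SPSS routine used in the study:

```python
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-8):
    """Orthogonal varimax rotation of a (variables x factors) loading matrix."""
    p, k = loadings.shape
    rotation, criterion = np.eye(k), 0.0
    for _ in range(n_iter):
        rotated = loadings @ rotation
        # SVD step of the standard varimax iteration
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p))
        rotation, new_criterion = u @ vt, s.sum()
        if new_criterion < criterion * (1 + tol):
            break
        criterion = new_criterion
    return loadings @ rotation

def pca_varimax(ratings):
    """PCA on the correlation matrix, keeping components with eigenvalue > 1
    (Kaiser criterion), then varimax-rotating the retained loadings.

    ratings: (n_subjects, n_categories) matrix of vividness ratings.
    """
    eigvals, eigvecs = np.linalg.eigh(np.corrcoef(ratings, rowvar=False))
    order = np.argsort(eigvals)[::-1]          # sort components descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = eigvals > 1.0                       # Kaiser criterion
    loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
    return varimax(loadings)
```

Varimax keeps the factors orthogonal while concentrating each variable's loading on as few factors as possible, which is why each stimulus category loads strongly on only one or two factors in the rotated component matrix (Appendix E).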

Figure 2. Exploratory Factor Analysis. Illustration of factor loadings of imagery categories on each factor, as well as the percentage of variance explained by each factor and possible interpretations of factors.

fMRI Decoding

Average Pearson correlation coefficients at whole-brain level did not differ significantly (right-tailed, p > .05) between within-condition comparisons and between-condition comparisons for any of the seven participants, indicating that the same condition across different runs and sessions is not more similar to itself than to different conditions across runs and sessions. The average correlation coefficients between and within conditions per participant can be found in Figure 3. The average correlations alongside the z-transformed values, the difference in z-values and the p-values of the comparison of correlation coefficients can be found in Appendix F.

Figure 3. Correlation Coefficients Within and Between Conditions for each Subject (1–7). No significant differences in correlation coefficients were found.
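The right-tailed comparison of within- against between-condition correlation coefficients (tabulated in Appendix F) rests on Fisher's r-to-z transform. A minimal sketch, in which the sample sizes `n_within` and `n_between` (the number of activity-pattern pairs entering each average coefficient) are hypothetical placeholders:

```python
import math

def compare_correlations(r_within, n_within, r_between, n_between):
    """Right-tailed test of whether r_within exceeds r_between,
    treating the two coefficients as independent."""
    z_within, z_between = math.atanh(r_within), math.atanh(r_between)
    se = math.sqrt(1.0 / (n_within - 3) + 1.0 / (n_between - 3))
    z = (z_within - z_between) / se
    # right-tailed p-value from the standard normal survival function
    p = 0.5 * math.erfc(z / math.sqrt(2.0))
    return z, p

# subject 1 from Appendix F (the n values are illustrative placeholders):
z, p = compare_correlations(0.7733, 100, 0.7741, 100)  # p ≈ .51, not significant
```

Because the transform is monotone, the z-values it produces for subject 1 (atanh(.7733) ≈ 1.0284, atanh(.7741) ≈ 1.0304) match the z-columns reported in Appendix F.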

Analyzing the most and least active ROIs between conditions and comparing them between the first 8 and the last 8 runs for each participant did not yield significantly more “hit” comparisons (M = 0.6779, SD = 0.204) than “miss” comparisons (M = 0.3221, SD = 0.204; t(6) = 2.301, p = .061) overall.
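Because every contrast is scored as either a hit or a miss (the two proportions sum to 1 per subject), comparing hit against miss rates reduces to a one-sample t-test of the per-subject hit rate against the 0.5 chance level. A sketch with hypothetical hit rates (illustrative values, not the study's data):

```python
import numpy as np
from scipy import stats

# hypothetical per-subject hit proportions: the fraction of contrasts in
# which the most/least active ROIs replicated between the first and the
# last 8 runs (illustrative values, not the study's data)
hit_rates = np.array([0.55, 0.90, 0.45, 0.75, 0.80, 0.60, 0.70])

# two-sided one-sample t-test against chance (0.5); with seven subjects
# this has 6 degrees of freedom, as in the reported t(6)
t_stat, p_value = stats.ttest_1samp(hit_rates, popmean=0.5)
```

Testing hits against 0.5 and testing hits against misses give identical t-values here, since the miss vector is simply 1 minus the hit vector.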


Correlation between Vividness and Decodability

A Pearson correlation analysis revealed no significant (one-tailed, p > .05) correlation between the average vividness rating for each condition and the respective percentage of active ROIs that overlapped for that condition between one half of the runs (56) and the other half (56). A second Pearson correlation analysis revealed no significant (one-tailed, p > .05) correlation between average vividness ratings per participant and the average percentage of “hit” comparisons of contrasts between conditions in the first and the second half of runs.
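The one-tailed vividness–decodability correlation can be sketched as below. The two vectors are hypothetical illustrations (not the study's data); since lower ratings mean more vivid imagery, the directional hypothesis is a negative correlation between rating and overlap:

```python
import numpy as np
from scipy import stats

# hypothetical per-category values (not the study's data): average
# vividness rating (lower = more vivid) and the percentage of active
# ROIs a category shares with itself across the two halves of the runs
vividness = np.array([1.21, 2.16, 2.26, 2.45, 2.68, 2.74])
overlap_pct = np.array([61.0, 58.0, 55.0, 52.0, 49.0, 47.0])

n = len(vividness)
r = np.corrcoef(vividness, overlap_pct)[0, 1]
# one-tailed test (H1: negative correlation) via the t-distribution
# with n - 2 degrees of freedom
t_stat = r * np.sqrt((n - 2) / (1 - r ** 2))
p_one_tailed = stats.t.cdf(t_stat, df=n - 2)
```

A non-significant result here, as reported above, means the lower tail of the t-distribution at the observed r does not fall below the .05 criterion.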

Discussion

The goal of the current study was to refine the imagery decoding protocol by addressing issues regarding imagery modality and vividness. This would not only allow for a more optimal application of the procedure to vegetative state patients, but also increase the feasibility of using this method in a research context. To this end, vividness ratings as well as fMRI data were obtained from 20 participants for 11 different imagery stimulus categories, covering several modalities and including the standard stimuli Spatial Navigation and Playing Tennis. The differences in vividness and in neural activity between the stimulus categories were investigated in turn, followed by an analysis of the correlation between the two.

Vividness Ratings

Across participants, some stimulus categories were rated significantly more vivid than others, indicating that imagery vividness is not a unified quality but varies with imagery stimulus. Considering that several studies have found a positive correlation between imagery vividness and associated decodability (Cui et al., 2007; Fulford et al., 2018; Lorey et al., 2011), the current results suggest that the choice of imagery category could potentially influence results in decoding studies, thus highlighting the importance of stimulus choice in decoding paradigms. However, the traditionally used categories of Spatial Navigation and Playing Tennis were among the most vivid, and thus appear to be appropriate choices for standard use in decoding paradigms. This result is in line with findings from Boly and colleagues (2007), who obtained consistently high decoding accuracies for Spatial Navigation and Playing Tennis in healthy volunteers.

While the overall most vivid category was Language, it was not rated significantly more vivid than the standard categories, indicating no urgent need for the revision of current protocols. Nonetheless, should Language prove to be as decodable as its vividness rating suggests, this stimulus could present a viable alternative or addition to the standard protocol and prove valuable especially in cases where brain damage to the motor cortex or hippocampal place area is suspected. In contrast to this, Boly and colleagues (2007) were able to distinguish imagery stimuli significantly less accurately for Subvocal Rehearsal compared to Playing Tennis and Spatial Navigation. While their findings are not directly comparable to the current study, since their subvocal rehearsal condition consisted of imagining singing a song out loud, thereby possibly conflating auditory and linguistic imagery, these results should nonetheless be considered before including Language in current decoding paradigms. Further, while Harrison and colleagues (2017) obtained significantly higher decoding accuracies for Mental Arithmetic compared to Spatial Navigation, no significant difference in vividness between these two stimulus categories was found in the current study.

Significantly lower vividness ratings compared to Language and Spatial Navigation were obtained for the categories of Landscapes and Faces. The latter finding appears to be in agreement with decoding results from Boly and colleagues (2007) indicating that imagining faces results in significantly less specific activity compared to spatial navigation. Whereas the low vividness ratings of Faces and Landscapes would suggest that complex visual imagery is rather difficult to perform for participants, this issue does not seem to extend to more simple visual imagery such as Colored Circles, the vividness of which did not differ significantly from other categories. The slightly more complex visual imagery of Mental Rotation was significantly less vivid only compared to Language, suggesting an inverse relationship between the complexity of visual imagery and imagery vividness. In line with these results, Kalicinski and colleagues (2014) found similar effects of stimulus complexity on imagery vividness in the motor domain. While research into the relation between stimulus complexity and imagery vividness is scarce, the current results seem to suggest that simpler imagery might be more easily imagined and could potentially increase decoding accuracy of imagery.

Further, seeing as imagery vividness seems to vary with stimulus category, and taking into account that there have been cases in which patients reported category-specific loss of imagery abilities (Keogh & Pearson, 2018), it seems reasonable to suspect that imagery vividness might be clustered. In other words, it appears possible that underlying commonalities (e.g. modality) between different imagery categories drive correlations in imagery vividness across imagery categories; for example, one could speculate that if a participant reports low vividness for one visual imagery stimulus (e.g. Landscapes), this might correlate with low vividness for other stimuli relying on visual abilities, such as Faces or Moving Circles. Regarding this, an exploratory factor analysis on the vividness scores revealed a four-factor solution, the interpretation of which does not appear to be straightforward, however. A complete list of clusters can be found in the results section, but the clusters appear to carry little meaning: they do not seem to line up with stimulus-relevant modalities (visual, auditory, linguistic, etc.), and the only somewhat interpretable cluster appears to be Factor 3, comprised of the categories Melody, Mental Arithmetic and Language, which might be summarized by the commonality of subvocalization. Overall, however, we chose to refrain from an in-depth discussion and interpretation of this component analysis, due to the numerous violated assumptions and the small sample size that went into it (Yong & Pearce, 2013). Nevertheless, reports of subjects who appear to be selectively impaired in some imagery categories while indicating no impairment in others (Keogh & Pearson, 2018) suggest that, despite our lack of interpretable results regarding the clustering of imagery, future studies with larger sample sizes and a setup specifically designed for a component analysis seem a promising endeavor.

Robustness and Distinctiveness of Neural Activation

A preliminary analysis of the first seven participants’ fMRI data revealed correlations equally strong between neural activity of different categories compared to correlating each category with itself. This finding suggests no notable differences in brain activity between different imagery conditions at whole-brain level, implying that the different imagery conditions are unlikely to be classifiable reliably based on neural data. However, the obtained data might be noisy due to the lack of an appropriate baseline condition, which could be masking condition effects during a whole-brain analysis. To address this problem, a second analysis was run investigating only the most and least active ROIs across categories, which indicated that differences between imagery categories appear to be at least somewhat, though not significantly, consistent across participants and runs. This presents a promising prospect for the success of more sensitive as well as more directed, hypothesis-based analyses to contrast specific categories. However, it is unclear whether this consistency would prove enough for robustly accurate classification of different imagery categories.

The overall non-significant results seem surprising in light of previous studies investigating the robustness of neural correlates of mental imagery (Boly et al., 2007; Harrison et al., 2017; Pilgramm et al., 2015), which were all able to distinguish between imagery categories based on neural data. A possible reason for the lack of significant results is the manner in which conditions were compared. For the current preliminary analysis of the data, all conditions were contrasted with each other and, based on these contrasts, the average overlap and differences were calculated. However, it is to be expected that some conditions share considerable overlap with each other; for example, all visual imagery categories are likely to involve the visual cortex to some extent (Winlove et al., 2018). This way, the large overlap of certain categories might have attenuated the overall measure of contrasts between different categories. Thus, for further analyses of the data of the remaining subjects it is recommended to contrast specific categories based on hypotheses indicating minimal or no overlap, refraining from comparisons of similar imagery types.

Should more targeted analyses fail to show differences in neural data, it might be the case that the process of mental imagery recruits similar or identical brain regions regardless of modality, and that the impact of stimulus modality on neural variation is too small to be detected with the analysis methods used. This, however, seems unlikely in light of previous studies (e.g. Boly et al., 2007), which illustrate a stark distinctiveness in brain regions active during imagery of different stimuli.

Additionally, it is important to mention that, in regard to possible applications in the context of individual patients, it is advisable to compare the neural activity of different imagery stimuli within each participant during further analyses, rather than at group level, to account for possible differences between individuals.

Correlation of Vividness and Distinctiveness of Neural Activity

No correlation was found between vividness and robustness of distinct neural activity for the first seven participants; neither for averages across all stimulus categories, nor for averages across participants. These findings suggest that vividness is not likely to be a reliable indicator of which stimuli best lend themselves to decoding and would beg the question which neural processes are associated with vividness of imagery, if not distinctiveness of activity.

However, these current findings are surprising, as they stand in stark contrast to several previous studies indicating a correlation between vividness and strength as well as distinctiveness of neural activity (Cui et al., 2007; Fulford et al., 2018; Lorey et al., 2011). This discrepancy in results could stem from the low decoding accuracy obtained in our study, which in turn might be due to errors in data preprocessing. Thus, renewed analysis of the current data will have to determine whether the lack of correlation found between neural activity and vividness ratings represents an actual conflict with previous studies that would have to be investigated further.

However, while the lack of correlation could be confounded by the overall low decoding accuracy of our results, a possible alternative explanation for the current null-findings could be that vividness ratings obtained during the training sessions might not be representative of the participants’ imagery vividness in the scanner. Assessing vividness of imagery only once for each participant rests on the implicit assumption that imagery vividness is a stable quality and that vividness ratings of categories at one point in time (training) predict vividness of imagery at later points in time (scanning sessions). However, there seems to be no conclusive evidence that imagery vividness, as well as differences in imagery vividness between categories, is sufficiently stable across time. It appears not unreasonable to assume that several factors such as alertness, motivation, or the ability to concentrate could impact the vividness of imagery. Indeed, there seems to be evidence for fluctuations of imagery vividness across time (Dijkstra, 2017). Changes in vividness based on circumstantial factors could impact the correlation between the obtained vividness ratings and the neural activation, which were acquired at different points in time. Considering that other studies which reported correlations between vividness and neural activity (Cui et al., 2007; Fulford et al., 2018; Lorey et al., 2011) did not have this temporal discrepancy between vividness ratings and fMRI sessions in their study design, it is a possibility that this discrepancy indeed influenced our results. Unfortunately, no vividness ratings were obtained in the scanner, and thus the temporal stability of imagery vividness cannot be assessed with the current data. While this might be a factor in the lack of correlation between vividness and decodability, we consider the low decoding success to play a much stronger role. In that regard, should analysis with revised data yield a higher positive correlation, this would indicate that vividness is at least somewhat stable and indicative across time-spans of a few weeks.

Limitations and Implications

The extent to which the current results are representative has to be considered with caution. While our present null-findings stand in stark contrast to most of the existing imagery decoding literature, several limitations, including possible errors in the data analysis process, would make a strong conclusion appear premature.

Future renewed analysis of the data is expected to shine a light on the extent to which the seemingly low decoding accuracy obtained for the first seven subjects represents a true finding.


Should the low decoding accuracy not turn out to be an artifact caused by data processing issues, this would call into question the comparability of the current study design and previous ones and warrant a more precise analysis of why there is such a drastic discrepancy between the studies. However, if, in line with our expectations, renewed analysis reveals results and decoding accuracies more similar to previous imagery studies, the current comparison of not only a variety of stimuli but also their potential correlation with vividness scores could lead to a deepened insight into the mechanisms of imagery creation as well as pave the way for refinements of the imagery decoding protocol.

While renewed data preprocessing and analysis might improve the current results, limitations of the study design are also likely to have had an influence on the decoding results. Most notably, there were no actual stimuli presented as a baseline condition to which the imagery conditions could have been compared. This leaves the study without a proper sanity check and would have to be addressed in future set-ups. A more specific caveat that resulted from the chosen imaging method was that the Melody imagery condition was likely to be interfered with by the noise of the MRI scanner. Several participants reported struggling to imagine the melody as a result of distracting noises produced by the scanner. Further, actual auditory input might confound the neural response of the imagery condition. Thus, while investigation of auditory imagery seems promising based on the vividness results, it might be more suitable for quieter imaging methods such as EEG or MEG.

As this study was conducted on healthy participants who displayed no impairment in imagery creation, the direct application to patient populations is not entirely clear. For application of decoding results to patient populations who could potentially suffer from impaired imagery creation, further studies should be conducted with participant groups affected by such conditions, including for example prosopagnosia, brain injury or aphantasia.


However, the current data can inform decoding paradigms used on people who are not suspected to present with mental imagery impairment, for example in the context of brain-computer interfaces or split-brain studies. Indeed, should the resolution of potential issues regarding the data processing yield significantly higher decoding accuracies, the decoding analysis will next be conducted in a hemisphere-based manner to allow for future application in split-brain patients. Obtaining reliable decoding results for each hemisphere individually would render the imagery protocol suitable for research studies with split-brain patients, by providing the means to collect non-verbal, non-motoric responses from each hemisphere separately. This would allow for investigating whether recent findings of apparently unified consciousness in split-brain patients (Pinto et al., 2017) also hold prior to behavioral output, excluding the possibility that consciousness appears to be unified only at the behavioral level.

Conclusion

Based on current results it appears that imagery vividness can vary significantly at within-subject level depending on the imagery category, and that Language is the most vivid imagery stimulus on average. There seems to be no correlation between imagery vividness and the degree to which neural activity differs distinctly between imagery conditions.

At the current moment we unfortunately cannot exclude possible errors in preprocessing and analysis of the fMRI data. Future backtracking of the processing steps as well as re-analysis of the raw data with current and new methods is expected to shine a light on potential errors.


References

Aflalo, T., Kellis, S., Klaes, C., Lee, B., Shi, Y., Pejsa, K., … Andersen, R. A. (2015). Decoding motor imagery from the posterior parietal cortex of a tetraplegic human. Science, 348(6237), 906–910.

Boly, M., Coleman, M., Davis, M., Hampshire, A., Bor, D., Moonen, G., … Owen, A. (2007). When thoughts become action: An fMRI paradigm to study volitional brain activity in non-communicative brain injured patients. NeuroImage, 36(3), 979–992.

Borst, A. W. D., & Gelder, B. D. (2016). fMRI-based Multivariate Pattern Analyses Reveal Imagery Modality and Imagery Content Specific Representations in Primary Somatosensory, Motor and Auditory Cortices. Cerebral Cortex.

Cruse, D., Chennu, S., Chatelle, C., Bekinschtein, T. A., Fernández-Espejo, D., Pickard, J. D., ... Owen, A. M. (2011). Bedside Detection of Awareness in the Vegetative State: A cohort study. The Lancet, 378(9809), 2088-2094.

Cruse, D., Chennu, S., Fernández-Espejo, D., Payne, W. L., Young, G. B., & Owen, A. M. (2012). Detecting Awareness in the Vegetative State: Electroencephalographic Evidence for Attempted Movements to Command. PLoS ONE, 7(11).

Cui, X., Jeter, C. B., Yang, D., Montague, P. R., & Eagleman, D. M. (2007). Vividness of mental imagery: Individual variability can be measured objectively. Vision Research, 47(4), 474–478.

Daly, I., Billinger, M., Laparra-Hernández, J., Aloise, F., García, M. L., Faller, J., … Müller Putz, G. (2013). On the control of brain-computer interfaces by users with cerebral palsy. Clinical Neurophysiology, 124(9), 1787–1797.


Dijkstra, N., Bosch, S. E., & van Gerven, M. A. J. (2017). Vividness of Visual Imagery Depends on the Neural Overlap with Perception in Visual Areas. The Journal of Neuroscience, 37(5), 1367–1373.

Fulford, J., Milton, F., Salas, D., Smith, A., Simler, A., Winlove, C., & Zeman, A. (2018). The neural correlates of visual imagery vividness – An fMRI study and literature review. Cortex, 105, 26–40.

Gill-Thwaites, H. (2006). Lotteries, loopholes and luck: Misdiagnosis in the vegetative state patient. Brain Injury, 20(13-14), 1321–1328.

Goldfine, A. M., Victor, J. D., Conte, M. M., Bardin, J. C., & Schiff, N. D. (2011). Determination of awareness in patients with severe brain injury using EEG power spectral analysis. Clinical Neurophysiology, 122(11), 2157–2168.

Grüter, T., Grüter, M., Bell, V., & Carbon, C. (2009). Visual mental imagery in congenital prosopagnosia. Neuroscience Letters, 453(3), 135-140.

Harrison, A. H., Noseworthy, M. D., Reilly, J. P., Guan, W., & Connolly, J. F. (2017). EEG and fMRI agree: Mental arithmetic is the easiest form of imagery to detect. Consciousness and Cognition, 48, 104–116.

Horki, P., Bauernfeind, G., Klobassa, D. S., Pokorny, C., Pichler, G., Schippinger, W., & Müller Putz, G. R. (2014). Detection of mental imagery and attempted movements in patients with disorders of consciousness using EEG. Frontiers in Human Neuroscience, 8.

Kalicinski, M., Kempe, M., & Bock, O. (2014). Motor Imagery: Effects of Age, Task Complexity, and Task Setting. Experimental Aging Research, 41(1), 25–38.

Keogh, R., & Pearson, J. (2018). The blind mind: No sensory visual imagery in aphantasia. Cortex, 105, 53-60.

Koch, C. (2009). Reading the mind’s eye: Decoding object information during mental imagery from fMRI patterns. Frontiers in Systems Neuroscience, 3.


Lorey, B., Pilgramm, S., Bischoff, M., Stark, R., Vaitl, D., Kindermann, S., … Zentgraf, K. (2011). Activation of the Parieto-Premotor Network Is Associated with Vivid Motor Imagery—A Parametric fMRI Study. PLoS ONE, 6(5).

Monti, M. M., Vanhaudenhuyse, A., Coleman, M. R., Boly, M., Pickard, J. D., Tshibanda, L., ... Laureys, S. (2010). Willful Modulation of Brain Activity in Disorders of Consciousness. New England Journal of Medicine, 362(7), 579–589.

Owen, A. M. (2013). Detecting Consciousness: A Unique Role for Neuroimaging. Annual Review of Psychology, 64(1), 109–133.

Owen, A., Coleman, M., Boly, M., Davis, M., Laureys, S., & Pickard, J. (2006). Detecting Awareness in the Vegetative State. Science, 313(5792), 1402.

Phelps, E. A., & Ledoux, J. E. (2005). Contributions of the Amygdala to Emotion Processing: From Animal Models to Human Behavior. Neuron, 48(2), 175–187.

Pilgramm, S., Haas, B. D., Helm, F., Zentgraf, K., Stark, R., Munzert, J., & Krüger, B. (2015). Motor imagery of hand actions: Decoding the content of motor imagery from brain activity in frontal and parietal motor areas. Human Brain Mapping, 37(1), 81–93.

Pinto, Y., Neville, D. A., Otten, M., Corballis, P. M., Lamme, V. A. F., Haan, E. H. F. D., … Fabri, M. (2017). Split brain: divided perception but undivided consciousness. Brain, 140(5), 1231–1237.

Reddy, L., Tsuchiya, N., & Serre, T. (2010). Reading the mind's eye: Decoding category information during mental imagery. NeuroImage, 50(2), 818–825.

Sazbon, L., Zagreba, F., Ronen, J., Solzi, P., & Costeff, H. (1993). Course and outcome of patients in vegetative state of nontraumatic aetiology. Journal of Neurology, Neurosurgery & Psychiatry, 56(4), 407-409.


Sheehan, P. W. (1967). A shortened form of Betts’ questionnaire upon mental imagery. Journal of Clinical Psychology, 23(3), 386–389.

Tree, J., & Wilkie, J. (2010). Face and object imagery in congenital prosopagnosia: A case series. Cortex, 46(9), 1189-1198.

Weijer, C., Peterson, A., Webster, F., Graham, M., Cruse, D., Fernández-Espejo, D., … Owen, A. M. (2014). Ethics of neuroimaging after serious brain injury. BMC Medical Ethics, 15(1).

Winlove, C. I., Milton, F., Ranson, J., Fulford, J., Mackisack, M., Macpherson, F., & Zeman, A. (2018). The neural correlates of visual imagery: A co-ordinate-based meta-analysis. Cortex, 105, 4–25.

Yong, A. G., & Pearce, S. (2013). A Beginner’s Guide to Factor Analysis: Focusing on Exploratory Factor Analysis. Tutorials in Quantitative Methods for Psychology, 9(2), 79–94.

Zeman, A., Dewar, M., & Sala, S. D. (2015). Lives without imagery – Congenital aphantasia. Cortex, 73, 378–380.


APPENDIX A

Example Stimulus: Face


APPENDIX B

Significant Pair-Wise Comparisons

(I) Stimulus Category   (J) Stimulus Category   Mean Difference (I-J)   Std. Error   Sig.b   95% CI Lowerb   95% CI Upperb
Faces                   Language                 1.526*                 .234         .000      .598            2.454
Faces                   Spatial Navigation       1.158*                 .263         .022      .096            2.220
Landscape               Language                 1.474*                 .193         .000      .709            2.239
Landscape               Spatial Navigation       1.105*                 .228         .007      .201            2.009
Language                Faces                   -1.526*                 .234         .000    -2.454            -.598
Language                Landscape               -1.474*                 .193         .000    -2.239            -.709
Language                Mental Rotation          -.947*                 .235         .044    -1.880            -.014
Spatial Navigation      Faces                   -1.158*                 .268         .022    -2.220            -.096
Spatial Navigation      Landscape               -1.105*                 .228         .007    -2.009            -.201

Based on estimated marginal means.
*. The mean difference is significant.
b. Adjustment for multiple comparisons: Bonferroni.


APPENDIX C

Post-Hoc Pair-Wise Comparisons (Effect of Stimulus Category on Vividness)
Measure: Vividness

(I) Stimulus  (J) Stimulus  Mean Difference (I-J)  Std. Error  Sig.b  95% CI Lowerb  95% CI Upperb
 1   2    -.632   .267  1.000  -1.691   .428
 1   3    -.579   .268  1.000  -1.643   .485
 1   4     .895   .241   .086   -.059  1.848
 1   5     .211   .423  1.000  -1.466  1.887
 1   6     .158   .245  1.000   -.813  1.129
 1   7    -.158   .299  1.000  -1.342  1.027
 1   8     .263   .252  1.000   -.735  1.261
 1   9     .526   .221  1.000   -.351  1.403
 1  10    -.263   .274  1.000  -1.350   .823
 1  11    -.053   .270  1.000  -1.123  1.018
 2   1     .632   .267  1.000   -.428  1.691
 2   3     .053   .209  1.000   -.776   .881
 2   4    1.526*  .234   .000    .598  2.454
 2   5     .842   .392  1.000   -.712  2.396
 2   6     .789   .271   .513   -.286  1.865
 2   7     .474   .328  1.000   -.826  1.773
 2   8     .895   .252   .127   -.106  1.895
 2   9    1.158*  .268   .022    .096  2.220
 2  10     .368   .317  1.000   -.889  1.626
 2  11     .579   .289  1.000   -.568  1.726
 3   1     .579   .268  1.000   -.485  1.643
 3   2    -.053   .209  1.000   -.881   .776
 3   4    1.474*  .193   .000    .709  2.239
 3   5     .789   .371  1.000   -.683  2.262
 3   6     .737   .274   .825   -.350  1.823
 3   7     .421   .246  1.000   -.553  1.395
 3   8     .842   .257   .228   -.175  1.860
 3   9    1.105*  .228   .007    .201  2.009
 3  10     .316   .297  1.000   -.860  1.492
 3  11     .526   .234  1.000   -.402  1.454
 4   1    -.895   .241   .086  -1.848   .059
 4   2   -1.526*  .234   .000  -2.454  -.598
 4   3   -1.474*  .193   .000  -2.239  -.709
 4   5    -.684   .297  1.000  -1.860   .492
 4   6    -.737   .274   .825  -1.823   .350
 4   7   -1.053   .270   .058  -2.123   .018
 4   8    -.632   .219   .545  -1.500   .237
 4   9    -.368   .205  1.000  -1.182   .446
 4  10   -1.158   .318   .102  -2.418   .102
 4  11    -.947*  .235   .044  -1.880  -.014
 5   1    -.211   .423  1.000  -1.887  1.466
 5   2    -.842   .392  1.000  -2.396   .712
 5   3    -.789   .371  1.000  -2.262   .683
 5   4     .684   .297  1.000   -.492  1.860
 5   6    -.053   .386  1.000  -1.583  1.477
 5   7    -.368   .392  1.000  -1.921  1.184
 5   8     .053   .310  1.000  -1.178  1.283
 5   9     .316   .367  1.000  -1.140  1.771
 5  10    -.474   .393  1.000  -2.030  1.083
 5  11    -.263   .373  1.000  -1.743  1.217
 6   1    -.158   .245  1.000  -1.129   .813
 6   2    -.789   .271   .513  -1.865   .286
 6   3    -.737   .274   .825  -1.823   .350
 6   4     .737   .274   .825   -.350  1.823
 6   5     .053   .386  1.000  -1.477  1.583
 6   7    -.316   .306  1.000  -1.530   .899
 6   8     .105   .228  1.000   -.799  1.009
 6   9     .368   .219  1.000   -.500  1.237
 6  10    -.421   .279  1.000  -1.527   .685
 6  11    -.211   .224  1.000  -1.098   .677
 7   1     .158   .299  1.000  -1.027  1.342
 7   2    -.474   .328  1.000  -1.773   .826
 7   3    -.421   .246  1.000  -1.395   .553
 7   4    1.053   .270   .058   -.018  2.123
 7   5     .368   .392  1.000  -1.184  1.921
 7   6     .316   .306  1.000   -.899  1.530
 7   8     .421   .279  1.000   -.685  1.527
 7   9     .684   .242   .621   -.277  1.645
 7  10    -.105   .374  1.000  -1.587  1.377
 7  11     .105   .295  1.000  -1.065  1.275
 8   1    -.263   .252  1.000  -1.261   .735
 8   2    -.895   .252   .127  -1.895   .106
 8   3    -.842   .257   .228  -1.860   .175
 8   4     .632   .219   .545   -.237  1.500
 8   5    -.053   .310  1.000  -1.283  1.178
 8   6    -.105   .228  1.000  -1.009   .799
 8   7    -.421   .279  1.000  -1.527   .685
 8   9     .263   .168  1.000   -.404   .930
 8  10    -.526   .290  1.000  -1.675   .623
 8  11    -.316   .217  1.000  -1.176   .545
 9   1    -.526   .221  1.000  -1.403   .351
 9   2   -1.158*  .268   .022  -2.220  -.096
 9   3   -1.105*  .228   .007  -2.009  -.201
 9   4     .368   .205  1.000   -.446  1.182
 9   5    -.316   .367  1.000  -1.771  1.140
 9   6    -.368   .219  1.000  -1.237   .500
 9   7    -.684   .242   .621  -1.645   .277
 9   8    -.263   .168  1.000   -.930   .404
 9  10    -.789   .271   .513  -1.865   .286
 9  11    -.579   .192   .411  -1.341   .183
10   1     .263   .274  1.000   -.823  1.350
10   2    -.368   .317  1.000  -1.626   .889
10   3    -.316   .297  1.000  -1.492   .860
10   4    1.158   .318   .102   -.102  2.418
10   5     .474   .393  1.000  -1.083  2.030
10   6     .421   .279  1.000   -.685  1.527
10   7     .105   .374  1.000  -1.377  1.587
10   8     .526   .290  1.000   -.623  1.675
10   9     .789   .271   .513   -.286  1.865
10  11     .211   .260  1.000   -.821  1.242
11   1     .053   .270  1.000  -1.018  1.123
11   2    -.579   .289  1.000  -1.726   .568
11   3    -.526   .234  1.000  -1.454   .402
11   4     .947*  .235   .044    .014  1.880
11   5     .263   .373  1.000  -1.217  1.743
11   6     .211   .224  1.000   -.677  1.098
11   7    -.105   .295  1.000  -1.275  1.065
11   8     .316   .217  1.000   -.545  1.176
11   9     .579   .192   .411   -.183  1.341
11  10    -.211   .260  1.000  -1.242   .821

Based on estimated marginal means.
*. The mean difference is significant.
b. Adjustment for multiple comparisons: Bonferroni.


APPENDIX D

EFA – Total Variance Explained

            Initial Eigenvalues                       Rotation Sums of Squared Loadings
Component   Total   % of Variance   Cumulative %      Total   % of Variance   Cumulative %
 1          2.997   27.243           27.243           2.659   24.177          24.177
 2          1.687   15.334           42.576           1.911   17.376          41.553
 3          1.473   13.393           55.969           1.438   13.072          54.625
 4          1.114   10.131           66.100           1.262   11.475          66.100
 5           .864    7.854           73.954
 6           .806    7.329           81.283
 7           .732    6.657           87.940
 8           .637    5.788           93.728
 9           .324    2.946           96.674
10           .248    2.259           98.933
11           .117    1.067          100.000


APPENDIX E

EFA – Rotated Component Matrix

Stimulus            Component 1   Component 2   Component 3   Component 4
Mental Rotation     .761
Spatial Navigation  .691                                       .363
Finger Tapping      .674
Melody              .659                        .501
Moving Circles      .611          .420
Fear                .602                                      -.429
Landscape                         .827
Faces                             .780
Mental Arithmetic                               .844
Language                          .592          .595
Playing Tennis                                                 .851

Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser Normalization.


APPENDIX F

Difference in Correlation Within and Between Conditions

Subject   r Within-Conditions   r Between-Conditions   z Within   z Between   p of Difference (right-tailed)
1         .7733                 .7741                  1.0284     1.0304      .5067
2         .5458                 .6352                   .6124      .7502      .8737
3         .4978                 .4972                   .5464      .5456      .4974
4         .5580                 .5563                   .6300      .6275      .4919
5         .2493                 .2486                   .2546      .2539      .4976
6         .5928                 .5890                   .6819      .6761      .4807
7         .4627                 .4608                   .5008      .4983      .4918
