2017

Decoding an Illusion from fMRI Data

Which cortical areas contribute to conscious visual experiences?

Master Thesis (MSc)

Name: H. Pielage
Student nr.: 10778977
Mentors: Yaïr Pinto, Marte Otten
Date: 25-10-2017

University of Amsterdam; Faculty of Social and Behavioural Sciences; Brain and Cognition Program Group

Index

Abstract
Introduction
Methods
    Participants
    Stimuli
    Data Acquisition
    Procedure
    fMRI Data Preprocessing
    Masking of data
    Decoding
Results
    Behavioural results
    Decoding results
Discussion
Conclusion
Literature
Appendix A
    Mapper block
    Training Blocks
Appendix B


Abstract

Despite great research efforts, there is still no consensus about where conscious experiences arise during visual processing. Experiences do not seem to match the input signal coming from our retina, which becomes apparent with visual illusions or binocular rivalry tasks. Somewhere along the visual processing stream, the input signal seems to be filled in so that neuronal activity represents the experience, rather than the raw input. Where this filling-in occurs is still debated; most research suggests that at least some visual cortical areas are involved in the process. Using an fMRI decoding paradigm, we tried to determine which visual areas contain neuronal representations of illusory experiences. If an area's neuronal activity can be classified according to the illusory experience, rather than the physical input, that area is thought to contribute to the visual neural correlate of consciousness (NCC). To create a difference between retinal input and illusory experience, an orientation variant of the Uniformity Illusion (UI), an illusion that targets the periphery, was used. However, the machine learning algorithm that was used proved unable to reliably discriminate between different physical orientations after being trained on non-illusion-inducing variants of the UI displays. The algorithm could therefore not be used to determine whether visual areas contained neuronal representations of the experience. No inferences could be made about which regions contribute to conscious visual experience, and thus to the NCC.


Introduction

After decades' worth of studies and papers, researchers are still unable to explain a fundamental part of our world: the conscious mind. A joke in a webcomic depicts our knowledge of the mind accurately by equating consciousness to the sum of neurons and magic. 'Magic' still describes what the relationship between neuronal activity and awareness, also dubbed the mind-body problem, feels like. Though advances have been made, fundamental questions about this problem, and about what the neural correlate of consciousness (NCC) is, remain. Here I focus on the 'where' question of visual consciousness. There is no consensus about the stage of visual processing at which consciousness appears: while some researchers suggest we are not aware of neural activity in early regions, others suggest that, in some sense, we are (Tononi & Koch, 2008). Before expanding on the subject, I will first provide the definition of consciousness that is used throughout this thesis, to avoid confusion. Visual consciousness will be regarded as the 'phenomenological' or 'subjective' aspect of visual processing, which is roughly identical to the concept of an 'experience' (Bodovitz, 2008). In turn, the NCC equals the minimal set of neuronal events and mechanisms sufficient for a conscious percept (or experience).

A striking property of visual consciousness is its ability to fill in the retinal signal to create an experience that is useful for an animal's survival and functioning. A good example is our ability to maintain the experience of an apple being green, even though it is illuminated differently throughout the day (at midday and in the evening). While different illuminations produce different signals, the experience of a green apple remains, assuring us that the apple is as edible in the evening as it is at midday. This consistency of experience across illuminations is dubbed the 'Land effect' (Crick & Koch, 1995), and provides insight into the difference between an input signal and a conscious experience. Somehow the brain seems to 'fill in' the raw signal to provide useful experiences. How the brain accomplishes this is not yet clear, and is subject to an ongoing discussion. This discussion can be traced back to Köhler's (1920) idea of neuronal-perceptual isomorphism, which states that there is a direct mapping between experiences and the neuronal representations of those experiences. According to this view, any experience in the periphery should be neuronally represented in early stages of visual processing. This means that activity in the visual cortices should be updated to represent the experience, rather than the signal (Pessoa, Thompson, & Noë, 1998). In the example of the apple's colour, this means cortical activation in visual areas must be updated to represent the colour green, even though the input signal might be off due to illumination. However, early filling-in theories suffer under philosophical scrutiny. As Dennett (1992) puts it in his article: to update neuronal activity, the brain must already have figured out what it is looking at, so who needs the neuronal activity to represent it? He argues that the idea of neuronal-perceptual isomorphism still holds on to the idea of a dualistic Cartesian theatre, where brain activity is presented to a 'brain interpreter'. Instead, he argues that filling-in of experiences is accounted for on a conceptual level, in later stages of sensory processing. This could be a process of mislabelling, taking place at a post-visual, interpretational level. In short: it is not clear where in the visual processing stream experiences arise. Does this happen as early as V1, as late as post-visual processing, or somewhere in between?

Contrary to Dennett's ideas, some recent experiments seem to support early filling-in, which suggests the involvement of early visual cortices in consciousness. Experiments supporting these ideas predominantly use visual illusions to create a clear distinction between input signal and experience. One example is the neon colour spreading illusion, in which transparently coloured circles seem to be present while they are physically absent (Van Tuijl & Leeuwenberg, 1979). Another example is the Troxler-fading illusion, in which circles seem to be filled in with the background colour of the display. This causes the circles to disappear into the background, leaving only a uniformly coloured display (Balas & Sinha, 2007; Ramachandran & Anstis, 1990; Ramachandran & Gregory, 1991). Neuroimaging studies researching these phenomena found support for neuronal representations of the filled-in experience. For example, Sasaki and Watanabe (2004) found that voxels in early visual cortical areas representing the location of a neon colour spreading circle displayed similar activation when the circle was physically present as when it was illusory. Similarly, Hsieh and Tse (2010) found that the illusory colour of Troxler-fading circles could be decoded from V1 neurons that represented the circles' location. Both studies suggest that early visual cortical areas contain neuronal representations of physically absent stimuli, implying that neuronal activity must be filled in by the brain early in the visual processing stream. This idea is supported by an experiment of Haynes and Rees (2005), who exploited multivariate analyses to successfully decode perceptually suppressed information from V1. Orientations of masked stimuli could be decoded from V1 while subjects' guessing was at chance level, indicating that V1 encodes information that is unavailable to the observer.

However, support for later-stage filling-in of neuronal activation can be found as well. Crick and Koch (1995) argue that cortical regions as early as V1 are not involved in consciousness. Though their argument that V1 is not part of the NCC because it does not project directly to the frontal cortex may not be valid (Block, 1996), they do provide some support. This evidence emanates from the finding that the aforementioned 'Land effect' is exhibited in V4, but not V1, neurons of an anaesthetized monkey. More evidence comes from an experiment in which people were unaware of very high spatial frequencies that should be picked up by V1 neurons. This evidence is, however, not very strong, because it lacks neuroimaging methods to verify V1 activation in reaction to the spatial frequencies. More convincingly, Tong et al. found that fusiform face area (FFA) and parahippocampal place area (PPA) activations fluctuate in correspondence with participants' alternating awareness of either stimulus in a binocular rivalry task. While this suggests consciousness in higher visual areas, Large et al. found that these fluctuations are not present in earlier visual cortices, again suggesting that one is not conscious of the contents of early areas. A final example is a recent experiment by Lee, Baker, and Heeger (2007), who found that while V1 voxel activity in humans did reflect attentional processes, it did not correlate with participants' percepts.

Finally, a post-visual-processing NCC would best fit Dennett's (1992) idea that neuronal activity is not updated to represent experiences. This would imply that experiences are accounted for in frontal or parietal cortical areas, which are often linked to consciousness. The involvement of these areas becomes clear when looking at phenomena like spatial neglect, in which patients often become (partly) unaware of one side of their visual field, often contralateral to a parietal lobe lesion (Tononi & Koch, 2008). While this suggests involvement of parietal areas, no evidence could be found for the non-involvement of visual areas in consciousness. I would argue that the aforementioned literature unavoidably supports contributions of visual areas to visual consciousness. Therefore, the question remains: where in the visual processing stream is the NCC?

Tononi and Koch (2008) mention that much can be learned about the NCC by studying how brain activity reacts to changes in conscious content. The NCC can be studied by keeping the overall level of consciousness constant while altering the subject's percept. Here I introduce the Uniformity Illusion (UI), a visual filling-in illusion that targets the periphery (Otten, Pinto, Paffen, Seth, & Kanai, 2017), as a tool to accomplish this. The UI can induce the perception of uniformity in an image, even though stimuli in the periphery differ from those in the centre; it thus alters a subject's percept in the periphery while keeping other variables constant. When looking at the centre of an example image for a prolonged time, the stimuli in the periphery seem to assume the same properties as the stimuli in the centre. Some examples of the illusion are shown in figure 1; more can be found on the uniformity illusion website (http://www.uniformillusion.com). Because the UI works with a large range of stimuli (from orientation to text), it seems to tap into a fundamental process of conscious perception in the periphery. More specifically, it seems to hint at the process that allows for richer and more detailed experiences in the periphery than would be expected from studying the retina and cortical organisation (Anderson, Mullen, & Hess, 1991; Land & Tatler, 2009). For example, the retina contains very few cones in the periphery, rendering humans unable to accurately perceive colour there. However, healthy humans still perceive colour in the periphery, meaning there is a gap between signal and experience that must be filled in somewhere. Because the UI utilizes a similar gap, it can be used to pinpoint where in the visual processing stream conscious experience is represented.


To accomplish this, I will use functional magnetic resonance imaging (fMRI) and machine learning techniques to try to decode conscious illusions from different areas of the visual cortex. Earlier research suggests line orientations can be successfully decoded from fMRI signal due to small but reliable orientation preferences of voxels (Haynes & Rees, 2005; Kamitani & Tong, 2005; Kay, Naselaris, Prenger & Gallant, 2008). Therefore, an orientation-based UI variant will be used. A small circle in the centre will contain lines with the same orientation, whereas lines in the periphery will have random orientation offsets from those in the centre. Looking at the centre for a prolonged time can induce an experience in which the orientations of lines in the periphery are identical to those in the centre. Examples can be found in figure 2: F and G. If visual areas contribute to consciousness, the experience of the illusion should be decodable from fMRI voxel patterns above chance level. If these areas do not contribute, a dissimilarity between physical and illusory line orientations is expected, which will cause decoding of conscious experiences to fail.

First, a machine learning algorithm must be trained to discriminate between experiences. To this end, the algorithm will be trained to discriminate images in which only the periphery contains lines, while the centre is left empty. If the algorithm can discriminate between images with lines oriented at 15, 45, and random degrees above chance, the training will be considered successful. If the algorithm proves successful, it will then be used to cross-validate data gathered from illusion-inducing images. In these images, the periphery is filled with randomly oriented lines, and the centre is filled with uniformly oriented lines (15 or 45 degrees). If a visual area contributes to conscious experiences, neuronal data recorded while experiencing a physical and an illusory uniformity of orientation should be similar. If this holds true, the algorithm, trained on physical differences in orientation, should be capable of successfully distinguishing whether the illusion was experienced or not. If an area does not contribute, there will be no similarity of neuronal activation between experiencing physical and illusory orientations, since the neuronal activation does not represent the experience. This will cause the algorithm to classify stimuli according to the input signal.

Figure 1. Examples of the uniformity illusion, with (A) shapes in the periphery that assume the same shape as those in the centre, and (B) circles in the periphery that assume the same luminance as those in the centre.

Figure 2. All eight displays that were used in the experiment. First, a mask was made to select voxels reacting to peripheral areas by comparing voxel activity when the periphery was filled (A, B, and C) with activity when the centre was filled (D and E). Next, a decoding algorithm was trained to classify line orientations of 15 (A), 45 (B), and random between 20-40 (C) degrees. Finally, the algorithm was to be cross-validated on illusion-inducing displays (F and G). Displays were shown in random order and separated by a fixation screen (H).


Methods

Participants

For this experiment, 33 participants were recruited (13 male, 20 female) with a mean age of 22.24 years (range 17-28). All participants were healthy, with normal or corrected-to-normal vision. Three participants did not complete the second session, and two participants' data from the first session were lost due to technical difficulties. Before participating, the participants signed a screening form according to the guidelines of the Spinoza Centre for Neuroimaging, an informed consent form, and a form regarding incidental findings. Participants received either participation points, which are required to complete the first year of psychology at the University of Amsterdam, or 47.50 euros in total.

Stimuli

Participants were presented with orientation-based Uniformity Illusion displays, or parts of these displays, on a 32-inch screen (64.63° x 42.58° visual angle) placed 63 cm away from them. To create the displays, the width of the screen was divided by 50, splitting the screen into 50 equally wide imaginary columns. Rows were created with the same dimension as the column width, resulting in a square grid of 1.45° x 1.45° visual angle cells (grid: 50 x 30 cells). A small correction was applied to make sure that the grid was centred on the display. To create a circular display, an imaginary circle was drawn with a radius of 23.82° visual angle (12.5 cells), centred on the grid. Every cell within this circle, but not in the centre, was labelled as being in the periphery. Similarly, a circle with a radius of 7.24° visual angle (5.5 cells) was used to mark the boundary of the centre. Whenever a cell was within the outer boundary, a line with a length of 0.48° visual angle was drawn and centred in the cell. The orientation of the line depended on the trial type, and on whether the cell was or was not in the centre. Each line was tilted either 15, 45, or a random value between 20 and 40 degrees. Importantly, the random orientation of a line at a certain position was kept consistent throughout a scanning session.
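The grid construction described above can be sketched as follows. This is a minimal illustration, not the stimulus code used in the experiment: the function and variable names are mine, and the fixed random seed stands in for however the per-cell random orientations were actually kept consistent within a session.

```python
import random

# Display geometry from the text: a 50 x 30 grid of square cells,
# an outer circle (display boundary) and an inner circle (centre boundary).
GRID_W, GRID_H = 50, 30
OUTER_RADIUS = 12.5   # in cells
INNER_RADIUS = 5.5    # in cells

def build_display(centre_orientation, periphery_mode, seed=0):
    """Return {(col, row): line_orientation_deg} for one display.

    periphery_mode: 15, 45, 'random' (offsets between 20-40 degrees, fixed
    per cell via the seed so they stay consistent), or None (empty periphery).
    centre_orientation: 15, 45, or None (empty centre).
    """
    rng = random.Random(seed)  # fixed seed keeps random orientations consistent
    cx, cy = GRID_W / 2, GRID_H / 2
    lines = {}
    for col in range(GRID_W):
        for row in range(GRID_H):
            # distance from the cell centre to the display centre, in cells
            d = ((col + 0.5 - cx) ** 2 + (row + 0.5 - cy) ** 2) ** 0.5
            if d > OUTER_RADIUS:
                continue                      # outside the circular display
            if d <= INNER_RADIUS:
                if centre_orientation is not None:
                    lines[(col, row)] = centre_orientation
            elif periphery_mode == 'random':
                lines[(col, row)] = rng.uniform(20, 40)
            elif periphery_mode is not None:
                lines[(col, row)] = periphery_mode
    return lines

# e.g. an illusion-inducing display: uniform 15-degree centre, random periphery
display = build_display(centre_orientation=15, periphery_mode='random')
```

Calling the function twice with the same seed yields identical random orientations, mirroring the requirement that a line's random orientation stays fixed within a session.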

Eight different displays were created (examples can be found in figure 2). A light grey background (RGB: [100, 100, 100]) and a blue fixation dot (RGB: [0, 0, 250]) were consistent throughout all displays. The first display served as a fixation screen, containing nothing but the fixation dot. Three other displays were created for retinotopic mapping and algorithm training. For these displays, the centre was left empty to prevent the UI from appearing. The periphery was either filled with lines of the same orientation (15 or 45 degrees), or filled with lines of different orientations between 20 and 40 degrees. Next, two displays with an empty periphery but a filled centre were created; all lines were uniformly oriented at either 15 or 45 degrees. Lastly, two displays were created to induce the UI. For these displays, the centre was filled with lines oriented at 15 or 45 degrees, and the periphery was filled with the randomly oriented lines. All lines were coloured dark grey (RGB: [150, 150, 150]). Differences between line and background colour were minimized to prevent strong after-images.

Data Acquisition

Anatomical and functional scans were acquired on a Philips Ingenia CX 3-tesla scanner at the Spinoza Centre for Neuroimaging, using a standard head coil. One-millimetre isotropic T1-weighted anatomical images were created using a 3D fast field echo technique (FFE; 188 sagittal slices, TE = 3.7 ms, TR = 8.2 ms, flip angle = 8°), Philips' take on gradient-echo sequence scanning. For functional data, standard gradient-echo T2*-weighted SENSE EPIs were acquired (voxel dimensions: 3.00 x 3.08 x 3.00 mm, TR = 2.0 seconds, flip angle = 76.1°, FOV (ap, fh, rl) = 240 x 121.8 x 240 mm). Slices were acquired in ascending order, oriented approximately along the transverse plane. For some participants, the entire brain could not be fitted within these slices; in these cases, a loss of data from parietal areas was preferred. Similar EPIs with opposite phase-encoding were acquired after each block and were used to create a field map for image unwarping.

Procedure

Both scanning sessions were identical and encompassed three blocks. The experiment was split into two sessions to prevent long scanning sessions, which could result in fatigue and attentional deficits. The first block consisted of trials to map the centre and periphery in voxel space. Participants viewed brief displays with either a filled centre or a filled periphery; again, the orientations were uniform 15, uniform 45, or random between 20-40 degrees (Figure 2: A, B, C, D, E). The mapper block consisted of 192 trials with a duration of one second, separated by a fixation screen that lasted between two and four seconds (randomly 2.0, 2.5, 3.0, 3.5, or 4.0 seconds). The second and third blocks consisted of trials to train and test the decoding algorithm. In these blocks, participants were shown displays similar to those used in the mapper block (Figure 2: A, B, C, D, E). Additionally, they were shown illusion-inducing displays (Figure 2: F, G). The previously described trials still lasted one second, but the illusion-inducing trials lasted ten seconds to ensure that the UI could appear. Trials were separated by a six-second fixation screen. An overview of the trials used, display time, and separation time in each block can be found in Appendix A. When an illusion-inducing screen was shown, participants were asked to indicate whether they experienced the illusion or not by pressing a button with their right index or middle finger. Index and middle finger responses were counterbalanced over blocks, sessions, and participants. To ensure wakefulness and continuous attention on the periphery, participants were required to keep fixating on the centre and press one of two buttons depending on whether a small white dot appeared in the periphery. The dot appeared after a randomly determined interval and lasted 0.1 second. For examples of trials, see Appendix B.
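The mapper-block timing can be sketched as below. The display labels and the uniform sampling of displays and fixation durations are illustrative assumptions; the text only specifies the trial count, the one-second display duration, and the set of possible fixation durations.

```python
import random

def mapper_schedule(n_trials=192, seed=0):
    """Sketch of the mapper-block timing: 1-second displays separated by
    fixation periods drawn from {2.0, 2.5, 3.0, 3.5, 4.0} seconds."""
    rng = random.Random(seed)
    displays = ['A', 'B', 'C', 'D', 'E']   # the five mapper displays (fig. 2)
    schedule = []
    for _ in range(n_trials):
        schedule.append((rng.choice(displays), 1.0))                    # display
        schedule.append(('fixation', rng.choice([2.0, 2.5, 3.0, 3.5, 4.0])))
    return schedule
```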


fMRI Data Preprocessing

After acquisition, all data were organized according to the Brain Imaging Data Structure (BIDS) format. This allowed for easy use of the 'fmriprep' pipeline for fMRI data preprocessing (https://github.com/poldracklab/fmriprep). Using fmriprep with the BIDS format provides automated preprocessing, which is less error-prone and easy to inspect stepwise. The pipeline mainly uses FSL functions (Woolrich et al., 2009) for minimal preprocessing of fMRI data, which includes head motion correction, magnetic field unwarping, normalization, brain extraction, and EPI-to-MNI transformation. For a more detailed list, see the fmriprep website (http://fmriprep.readthedocs.io/en/stable/index.html). No slice-time correction was applied; practical experience in neuroimaging suggests that it is unnecessary for scans with a TR of two seconds or shorter (Poldrack, Mumford & Nichols, 2011). Additionally, no spatial smoothing was applied. Whether spatial smoothing affects decoding of columnar-level organisation is still an ongoing discussion (de Beeck, 2010; Misaki, Luh, & Bandettini, 2013); because it is not a necessity, it was excluded from the preprocessing pipeline. Finally, in addition to the fmriprep pipeline, a 100-second high-pass filter was applied to reduce structural noise.
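A 100-second high-pass filter on a voxel time series could look roughly like the sketch below. The thesis does not specify the filter type; the second-order Butterworth filter and zero-phase filtering used here are assumptions for illustration, with the TR taken from the acquisition parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

TR = 2.0          # seconds per volume (from the acquisition parameters)
CUTOFF_S = 100.0  # high-pass cutoff period in seconds

def highpass(ts, tr=TR, cutoff_s=CUTOFF_S):
    """Remove slow drifts (periods longer than cutoff_s) from a time series."""
    nyquist = 0.5 / tr                                  # Nyquist frequency, Hz
    b, a = butter(2, (1.0 / cutoff_s) / nyquist, btype='highpass')
    return filtfilt(b, a, ts)                           # zero-phase filtering

# Example: a slow scanner drift plus a faster signal; the drift is attenuated.
t = np.arange(0, 600, TR)
drift = 5 * np.sin(2 * np.pi * t / 400)   # 400-second drift component
fast = np.sin(2 * np.pi * t / 20)         # 20-second oscillation, kept intact
filtered = highpass(drift + fast)
```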

Masking of data

An occipital pole mask and a retinotopic mask were used to select useful voxels for the machine learning algorithm. Because we were only interested in visual areas, an occipital pole mask was created using the Harvard-Oxford brain atlas, which is incorporated in FSL (Woolrich et al., 2009). Only voxels within the atlas' probabilistic occipital pole region were used in the decoding analysis. Furthermore, using trials from the mapper block, a retinotopic map was created for each subject. These maps contained only voxels that reacted significantly more strongly to stimuli in the periphery than to those in the centre. Using this map, the decoding algorithm was restricted to voxels representing the peripheral visual field.
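The periphery-versus-centre selection can be illustrated with a simple voxel-wise contrast. The actual analysis was likely based on a GLM contrast over the mapper trials; the two-sample t-test, the alpha level, and all names below are assumptions made for this sketch.

```python
import numpy as np
from scipy.stats import ttest_ind

def periphery_mask(periphery_resp, centre_resp, alpha=0.001):
    """Voxel-wise mask: True where responses to periphery-filled displays are
    significantly stronger than responses to centre-filled displays.

    periphery_resp: (n_periphery_trials, n_voxels) response amplitudes
    centre_resp:    (n_centre_trials, n_voxels) response amplitudes
    """
    t, p = ttest_ind(periphery_resp, centre_resp, axis=0)
    return (t > 0) & (p < alpha)   # directional: periphery > centre

# Toy example: voxel 0 prefers the periphery, voxel 1 responds to neither.
rng = np.random.default_rng(0)
periph = rng.normal([2.0, 0.0], 0.5, size=(50, 2))
centre = rng.normal([0.0, 0.0], 0.5, size=(50, 2))
mask = periphery_mask(periph, centre)
```

Only voxels where the mask is True would then be passed on to the decoding algorithm.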

Decoding

A support vector machine learning algorithm was used to assess the data from both sessions of each subject individually. For training and initial testing, data from displays in which only the periphery contained lines were selected. This way, the algorithm would learn to discern between the 15-degree, 45-degree, and random-orientation displays. The algorithm was trained and tested on every combination of a ten-fold split of the data: one part was always left out of training for testing purposes, resulting in ten accuracies per session. The average of these accuracies was taken as a measure of decoding efficiency per subject. If the algorithm was successful in this first task, it would be used to cross-validate on data from illusion-inducing trials. In that case, every split of the data would also be used to predict what was seen on these trials. If participants indicated that they had experienced the illusion, the trial would be labelled like its physical counterpart; so, a display that would induce an experience of uniform 15-degree oriented lines would be labelled as the physical 15-degree oriented lines in the periphery. If participants indicated that they had not experienced the illusion, the trial was labelled as a periphery filled with random orientations. Accuracies would be collected in a similar way, but now a high accuracy would be representative of experience, rather than input: a high accuracy would represent similarity in neuronal data between looking at physically and illusorily uniform displays. If cross-validating on the entire occipital pole was successful, the explanatory power of different cortical areas (V1, V2, etc.) could be calculated. Areas that represent experience are expected to have high explanatory power, and may allow for experience decoding from these areas individually. If an area does not represent experiences, it is expected to yield low explanatory power, and voxels from these areas can be removed from the analysis without a decrease in experience-decoding accuracy.
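The decoding scheme described above can be sketched with scikit-learn. This is an illustration only: the linear kernel, the stratified fold assignment, and all names are assumptions, since the thesis does not specify the implementation details of its support vector machine.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

def decode(train_X, train_y, illusion_X, illusion_y, n_splits=10):
    """Ten-fold decoding sketch.

    Per fold: fit on nine parts of the training displays, score on the
    held-out part, and additionally score the illusion trials, whose labels
    follow the reported experience (e.g. '15' if the illusion was seen,
    'random' otherwise). Returns the two mean accuracies.
    """
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    fold_acc, illusion_acc = [], []
    for train_idx, test_idx in skf.split(train_X, train_y):
        clf = SVC(kernel='linear').fit(train_X[train_idx], train_y[train_idx])
        fold_acc.append(clf.score(train_X[test_idx], train_y[test_idx]))
        illusion_acc.append(clf.score(illusion_X, illusion_y))
    return np.mean(fold_acc), np.mean(illusion_acc)
```

On clearly separable data the first returned accuracy is near ceiling; in the experiment it is this value that had to exceed chance before the illusion accuracies could be interpreted.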

Results

Behavioural results

Participants were highly accurate in detecting the white dot, with a mean accuracy of 95.5%, implying that most participants correctly maintained attention on the task. Illusion efficacy, however, was quite low, with an average of 33%. Participants experienced the illusion significantly more often with 15-degree oriented lines (39%) than with 45-degree oriented lines (27%), t(32) = 5.31, p < 0.01.

Decoding results

To check whether the decoding algorithm was successfully trained on filled-periphery images when using voxels from the entire occipital pole, the accuracy values of all participants were tested against chance level. Because the algorithm classified into three conditions (15, 45, or 20-40 degrees), chance level was set at 33%. A one-sample t-test was used to test the null hypothesis that the algorithm classified at chance level. As displayed in figure 3, accuracy scores were largely around chance level, meaning that the algorithm could not classify the three conditions above chance, and the null hypothesis could not be rejected (average accuracy: 32.37%, t(32) = -1.26, p = 0.11). This invalidated the algorithm as a tool for cross-validating the illusion data.

To rule out that the inability to decode line orientations was caused by a human error, we tried to decode another display property. By checking whether displays with filled centres and displays with filled peripheries could be discriminated successfully, we could validate the steps taken in the analysis. For this analysis, the retinotopic mask was removed to preserve voxels sensitive to both the centre and the periphery. As displayed in figure 4, accuracies were far above chance level, indicating that whether the centre or the periphery was filled with lines could be decoded successfully. The null hypothesis, that centre versus periphery could not be decoded above chance (50%), was rejected (average accuracy: 94.84%, t(32) = 81.70, p < 0.01). This implies that the inability to decode line orientations was not caused by human errors in data processing, but rather by a lack of signal.

Figure 3. Subject-wise accuracies of classifying filled-periphery displays with either 15, 45, or 20-40 degrees oriented lines, compared to chance level (33%).

Figure 4. Subject-wise accuracies of classifying displays with either a filled centre or a filled periphery, compared to chance level (50%).

A possible alternative explanation for the algorithm's failure is the large field of view of cortical areas dedicated to the periphery. These areas are smaller, and process greater amounts of visual input, than areas dedicated to foveal vision (Dow, Snyder, Vautin & Bauer, 1981; Wandell, Dumoulin & Brewer, 2007). Small lines like the ones used could therefore induce a negligible signal in peripheral areas. To test this hypothesis, the algorithm was trained to classify line orientations in the centre of the displays (15 versus 45 degrees). Indeed, the algorithm was able to classify the line orientations above chance (50%), although by a very small margin (average accuracy: 53.30%, t(32) = 2.33, p = 0.01). Accuracy values per participant can be found in figure 5; most subjects' average accuracies are on or just above chance level.

Discussion

The current study tried to examine where in the visual cortex the neuronal signal is filled in to represent conscious experience. After training a decoding algorithm to classify physical differences in line orientation, it was to be used to cross-validate illusory orientations. Visual areas that contribute to successful experience decoding would be assumed to be part of the visual NCC; regions that do not contribute would be expected to have little explanatory power and, when isolated, to classify stimuli according to the raw visual input. However, the decoding algorithm already seemed unable to learn properly when using voxels from the entire occipital pole: it could not distinguish the different physical orientations from each other. This rendered the algorithm useless as a tool for making inferences about cortical representations, and thus about consciousness.

The failure of the algorithm to classify physical orientation properly was probably not due to human error, because a different categorisation of the data could be classified successfully. This means other factors are at fault. A likely candidate is the size of the lines used; the lines may have been too small to be detected successfully. Indeed, earlier work on orientation decoding often used very large gratings, spanning a large part of the display (Haynes & Rees, 2005; Kamitani & Tong, 2005; Carlson, 2014). Smaller line stimuli, especially when they have different orientations, may not contain enough signal to be accurately decoded from fMRI data. More specifically, orientation information in fMRI data can be lost due to heavy spatial pooling in the periphery (Dow, Snyder, Vautin & Bauer, 1981; Wandell, Dumoulin & Brewer, 2007). Voxels representing the periphery may pool over receptive fields that are too large to maintain orientation information from the small lines. The idea that spatial pooling influences the ability to decode orientations is supported by the finding that orientation could be decoded (ever so slightly) from voxels located centrally, where voxel fields of view are smaller. The lack of signal could be remedied by creating a UI that more closely resembles the spatial gratings used in previous orientation decoding experiments, so that voxels contain more orientation information when pooling over large regions. Another improvement could be to change the UI's stimulus type to another feature, like shape or colour. While colour seems a good candidate, because Hsieh and Tse (2010) were successful in decoding illusory colour, it may also decrease the likelihood that information about the stimuli is encoded in early visual areas (V1 is often not associated with colour processing). A possible finding that V1 does not represent experiences would then be biased by the area's lack of colour processing.

Figure 5. Subject-wise accuracies of classifying displays where the centre was filled with either 15 or 45 degrees oriented lines, compared to chance level (50%).

However, suppose our null result reflects a genuine absence of signal rather than a methodological shortcoming: what would the implications be for visual consciousness? If the data truly contain no signal to distinguish the orientations, this would imply that the visual cortex lacks a consistent neuronal representation across trials. Such inconsistency would cause the algorithm to fail at classifying the stimuli. I would argue that this is best explained by accounts resembling passive filling-in, in which stimuli are labelled after the visual cortex: labelling does not require visual areas to consistently represent experiences, because the brain simply determines what is being seen at a later stage (possibly parietal). However, decoding the orientations from parietal regions (average accuracy: 31.29%) or the whole brain (average accuracy: 31.62%) failed as well, which would indicate that these volumes do not contain a neuronal representation of experience either; that seems highly unlikely. If, conversely, decoding had succeeded in parietal areas or the whole brain, this would indicate that visual experiences are represented at a high level of processing, and that the visual cortices therefore do not contribute to the NCC of vision. Such findings would contradict earlier research on filling-in and the visual NCC, which located these processes in visual cortices.
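Whether accuracies such as 31.29% actually differ from the three-way chance level of 33.3% can be checked with a simple binomial test. The sketch below uses a hypothetical trial count, since the exact number of test trials per subject is not restated here; only the reported accuracy is taken from the text.

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): probability of at least k correct by chance."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_trials = 160                        # hypothetical trial count, for illustration only
chance = 1 / 3                        # three orientation conditions
observed = round(0.3129 * n_trials)   # the reported average parietal accuracy

p_value = binom_sf(observed, n_trials, chance)
print(f"{observed}/{n_trials} correct; P(>= observed | chance) = {p_value:.3f}")
```

With the observed count at or below the chance expectation, the p-value is large: the parietal and whole-brain accuracies are consistent with chance-level performance, not with weak but genuine decoding.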

Had the algorithm succeeded in discriminating between the three orientation conditions, it could have been cross-validated on illusory data. Successfully decoding illusory orientations would indicate that the experience is represented somewhere in the occipital pole. A next step towards the NCC would be to test which regions, when isolated, contain sufficient signal to decode the orientations. If a region contributes to the NCC and contains a representation of experience, validating the algorithm on that isolated region should classify illusory orientations above chance. If a region does not contribute, it is expected to represent the input signal rather than the experience, in which case most trials should be classified according to the input signal. If, for example, isolated V2 voxels suffice to decode illusory orientations above chance, that area is assumed to represent experiences and would therefore contribute to the NCC of vision. V1 voxels, in contrast, might fail to decode the illusory orientations above chance while systematically classifying stimuli according to the input signal (the 20–40° random orientations); V1 would then be assumed not to contribute to the NCC.
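The cross-validation-on-illusory-data logic can be sketched as follows: a classifier fit on the physical conditions labels simulated illusion trials, and the proportion labelled with the experienced uniform orientation indicates whether the (toy) region represents experience rather than input. The one-dimensional "patterns", labels, and numbers are all illustrative assumptions, not the thesis' data.

```python
import random

random.seed(1)

# One-dimensional stand-ins for the mean response of a region per condition.
MEANS = {"uniform15": -1.0, "uniform45": 1.0, "mixed": 0.0}

def sample(label, n, noise=0.3):
    return [random.gauss(MEANS[label], noise) for _ in range(n)]

def fit(train):
    # Per-class mean of the training feature.
    return {lab: sum(xs) / len(xs) for lab, xs in train.items()}

def predict(model, x):
    # Nearest class mean.
    return min(model, key=lambda lab: abs(x - model[lab]))

# Train on the three physical conditions (trial types A-C analogues).
model = fit({lab: sample(lab, 20) for lab in MEANS})

# Simulate a region whose activity follows the *experienced* orientation
# during illusion trials experienced as uniformly 15 degrees.
illusion_trials = sample("uniform15", 10)
labels = [predict(model, x) for x in illusion_trials]
share_experience = labels.count("uniform15") / len(labels)
print(f"fraction labelled as the experienced orientation: {share_experience:.2f}")
```

If the simulated region instead tracked the input signal (draws from the "mixed" mean), most illusion trials would be labelled "mixed"; this is exactly the contrast between an experience-representing and an input-representing region described above.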

If individual regions contain too little signal for decoding on their own, explanatory power offers an alternative measure. A region whose voxels carry much of the explanatory power needed for decoding to succeed likely contains a representation of the experience. By removing voxels with high or low explanatory power, one can test whether a set of voxels is necessary or sufficient, respectively. The NCC can be defined as the minimal set of neuronal events and mechanisms sufficient for a specific conscious percept, so a group of voxels that is both sufficient and necessary would be a likely candidate for the NCC. For example, V2 voxels could carry a large part of the explanatory power and be necessary for successful decoding without being sufficient; combined with some V1 voxels, the set might become sufficient as well. This would indicate that V1 partly contributes to experiences, but that they are more clearly represented in V2. However, none of these hypotheses could be tested, so I will make no strong claims about the location of the visual NCC. Even so, with the improvements outlined above, the UI paradigm might prove useful in the search for the NCC.
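The necessity/sufficiency reasoning above can be made concrete with a small ablation sketch: rank voxels by an importance measure (for example, absolute classifier weights), then ask whether the top voxels alone support decoding (sufficiency) and whether decoding fails without them (necessity). The importances and the summed-importance "decodable" criterion below are purely illustrative; in practice each test would be a cross-validated accuracy comparison.

```python
def ablate(importance, top_fraction=0.2):
    # Split voxels into the most important fraction and the remainder.
    ranked = sorted(importance, key=importance.get, reverse=True)
    k = max(1, int(len(ranked) * top_fraction))
    return set(ranked[:k]), set(ranked[k:])

def decodable(voxels, importance, threshold=1.0):
    # Toy criterion: enough summed importance to support decoding.
    return sum(importance[v] for v in voxels) >= threshold

# Hypothetical per-voxel importances (e.g. absolute classifier weights).
importance = {"v0": 0.6, "v1": 0.5, "v2": 0.1, "v3": 0.05, "v4": 0.05}
top, rest = ablate(importance, top_fraction=0.4)

sufficient = decodable(top, importance)       # top voxels alone carry enough signal
necessary = not decodable(rest, importance)   # decoding fails without the top voxels
print(f"top voxels sufficient: {sufficient}, necessary: {necessary}")
```

A voxel set that comes out both sufficient and necessary under such tests would, by the NCC definition above, be a candidate for the neural correlate.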

Conclusion

In conclusion, this experiment failed to produce a tool for inferring the 'where' of visual consciousness, because physical line orientations could not be decoded from the fMRI data. The inability to decode orientation most likely arose because voxels encoding the periphery of vision retain too little signal when small line stimuli are used. As the algorithm failed, no inferences can be made about where filling-in takes place. Future research could improve the design to provide insight into which visual areas contribute to consciousness.


Literature

Anderson, S. J., Mullen, K. T., & Hess, R. F. (1991). Human peripheral spatial resolution for achromatic and chromatic stimuli: Limits imposed by optical and retinal factors. The Journal of Physiology, 442(1), 47-64.
Balas, B., & Sinha, P. (2007). "Filling-in" colour in natural scenes. Visual Cognition, 15(7), 765-778.
Block, N. (1996). How can we find the neural correlate of consciousness? Trends in Neurosciences, 19(11), 456-459.
Bodovitz, S. (2008). The neural correlate of consciousness. Journal of Theoretical Biology, 254(3), 594-598.
Carlson, T. A. (2014). Orientation decoding in human visual cortex: New insights from an unbiased perspective. Journal of Neuroscience, 34(24), 8373-8383.
Crick, F., & Koch, C. (1995). Are we aware of neural activity in primary visual cortex? Nature, 375(6527), 121-123.
de Beeck, H. P. O. (2010). Against hyperacuity in brain reading: Spatial smoothing does not hurt multivariate fMRI analyses? NeuroImage, 49(3), 1943-1948.
Dennett, D. (1992). Filling in versus finding out: A ubiquitous confusion in cognitive science. In Cognition, conception, and methodological issues. American Psychological Association.
Dennett, D. C. (2001). Surprise, surprise. Behavioral and Brain Sciences, 24, 982. doi:10.1017/S0140525X01320113
Dow, B. M., Snyder, A. Z., Vautin, R. G., & Bauer, R. (1981). Magnification factor and receptive field size in foveal striate cortex of the monkey. Experimental Brain Research, 44(2), 213-228.
Haynes, J. D., & Rees, G. (2005). Predicting the orientation of invisible stimuli from activity in human primary visual cortex. Nature Neuroscience, 8(5), 686-691.
Herwig, A., Weiß, K., & Schneider, W. X. (2015). When circles become triangular: How transsaccadic predictions shape the perception of shape. Annals of the New York Academy of Sciences, 1339(1), 97-105.
Hsieh, P. J., & Tse, P. U. (2010). "Brain-reading" of perceived colors reveals a feature mixing mechanism underlying perceptual filling-in in cortical area V1. Human Brain Mapping, 31(9), 1395-1407.
Hubel, D. H., & Wiesel, T. N. (1965). Receptive fields and functional architecture in two nonstriate visual areas (18 and 19) of the cat. Journal of Neurophysiology, 28(2), 229-289.
Kamitani, Y., & Tong, F. (2005). Decoding the visual and subjective contents of the human brain. Nature Neuroscience, 8(5), 679-685.
Kay, K. N., Naselaris, T., Prenger, R. J., & Gallant, J. L. (2008). Identifying natural images from human brain activity. Nature, 452, 352-355.
Köhler, W. (1920). Die physischen Gestalten in Ruhe und im stationären Zustand [The physical gestalts in rest and in stationary states]. Vieweg.
Land, M., & Tatler, B. (2009). Looking and acting: Vision and eye movements in natural behaviour. Oxford University Press.
Large, M. E., Cavina-Pratesi, C., Vilis, T., & Culham, J. C. (2008). The neural correlates of change detection in the face perception network. Neuropsychologia, 46(8), 2169-2176.
Misaki, M., Luh, W. M., & Bandettini, P. A. (2013). The effect of spatial smoothing on fMRI decoding of columnar-level organization with linear support vector machine. Journal of Neuroscience Methods, 212(2), 355-361.
Otten, M., Pinto, Y., Paffen, C. L., Seth, A. K., & Kanai, R. (2017). The uniformity illusion: Central stimuli can determine peripheral perception. Psychological Science, 28(1), 56-68.
Pessoa, L., Thompson, E., & Noë, A. (1998). Filling-in is for finding out. Behavioral and Brain Sciences, 21(6), 781-796.
Poldrack, R. A., Mumford, J. A., & Nichols, T. E. (2011). Handbook of functional MRI data analysis. Cambridge University Press.
Rahnev, D., Maniscalco, B., Graves, T., Huang, E., De Lange, F. P., & Lau, H. (2011). Attention induces conservative subjective biases in visual perception. Nature Neuroscience, 14(12), 1513-1515.
Ramachandran, V. S., & Anstis, S. M. (1990). Illusory displacement of equiluminous kinetic edges. Perception, 19(5), 611-616.
Ramachandran, V. S., & Gregory, R. L. (1991). Perceptual filling in of artificially induced scotomas in human vision. Nature, 350(6320), 699.
Sasaki, Y., & Watanabe, T. (2004). The primary visual cortex fills in color. Proceedings of the National Academy of Sciences, 101(52), 18251-18256.
Tong, F., Nakayama, K., Vaughan, J. T., & Kanwisher, N. (1998). Binocular rivalry and visual awareness in human extrastriate cortex. Neuron, 21(4), 753-759.
Tononi, G., & Koch, C. (2008). The neural correlates of consciousness. Annals of the New York Academy of Sciences, 1124(1), 239-261.
Van Tuijl, H. F. J. M., & Leeuwenberg, E. L. J. (1979). Neon color spreading and structural information measures. Perception & Psychophysics, 25(4), 269-284.
Wandell, B. A., Dumoulin, S. O., & Brewer, A. A. (2007). Visual field maps in human cortex. Neuron, 56(2), 366-383.
Woolrich, M. W., Jbabdi, S., Patenaude, B., Chappell, M., Makni, S., Behrens, T., Beckmann, C., Jenkinson, M., & Smith, S. M. (2009). Bayesian analysis of neuroimaging data in FSL. NeuroImage, 45, S173-S186.


Appendix A

Overview of the trials used in each block. Columns: trial type (in reference to Figure 2), number of times the trial was shown, number of times the trial contained a dot, orientation of the lines in the centre, orientation of the lines in the periphery, and trial duration.

Mapper block

Containing 96 trials. Trials were separated by fixation screens lasting 2 to 4 s (in steps of 0.5 s).

Trial type | Amount | Amount with dot | Orientation centre | Orientation periphery | Duration (s)
A          | 12     | 3               | Empty              | 15°                   | 1
B          | 12     | 3               | Empty              | 45°                   | 1
C          | 12     | 3               | Empty              | 20°–40°               | 1
D          | 12     | 3               | 15°                | Empty                 | 1
E          | 12     | 3               | 45°                | Empty                 | 1

Training Blocks

Two identical blocks, each containing 80 trials. Fixation screens lasting 6 s separated the trials.

Trial type | Amount | Amount with dot | Orientation centre | Orientation periphery | Duration (s)
A          | 16     | 4               | Empty              | 15°                   | 1
B          | 16     | 4               | Empty              | 45°                   | 1
C          | 16     | 4               | Empty              | 20°–40°               | 1
D          | 8      | 2               | 15°                | Empty                 | 1
E          | 8      | 2               | 45°                | Empty                 | 1
F          | 8      | 0               | 15°                | 20°–40°               | 10
G          | 8      | 0               | 45°                | 20°–40°               | 10


Appendix B

Examples of trial types: (A) a dot-present trial with an empty centre but filled periphery, and (B) an illusion-inducing trial.
