The Contralateral Bias Effect in Bilateral
Visual Stimulation:
An fMRI study in healthy humans
Yara Merlijne van Someren
University of Amsterdam, October 2012
Abstract
Early visual areas are dedicated to processing the contralateral visual field. In higher visual areas this contralateral dominance decreases, and information from the ipsilateral visual field is also represented. In this study we examined what happens when the visual system is presented with bilateral stimulation. In an fMRI study we showed participants either unilateral or bilateral Faces and Chairs. Using multivoxel pattern analysis we found a strong increase of the contralateral bias effect during bilateral compared with unilateral stimulus presentation. Apparently, information transfer to the ipsilateral hemisphere is reduced when each hemisphere is occupied with information from its contralateral hemifield.
Key words: contralateral bias, ventral stream, fMRI, object representation, MVPA
Supervised by:
Judith Peters, Netherlands Institute for Neuroscience
Joel Reithler, Netherlands Institute for Neuroscience
Steven Scholte, University of Amsterdam
Introduction
Neurons in early visual areas have their relatively small receptive fields (RFs) in the contralateral visual field, and information within these areas is highly position dependent. In contrast, high-level visual object representations show more position tolerance (Riesenhuber and Poggio, 2000; DiCarlo and Cox, 2007; Cichy, Chen and Haynes, 2011), which is in line with our experience of recognizing objects approximately equally well across different positions in the visual field. This position invariance in object recognition has been hypothesized to originate either from large receptive fields in inferior temporal (IT) cortex (Gross et al., 1972) or from complex interactions between neurons in IT cortex (Kravitz et al., 2010). Op de Beeck and Vogels (2000) already showed that RF sizes of neurons in monkeys' anterior IT cortex vary widely, ranging from 2.8° to 26°. Using fMRI, Kravitz et al. (2010) also showed that higher visual object areas in humans respond differentially to objects at different positions. This more recent work suggests that IT cortex is not so much position invariant as highly position tolerant, since position information is still available in IT cortex (DiCarlo & Maunsell, 2003; Schwarzlose et al., 2008).
However, higher visual areas are suggested to show a contralateral bias, meaning that these areas seem to have a preference for the contralateral visual field. Merigan and Saunders (2004) found a contralateral bias in macaques: after lesioning IT cortex, these monkeys showed a deficit in recognizing objects in the contralateral visual field, though performance did not drop to zero. In an fMRI study with humans, Hemond and colleagues (2007) found that high-level visual areas important in recognizing faces and objects still show a preference for contralateral visual input, but with a strong decrease in the contralateral bias effect, especially for the face- and object-selective regions. Rocha-Miranda and colleagues (1975) found that section of the corpus callosum and anterior commissure in rhesus monkeys led to exclusively contralateral hemifield processing in IT cortex, and they therefore suggested that ipsilaterally presented object information is essentially 'inherited' from the contralateral hemisphere. In other words, large receptive fields in IT cortex could result from information that crosses from the contralateral to the ipsilateral hemisphere, thus combining the information received by both hemispheres.
Contrary to laboratory settings, in real life we often receive information from both visual hemifields, which raises the question of how this transfer of visual information towards the ipsilateral hemisphere takes place. Only a few studies have focused on what happens when two stimuli are presented at the same time. MacEvoy & Epstein (2009) showed that activity patterns evoked by vertically paired objects are the average of those evoked by the singly presented components, though what happens when stimuli are presented in horizontal pairs remains unclear. Reddy & Kanwisher (2007) showed that information present in the spatial profile of the fMRI response in the left and right hemisphere together was severely degraded for horizontal bilateral presentation of objects, and was eliminated for objects that were unattended, though this appeared not to be the case for preferred object categories in category-specific regions. However, they did not take into account the separate contributions of the two hemispheres.
Sato (1989; see also Chelazzi et al., 1998) proposed a 'winner-takes-all' principle for simultaneous stimulus presentation in the right and left visual field. He measured activity in monkeys' TE neurons and found that in the case of bilateral stimulus presentation the measured activity solely represented the contralateral stimulus, whereas the same neurons also represented the ipsilateral stimulus during single stimulus presentation. This suggests that information is prevented from transferring to the ipsilateral hemisphere when that hemisphere is occupied with information from the contralateral hemifield. In the current study we investigate to what extent visual information is represented in the ipsilateral hemisphere of humans when (I) a single stimulus is presented unilaterally, and (II) two stimuli from different categories are presented in the two visual hemifields. In the case of bilateral stimulus presentation, standard BOLD measures seem insufficient to determine whether the left or right presented
stimulus is represented in one hemisphere. Multivoxel pattern analysis (MVPA) provides a more sensitive measure to distinguish two categories within an activated area (Haxby et al., 2001). We expect that the ipsilaterally presented stimulus is represented in the case of single presentation, but that the contralaterally presented stimulus wins in the case of simultaneous presentation of two lateralized stimuli.
Methods
Participants
Nine healthy subjects with normal or corrected-to-normal vision (6 female; mean age 27.2 years, SD 4.63) participated in this study. The study was approved by the local ethics committee of the Faculty of Psychology, University of Amsterdam. Participants received a monetary reward for their participation.
Behavioral Stimuli and Task
Two randomized sets of fifteen unique faces (Tuebingen database, http://faces.kyb.tuebingen.mpg.de; rotated 30°) or unique chairs (custom-made database; rotated 30°) and their scrambled counterparts were presented in the lower left or lower right visual field (6° eccentricity; 135° and 225° polar angle) in blocks of 30 s (400 ms per stimulus, interleaved with 100 ms blank intervals), alternated with rest periods of 13.5 s (± 1.5 s). Next to these unilateral presentations there were also two bilateral stimulation conditions, in which chairs could be presented in the lower right visual field together with faces in the lower left, or vice versa. Stimuli were also presented in the center of the screen at a size of 2.75° × 2.75° visual angle, whereas the peripherally presented stimuli were 5.5° × 5.5° (to compensate for the cortical magnification factor).
Stimuli all had the same average luminance and RMS contrast (see Goffaux et al., 2012 for further details). Each of the 11 different stimulus configurations was presented twice per run in pseudo-randomized order, with the constraint that a condition was never directly repeated. Six runs were acquired within one scanning session, yielding 12 repetitions per condition in total. For Face and Chair stimuli, subjects performed an inversion detection task (pressing a button with the right index finger whenever an inverted stimulus was detected). Subjects were instructed to always fixate the fixation cross presented in the middle of the screen.
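As a sanity check, the trial counts implied by the timing above can be reproduced in a few lines; this only re-derives the reported design parameters, and the variable names are our own.

```python
# Reported design parameters (see text); names are our own.
block_ms = 30_000                 # stimulation block duration
stim_ms, blank_ms = 400, 100      # stimulus and blank interval durations

stims_per_block = block_ms // (stim_ms + blank_ms)

n_configs = 11      # stimulus configurations per run
reps_per_run = 2    # presentations of each configuration per run
n_runs = 6          # runs per scanning session
reps_total = reps_per_run * n_runs

print(stims_per_block, reps_total)  # 60 stimuli per block, 12 repetitions per condition
```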
MRI Acquisition
MRI data were acquired using a 3T Philips Achieva scanner with a 32-channel SENSE head coil. A T1-weighted sequence was used to acquire structural images (220 transversal slices, TR = 8.2 ms, TE = 3.8 ms, flip angle = 9°, FOV = 240 × 188 mm). Functional images were acquired using a gradient-echo EPI sequence (TR = 1500 ms, TE = 28 ms, flip angle = 71°, FOV = 240 × 240 × 71.25 mm, matrix = 96 × 96 voxels, 26 slices, ascending acquisition, gap = 0.25 mm, 2.5 mm isotropic resolution). Slices were positioned along the main longitudinal axis of the temporal lobe and covered ventral visual cortex.
fMRI Analyses
Preprocessing of the individual datasets included slice scan time correction, linear trend removal, temporal high-pass filtering, and 3D motion correction, as implemented in the BrainVoyager QX v2.4 software package (Brain Innovation, Maastricht, the Netherlands). The first two volumes of each run were discarded to remove T1 saturation effects. No spatial smoothing was applied to the functional data, which were interpolated to a 2 × 2 × 2 mm³ target voxel resolution.
Localizers and definition of regions of interest
We modeled the cortical response to the localizer events in the left and right visual field separately, with a General Linear Model (GLM) for each subject. By combining the contrasts 'Faces > Scrambled' and 'Chairs > Scrambled' through an 'OR' operation, we generated multi-cluster ROIs for the left and right hemisphere separately, so that for each hemisphere one ROI based on ipsilateral and one based on contralateral presentation was generated. We used a minimum of 60 voxels for each category, so the minimum size of a multi-cluster ROI was 120 voxels.
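The 'OR' combination of the two localizer contrasts amounts to a voxelwise union of thresholded contrast maps. A minimal sketch, where the t-maps and the threshold are invented placeholders rather than the study's data:

```python
import numpy as np

# Invented voxelwise t-maps for the two localizer contrasts.
rng = np.random.default_rng(0)
t_faces = rng.normal(size=5000)    # 'Faces > Scrambled'
t_chairs = rng.normal(size=5000)   # 'Chairs > Scrambled'
thresh = 3.0                       # hypothetical significance threshold

# Voxels surviving either contrast enter the multi-cluster ROI.
roi_mask = (t_faces > thresh) | (t_chairs > thresh)
print(int(roi_mask.sum()))  # multi-cluster ROI size in voxels
```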
Figure 1. Setup for the MVPA analyses. (A) For both the left and the right hemisphere, we investigated whether left and right presented Faces and Chairs were represented. (B) For both the left and right hemisphere, we investigated whether the left or the right presented stimulus was represented when a Face and a Chair were presented at the same time.
With this procedure ROIs did not have to be of equal size, and often consisted of non-contiguous parts. For one participant we could not define a proper multi-cluster ROI in the left hemisphere for either ipsi- or contralateral hemifield stimulation.
Multivoxel Pattern Analysis (MVPA)
Two multivoxel pattern classification analyses (Haxby et al., 2001) were used to analyze the data. Both analyses shared the same basic framework and were conducted independently for each multi-cluster ROI. For each condition, pattern vectors were estimated. Decoding accuracy was averaged over the 6 runs. We conducted second-level analyses over all subjects on the decoding accuracies, using the Wilcoxon signed-rank test for non-parametric matched-samples comparison against the 0.5 chance level.
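The second-level test against chance can be sketched with SciPy's signed-rank test, applied to accuracy minus the 0.5 chance level. The accuracies below are synthetic, not the study's data:

```python
import numpy as np
from scipy.stats import wilcoxon

# Synthetic per-observation decoding accuracies (e.g., 9 subjects x 2
# hemispheres); values are made up for illustration.
rng = np.random.default_rng(1)
accuracies = rng.uniform(0.8, 1.0, size=18)

# Wilcoxon signed-rank test of accuracy - 0.5 against zero,
# i.e., decoding accuracy against the 0.5 chance level.
stat, p = wilcoxon(accuracies - 0.5)
print(p < 0.05)  # True: these accuracies lie well above chance
```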
In the first analysis, pattern vectors from 5 of the 6 runs were assigned to a training set to train a linear support vector machine (SVM), which was then tested on the sixth run. This procedure was repeated 6 times, such that each run served as the test run once. We then checked whether the SVM could correctly classify left or right presented faces and chairs. This was done in both the left and right multi-cluster ROI, so that we could compare ipsi- and contralateral classification accuracies (Figure 1A). For the ipsilateral classification we used the ipsilaterally based ROIs, and for the contralateral classification the contralaterally based ROIs.
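A minimal sketch of this leave-one-run-out scheme using scikit-learn, with synthetic pattern vectors standing in for the real voxel patterns:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import LinearSVC

# Synthetic data: 6 runs x 4 pattern vectors of 120 voxels each.
rng = np.random.default_rng(2)
n_runs, patterns_per_run, n_voxels = 6, 4, 120
X = rng.normal(size=(n_runs * patterns_per_run, n_voxels))
y = np.tile([0, 0, 1, 1], n_runs)  # 0 = Face, 1 = Chair
X[y == 1] += 1.0                   # inject a separable class signal
runs = np.repeat(np.arange(n_runs), patterns_per_run)

# Train a linear SVM on 5 runs, test on the held-out run, repeat 6x.
accs = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=runs):
    clf = LinearSVC().fit(X[train_idx], y[train_idx])
    accs.append((clf.predict(X[test_idx]) == y[test_idx]).mean())

print(np.mean(accs))  # decoding accuracy averaged over the 6 folds
```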
In the second analysis we trained the SVM to classify left or right presented faces from chairs across all 6 runs, but now tested on the events with bilaterally presented stimuli (a Face and a Chair), to determine whether the ipsi- or the contralaterally presented stimulus was extracted by the SVM (Figure 1B). Again, we used ipsilaterally based ROIs for ipsilateral classification and contralaterally based ROIs for contralateral classification.
We conducted statistical analyses with the two hemispheres collapsed, since we expected no differential effects between the left and right hemisphere.
Results
Classification of unilateral stimulus presentation
First we checked to what extent ipsi- and contralaterally presented unilateral stimuli are represented in the multi-cluster ROIs. We performed a Wilcoxon matched-samples test (LH and RH collapsed) on decoding performance. For both ipsi- and contralateral decoding, performance was well above chance (ipsi: 0.92, Z = 3.78, p < .001; contra: 0.96, Z = 3.76, p < .001). Comparing ipsi- and contralateral decoding performance, we found that contralateral performance was higher (t = 1.86, df = 17, p < .05; see Figure 2).
Figure 2. Classification accuracy for the ipsi- and contralaterally presented stimulus (Face or Chair). Both ipsi- and contralateral classification were well above chance, and classification for the contralateral hemifield was best.
Classification of bilateral stimulus presentation
To investigate whether patterns of activity could classify which stimulus (Object or Face) was represented in each hemisphere, we used MVPA. To maximize the number of informative voxels in the ROIs, we used the ROIs based on ipsilateral stimulus presentation to train the ipsilateral classifier, and the ROIs based on contralateral stimulus presentation to train the contralateral classifier, separately for the left and right hemisphere. For the left hemisphere we trained a classifier on either left (ipsi) or right (contra) presented Faces versus Objects; for the right hemisphere we followed the same procedure. Subsequently each classifier was tested on either bilateral Face-Object or Object-Face stimuli. We collapsed data from the left and right hemisphere and performed a Wilcoxon matched-samples test to determine whether the classifier decoded the ipsi- or contralateral stimulus above chance level. To correct for testing the same ROIs on both Face-Object and Object-Face presentation, we divided the significance level by two.
With contralateral training, the contralaterally presented stimulus was decoded more often during bilateral stimulus presentation (contra training, Face-Object presentation, contra stimulus: 0.75, Z = 3.79, p < .001; Object-Face presentation, contra stimulus: 0.68, Z = -2.97, p < .01; see Figure 3). Also with ipsilateral training, the contralaterally presented stimulus was decoded more often during bilateral stimulus presentation (ipsi training, Face-Object presentation, contra stimulus: 0.83, Z = 3.73, p < .001; Object-Face presentation, contra stimulus: 0.85, Z = 3.76, p < .001; see Figure 3).
Comparing unilateral and bilateral stimulus presentation
To compare classification for bilateral presentation with classification for unilateral presentation, we created an Accuracy Index (AI) by subtracting unilateral classification accuracy from bilateral classification accuracy, with the left and right hemisphere collapsed. For ipsilateral classification of the contralateral stimulus we see a strong decrease in accuracy for bilateral classification (AI = -0.63, t = -15, df = 17, p < .001). For the contralateral classification accuracy, the collapsed left and right hemispheres also showed a
Figure 3. Ipsi- and contralateral decoding performance for bilateral Face-Object and Object-Face presentation, collapsed over the left and right multi-cluster ROIs. ROIs were collapsed after testing, but are shown separately for illustration purposes. Ipsilateral training was conducted in ROIs based on ipsilateral stimulus presentation, whereas contralateral training was conducted in contralaterally based ROIs. The 'Fx' notation refers to a trial with a Face presented to the left of fixation, 'xF' stands for a Face presented to the right of fixation, and the same notation is used for unilaterally presented Objects. Tests were done on bilateral trials with a Face left and an Object right, or vice versa. Each condition showed a strong contralateral bias effect.
decrease in performance for bilateral classification (AI = -0.12, t = -4.42, df = 17, p < .001). Comparing the AIs of ipsi- and contralateral classification, we detect a stronger decrease in classification accuracy for ipsilateral classification (difference in AI = -50.84 percentage points, t = -9.69, df = 17, p < .001; see Figure 4).
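The Accuracy Index reduces to a subtraction; in the sketch below the bilateral accuracies are back-derived from the reported unilateral accuracies and AIs, purely for illustration.

```python
# AI = bilateral accuracy - unilateral accuracy: negative values mean
# classification degrades under bilateral presentation.
def accuracy_index(bilateral_acc, unilateral_acc):
    return bilateral_acc - unilateral_acc

# Illustrative values back-derived from the numbers reported in the text.
ai_ipsi = accuracy_index(0.29, 0.92)    # ipsilateral classification
ai_contra = accuracy_index(0.84, 0.96)  # contralateral classification
print(round(ai_ipsi, 2), round(ai_contra, 2))  # -0.63 -0.12
```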
Supplementary Univariate Analysis
The multi-cluster ROIs used in the MVPA also include putative Face-selective areas (such as the Fusiform Face Area, FFA) and Chair-selective areas, since they were based on the contrasts Face > Scrambled OR Chair > Scrambled. In this analysis we checked whether the MVPA results for bilateral stimulus presentation were driven by the relative mean amplitude levels of two regions per hemisphere with opposite stimulus category selectivity.
Four new ROI’s were defined for each participant, ipsilateral Face and Chair selective
6
Figure 4. Accuracy Index (AI) for ipsi- and contralateral classification accuracy (LH and RH collapsed). The AI is defined as the difference in classification accuracy between bilateral and unilateral classification. Both AIs differ from 0, so unilateral classification is always more accurate than bilateral classification. The ipsilateral AI was larger than the contralateral AI.
regions for both the left and right hemisphere separately, using the stricter contrasts ([Face > Chair] & [Face > Scrambled]) and the equivalent for chairs. If the relative activity in these areas explains the contralateral bias effect for bilateral presentation in the MVPA analyses, we would expect the relative activation between the Chair area and the Face area found during single ipsilateral presentation (e.g., left Chair) to change as a consequence of adding a stimulus of the other category in the opposite hemifield (e.g., left Chair and right Face). To test this, we conducted a random-effects ROI GLM to generate beta values for each stimulus predictor in each ROI for each participant. Since we were interested in a possible interaction effect between the Face and Chair areas, we created an index by subtracting the Chair-area beta values from the Face-area beta values, averaged over all participants. A negative index would represent a Chair (Chair-area activity > Face-area activity) and a positive index a Face (Chair-area activity < Face-area activity). If the relative activation amplitude across these areas were the main factor driving the MVPA results, we would expect the contralaterally presented stimulus to be reflected in the index during bilateral stimulus presentation. This was not what we found; none of the indices differed significantly from 0 (all p's > .1).
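The Face-minus-Chair index can be sketched in a few lines; the beta values below are invented placeholders, not the study's data.

```python
import numpy as np

# Invented per-participant beta values for the Face and Chair areas.
face_area_betas = np.array([1.2, 0.9, 1.1, 1.0])
chair_area_betas = np.array([0.7, 0.8, 0.6, 0.9])

# Positive index: Face-area activity dominates (the pattern 'reads' as
# a Face); negative index: Chair-area activity dominates.
index = np.mean(face_area_betas - chair_area_betas)
print(index > 0)
```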
Discussion
Previous work has shown that, during unilateral stimulus presentation, higher visual areas show a contralateral bias effect (Cichy et al., 2011), although the ipsilaterally presented stimulus is also represented in these areas, in contrast to lower visual areas (V1-V2), where visual processing is completely dedicated to the contralateral hemifield (Hemond et al., 2007). Our results confirm these findings: both ipsi- and contralaterally presented stimuli were represented in each hemisphere, though decoding performance was best for contralaterally presented stimuli (in line with MacEvoy and Epstein, 2009). In our study, classification accuracy for the ipsilateral stimulus was relatively high (.92, compared with .85 as reported by Cichy et al., 2011), which could be a result of selecting more specialized voxels based also on ipsilateral stimulus presentation.
In natural settings we only rarely perceive isolated objects, as objects are mostly embedded in more complex scenes. Only a few studies have focused on what happens when two stimuli are presented together, taking into account differences between ipsi- and contralateral presentation. Our study therefore provides new insights into how visual scenes are processed in the separate hemispheres. Here we show that, whereas information from a singly presented ipsilateral stimulus is still available in higher visual areas, adding a contralateral object to the visual field largely 'occupies' visual information processing in the ventral stream. Apparently the higher visual system is dominantly dedicated to processing the contralateral hemifield, preventing stimulus information from crossing to the hemisphere ipsilateral to that stimulus, such that scene perception reinforces the contralateral bias effect. This effect appears to be so strong that even when training a classifier to recognize the ipsilaterally presented stimulus, using the voxels that were specific for the ipsilateral visual field, we still found a stronger representation of the contralateral stimulus.
These findings are in line with the results Sato (1989) obtained in monkey area TE, but also with behavioural results in humans showing differential processing of left and right presented stimuli (Hellige, 1993; Marsolek, 1995).
In one respect our results seem to contradict previous work by Reddy and Kanwisher (2007). They found that bilateral presentation of objects led to a strong decrease in spatial pattern information ('clutter cost'), especially when the objects belonged to a category without a specialized region in the brain. In our study we found good classification of both Faces and Chairs during bilateral presentation. This difference could be a consequence of the difference in ROI definition, or an effect of attention: in our study subjects attended both stimuli in the bilateral condition, whereas Reddy and Kanwisher instructed their participants to attend only one stimulus category (i.e., one hemifield per trial).
Relatedly, MacEvoy and Epstein (2009) also found near-perfect classification for singly presented objects, and suggested that with equally attended bilateral presentation a substantial part of the 'clutter cost' can be recouped when the response pattern of the conflicting stimulus is known. However, both Reddy and Kanwisher and MacEvoy and Epstein collapsed the left and right hemisphere in their ROI definition. Our results show that bilateral stimulus presentation enhances the contralateral bias effect, which means that each hemisphere is occupied with different information during bilateral stimulus presentation. A classifier's performance will drop to chance level when half of the voxels code for one object while the other half codes for the other object, so classifying contralateral Face versus Chair activity in an ROI encompassing both hemispheres will result in chance-level performance.
We checked whether the classification results for bilateral stimulus presentation were driven by the relative BOLD response amplitude across the most pronounced Face- and Chair-selective regions, but this was not the case. Thus the distributed pattern of activity used in MVPA provides detailed information about the content of activity in the ventral stream.
For this study we chose to define ROIs based on individual mapping, to incorporate individual differences and maximize the number of informative voxels for the MVPA. Since we used one dataset for both ROI definition and the analyses, we had to be cautious about 'double dipping' (Kriegeskorte et al., 2009). However, we used multi-cluster ROIs that were based on the contrast Face OR Chair > Scrambled, so our ROIs did not show an a priori bias towards one of the object categories. Additionally, in the bilateral MVPA we tested the classifier on runs with bilateral presentation that were completely independent of the unilateral runs used for ROI definition.
Another possible concern is that in the MVPA a classifier was always trained to decode either a Face or an Object, and to obtain good results it would be enough for the classifier to recognize only one category during training (i.e., Face versus not-Face instead of Face versus Chair). However, since we used voxels in the multi-cluster ROI that are responsive to both categories, we do not expect this to have influenced the results. Additionally, we see a clear switch towards the contralateral hemisphere for both categories in bilateral presentation compared with single ipsilateral presentation.
The data did not meet the assumptions of a t-test, so we chose the stricter non-parametric Wilcoxon test for dependent measures.
During data acquisition participants were instructed to maintain fixation, and we monitored their eye movements online. However, we did not explicitly control for possible eye movements away from the fixation cross. Since we see a strong contralateral bias effect during both uni- and bilateral stimulus presentation, we conclude that overall fixation was good.
Finally, these results show that scene perception is subject to a strong contralateral bias effect, not just in early visual areas but also in higher visual areas along the ventral stream. During unilateral stimulus presentation the ipsilateral hemisphere still seems to 'inherit' some information from the contralateral hemisphere, but with bilateral input each hemisphere becomes occupied with information from the contralateral hemifield. This insight into
how scene perception shows a contralateral bias effect in higher visual areas should be considered in future research. It would be interesting to gain more insight into the timing of the crossing of information from one hemisphere to the other, and into the point at which information from both hemispheres is integrated for coherent scene perception.
References
Chelazzi, L., Duncan, J., Miller, E. K., & Desimone, R. (1998). Responses of neurons in inferior temporal cortex during memory-guided visual search. Journal of Neurophysiology, 80(6), 2918-2940.
Cichy, R. M., Chen, Y., & Haynes, J. (2011). Encoding the identity and location of objects in human LOC. NeuroImage, 54, 2297-2307.
DiCarlo, J. J., & Cox, D. D. (2007). Untangling invariant object recognition. Trends in Cognitive Sciences, 11(8), 333-341.
DiCarlo, J. J., & Maunsell, J. H. R. (2003). Anterior inferotemporal neurons of monkeys engaged in object recognition can be highly sensitive to object retinal position. Journal of Neurophysiology, 89(6), 3264-3278.
Gross, C. G., Rocha-Miranda, C. E., & Bender, D. B. (1972). Visual properties of neurons in inferotemporal cortex of the macaque. Journal of Neurophysiology, 35(1), 96-111.
Haxby, J. V., Gobbini, M. I., Furey, M. L., Ishai, A., Schouten, J. L., & Pietrini, P. (2001). Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293, 2425-2430.
Hellige, J. B. (1993). Hemispheric asymmetry: What’s right and what’s left. Cambridge, MA: Harvard University Press.
Hemond, C. C., Kanwisher, N. G., & Op de Beeck, H. P. (2007). A preference for contralateral stimuli in human object- and face-selective cortex. PLoS ONE, 2(6), e574. doi:10.1371/journal.pone.0000574.
Kravitz, D. J., Kriegeskorte, N., & Baker, C.I. (2010). High-Level Visual Object Representations Are Constrained by Position. Cerebral Cortex, 20, 2916-2925.
Kriegeskorte, N., Simmons, W. K., Bellgowan, P. S. F., & Baker, C. I. (2009). Circular analysis in systems neuroscience: the dangers of double dipping. Nature Neuroscience, 12, 535-540.
MacEvoy, S. P., & Epstein, R. A. (2009). Decoding the representation of multiple simultaneous objects in human occipital cortex. Current Biology, 19, 943-947.
Marsolek, C. J. (1995). Abstract visual form representations in the left cerebral hemisphere. Journal of Experimental Psychology: Human Perception and Performance, 21, 375-386.
Merigan, W. H., & Saunders, R. C. (2004). Unilateral deficits in visual perception and learning after unilateral inferotemporal cortex lesions in macaques. Cerebral Cortex, 14, 863-871.
Op de Beeck, H., & Vogels, R. (2000). Spatial sensitivity of macaque inferior temporal neurons. The Journal of Comparative Neurology, 426, 505-518.
Reddy, L., & Kanwisher, N. (2007). Category selectivity in the ventral visual pathway confers robustness to clutter and diverted attention. Current Biology, 17, 2067-2072.
Riesenhuber, M., & Poggio, T. (2000). Models of object recognition. Nature Neuroscience, 3, 1199-1204.
Rocha-Miranda, C. E., Bender, D. B., Gross, C. G., & Mishkin, M. (1975). Visual activation of neurons in inferotemporal cortex depends on striate cortex and forebrain commissures. Journal of Neurophysiology, 38(3), 475-491.
Sato, T. (1989). Interactions of visual stimuli in the receptive fields of inferior temporal neurons in awake macaques. Experimental Brain Research, 77, 23-30.
Schwarzlose, R. F., Swisher, J. D., Dang, S., & Kanwisher, N. (2008). The distribution of category and location information across object-selective regions in human visual cortex. PNAS, 105(11), 4447-4452.