
UNIVERSITY OF AMSTERDAM

MASTER'S THESIS

Spatiotemporal Dynamics of the Mouse Brain in Response to Visual Stimuli

Author: Thijs BAAIJEN
Student Number: 10006281
Date: August 2017

Supervisors: Drs. Enny VAN BEEST, Dr. Areg BARSEGYAN
Examiners (UvA): Dr. Jeannette LORTEIJE, Dr. Simon VAN GAAL

A thesis submitted in fulfillment of the requirements for the degree of Master of Psychology: Brain & Cognition


University of Amsterdam

Abstract

Master of Psychology

Spatiotemporal Dynamics of the Mouse Brain in Response to Visual Stimuli

by Thijs BAAIJEN

We performed a functional analysis of visually responsive areas using wide-field calcium imaging and a passive viewing task in mice expressing GCaMP6f. We identified 12 unique visually responsive areas. Besides the visual cortices, we also identified retrosplenial areas and temporal association areas as visually responsive. Furthermore, among the areas directly adjacent to the primary visual cortices, we found that medially located areas responded more slowly to visual input than laterally located areas. No evidence was found for response time differences between areas previously linked to the dorsal and ventral streams of the visual pathway. These results show that a substantial part of the mouse brain is involved in visual information processing and highlight the potential of this technique for exploring the link between sensory input and brain activity.


Acknowledgements

The author would like to thank the Netherlands Institute for Neuroscience and the University of Amsterdam for making this study possible. Areg Barsegyan is thanked for his supervision of the animal experimentation and his expertise in animal surgery. Matt Self is thanked for his helpful input on the data analysis. Jeannette Lorteije is thanked for her role as examiner. Pieter Roelfsema is thanked for having me in his lab.

Special thanks to Enny van Beest for her role as supervisor during the entire project.


Contents

Abstract
Acknowledgements
1 Introduction
2 Methods
3 Results
4 Discussion


Chapter 1

Introduction

Whether it's by sight, hearing, taste, smell or touch, our behavior is guided in large part by sensory experiences. In other words, our brain is constantly processing information from the world around us. It is estimated that our brain consists of about 85 billion neurons [Azevedo et al., 2009]. Together, these relatively simple cells are somehow able to process all incoming sensory signals into useful information. A central goal in neuroscience is to find out how our brain accomplishes this, by exploring the link between sensory information, brain activity and resulting behavior. Many mammals, including humans, use vision to guide their behavior. Not surprisingly, the visual system has traditionally been a topic of great interest for scientists.

Within the visual domain of neuroscience, a central goal is to figure out how our brains process incoming light into a conscious percept. The early stages of visual processing are well documented. Starting at the retina, incoming light generates neural signals that represent the image reaching the eye. Retinal ganglion cells (RGCs) then carry visual information through the optic chiasm and the lateral geniculate nucleus of the thalamus (LGN) to the primary visual cortex (V1) [Kalat, 2009, Huberman and Niell, 2011]. Interestingly, the spatial arrangement of the retina is maintained throughout the visual pathway (i.e., it is retinotopically organized). Consequently, each neuron in the visual cortex has a receptive field, responding only to a portion of the visual field. Within its receptive field, a neuron in V1 is sensitive to specific features of visual stimuli (e.g., orientation, spatial frequency, edges). While this response selectivity can explain how we are able to distinguish simple visual features, it does not explain how these features are integrated and processed into the image we perceive.

A thorough understanding of the more advanced aspects of vision remains an important goal of visual neuroscience. Since these aspects are thought to involve a complex hierarchy of extrastriate visual areas [Felleman and Van Essen, 1991], communicating through a multitude of feedforward and feedback connections [Marshel et al., 2011], this is quite a challenge (see figure 1).

In order to understand how the complex visual network depicted in figure 1 gives rise to the experience of sight, it appears crucial to investigate both the functional specializations of the areas involved and the temporal aspects of information flow through the network. Brain imaging techniques like functional magnetic resonance imaging (fMRI) have provided valuable insights towards this goal, by revealing the functional specializations of numerous cortical visual areas [Tootell et al., 2003, Kanwisher, 2010]. However, fMRI has a relatively low temporal resolution (approximately 4 seconds [Meyer-Lindenberg, 2010]) compared to the relatively quick cascade of events associated with the various processing phases (on the order of a few milliseconds), e.g. as shown by [Roelfsema et al., 2007] in primates. Thus, while fMRI can be used to measure which areas are involved in a specific brain process, it is of little use if one wants to know when or in what order neural activity takes place.

FIGURE 1: The complex hierarchy of visual areas in the brain of a macaque monkey, as determined by [Felleman and Van Essen, 1991].

Vice versa, electrophysiological techniques possess the high temporal resolution needed to capture neural activity, but are usually lacking in the spatial domain. For example, electroencephalography (EEG) has a very low spatial resolution, whereas single- and multi-cell recording techniques have a high yet extremely focal resolution [Kalat, 2009]. Thus, while electrophysiological techniques can be used to measure when neural activity takes place, they are less useful for measuring where activity takes place at a given moment. By combining the results of multiple single-cell recording studies in meta-analyses [Lamme and Roelfsema, 2000], these spatial limitations can to some extent be circumvented. Moreover, progress has been made that allows multiple sites to be recorded at the same time [Buzsáki, 2004]. Still, it should be clear that such methods provide only a limited view of the spatial dynamics of neural activity, compared to the whole-brain analyses possible with brain imaging techniques.

Naturally, scientists have been looking for techniques that possess good resolution in both the temporal and the spatial domain. Recently, promising progress has been made with the development of functional brain imaging in mice using calcium fluorescence microscopy, which is capable of measuring neural signals with relatively good spatiotemporal resolution as well as monitoring cell-type-specific activity. The goal of the present study is to carefully examine the temporal characteristics of cortical areas involved in visual processing using this technique. Calcium fluorescence microscopy relies on the principle that action potentials induce a strong influx of calcium ions into the firing neurons [Looger and Griesbeck, 2012]. During action potentials, the influx of calcium ions occurs within microseconds [Kalat, 2009], which makes calcium transients very suitable for measuring the temporal dynamics of brain activity. The temporal resolution of calcium imaging techniques is therefore mainly dependent on the monitoring speed of calcium transients. Monitoring is done using so-called calcium indicators, molecules that emit fluorescent light upon binding to calcium ions. By capturing the emitted light with a video camera attached to a microscope, brain activity can be closely monitored over time.

A spatial limitation of calcium imaging, compared to fMRI and electrophysiology, is the limited depth at which recordings can be made, due to optical constraints. Subcortical structures are therefore difficult to assess. On the other hand, given the small size of the mouse brain (± 10 mm x 13 mm, dorsal-to-ventral view [Kovačević et al., 2005]), calcium imaging can be used to record activity both across the entire cortex and at the cellular level, using wide-field imaging [Murakami et al., 2015] or two-photon imaging [Mohammed et al., 2016], respectively. Both levels can even be studied in the same specimen (e.g., [Murakami et al., 2015]).


Early calcium indicators were synthetic dyes, which had several practical limitations and could only be used in acute experiments. From the 1990s onwards, genetically encoded calcium indicators (GECIs) have been developed, which circumvent many of the prior problems (for a detailed historical overview, see [Looger and Griesbeck, 2012]). For instance, unlike synthetic dyes, GECIs can be linked to specific cell populations and are capable of stable expression over months within an animal [Looger and Griesbeck, 2012]. Furthermore, GECIs are non-invasive, due to their heritability. In 2013, Chen et al. published results from the development of a new family of ultrasensitive GECIs called GCaMP6. The fastest variant of this family, GCaMP6f, has a temporal resolution of 45 ms ± 4 ms for single neural events, while it can detect recurring activity of neurons at a 50-75 ms interval [Chen et al., 2013].

Questions can be raised about the use of mice as a model for vision. For instance, mice view the world with a fraction (roughly 1/100th) of the resolution that humans possess [Niell and Stryker, 2008]. This low resolution is partly due to the lack of a fovea in the retina, a structure found in other mammals (including humans) that is specialized in high-contrast and high-acuity tasks. Furthermore, mouse eyes lack the long-wavelength cone, making them unable to perceive red light. In addition, mouse brains and eyes are relatively small, up to two orders of magnitude smaller than those of humans and other primates [Huberman and Niell, 2011]. Therefore, it appears that the mouse visual system is hardly comparable to the primate visual system.

On the other hand, many similarities have been found. First, the classical on- and off-center-surround receptive fields of RGCs have also been found in mice [Niell and Stryker, 2008]. Secondly, as in humans, neurons in mouse visual cortex are selectively responsive to stimulus features like orientation, direction and spatial frequency [Niell and Stryker, 2008]. Furthermore, multiple extrastriate visual areas have been identified [Wang and Burkhalter, 2007, Marshel et al., 2011, Andermann et al., 2011], suggesting the capability to process complex visual information (for an extensive review of the mouse as a model for vision, see [Huberman and Niell, 2011]). These findings suggest that, despite the differences, much can still be learned from the mouse as a model for vision.

In summary, compared to prior techniques, calcium fluorescence microscopy provides a novel and promising approach to investigating spatiotemporal dynamics of neural circuitry involved in processing visual information.

In the present study, we used calcium imaging to simultaneously investigate spatial and temporal aspects of visual processing in mice. Using a predefined map of cortical areas (i.e., the Common Coordinate Framework, [Allen Institute for Brain Science, 2015]), we first utilized the spatial resolution of the technique to determine which of the predefined areas could be considered visually responsive. Next, we explored temporal dynamics by comparing the relative latencies of visually responsive areas. Using this method, we expected to obtain valuable insights about mouse cortical visual processing, and to demonstrate the use of this technique for investigating the mouse visual system.


Chapter 2

Methods

Transgenic Mice

The study included four C57BL/6J Thy1-GCaMP6f transgenic mice. These mice express the green fluorescent calcium indicator GCaMP6f in excitatory neurons of the cortex. Mice were housed on a 12:12 reversed day/night cycle and tested during the dark phase. All procedures were approved by the Animal Ethics Committee (DEC) of the Netherlands Institute for Neuroscience and carried out in accordance with the DEC's animal welfare guidelines.

Mouse preparation for wide-field in vivo imaging

Before testing, mice underwent surgical preparation to implant a clear-skull cap and attach a titanium head-bar to the skull. Surgeries were performed in a sterile environment under adequate anesthesia (3% isoflurane), and adequate analgesia was provided (Metacam, 1 mg/kg). During surgery, the anesthetized animal's health was closely monitored by observing breathing rate, paw reflex and body temperature. First, the head was shaved and cleaned with iodine before lidocaine was applied as a local anesthetic. Then, an incision was made over the sagittal suture to allow the skin to be pulled aside and reveal the skull. After cleaning off any bone membrane, a thin layer of super glue (i.e., cyanoacrylate adhesive) was applied to the entire exposed skull to create a clear (i.e., see-through) skull.

FIGURE 2: Clear-skull cap and titanium head-bar (posterior), secured with dental cement.

A head-bar was placed on the occipital bone of the skull, so that the anterior edge of the head-bar was aligned just posterior to the occipital suture. A layer of primer and light-cured dental cement secured the head-bar to the skull (figure 2). With the head-bar in place, the clear-skull cap was strengthened by applying a thin layer of clear dental cement over the earlier applied super glue. Furthermore, to reduce light glare, a thin layer of clear nail polish was applied. Finally, a (removable) silicone cap was laid over the clear-skull cap for protection. After surgery, mice were given a full week to recover before testing started.


In vivo wide-field mouse imaging

The imaging procedure started by fixing the animal in a head-fixation setup using its head-bar. The silicone protection cap was removed and a custom-made plastic cone was placed over the mouse's head to prevent light leakage in the space between the skull and the objective. The head-fixed mouse was placed under the fluorescence microscope with the plastic cone directly under the objective. Before starting the experiment, the experimenter made sure there was no visible light leakage from the cone; if necessary, a bit of black clay was added to fill any remaining gaps.

FIGURE 3: Schematic drawing of the calcium fluorescence microscopy setup. Filtered blue light from a mercury lamp is directed at the brain, changing the fluorescence state of cells expressing GCaMP6f. The resulting emitted green light passes through the objective and a dichroic mirror setup, and is picked up by a high-speed camera (detector). Adapted from Fluorescence Microscopy, In Wikipedia, n.d., retrieved April 13, 2017, from https://en.wikipedia.org/wiki/Fluorescence_microscope. Image by Henry Mühlpfordt, distributed under a CC-BY 2.0 license.

Imaging was performed using a custom-built fluorescence microscope (Zeiss) and a high-speed sCMOS camera. Filtered blue light from a mercury lamp was directed at the brain to excite the cells expressing GCaMP6f, and the resulting emitted green light was picked up by the sCMOS camera through the 9.0x objective and a dichroic mirror setup (figure 3). Functional images (800 x 800 pixels) of the brain were recorded at 50 Hz.

Visual Stimulation

Visual stimuli consisting of a black-and-white checkerboard pattern and its inverse-colored counterpart were generated using the Cogent Graphics Toolbox in MATLAB [The MathWorks Inc., 2016] and presented on an LCD monitor (68 x 122 cm) placed 14 cm in front of the mouse's nose. The stimuli were spherically warped to account for the close viewing angle of the mouse. This way, from the mouse's perspective, the apparent size of the squares remained constant across the monitor [Allen Brain] (figure 4A). Trials started with a blank period of uniform gray (4000 ms), followed by the two stimuli (each 50 ms), and were concluded with another 4000 ms blank period (figure 4B). Image acquisition started 300 ms before and ended 500 ms after stimulus onset. A total of 50 trials was recorded in each session (figure 4C). Stimulus presentation and image acquisition were synchronized using a Time Domain Transmission (TDT) system, which marked the first frame recorded after stimulus onset.
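
As an illustration, the sketch below shows how the two stimulus frames could be constructed in MATLAB; the check size and monitor resolution are assumed values, and the spherical warping and Cogent Graphics presentation steps described above are omitted.

% Minimal sketch (not the original stimulus code): build a checkerboard
% pattern and its contrast-inverted counterpart. The check size and screen
% resolution are assumed values; spherical warping and presentation are omitted.
squarePx  = 80;                        % assumed size of one check, in pixels
screenRes = [768 1366];                % assumed monitor resolution [rows cols]

nRows  = ceil(screenRes(1) / squarePx);
nCols  = ceil(screenRes(2) / squarePx);
[r, c] = ndgrid(1:nRows, 1:nCols);
board  = mod(r + c, 2);                % 0/1 checkerboard at the check level

pattern = kron(board, ones(squarePx)); % expand each check to squarePx x squarePx pixels
pattern = pattern(1:screenRes(1), 1:screenRes(2));
inversePattern = 1 - pattern;          % contrast-inverted second stimulus

imagesc(pattern); colormap(gray); axis image;   % quick visual check of the first frame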


FIGURE 4: A: Example of the original checkerboard pattern (left) compared to the spherically warped version used in the experiment. B: Sequence of stimuli presented during each trial. C: 50 trials of images were acquired at 50 Hz, starting 300 ms before and ending 500 ms after stimulus onset.

Data analysis

Preprocessing

To account for possible positional shifts of the brain during recording (e.g., due to the mouse moving), images were spatially aligned using the ImageJ plugin "Align Slices in Stack". Further analysis was done in MATLAB (MathWorks, Natick, MA, USA). To define regions of interest (ROIs), we used the Allen Brain Common Coordinate Framework (CCF; figure 5A). As mentioned on the institute's website: "The Common Coordinate Framework was built by carefully averaging the anatomy of 1,675 [C57BL/6J] specimens from the Allen Mouse Brain Connectivity Atlas. Researchers used transgenic mouse lines and data from viral tracers to draw boundaries between 43 regions of the cortex. The end result is a template brain rendered faithfully in three dimensions, which serves as a useful guide to mouse brain anatomy" [Allen Institute for Brain Science, 2015]. Specifically, we used the top view of the CCF, from which we extracted a two-dimensional (2D) model (figure 5B). The resulting 57 areas (26 bilateral + 5 medial areas = 31 unique areas) depicted by the 2D model were used as ROIs in the analysis. Alignment of the model to the imaging data was guided by stereotaxic coordinates (bregma and lambda) as well as receptive field mapping data from a parallel study by our lab [Self, 2017, unpublished data] performed on the same mice (figure 5C).

The image acquisition speed of 50 Hz, starting 300 ms before and ending 500 ms after stimulus onset, resulted in 41 frames per trial. All frames were spatially smoothed using a rectangular mean filter (3 x 3 pixels). The baseline signal (F̄baseline) was defined as the average fluorescence signal in the 300 ms period prior to stimulus onset.

Changes in fluorescence over time were calculated for each pixel:

ΔF/F = (F(t) − F̄baseline) / F̄baseline,

where F(t) is the fluorescence at time t and F̄baseline is the average baseline fluorescence.


The resulting ΔF/F values were averaged across all 50 trials. Subsequently, we calculated the maximum value after stimulus onset (ΔF/Fpeak) per pixel. Pixels with a ΔF/Fpeak value that deviated less than three standard deviations from F̄baseline were considered too noisy and excluded from further analysis (ΔF/Fpeak < F̄baseline + 3 · σbaseline).
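
For illustration, the preprocessing above could be written as the following MATLAB sketch. The variable names, the array layout of frames, and the use of imfilter (Image Processing Toolbox) and implicit expansion (MATLAB R2016b or later) are assumptions; the exclusion criterion is applied here to the baseline portion of the ΔF/F trace, which is one possible reading of the threshold given above.

% Sketch of the per-pixel preprocessing described above. Assumes `frames`
% is an 800 x 800 x 41 x 50 array (y x x x frame x trial), acquired at
% 50 Hz with 15 baseline frames (300 ms) before stimulus onset.
nBaseline = 15;                              % frames in the 300 ms baseline window

% 3 x 3 rectangular mean filter applied to every frame of every trial.
kernel   = ones(3) / 9;
smoothed = imfilter(frames, kernel, 'replicate');   % Image Processing Toolbox

% Baseline: mean fluorescence over the pre-stimulus window, per pixel and trial.
Fbase = mean(smoothed(:, :, 1:nBaseline, :), 3);

% Delta F / F for every pixel, frame and trial; then average across trials.
dFF     = (smoothed - Fbase) ./ Fbase;       % implicit expansion (R2016b+)
dFFmean = mean(dFF, 4);                      % 800 x 800 x 41

% Exclude pixels whose post-stimulus peak stays within 3 SDs of their baseline.
muBase     = mean(dFFmean(:, :, 1:nBaseline), 3);
sigmaBase  = std(dFFmean(:, :, 1:nBaseline), 0, 3);
peakDFF    = max(dFFmean(:, :, nBaseline+1:end), [], 3);
responsive = peakDFF >= muBase + 3 * sigmaBase;   % logical mask of kept pixels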

FIGURE 5: From the top-view perspective of the Allen Brain Common Coordinate Framework (A), we extracted a two-dimensional (2D) model (B). To define regions of interest, the 2D model was aligned with the imaging data using receptive field mapping data (C).

Visually Responsive Areas

To determine whether an ROI could be considered visually responsive as a whole, the time series data of all (responding) pixels within the ROI were averaged. Next, a cumulative Gaussian distribution function was fitted to the time series between t0 (stimulus onset) and tpeak (the highest value after t0). ROIs were considered visually responsive if the fitted function explained 95% or more of the data's variance (R² ≥ 0.95).
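
One way to implement this criterion is sketched below, fitting a scaled cumulative Gaussian with base-MATLAB fminsearch; the parameterization, starting values and the use of normcdf (Statistics and Machine Learning Toolbox) are illustrative assumptions rather than the original fitting routine.

% Sketch: fit a scaled cumulative Gaussian to an ROI-averaged trace between
% stimulus onset and the post-onset peak, then apply the R^2 >= 0.95 criterion.
% `roiTrace` (1 x 41 mean dF/F) and `t` (1 x 41 time in ms) are assumed inputs.
t0idx   = 16;                                   % first post-onset frame (assumed)
[~, pk] = max(roiTrace(t0idx:end));             % peak index relative to onset
idx     = t0idx:(t0idx + pk - 1);
x = t(idx);
y = roiTrace(idx);

% Model: amplitude * normcdf(x, mu, sigma) + offset.
model = @(p, x) p(1) * normcdf(x, p(2), p(3)) + p(4);
sse   = @(p) sum((y - model(p, x)).^2);         % sum of squared errors
p0    = [max(y) - min(y), mean(x), 20, min(y)]; % rough starting values
pHat  = fminsearch(sse, p0);

% Variance explained by the fit.
yFit = model(pHat, x);
R2   = 1 - sum((y - yFit).^2) / sum((y - mean(y)).^2);
isResponsive = R2 >= 0.95;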

Latency Analysis

For all visually responsive ROIs, response latencies were calculated from the mean (µcdf) and standard deviation (σcdf) of the fitted cumulative Gaussian: tlatency = µcdf − σcdf.

To minimize the role of individual differences, we first normalized latencies to the average V1 latency of each mouse: the latency of each region was divided by that mouse's average V1 latency. Since the resulting ratios were less intuitive to interpret, they were subsequently multiplied by V1grand, the average V1 latency across hemispheres and mice.
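
Continuing the sketch above, the latency definition and the V1 normalization could look like this; pHat comes from the fit sketched earlier, and the bookkeeping variables lat and v1lat are hypothetical.

% Latency of one ROI from the fitted cumulative Gaussian: mean minus one SD.
tLatency = pHat(2) - pHat(3);                % in ms

% Normalization: divide each region's latency by that mouse's mean V1 latency,
% then rescale by the grand-average V1 latency. `lat` is an assumed
% [nMice x nRegions] matrix and `v1lat` an [nMice x 1] vector of per-mouse
% mean V1 latencies.
V1grand       = mean(v1lat);                 % grand average across mice
latNormalized = (lat ./ v1lat) * V1grand;    % implicit expansion (R2016b+)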

We performed statistical analyses on two predefined group configurations. First, visual areas directly adjacent to V1 were assigned to one of two categories, Medial or Lateral. Medial included the posteromedial (PM), anteromedial (AM) and anterior (A) visual areas; Lateral included the rostrolateral (RL), anterolateral (AL) and lateromedial (LM) visual areas (figure 6A & B). The latency of a category was defined as the mean latency across the included areas. To increase our small sample size, we treated the latencies of each hemisphere as separate observations, resulting in 8 data points per category (4 mice x 2 hemispheres). We then compared the means of the two categories using a paired t-test (design: 8 x 2).

An alternative group configuration was also explored, dividing the areas surrounding V1 into the categories Ventral and Dorsal. This configuration was directly based on a list provided by [Smith et al., 2017], derived from recent anatomical studies in mice that have linked these areas to ventral and dorsal visual pathways, respectively, as seen in primates [Wang et al., 2011, Wang et al., 2012]. Ventral included LM and the laterointermediate (LI) area; Dorsal included PM, AM, A, RL and AL (figure 6A & C).
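
A minimal sketch of the two grouped comparisons, assuming the normalized latencies have been arranged into an 8 x nAreas matrix latHemi (rows: 4 mice x 2 hemispheres) and that the column index vectors for the medial, lateral, dorsal and ventral areas are defined elsewhere; ttest is the Statistics and Machine Learning Toolbox paired t-test.

% Paired comparison of medial vs. lateral group latencies. Rows of latHemi
% are the 8 observations (4 mice x 2 hemispheres); columns are areas.
% medialIdx, lateralIdx, dorsalIdx and ventralIdx are assumed column indices.
medialLat  = mean(latHemi(:, medialIdx),  2);      % 8 x 1 per-observation means
lateralLat = mean(latHemi(:, lateralIdx), 2);
[~, pML, ~, statsML] = ttest(medialLat, lateralLat);   % paired t-test, df = 7
fprintf('Medial vs. lateral: t(%d) = %.2f, p = %.4f\n', ...
        statsML.df, statsML.tstat, pML);

% The dorsal vs. ventral comparison follows the same pattern.
dorsalLat  = mean(latHemi(:, dorsalIdx),  2);
ventralLat = mean(latHemi(:, ventralIdx), 2);
[~, pDV]   = ttest(dorsalLat, ventralLat);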

FIGURE 6: Group configurations used in the analysis. A: Areas surrounding V1. B: Medial (dark gray) areas included PM, AM and A; lateral (white) areas included RL, AL and LM. C: Dorsal (dark gray) areas included PM, AM, A, RL and AL; ventral (white) areas included LM and LI.


Chapter 3

Results

We ran two kinds of group analyses on the data. The first analysis included all regions depicted by the 2D Allen Brain model, to see which areas could be considered visually responsive. The second analysis was performed on predefined groups of regions surrounding the primary visual cortex, to compare latencies between these groups.

Visually Responsive Areas

We investigated the visual response properties of all ROIs depicted by the 2D overlay model. Before doing so, we filtered out pixels that were not visually responsive. On average, 30% of the pixels were removed by the filtering process, though the variability between mice was quite high (σ = 20%, see also figure 7).

FIGURE 7: Visual representation of the filtering results in each of the four mice. Black pixels were excluded from further analysis.

Subsequently, we used a cumulative Gaussian distribution, fitted to the average time series data of each ROI, to determine which areas could be considered visually responsive (figure 8).

23 of 31 unique areas showed up as visually responsive in at least one mouse; 18 in at least two; 14 in at least three; 6 unique visually responsive ROIs were found in all four mice (figure 9).

Latency Analysis

Given the great number of visually responsive ROIs, here we report only the most robust results: ROIs that showed up as visually responsive in only one or two mice were excluded. Moreover, we only included medially positioned ROIs and lateral ROIs that showed up as visually responsive in both hemispheres. These 12 unique areas (i.e., 2 medial areas + 10 bilateral areas) were subject to the latency analysis.


FIGURE 8: Examples of the fitted cumulative Gaussian distribution used to determine which areas could be considered visually responsive. From left to right: primary motor cortex, barrel cortex and primary visual cortex (V1). The blue line depicts the average ΔF/F values of the pixels within the ROI (dark blue). The red line depicts the result of the cumulative Gaussian fit. Based on our criterion (R² ≥ 0.95), only V1 is considered visually responsive in this example.

FIGURE 9: Visually responsive ROIs. 23 of the 31 unique areas showed up as visually responsive in at least one mouse; 18 in at least two; 14 in at least three; 6 unique visually responsive areas were found in all four mice.

General Analysis

Region latencies were normalized and multiplied by V1grand (110.8 ms). The obtained latency differences between contralateral regions were very small. Therefore, for reasons of brevity and clarity, the results for contralateral regions in table 1 are averaged (see figure 10A for a visual representation of the results).

Lateral vs. Medial and Ventral vs. Dorsal

A paired-sample t-test was conducted to compare latencies in medial and lateral regions. There was a significant difference between medial regions (µ = 120.3 ms, σ = 5.7 ms) and lateral regions (µ = 111.5 ms, σ = 2.4 ms); t(7) = 3.76, p = 0.0071. These results suggest that visual areas medial to V1 respond more slowly to visual stimuli than more laterally positioned visual areas.

Similarly, a paired-sample t-test was used to compare latencies in dorsal and ventral regions. Here, no significant difference was found between dorsal regions (µ = 116.2 ms, σ = 3.5 ms) and ventral regions (µ = 115.1 ms, σ = 3.1 ms).


TABLE 1: Response Latencies. The latencies of contralateral regions are averaged.

Region                       Label    Latency
Visual areas
  Primary                    V1       110.8 ms
  Anterior                   A        115.3 ms
  Anterolateral              AL       112.3 ms
  Anteromedial               AM       123.6 ms
  Lateromedial               LM       113.1 ms
  Laterointermediate         LI       116.6 ms
  Posteromedial              PM       122.0 ms
  Rostrolateral              RL       109.2 ms
Retrosplenial areas
  Ventral                    RSPv     134.4 ms
  Dorsal                     RSPd     130.5 ms
  Lateral agranular          RSPagl   129.2 ms
Other areas
  Temporal association areas TEa      116.0 ms

FIGURE 10: Visual representation of the latencies of all 22 areas included in the latency analysis (A) and the areas included in the grouped latency analysis (B).


Chapter 4

Discussion

In the present study, we performed a functional analysis of visually responsive areas using wide-field optical fluorescence imaging in mice expressing GCaMP6f and a two-dimensional model based on the Allen Brain Atlas CCF. We found that 12 of the 31 predefined areas reliably showed up as visually responsive. Besides visual cortex areas, we also identified retrosplenial areas as visually responsive. This finding is in line with previous work [Murakami et al., 2015].

Additionally, we identified the temporal association areas (TEa) as visually responsive. However, given their lateral position, this finding remains somewhat questionable, for two reasons. First, the CCF from the Allen Institute for Brain Science is based on averaged data from a large sample of mice. Brain areas of the individual mice used in this study might therefore be misaligned to the CCF, due to individual differences in shape and size. On the other hand, volume variability of brain structures within a mouse strain is likely to be minimal [Kovačević et al., 2005]. Still, especially the more frontal and lateral areas might have been incorrectly classified as visually responsive (or unresponsive), since the positioning of the 2D model was primarily guided by retinotopic mapping data from the visual cortex. Secondly, the 2D CCF model does not take into account that the three-dimensional shape of the mouse brain is folded downwards at the sides. Laterally positioned structures might therefore be difficult to see from a top-view perspective. Nevertheless, the current results show that a substantial part of the mouse brain is involved in visual information processing.

Among the areas directly adjacent to the primary visual cortices, also known as extrastriate areas, we found that medially located areas responded more slowly to visual input than laterally located areas. While one can only speculate about the functional implications of this distinction, we can safely conclude that the mouse brain processes visual stimuli through a sequence of activations in multiple visual areas. This finding therefore supports the belief that "mouse extrastriate cortex has multiple tiers for higher level visual processing" [Huberman and Niell, 2011].

As mentioned, anatomical studies have identified two networks of extrastriate mouse areas that appear to resemble the ventral and dorsal pathways of vision seen in primates [Wang et al., 2011, Wang et al., 2012]. In the present study, no evidence was found for a difference between the average response latencies of areas in the dorsal and ventral streams. This finding appears to be somewhat in contrast with previous findings in primates and humans: various behavioral experiments have typically found the dorsal route to be faster than the ventral route [Pisella et al., 1998]. Of course, behavioral findings do not necessarily need to be reflected at the neural level: a slow but short-lasting neural response could, in theory, still be quicker on a behavioral level than a fast but longer-lasting neural response. More compelling arguments for an expected neural latency difference originate from single-unit recordings in primates: the subcortical magnocellular pathway, which mainly outputs to the dorsal stream, is faster than the parvocellular pathway, which predominantly outputs to the ventral stream [Maunsell et al., 1999]. The current results therefore seem to suggest that the mouse dorsal/ventral systems might be organized differently from those in primates, questioning the comparability of mouse and primate visual systems.

However, before embracing this conclusion, a few things should be considered. For instance, here we took a standard approach and defined dorsal/ventral latency as the average latency of all areas within each network. It could also be argued that the minimal latency within each network should be taken as the network's latency, since that definition would better reflect the latencies of the incoming subcortical pathways. On the other hand, arguments could equally be made for taking the latency of the slowest areas, since those latencies reflect the total time it takes for incoming visual information to pass through the extrastriate network. Depending on one's definition of dorsal/ventral latency, different conclusions could therefore be drawn from the current results. Another factor to take into consideration is that, at the time of writing, data from only four mice were available for analysis. Thus, although we doubled the sample size by treating each hemisphere as a separate observation, caution should be exercised when interpreting these results as representative of the population mean.

In summary, we demonstrate that a substantial part of the mouse brain is involved in visual processing by linking anatomically predefined areas of the mouse cortex to functional imaging data. Furthermore, our findings suggest that the mouse cortex processes visual information in multiple stages, as indicated by the latency differences we report. Additionally, the present study highlights the potential of wide-field calcium fluorescence imaging as a tool for assessing spatiotemporal characteristics of the mouse brain. The current results show that the technique is capable of identifying stimulus-responsive areas and detecting relative latency differences between them. Moreover, by combining imaging data with existing atlas data, we were able to produce clear activity maps of the mouse brain. Given the pioneering nature of this study, we used relatively simple visual stimuli and a passive task design. Of course, future studies need not be limited to such a simple design and could also target other sensory modalities (e.g., hearing, taste, smell, touch). Furthermore, the accompanying techniques and surgical preparations are constantly being improved (e.g., see the crystal skull technique by [Kim et al., 2016]), increasing the resolutions at which recordings can be performed. Therefore, calcium imaging likely has a bright (pun intended) future ahead as a tool for exploring the link between sensory information and brain activity.


Bibliography

[Allen Institute for Brain Science, 2015] Allen Institute for Brain Science (2015). Allen Mouse Common Coordinate Framework, Available from: http://help.brain-map.org/display/mousebrain/Documentation.

[Andermann et al., 2011] Andermann, M. L., Kerlin, A. M., Roumis, D. K., Glickfeld, L. L., and Reid, R. C. (2011). Functional specialization of mouse higher visual cortical areas. Neuron, 72(6):1025–1039.

[Azevedo et al., 2009] Azevedo, F. A. C., Carvalho, L. R. B., Grinberg, L. T., Farfel, J. M., Ferretti, R. E. L., Leite, R. E. P., Filho, W. J., Lent, R., and Herculano-Houzel, S. (2009). Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain. Journal of Comparative Neurology, 513(5):532–541.

[Buzsáki, 2004] Buzsáki, G. (2004). Large-scale recording of neuronal ensembles. Nature neuroscience, 7(5):446–51.

[Chen et al., 2013] Chen, T.-W., Wardill, T. J., Sun, Y., Pulver, S. R., Renninger, S. L., Baohan, A., Schreiter, E. R., Kerr, R. A., Orger, M. B., Jayaraman, V., Looger, L. L., Svoboda, K., and Kim, D. S. (2013). Ultrasensitive fluorescent proteins for imaging neuronal activity. Nature, 499(7458):295–300.

[Felleman and Van Essen, 1991] Felleman, D. J. and Van Essen, D. C. (1991). Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex, 1(1):1–47.

[Goodale and Milner, 1992] Goodale, M. A. and Milner, A. D. (1992). Separate Visual Pathways for Perception and Action. Essential Sources in the Scientific Study of Consciousness, 15(1):20–25.

[Huberman and Niell, 2011] Huberman, A. D. and Niell, C. M. (2011). What can mice tell us about how vision works?

[Kalat, 2009] Kalat, J. (2009). Biological Psychology. Nelson Education.

[Kanwisher, 2010] Kanwisher, N. (2010). Functional specificity in the human brain: a window into the functional architecture of the mind. Proceedings of the National Academy of Sciences of the United States of America, 107(25):11163–11170.

[Kim et al., 2016] Kim, T. H., Zhang, Y., Jung, J. C., Li, J., and Zeng, H. (2016). Long-Term Optical Access to an Estimated One Million Neurons in the Live Mouse Cortex. Cell Reports, 17(Dec 20):3385–3394.

[Kovačević et al., 2005] Kovačević, N., Henderson, J. T., Chan, E., Lifshitz, N., Bishop, J., Evans, A. C., Henkelman, R. M., and Chen, X. J. (2005). A three-dimensional MRI atlas of the mouse brain with estimates of the average and variability. Cerebral Cortex, 15(5):639–645.


[Lamme and Roelfsema, 2000] Lamme, V. A. and Roelfsema, P. R. (2000). The dis-tinct modes of vision offered by feedforward and recurrent processing.

[Looger and Griesbeck, 2012] Looger, L. L. and Griesbeck, O. (2012). Genetically encoded neural activity indicators.

[Marshel et al., 2011] Marshel, J. H., Garrett, M. E., Nauhaus, I., and Callaway, E. M. (2011). Functional specialization of seven mouse visual cortical areas. Neuron, 72(6):1040–1054.

[Maunsell et al., 1999] Maunsell, J. H. R., Ghose, G. M., Assad, J. A., McAdams, C. J., Boudreau, C. E., and Noerager, B. D. (1999). Visual response latencies of magnocellular and parvocellular LGN neurons in macaque monkeys. Visual Neuroscience, 16(1):1–14.

[Meyer-Lindenberg, 2010] Meyer-Lindenberg, A. (2010). From maps to mechanisms through neuroimaging of schizophrenia. Nature, 468(7321):194–202.

[Mohammed et al., 2016] Mohammed, A. I., Gritton, H. J., Tseng, H.-A., Bucklin, M. E., Yao, Z., and Han, X. (2016). An integrative approach for analyzing hundreds of neurons in task performing mice using wide-field calcium imaging. Scientific Reports, 6:20986.

[Murakami et al., 2015] Murakami, T., Yoshida, T., Matsui, T., and Ohki, K. (2015). Wide-field Ca2+ imaging reveals visually evoked activity in the retrosplenial area. Frontiers in Molecular Neuroscience, 08(June):1–12.

[Niell and Stryker, 2008] Niell, C. M. and Stryker, M. P. (2008). Highly selective receptive fields in mouse visual cortex. J Neurosci, 28(30):7520–7536.

[Pisella et al., 1998] Pisella, L., Arzi, M., and Rossetti, Y. (1998). The timing of color and location processing in the motor context. Experimental Brain Research, 121(3):270–276.

[Roelfsema et al., 2007] Roelfsema, P. R., Tolboom, M., and Khayat, P. S. (2007). Different processing phases for features, figures, and selective attention in the primary visual cortex. Neuron, 56(Dec 6):785–792.

[Schenk and McIntosh, 2010] Schenk, T. and McIntosh, R. D. (2010). Do we have independent visual streams for perception and action? 37–41.

[Smith et al., 2017] Smith, I. T., Townsend, L. B., Huh, R., Zhu, H., and Smith, S. L. (2017). Stream-dependent development of higher visual cortical areas. Nature Neuroscience, 20(2).

[The MathWorks Inc., 2016] The MathWorks Inc. (2016). MATLAB 2016b. The Math-Works Inc., Natick, Massachusetts.

[Tootell et al., 2003] Tootell, R. B. H., Tsao, D., and Vanduffel, W. (2003). Neuroimaging weighs in: humans meet macaques in "primate" visual cortex. The Journal of Neuroscience, 23(10):3981–3989.

[Ungerleider and Haxby, 1994] Ungerleider, L. and Haxby, J. V. (1994). 'What' and 'where' in the human brain. Current Opinion in Neurobiology, 4(2):157–165.


[Wang and Burkhalter, 2007] Wang, Q. and Burkhalter, A. (2007). Area map of mouse visual cortex. Journal of Comparative Neurology, 502(3):339–357.

[Wang et al., 2011] Wang, Q., Gao, E., and Burkhalter, A. (2011). Gateways of Ventral and Dorsal Streams in Mouse Visual Cortex. Journal of Neuroscience, 31(5):1905–1918.

[Wang et al., 2012] Wang, Q., Sporns, O., and Burkhalter, A. (2012). Network Analysis of Corticocortical Connections Reveals Ventral and Dorsal Processing Streams in Mouse Visual Cortex. Journal of Neuroscience, 32(13):4386–4399.
