
Research Project I

The Ability to use Music for Diagnosis of

Sensory Processing Deficits

a systematic review

Sara Hollander

11430443

January 2017 – August 2017, 32 EC

Universitair Medisch Centrum Utrecht (UMCU)

Supervisor: dr. Hilgo Bruining

UvA Representative: dr. Heleen Slagter
UvA Co-assessor: dr. Heleen Slagter
MSc Brain and Cognitive Sciences, University of Amsterdam


Introduction

Sensory processing deficits are seen in a variety of disorders, ranging from schizophrenia, Parkinson’s disease and stroke to epilepsy and autism spectrum disorder (Baum, Stevenson & Wallace, 2015; Boecker et al., 1999; Campfens et al., 2015; Javitt & Freedman, 2015; Van Campen et al., 2015). Even though the prevalence of these deficits is high, diagnostic tools for them are scarce. In this review, new possibilities for measuring sensory processing are discussed, and a proposal for their practical use is considered.

Here, sensory processing is regarded as the unconscious reaction to stimuli before the conscious application of this information to higher-order processes. It is the ability to analyze and integrate incoming sensory information in the brain, and the basis for creating a coherent representation of the surrounding world. This integration of information has been argued to be an essential element in the composition of higher-order processes, such as cognitive abilities and representations (Figure 1; Baum et al., 2015), and ultimately in a person’s understanding of the world. Integration in this context refers to the consolidation of electro-potentials from different sensory stimuli in the brain, creating one information ‘current’ incorporating all other information. The basis of this integration is unisensory processing, the transduction of individual senses, which feeds into multisensory integration and affects perceptual and cognitive processing and behavioral output (Baum et al., 2015). The direct link between multisensory integration and behavioral output is observable in reaction-time tasks, in which participants respond significantly faster when preceded by a multisensory cue than by a unisensory cue (Sperdin, Cappe, Foxe & Murray, 2009).

These behavioral outputs can also be called real-time behaviors: behaviors based on “…mental actions immediately prior to behavioral performance and thus directly influencing social behavior.” These mental actions correspond to the blocks depicted in Figure 1, and start with sensory processing. Since real-time behaviors are based on immediate unconscious reactions to stimuli, they can be considered a more accurate measure of sensory processing than latent behaviors such as attitudes or beliefs, which are influenced by higher cognitive functions (Fontaine & Dodge, 2006).
Thus, sensory processing is an important underlying process that paves the way for higher-order pathways and, ultimately, real-time behavior; the ability to measure it could help explain certain differences in behavior between people.

Figure 1. The construction of higher-order processes. Each block influences the block above it, but sensory processing appears to influence all other processes, not just multisensory integration. Adapted from Baum et al. (2015).


Currently, it is a challenge to properly measure sensory processing in a person, due to the different levels of measurement that can be used, ranging from neurophysiological and cognitive measurements to behavioral assessments. One of the most widely used measures of sensory processing is the Sensory Profile questionnaire (SP; Dunn & Westman, 1995). This questionnaire measures the responses and associated behaviors of children to a range of sensory stimuli (Dunn & Westman, 1997; Ermer & Dunn, 1997). An example statement is “My child cries often,” to which the answer ranges from ‘never’ to ‘always’. However, such a statement can only measure a general behavior, as it asks about the child’s typical behavior. These behaviors are not linked to one specific event, but happen regularly, like habits. Thus, general behaviors are not the same as real-time behaviors, as they reflect the ‘average’ of a behavior rather than the various mental actions that underlie a specific behavior and can alter an action. It can therefore be reasoned that these types of questionnaires reflect aware or conscious appraisal and planned responses, and do not measure a person’s true sensory processing, especially since there is no focus on the unconscious reactions that are evident in real-time behavior (Figure 2). So, how could sensory processing be measured in real time?
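To make concrete why questionnaire scores capture only ‘average’ behavior, the scoring logic of such an instrument can be sketched as follows; the item texts, the five-point ‘never’ to ‘always’ scale values, and the example responses are illustrative assumptions, not the actual Sensory Profile scoring key:

```python
# Hedged sketch of scoring a Sensory-Profile-style Likert questionnaire.
# Item wording, scale values and responses are hypothetical, for illustration only.

SCALE = {"never": 1, "seldom": 2, "occasionally": 3, "frequently": 4, "always": 5}

def score_responses(responses):
    """Sum Likert responses; higher totals mean more frequently reported behaviors."""
    return sum(SCALE[answer] for answer in responses.values())

responses = {
    "My child cries often": "occasionally",
    "My child is distressed by loud sounds": "frequently",
    "My child avoids certain textures": "seldom",
}
total = score_responses(responses)  # 3 + 4 + 2 = 9
```

Each total aggregates many remembered events into a single number, with no link to any specific stimulus or moment; this is exactly the gap between general and real-time behavior described above.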

Figure 2. Schema showing the research gap in measurements of sensory processing.

Music is a promising tool to assess the relation between sensory processing and real-time behavioral responses. It has previously been shown that music has a direct positive effect on a person’s emotional expression, social behavior, focus and attention (Whipple, 2004), and specific elements of music, such as rhythm, have been shown to enhance planning and organizing (Thaut, 2005). Additionally, several studies argue that music is an evolutionary adaptation and a cultural form unique to humans, emerging early in development (Trainor, 2015), which may mean that it is processed automatically.


The temporal features of sound in particular are likely subject to automatic processing: several studies have shown that rhythm perception appears to be innate in human beings (Fujioka, Mourad & Trainor, 2011; Hepper, 1991; Kisilevsky & Low, 1998; Trehub, 2003), and research by Bouwer (in press) indicates that even when participants are not asked to focus on rhythm, their brain still registers and processes it unconsciously. Music could thus be seen as a means to generate real-time behavior through its temporal features.

Given that sensory processing underlies higher-order processing and behavioral output and is centered on unconscious reactions to stimuli, and that music appears to influence a person’s real-time behavior, it can be hypothesized that the processing of sensory stimuli related to music leads to real-time behavior. Furthermore, if temporal features such as rhythm are processed automatically and unconsciously in the brain, then the link between sensory processing and music seems evident: sensory processing influences the automatic processing of temporal features in music, which serve as the mental actions directly underlying real-time behavioral output. This paper aims to answer the following question: “How are the sensory, cognitive and motor processing pathways involved in music processing, especially with regard to the musical features rhythm and pitch?”

The goal of this paper is to integrate studies of sensory, motor and cognitive processing in music and create an informed platform of knowledge that can be applied to new diagnostic methods for sensory processing deficits. Specifically, Autism Spectrum Disorder will be used as an example to exhibit the practicality of the acquired and summarized knowledge.

Methods

Search strategy

A systematic search was conducted by two independent researchers, S.H. and J.S. First, the search strategy was defined (see supplementary table 1). Following this, articles were retrieved from several databases (PubMed, Embase and Cochrane), covering January 2000 up until March 2017. After checking for duplicate articles, around half of the obtained articles were screened on title and abstract by both researchers (S.H. and J.S.), and discrepancies were resolved by consensus. The other half was screened by S.H. only. Articles written in English, with a focus on music (processing) or auditory processing and examining the brain activity underlying several musical processes, were included in full-text analysis. If there was uncertainty about meeting the inclusion criteria, the article was still included in the full-text analysis to avoid the omission of relevant articles. During the full-text analysis, articles were excluded if the focus was not on music but, for example, on speech or emotion; if the musical aspect was too complex (i.e. musical training, improvisation or playing); if animal models were used; if methods were tested for accuracy; or if the experimental groups consisted of participants with systemic disorders (supplementary table 2).
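The deduplication and split-screening workflow described above can be sketched in a few lines; the record fields and the even split rule are simplifying assumptions for illustration:

```python
# Sketch of the screening workflow: remove duplicates, then screen roughly
# half of the records in duplicate (both reviewers) and half singly.

def deduplicate(records):
    """Drop exact duplicates by (title, year), keeping the first occurrence."""
    seen, unique = set(), []
    for rec in records:
        key = (rec["title"].lower(), rec["year"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

def split_for_screening(records):
    """First half screened by both reviewers, second half by one."""
    mid = len(records) // 2
    return records[:mid], records[mid:]

records = [
    {"title": "Rhythm in the brain", "year": 2010},
    {"title": "Rhythm in the Brain", "year": 2010},  # duplicate (case differs)
    {"title": "Pitch processing", "year": 2012},
    {"title": "Beat perception", "year": 2015},
]
unique = deduplicate(records)                 # 3 unique records
dual, single = split_for_screening(unique)    # 1 dual-screened, 2 single-screened
```

In practice, reference managers perform fuzzier duplicate matching (e.g. on DOI or normalized titles), but the logic is the same.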


Data extraction

Study characteristics were extracted from the articles by the first researcher. The information documented included the purpose of the study, the design of the study, the specific music characteristic (rhythm, pitch and timbre) addressed, the use of control groups, and the results or conclusions. The studies were divided in two groups: those that assessed music processing and auditory processing generally, and those that assessed specific characteristics of music, such as rhythm, pitch or timbre.

Results

We retrieved a total of 1189 references from our literature search, of which 7 were duplicates. Of the remaining 1182 references, 341 articles met the inclusion criteria based on title and abstract screening, and 109 studies further met the inclusion criteria based on full-text screening (supplementary figure 1).
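As a quick sanity check, the reported screening funnel is internally consistent:

```python
# Consistency check of the reported screening numbers.
retrieved = 1189
duplicates = 7
after_dedup = retrieved - duplicates   # 1182
included_title_abstract = 341
included_full_text = 109

assert after_dedup == 1182
assert included_title_abstract <= after_dedup
assert included_full_text <= included_title_abstract
```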

Definitions

Music processing is a complex phenomenon (Figure 3). First, music can encompass a wide spectrum of definitions. In this review, music will be defined as a set of sounds that is defined and organized by specific musical features based on internal rules. Furthermore, music processing will be defined as the neurobiological processes that start once an external auditory stimulus has entered the cochlea, up until the generation of a behavior in the motor cortex. Here, the neurobiological processes refer to inter-neural network communication through synaptic input and the effect of action potentials on the strength of this communication. These processes shape cognitive and motor processes or behavioral output.

Music also comprises different elements. First of all, rhythm is “…the pattern of time intervals in a stimulus sequence” (Grahn, 2012), or the temporal distribution of auditory stimuli. Rhythm is a human construct, a composition of organized elements based on the concept of time (Thaut, Trimarchi & Parsons, 2014). Furthermore, listening to this pattern of intervals in time often gives rise to a beat, or a sense of pulse, which can be seen as the perception of rhythm. For example, the sounds that one would clap to when hearing music are the beat. The beat has a certain speed, referred to as tempo. The temporal organization of the beat is called meter: the repeated pattern of all accents in a beat, both stressed and unstressed. Meter determines which beats are stronger or weaker than others in a sequence (Epstein, 1995). Furthermore, if the above-mentioned processes are synchronized to an external auditory cue, this is called entrainment (Grahn, 2012; Molinari, Leggio, Martin, Cerasa & Thaut, 2003; Thaut, McIntosh & Hoemberg, 2014): the alignment of an internal rhythm to an external rhythm. An example is the tapping of a foot when listening to a song; likewise, when a musician holds his or her audience spellbound, the audience is entrained to the musician’s music and rhythm. These examples clearly show that sensory information, the musical stimuli, is translated into motor behavior. Lastly, pitch is defined as sound frequency, or more specifically the organization of a sound along a scale from low to high frequency (Alluri et al., 2012).
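The definitions above can be made concrete in a short sketch: tempo follows from a rhythm’s inter-onset intervals, and pitch can be placed on a low-to-high frequency scale, here via the standard equal-temperament mapping from note number to frequency (the onset times are invented for illustration):

```python
# Sketch: deriving tempo from a rhythm's inter-onset intervals, and
# mapping a pitch on a musical scale to its frequency in Hz.

def tempo_bpm(onsets):
    """Mean inter-onset interval (seconds) converted to beats per minute."""
    intervals = [b - a for a, b in zip(onsets, onsets[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

def midi_to_hz(note):
    """Equal temperament: A4 (MIDI note 69) = 440 Hz."""
    return 440.0 * 2 ** ((note - 69) / 12)

onsets = [0.0, 0.5, 1.0, 1.5, 2.0]   # a steady beat every 0.5 s (made up)
print(tempo_bpm(onsets))             # 120.0 BPM
print(midi_to_hz(69))                # 440.0 Hz (A4)
print(midi_to_hz(60))                # ~261.63 Hz (middle C)
```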

Figure 3. A schematic representation of music architecture and sensory processing elements.

Auditory processing: how is sound translated from an external environment into behavior?

Music processing is largely auditory processing, which follows a complex pathway with different actions. Here, these will be divided into three distinct phases. First, the sound waves are processed unconsciously: they enter the brain via the ears, are translated into electrical signals, and are transported via different brain regions to the auditory brain, where the sound waves are perceived as music. In the auditory cortex, the transition to the second phase, conscious registration, takes place. This conscious acknowledgement happens in the auditory cortex and related regions such as the superior temporal gyrus, where different complex features of music, like rhythm and melody, are analyzed more specifically. In the third and last phase, the sound waves that have been analyzed as musical stimuli potentially prompt an action, carried out by associated cortices such as the motor cortex and leading to a behavioral output (Figure 4). In the following chapter, we review the different steps of auditory processing in further detail: unconscious processing, conscious registration and action.


Figure 4. Neurocognitive model of music processing. Adapted from Koelsch (2011). The three phases are depicted above the model, and all incorporate several different aspects of music processing. Phases 1 and 2 overlap somewhat, as it is still unclear at which point in time unconscious processing turns into clear conscious registration. All phases are set out against time, on the x-axis.

1. Unconscious processing

Auditory processing starts in the ears. External sounds generate a motion in the surrounding air, creating pressure waves. These pressure waves carry auditory information and are taken up by the outer ear, travel through the middle ear and reach the cochlea, the inner ear. The cochlea is filled with a watery fluid, is divided by the organ of Corti and is lined by a fiber-like structure called the basilar membrane (Gilroy, MacPherson & Ross, 2008). Once the cochlea receives the pressure waves, the fluid is put into motion. The pressure waves travel along the basilar membrane until they reach fibers with a matching resonance frequency. The farther away this matching location is, the lower the pitch of the tone, as lower-pitched tones have a lower frequency and travel longer distances along the membrane. Once the corresponding location is found, the energy of the pressure waves is released into the basilar membrane, triggering depolarization of the ciliated hair cells in the bordering organ of Corti. These hair cells are linked to the loudness of the sound: the louder a sound is, the more hair cells are put into motion. This depolarization of hair cells then leads to a release of neurotransmitters and the production of action potentials in the adjacent auditory nerve (Robles & Ruggero, 2001; Yost, 2003; Yost, 1991), turning the pressure waves into auditory impulses. The auditory nerve transports this current of action potentials to several brain structures, all part of the auditory brain (Box 1), passing the auditory brainstem and the thalamus, and moves via the medial geniculate body to the primary auditory cortex (Angulo-Perkins & Concha, 2014). There, the auditory stimuli are further processed unconsciously.
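The place-frequency mapping of the basilar membrane described above is often idealized with the Greenwood function; the sketch below uses the commonly cited human parameters and is an approximation, not an exact anatomical model:

```python
# Sketch of the cochlear place-frequency map (Greenwood function).
# Human parameters A = 165.4, a = 2.1, k = 0.88; x is the fractional
# distance along the basilar membrane from the apex (0) to the base (1).

def greenwood_hz(x):
    """Characteristic frequency (Hz) at fractional position x along the membrane."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

# Low frequencies resonate near the apex (far from the middle ear),
# high frequencies near the base, as the text describes.
print(greenwood_hz(0.0))   # ~19.8 Hz, near the lower limit of human hearing
print(greenwood_hz(1.0))   # ~20,700 Hz, near the upper limit
```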

In the brainstem, thalamus and auditory cortex, so-called feature extraction takes place: specific information is extracted from the incoming auditory impulses. The brainstem and the thalamus first extract information about the timbre, intensity, location and periodicity of the sound (Koelsch, 2011). The auditory cortex then retrieves information about different features, ranging from pitch height to timbre, loudness and location (Koelsch, 2011). Furthermore, the auditory cortex prepares the acoustic information for other processing pathways, such as determination on a conceptual and contextual level. This means that at this stage, sounds are associated with a meaning, such as “dull” or “fierce”, giving the auditory impulses more context as auditory processing unfolds.

2. Conscious registration

After all pressure waves have been transformed into impulses during the first phase, two steps happen simultaneously. The first process is the engagement of auditory sensory memory, or ‘echoic memory’. This memory is active during long-term training effects on music perception, but also during the integration of music with working memory and long-term memory. This memory activation is partially reflected by the mismatch negativity (MMN), generated by the frontal lobes. These areas probably play an important role in working memory processes, attentional processing and the sequencing of information (Koelsch, 2011). The second process is the creation of auditory ‘objects’ or groups that organize auditory

BOX 1. Anatomy of auditory brain

The auditory brain consists of three different parts: the core, the belt and the parabelt. The primary auditory cortex, the core, is situated on the dorsal side of the superior temporal gyrus (STG). This is the first part of the auditory brain that receives auditory impulses from the ear via the thalamus and auditory brainstem (Koelsch, 2011). On the lateral side, this core is surrounded posteriorly by the belt, and anteriorly by the parabelt region, of which the posterior part is also called Wernicke’s area (Figure a; Angulo-Perkins & Concha, 2014). These three cortices together form the auditory cortex, and engage with different brain areas during auditory processing, such as the brainstem, thalamus and premotor areas (Angulo-Perkins & Concha, 2014). Studies have suggested that there are functional differences between the left and right auditory cortex: the left auditory cortex has a higher temporal resolution and is more sensitive to the high-speed temporal changes needed for speech, whereas the right auditory cortex has a higher resolution of spectral information, important for music processing (Hyde, Peretz & Zatorre, 2008; Koelsch, 2011; LaCroix, Diaz & Rogalsky, 2015).

Figure a. Schematic representation of the distribution of the auditory cortex in a human brain. Top: lateral views of human brains. Bottom: dorsolateral view of the auditory cortex on the lower bank of the lateral sulcus. The primary auditory area (the core) is shown in dark gray, the belt in yellow and the parabelt areas in green. Picture from Angulo-Perkins & Concha, 2014.


features such as melody, rhythm and chords. An auditory object is an acoustic experience of a sound, a representation of a sound with frequency and time dimensions. The auditory object simplifies the information about complex features such as melody or rhythm, creating a two-dimensional image for easy interpretation and categorization. For further information, we refer to the more detailed review by Griffiths & Warren (2004).

Following the grouping of auditory impulses and the engagement of auditory sensory memory, the time intervals are broken down into segments which help define chords, melodies or exact timing. Studies have shown that this analysis takes place in the superior temporal gyrus, which in the right hemisphere has been associated with music (Koelsch, 2011). During this analysis, there is more detailed processing of the pitch relations and possibly of the temporal intervals. The neural activations underlying these processes are yet to be determined, but it appears that the temporal and inferior frontal lobes are involved in detailed pitch processing (Koelsch, 2011). Finally, more complex features of music, such as harmony and rhythm, are processed; this is called music-structural processing.

Complex acoustic features such as harmony, rhythm and meter are organized into sequences that are structured according to syntactic rules. One study suggests that a cortical network is involved in musical-syntactic processing, consisting of the ventro-lateral premotor cortex (vlPMC), the anterior part of the superior temporal gyrus (aSTG, planum polare) and the inferior frontolateral cortex (IFLC) (Koelsch, 2006; Koelsch, Jentschke, Sammler, & Mietchen, 2007). The vlPMC is likely to be associated with the processing of regular and irregular chords and the recognition of structural properties of complex auditory sequences (Koelsch, 2006). The aSTG is an area in the (para)belt region of the auditory brain that is highly perceptive to complex frequency changes with larger bandwidths, contrasting with the core regions that appear to focus more on pure tones (Zatorre & Belin, 2001). The aSTG has been seen to show higher activation for music than for human voices (Angulo-Perkins, Aubé, Peretz, Barrios, Armony & Concha, 2014), for reasons yet to be researched. Finally, the IFLC specifically has been associated with the structural processing of harmony, melody, timbre and rhythm (Koelsch, 2006). The IFLC is also called Broca’s area, an area that has frequently been associated with language and music processing (Koelsch, 2006), and early right anterior negativity (ERAN) activation is likely sourced in the right-hemispheric counterpart of Broca’s area (Friederici, Wang, Herrmann, Maess, & Oertel, 2000; Koelsch & Friederici, 2003; LaCroix, Diaz & Rogalsky, 2015; Maess, Koelsch, Gunter, & Friederici, 2001). This ERAN activation has also been shown using ERPs, where the ERAN is elicited during the syntactic processing of music, reflecting the violation of musical sound expectancy (Koelsch & Friederici, 2003).
Another study shows that the ERAN is visible in EEG recordings both when participants engage in tasks relevant to music, such as playing a beat, and when the music is irrelevant to the task, such as a song playing in the background of a conversation (Koelsch, Jentschke, Sammler, & Mietchen, 2007). Thus, these ERAN measurements support the idea of IFLC activation during music-structural processing. Altogether, there is ample evidence that the vlPMC, the aSTG and the IFLC are involved in music-syntactic processing.
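Components such as the ERAN or MMN are conventionally quantified as a difference wave: the averaged response to irregular (deviant) events minus the averaged response to regular (standard) events. A minimal sketch with synthetic numbers follows; real EEG pipelines also filter, baseline-correct and reject artifacts:

```python
# Sketch: computing an ERP difference wave (e.g. ERAN = irregular - regular).
# Epoch values are synthetic, for illustration only.

def average_epochs(epochs):
    """Average time-locked epochs sample by sample."""
    n = len(epochs)
    return [sum(samples) / n for samples in zip(*epochs)]

def difference_wave(deviant_epochs, standard_epochs):
    dev = average_epochs(deviant_epochs)
    std = average_epochs(standard_epochs)
    return [d - s for d, s in zip(dev, std)]

standards = [[0.0, 1.0, 0.5], [0.0, 1.2, 0.3]]    # responses to regular chords
deviants = [[0.0, 0.2, -1.0], [0.0, 0.4, -1.2]]   # responses to irregular chords
eran = difference_wave(deviants, standards)
# The later samples show a negative deflection, as expected for an
# "early negativity" to a syntactic violation.
```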


3. Motor action

Once the second phase has passed the structural integration of auditory information, the last step is (pre)motor action. This last phase is not obligatory: one can hear auditory information and process it properly without physically acting on it. However, when one does act on it, the integration of all previous information is transported to the motor cortex and other brain areas involved in cognitive action, where a fitting action is set into motion (Koelsch, 2011).

All in all, the processing of auditory information starts in the ears, following several pathways and engaging with different brain regions, before a coherent representation of the auditory source and sound is created. Each phase of auditory processing contributes to a full understanding of a sound, but there are also distinct parts of music that are important for a complete perception of music. There are some difficulties when looking at auditory processing and music processing, as they appear to overlap in the human brain. This is discussed in Box 2. In the following chapter, two musical features and their importance relating to the understanding of music will be discussed: frequency and temporal structure.

BOX 2. Overlap between language and music in the brain

Researchers have not reached agreement on the differences and similarities between language and music processing. The difficulty is that music often contains language, and language often contains musical features such as rhythm and pitch. Thus, the necessity of treating music and language as two separate pathways in the brain may be unwarranted. There appears to be neural overlap between the two domains in the auditory cortex, meaning that activation occurs in the same brain areas for both processes. However, this does not always indicate neural sharing (Peretz, Vuvan, Lagrois & Armony, 2015; Rogalsky, Rong, Saberi, & Hickok, 2011): an overlap in brain areas does not always reflect identical neurobiological activations or processes. Additionally, the measurement technique used to assess neural sharing is usually fMRI, whose spatial and temporal resolution may be unable to distinguish the activations properly. The spatial resolution of fMRI currently distinguishes activations down to millimeters, but the brain contains about 100,000 cells per mm³, so this resolution may not be fine enough.

The temporal resolution of fMRI is limited by the sluggish hemodynamic response: although relative timing differences of around 100 ms can sometimes be detected, the response itself evolves over seconds (Glover, 2011). Neural timing, by contrast, operates at sub-second intervals, indicating that the resolution of fMRI may not yet be adequate to assess the dynamics of these neural networks (Huettel, Song, & McCarthy, 2009).
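The spatial-resolution point can be made concrete with a back-of-the-envelope calculation; the voxel size is a typical value chosen for illustration, and the cell density is the figure cited above:

```python
# Rough arithmetic: how many cells fit in a typical fMRI voxel?
cells_per_mm3 = 100_000              # density figure cited in the text
voxel_side_mm = 2.0                  # a common fMRI voxel size (assumption)
voxel_volume_mm3 = voxel_side_mm ** 3        # 8 mm^3
cells_per_voxel = cells_per_mm3 * voxel_volume_mm3
print(cells_per_voxel)               # 800000.0 cells averaged into one voxel
```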

Rogalsky et al. (2011) did show that there is distinguishable activation within the specific regions of overlap for the two domains, but that the early processing of auditory cues, in the ears and during their transport to the auditory cortex, is likely to be the same. However, more research is needed to clarify the relationship between language and music in the brain.


Frequency

Different sounds can have the same frequency or pitch. While the cochlea does pick up the frequency of a sound, its detailed pitch is only extracted in later stages, in the primary auditory cortex (Bendor, 2012). Frequency features of music are often associated with pitch. The registration and organization of pitch is based on musical scales. A scale is a group of musical tones of differing lengths, and is culture-dependent, as each culture may have different rules for interpreting the same pitch. Each scale has one central tone, the tonic, to which all other tones are musically related. This creates a tonal hierarchical organization (Peretz & Coltheart, 2003). This specific organization is said to facilitate the perception, memory and performance of music by creating expectancies in a person and anticipating upcoming auditory information. Bidelman (2013) showed that this pitch hierarchy is also visible in the brain: there is a strong correspondence between the hierarchical structure and the ordering of musical pitch intervals, based on the size of their subcortical representations. These representations help distinguish pitch relationships, and activity can be found in the inferior frontal and middle gyri, the premotor cortices and the inferior parietal lobe, amongst others.

Auditory perception of pitch relies on three different processes: a) the extraction of information by perceptual systems; b) maintenance of auditory stimuli in the echoic memory, where the pitch memory trace of the sound is established; and lastly c) storage of the memory traces in auditory short-term memory. The first two processes can be compared to the feature extraction and auditory sensory memory components in Figure 4 (Albouy, Cousineau, Caclin, Tillmann, & Peretz, 2016; Peretz & Coltheart, 2003) and use a process called auditory scene analysis (ASA; please refer to Trainor, 2015 for more information). Although researchers do not yet agree on the location of this process, Janata and colleagues (2002) performed a functional neuroimaging study suggesting the rostro-medial prefrontal cortex as the specific brain location for the “tonal encoding” of pitch. Additionally, it seems that biological computations of pitch relations happen in the right temporal neocortex (Peretz & Zatorre, 2005) and that pitch discrimination activates parts of the right cerebellum (Lega, Vecchi, D’Angelo & Cattaneo, 2016). The cerebellum has been shown to monitor sensory events and optimize perception (Roth, Synofzik, & Lindner, 2013), which could explain its activation during pitch discrimination, as it may optimize pitch perception for a better understanding of the auditory information.

After the extraction and maintenance of the pitch stimuli, the established memory trace is compared to other memory traces previously stored in auditory working memory. This seems to take place in the auditory brainstem, the bilateral auditory cortices and the inferior frontal gyri (Albouy et al., 2016). Interestingly, people who suffer from congenital amusia, a chronic disorder of music production and perception, may need more time to encode and compare sounds and their pitches (Albouy et al., 2016; Albouy et al., 2013). This could mean that disrupted pitch processing stems from impaired encoding of rapid auditory information (Albouy et al., 2016), and that timing is important for pitch encoding.


It has been suggested that instead of multiple brain areas activating for pitch processing, there is one pitch center in the brain (Bendor, 2012). The theory is not well supported, but some studies claim that the lateral Heschl’s gyrus, or perhaps a region posterior to it, the planum temporale, serves as a pitch center (Bendor, 2012; Hall & Plack, 2009). Although several studies have shown activation for pitch processing in only one area of the brain, a single pitch center is difficult to reconcile with the observation that music processing is a complex system engaging different parts of the auditory brain and associated areas along the way. For example, the processing of pitch appears to begin already in the cochlea and is further specified in parts of the auditory brain. Thus, there seem to be different areas that engage in this action, which makes it difficult to support the assumption of a single pitch center. Nevertheless, this does not mean that such a center does not exist; possibly, the other activated regions serve solely as input or output pathways for this one center. Further research is needed to shed more light on this subject.

While frequency features, and specifically pitch processing, are important for music perception, they are not an essential part of our understanding of music. Studies of congenital amusia in particular have shown that those suffering from amusia still enjoy and process music much as unaffected listeners do, but are simply not aware that they are singing out of tune. This could mean that pitch encoding and perception are not essential automatic processes of music processing, but a complex addition that enriches one’s understanding of the sounds. Considering this, frequency features are probably not useful for measuring sensory processing and/or real-time behavior while one is listening to music. In the following paragraph, temporal structure will be elaborated on.

Temporal structure

Besides frequency, music also has several temporal structures, associated with timing. An important element of these temporal structures is rhythm, as it is said to be the central organizing structure of music (Thaut, Kenyon, Schauer, & McIntosh, 1999; Thaut, Trimarchi, & Parsons, 2014). Rhythmicity is critical for learning, development and performance, as timing, and movement aligned with timing, are the basis of many cognitive functions and of motor control (Thaut et al., 1999; Thaut, Trimarchi & Parsons, 2014). While research has not agreed on the neurobiological processes underlying rhythmic synchronization, studies have shown time-processing-related activations in the basal ganglia (Grahn & Brett, 2007; Grahn & Rowe, 2009; Teki, Grube, Kumar & Griffiths, 2011), cerebellum (Thaut et al., 2009) and thalamus (Krause, Schnitzler, & Pollock, 2010). In the following chapter, timing and rhythm will be reviewed, and their neural and cognitive pathways and motor movement activation will be analyzed (Figure 5).


Figure 5. Explanation of temporal processing. External auditory stimuli with different timing intervals and tones enter the auditory cortex in the form of percepts.

Timing

There are two types of timing: automatic timing and cognitively-controlled timing, sometimes referred to as implicit and explicit timing respectively. The first type is defined as the "continuous measurement of predictable sub-second time intervals defined by movement" (Lewis & Miall, 2003); this process is thus continuous and unconscious. Cognitively-controlled timing involves the "measurement of supra-second intervals not defined by movement and occurring as discrete epochs" (Lewis & Miall, 2003), meaning that instead of being defined by movement, this type seems to be influenced by other higher-order functions, such as attention or working memory. The two types thus operate on different time scales: automatic timing at the sub-second level and controlled timing at the supra-second level, indicating that automatic timing has the higher precision. Additionally, automatic timing seems to be the more innate mechanism, whereas controlled timing is more attention-based (Grahn, 2012).

Timing is processed in several different cortical and subcortical brain regions. Most studies have indicated that the processing of timing uses the cortico-basal ganglia-thalamic (CBGT) circuit, including the cerebellum, basal ganglia, premotor cortex and supplementary motor area (SMA) (Bengtsson et al., 2009; Edagawa & Kawasaki, 2017; Grahn & Brett, 2007; Merchant & de Lafuente, 2014; Thaut et al., 2009). Notably, all these areas are also related to motor activity. The cerebellum has been linked to the coordination of movements and the fine-tuning of actions (Danielsen, Otnaess, Jenssen, Williams, & Østberg, 2014; Grahn & Brett, 2007; Thaut et al., 2009), whereas the basal ganglia have been linked to motor control, learning and action selection (Grahn & Rowe, 2012). There is a strong connection between the premotor cortex and SMA on the one hand and the cerebellum and basal ganglia on the other: the former regions have been associated with planning, execution of movement and voluntary control (Catalan, Honda, Weeks, Cohen, & Hallett, 1998; Gerloff, Corwell, Chen, Hallett, & Cohen, 1998), whereas the cerebellum and basal ganglia have been associated with more automatic behaviors and unconscious actions (Diedrichsen, Criscimagna-Hemminger, & Shadmehr, 2007). One way to explain the strong connection between motor areas and temporal processing is to look at the development of temporal processing in children. Young children seem to represent time in motor terms to create a clearer sense of duration (Merchant & de Lafuente, 2014), which could underlie the strong connection between these two seemingly different faculties later in life. This connection could be important for sensory processing, and later music processing, as well. For example, to clap your hands along with a song, the timing of the music must be processed properly in the brain for you to engage in the motor action at the right moment. Thus, looking back at Figure 1, timing seems to be a fundamental block underlying multisensory integration and higher-order processing, and potentially a mechanism that not only underlies the different blocks but is also intertwined with the movement from one block to the next.

Rhythm

Several different models have been developed to explain rhythmic processing, such as clock models, spectral models and network models. All share the assumption that automatic timing forms the basis of rhythm perception, indicating that current theories hold rhythm processing to be largely automatic, controlled neither by action nor attention. This assumption is understandable, as studies have shown that infants already show auditory-specific rhythmic activity before the age of 6 months, regardless of culture (Fujioka, Mourad & Trainor, 2011; Trehub, 2003). Some researchers even propose that this ability is already learned in utero (Hepper, 1991; Kisilevsky & Low, 1998).

The first model is the three-stage clock model (Povel & Essens, 1985), which assumes rule-based, sequential processing and holds that time can only be assessed by a clock. This internal clock is a pacemaker, emitting pulses. In the first stage, the incoming auditory rhythm is assigned accents (tones whose onset has increased salience), giving weight to certain tones or intervals based on a set of preference rules. These rules are registered in a person's memory and place the emphasis on temporal accents that differ per culture. Next, all intervals are assessed, and finally they enter the "matching stage". Here, candidate clocks are compared on so-called negative evidence: clock pulses or beats that fall on silent or unaccented elements in a sequence. This may be done through a mechanism similar to that of pitch perception, comparing older 'memory traces' to the incoming auditory stimuli and checking whether they correspond. This model resembles the interval models of timing. The clock with the least negative evidence is deemed the proper rhythm. The model was supported by studies showing that rhythms with less negative evidence, for instance those with fewer tones to accent, were considered simpler and were reproduced more accurately. However, this model has not explicitly been supported by neurobiological evidence, making it difficult to express its terms concretely in neural activations.
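The "negative evidence" comparison at the heart of this matching stage can be illustrated with a small sketch. This is not Povel and Essens's published implementation: the grid encoding, the candidate clock units and the weights below are illustrative assumptions (their model likewise penalizes ticks on silence more heavily than ticks on unaccented tones).

```python
# Illustrative sketch of the "negative evidence" idea in the clock model.
# A rhythm is a grid where 2 marks an accented onset, 1 an unaccented
# onset, and 0 a silent position (encoding and weights are hypothetical).

def negative_evidence(rhythm, unit, phase):
    """Count evidence against a clock with the given unit and phase:
    ticks on silence are weighted heavier than ticks on unaccented tones."""
    score = 0
    for tick in range(phase, len(rhythm), unit):
        if rhythm[tick] == 0:      # tick coincides with silence
            score += 4
        elif rhythm[tick] == 1:    # tick coincides with an unaccented tone
            score += 1
    return score

def best_clock(rhythm, units=(2, 3, 4, 6)):
    """Return the (unit, phase, score) clock with the least negative evidence."""
    candidates = [(negative_evidence(rhythm, u, p), u, p)
                  for u in units for p in range(u)]
    score, unit, phase = min(candidates)
    return unit, phase, score

# A simple 12-position pattern with accents on positions 0, 4 and 8.
pattern = [2, 1, 0, 1, 2, 0, 1, 1, 2, 0, 1, 0]
print(best_clock(pattern))  # → (4, 0, 0): every 4th position, no evidence against
```

A clock whose pulses land only on accented onsets accumulates no negative evidence and is selected, mirroring the model's claim that such rhythms feel simplest.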


While this model has been widely used and elaborated on, several aspects are not incorporated in it. First, the model does not operate in real time: it assesses all possible clocks before settling on one. Second, as mentioned above, it does not easily map onto synaptic activation in specific brain regions. Last, it assumes a single central timing principle, whereas for such an important and multifaceted faculty there may be more than one pathway or structure involved. Based on these three limitations, a second type of model has been proposed.

The second group of models are the spectral models (Grossberg & Schmajuk, 1989), which can be associated with the entrainment models of timing. They are based on neural resonance theory: pulse and meter correspond to neuronal network rhythms that synchronize with acoustic rhythms (Large, Herrera, & Velasco, 2015). These models operate in real time and can thus adapt to and predict incoming pulse and meter. The idea is that a self-sustaining organization of oscillatory neural networks interacts with both sensory and motor networks, creating a quick output. The cells in these networks react to external rhythms with differently timed responses, and can thus assess timing intervals instantly. The interaction between motor and sensory pathways helps generate the fastest response possible (Mauk & Buonomano, 2004). This approach has been supported by several researchers, who showed that rhythmic stimuli can entrain neural groups (Grahn, 2012; Large, Herrera, & Velasco, 2015). For this model too, however, concrete neural evidence is yet to be found. Different biological mechanisms have been proposed for the activation of neural networks at different time points (Mauk & Buonomano, 2004), but none have been considered distinct enough to explain the model properly.
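The entrainment idea behind these models can be caricatured with a single adaptive phase oscillator that nudges its phase and period toward incoming onsets. The update rules, gain values and function names below are invented for illustration and are far simpler than the oscillator networks of neural resonance theory.

```python
# Minimal sketch of entrainment: one oscillator adapts its phase and
# period toward a train of stimulus onsets (all parameters hypothetical).

def entrain(onsets, period0, phase_gain=0.5, period_gain=0.2):
    """Yield the oscillator's period after each onset is processed."""
    period, next_beat = period0, onsets[0]
    for onset in onsets[1:]:
        # advance the oscillator to the predicted beat nearest this onset
        while next_beat + period / 2 < onset:
            next_beat += period
        error = onset - next_beat          # how early/late the beat fell
        next_beat += phase_gain * error    # phase correction
        period += period_gain * error      # period correction
        yield period

# A steady 600 ms inter-onset interval; the oscillator starts at 500 ms
# and converges toward the stimulus period, "predicting" upcoming beats.
onsets = [i * 0.6 for i in range(12)]
periods = list(entrain(onsets, period0=0.5))
print(round(periods[-1], 3))
```

Because the oscillator carries its own period between events, it keeps producing beat expectations even through silence, which is the property that lets spectral models predict rather than merely react.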

Both models above treat temporal processing as a top-down process. Yet some believe that temporal processing works bottom-up, meaning that neural networks and their inherent abilities to measure intervals give rise to automatic timing, not the other way around. This leads to a new set of models: the network models. Studies have proposed that networks of cortical neurons can encode interval selectivity through synapses with time-dependent properties (Maass, Natschläger & Markram, 2002; Buonomano, 2000). Buonomano (2000) illustrated this with an interval discrimination task: if a timing event arrives at 0 milliseconds (ms), hundreds of neurons are stimulated, of which only some will fire and produce action potentials. Additionally, this stimulation activates a succession of time-dependent processes, which in this model were short-term synaptic plasticity and slow inhibitory postsynaptic potentials (IPSPs), though other properties could serve as well (Mauk & Buonomano, 2004). These processes create different states in the neural network at different time points. Thus, if a second event arrives at 150 ms, this stimulus arrives in a different "network state" and encounters a different version of the neural network, as some neurons are now activated or inhibited compared to time point 0 ms. This creates a dissimilarity in activity between the first and second event, which can be used as information to detect the timing interval (Mauk & Buonomano, 2004) and to start collecting intervals to determine a rhythm. However, more research needs to be done to clarify the concrete processing of rhythm in the brain.
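The "network state" idea can be made concrete with a toy model: a first event leaves decaying traces (standing in here for short-term plasticity and slow IPSPs), so a second event arriving later meets a different state, and a readout can classify the interval by comparing states. The time constants and readout rule below are invented for illustration and are not Buonomano's actual network.

```python
import math

# Toy illustration of state-dependent interval coding: an event leaves two
# decaying traces, so the network "remembers" elapsed time in its state.
# Time constants, templates and the readout are hypothetical.

def network_state(dt_ms):
    """State of the toy network dt_ms after the first event."""
    depression = math.exp(-dt_ms / 300.0)   # slow-decaying synaptic trace
    inhibition = math.exp(-dt_ms / 100.0)   # fast-decaying IPSP-like trace
    return (depression, inhibition)

def discriminate(dt_ms, templates):
    """Classify an interval as the stored interval with the nearest state."""
    state = network_state(dt_ms)
    return min(templates,
               key=lambda t: sum((a - b) ** 2
                                 for a, b in zip(state, network_state(t))))

templates = [50, 100, 150, 200]          # learned intervals, in ms
print(discriminate(140, templates))      # → 150 (nearest stored state)
```

The point of the sketch is that no explicit clock appears anywhere: the interval is read out purely from the network's time-varying state, which is the bottom-up claim of the network models.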

It has been shown that the basal ganglia, specifically the putamen, are relevant for the mental representation of a beat, and thus for the representation of rhythm (Grahn & Rowe, 2013). Using fMRI, Grahn and Rowe found that activation in the putamen was not only linked to the presence of a beat, but also showed different activation patterns for different beats. For example, the putamen showed no activation when there was no beat, or when the beat was novel. In contrast, when a beat was repeated, the putamen appeared highly activated. This indicates that besides processing basic timing, the putamen possibly also processes the accents and patterns distinctive of a specific auditory beat and may recognize a repeated beat. This coincides with research showing that temporal processing is prominent in the basal ganglia (Schwartze, Keller, Patel & Kotz, 2011), and that temporal processing is needed to properly analyze time intervals and deduce rhythm and beat perception from them (Grahn & Brett, 2007; Grahn & Rowe, 2012). All in all, temporal processing research points to the basal ganglia, and possibly the putamen, for the processing of temporal intervals.

Several difficulties arise when researching rhythm and rhythm processing. First, there is no consensus on a definition of rhythm, making it a challenge to compare the results of studies. Sometimes pure timing is studied, or only beat processing, whereas, as mentioned above, rhythm appears to be an integration of several different aspects (Figure 5). Furthermore, the processing of rhythm seems to be an innate feature of humans (Grahn & Brett, 2007; Grahn & Rowe, 2009; Winkler, Haden, Ladinig, Sziller, & Honing, 2009), and the ability is already found in infants. Sessions (1950) proposed that the creation of rhythmic patterns is the most basic feature of all musical impulses, as infants and more primitive societies are able to feel and experience rhythm. Moreover, as mentioned in previous chapters, rhythm is profoundly linked to movement and timing, indicating that rhythm processing inherently involves brain regions used for temporal processing (Limb, 2006). Taken together, this means that the processing of rhythm probably happens continuously and unconsciously in the human brain. This makes it difficult to distinguish this activity from other processes in the brain, as one cannot be sure whether rhythmic processing itself is measured or whether the signal results from other, unrelated brain processes. While this should be kept in mind when researching rhythm processing, this possible restraint could also be an aspect that gives us insight into sensory processing. As mentioned earlier, sensory processing is the unconscious reaction to stimuli before any cognitive or higher-order processes are set into motion and before conscious application of the information. Thus, types of processing that happen unconsciously and prior to other complex processes appear to fall under sensory processing.
If rhythm processing has the following three distinct features, namely being the most basic feature of musical impulses, being processed automatically, and being inherently connected to temporal and motor processing in the brain, this might imply that it can be used to assess and measure sensory processing, since sensory processing possibly underlies and is intertwined with these three features. To measure rhythmic processing, the link between temporal processing and motor processing could be used. The motor output linked to rhythmic processing might be a direct reflection, or real-time behavior, of the sensory processing in the brain. Thus, measuring rhythm processing through its motor output could lead to a direct measurement of sensory processing and create a window into the sensory processing capabilities a person has, or potentially does not have.

Discussion: practical implications

From the chapters above it is evident that music processing, and specifically rhythm processing, requires several sensory, cognitive and motor pathways to function, and leads to the initiation of motor behavior. Yet what is the relevance of this knowledge? As mentioned above, there is no diagnostic tool that measures sensory processing and its effect on real-time behavior. In the following chapter, this possible practical implication will be explained and discussed using the example of autism spectrum disorder.

Sensory processing deficits

Autism spectrum disorder (ASD) is an increasingly prevalent neurodevelopmental disorder (WHO, 2016). Current diagnosis is based on the DSM-5 criteria, which cover two behavioral domains: impairments in social interaction, and restricted and repetitive interests and behaviors (American Psychiatric Association, 2013). Due to the disorder's heterogeneity, it is difficult to standardize diagnosis and treatment for all people with ASD. The gold standard for diagnosing autism spectrum disorder is currently psychiatric assessment by a therapist. Around 90% of children with ASD appear to have difficulty processing sensory information (Baum et al., 2015; Foster, Ouimet, Tryfon, Doyle-Thomas, Anagnostou & Hyde, 2016; Tomchek & Dunn, 2007). This is evident across multiple modalities, with both hyper- and hyporesponsivity to sensory stimuli (Marco, Hinkley, Hill & Nagarajan, 2011). Most evidence points towards hyper- and hyporeactivity to auditory information (DePape, Hall, Tillmann & Trainor, 2012). The prevalence of altered sensory processing might point to a shared phenomenon in autism etiology. Different hypotheses for the sensory processing deficits in ASD have been postulated, such as the Weak Central Coherence Theory, which states that people with ASD have enhanced local integration but diminished global assessment (Happé, 1996; Ouimet, Foster, Tryfon & Hyde, 2012), and the Enhanced Perceptual Functioning theory, which holds that the degree of neural complexity needed for processing stimulus features is inversely related to performance in those with ASD (Mottron & Burack, 2001). All in all, the mechanisms of sensory processing deficits in ASD are still undetermined, but the deficits are evident in a large subset of clinically diagnosed patients. This makes it difficult to properly pin down the scale and significance of sensory processing deficits, and leads to similar treatment for the diverse population with ASD.
Perhaps the focus of ASD diagnosis should be shifted towards sensory processing deficits and their ability to guide treatment and help clarify categorization amongst those suffering from ASD. This shift, moving away from the traditional DSM guidelines, could open the way to treatment that is more in line with unconscious processing, instead of focusing on habitual actions. This could generate treatments that fit the individual better, and that can change over time according to the individual's response to sensory input.

Considering the information above, music processing might offer an opportunity to measure direct behavioral responses to sensory stimuli and assess sensory processing at an additional level. Rhythmic processing might be a promising assessment, since research has shown that pitch discrimination and detection are similar in typically developing (TD) and ASD children, whereas rhythmic processing in ASD has not yet been examined (Hardy & LaGasse, 2013). Additionally, Falter and colleagues (2012) showed that ASD patients have a reduced sensitivity to temporal interval differences compared to TD individuals. Moreover, one study has shown that there is less assimilation to the metrical categories of the musical system in children with ASD (DePape et al., 2012). The processing of temporal structure and rhythm involves many different brain areas and pathways, indicating that long-distance neural connectivity is essential for proper functioning. Since rhythm processing is likely an automatic and unconscious activity in the brain and is probably linked to automatic timing, there might be an overlap between rhythm and sensory processing. Furthermore, as sensory processing underlies higher-order pathways, such as cognitive processes and behavioral output, the overlap between rhythm and sensory processing could, via these higher-order pathways, lead to a real-time behavior that can be measured. Such a measurement could indicate an abnormality in sensory processing when listening to music or partaking in musical activities. Thus, the possibility of detecting sensory processing deficits through music listening and production is an interesting practical implication of this review.

To explore evidence for rhythmic abnormalities in ASD patients, data were gathered from fourteen children who participated in music observation research within the sensory processing care program; all were diagnosed with ASD, and four additionally with epilepsy. For each child, the music therapist rated their responses and behaviors on a scale from 1 to 10. For some scales, 5 was the expected average for healthy children, and for others 10 was considered the healthy children's average response (Table 1). The observer distinguished two groups of features: response features and temporal features. Response features included elements associated with the actual action elicited by the child: the strength of the action on the instrument, the intensity of the action and the ability to anticipate the upcoming sounds or features. Temporal features included elements related to the timing of the child's action: the speed of the sequential actions, the regularity of the rhythm, the ability to produce well-timed stops and the rigidity of the pattern (Table 1). Although the sample size was small, some numbers stood out when looking at the average over all participants. First, for the response features an average of 2.86/10 was found for the dynamics of the produced sounds. This refers to the ability of a child to anticipate upcoming sounds, and the low score indicates that the children on average had difficulty with this anticipation. Additionally, for the temporal features, the regularity of the rhythm had an average score of 3.71/10, indicating that the children tended to play with an irregular rhythmic pattern. Moreover, all temporal features received an average below 5/10, indicating that the children were slower, had an irregular rhythmic pattern, lagged behind at the acoustic stops and had difficulty adapting to varying patterns (Table 2). Although these data are subjective and were not compared to typically developing children, there is consistency in the responses of children with ASD to music listening and production.

The children scored lower on average than the expected norm on temporal features, and had difficulty anticipating sounds to come. This may hint at children with autism indeed having difficulties with processing temporal features, deviating from the expected average. However, the sample size in this simple data collection was small and no controls were used. Therefore, more research needs to be done with a larger sample size and healthy controls to see whether this possible trend extends to larger groups and holds up against healthy control data.

Still, some tentative conclusions can be attempted from these small-scale results. Especially since all temporal features were below average, this may indicate that the sensory processing underlying and intertwined with rhythm processing is maladaptive, generating a real-time behavior that is too slow and inconsistent. This points towards the possibility of distinguishing children with ASD from TD children through their responses to music. Perhaps with the right musical instrument, one that measures response times during music production, it could be determined whether a child is consistently irregular with a rhythm, or responds without properly anticipating the following sounds.

Temporal features:
- Tempo: 1 = slow // 10 = fast
- Rhythmic regularity: 1 = irregular // 10 = regular
- Timing of stops: 1 = lagging // 10 = running ahead
- Pattern: 1 = rigid // 10 = flexible

Response features:
- Response: 1 = weak // 10 = strong
- Intensity: 1 = low // 10 = high
- Dynamics: 1 = not anticipating // 10 = anticipating

Table 1. All musical features that the children were scored on, with meanings given to each minimum and maximum score.


                        Average   Median   Lowest score   Highest score
Temporal features
  Tempo                  4.29      4.00       2.00           7.00
  Rhythmic regularity    3.71      4.00       2.00           6.00
  Timing of stops        4.29      4.00       2.00           9.00
  Pattern                3.75      4.00       2.00           7.00
Response features
  Response               5.71      6.00       1.00           9.00
  Intensity              4.86      4.75       0.50           9.00
  Dynamics               2.86      2.00       1.00           7.00

Table 2. Data sheet from music therapy with children with autism and/or epilepsy (n=14). Each number indicates the score given by the music therapist to the child for a certain feature of music.

Conclusion

This review aimed to combine separate studies of sensory, motor and cognitive processing in music and to use the integration of this knowledge to offer new ideas for diagnostic tools for sensory processing deficits. Focusing on pitch perception and rhythmic processing, this review has shown that rhythm processing appears to be a novel method for detecting sensory processing deficits, as rhythm processing and sensory processing seem to be linked and intertwined at an unconscious level. Furthermore, these processes underlie higher-order processes and, ultimately, behavioral output or motor action. This offers the opportunity to measure the real-time behavior generated by rhythm processing and the underlying sensory processes, and to use those measurements to predict or distinguish sensory processing deficits and eventually help guide treatment for those with deficits.

Auditory processing starts in the ear and frequently ends with a motor action. The process can be divided into three phases: unconscious processing, conscious registration and motor action. The first phase shows the importance of proper sensory processing for functional behavioral output. The latter phases, conscious registration and motor action, are both important after multisensory integration for higher-order capabilities (Figure 1), and thus come after sensory processing. When looking at distinctive musical features, pitch perception is an important but higher-order aspect of music processing and inadequate for assessing sensory processing through real-time behavior. Rhythm processing, however, relies largely on automatic timing and appears to be processed involuntarily in the brain. This makes it a good candidate for measuring a person's level of sensory processing, as it is influenced by unconscious processing of sensory stimuli and can lead to a behavioral output that is acted out in real time.


Supporting this hypothesis, a small data collection showed that children diagnosed with ASD appear to score below average on many temporal features, such as rhythm regularity and tempo consistency. Although the sample size was small and no healthy controls were included, it is a promising starting point for the possible link between rhythm processing and sensory processing, and ultimately between real-time behavior and sensory processing.

Altogether, this review has one significant and novel practical application: the measurement of sensory processing through real-time behavior guided by musical rhythm. However, empirical research needs to be conducted to provide evidence. The next step is to design a musical instrument that measures both temporal features and response features, with a heavy focus on tempo and rhythmic regularity. The data gathered by this instrument needs to be compared to a computer-generated rhythm, from which a standard deviation can be established that can help categorize the children. The grouping of these children will ultimately lead to different ‘types’ of sensory processing deficits, which can help guide treatment plans. This next phase is needed to gain more insights and gather more evidence for a new way to diagnose sensory processing deficits using music and rhythm and help create successful therapies.
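The core computation such an instrument would perform can be sketched: compare a child's produced onsets against a computer-generated reference rhythm and summarize the mean asynchrony (systematic lag or lead) and its standard deviation (rhythmic irregularity). All names, example values and thresholds here are hypothetical; the actual instrument and norms would have to come from the proposed follow-up study.

```python
import statistics

# Hypothetical sketch of the proposed measurement: tap times (in seconds)
# are compared to a computer-generated metronome, and timing error is
# summarized as mean asynchrony and its standard deviation.

def timing_profile(taps_s, reference_s):
    """Per-tap asynchronies (tap minus nearest reference beat); return
    their mean (systematic lag/lead) and stdev (rhythmic irregularity)."""
    asynchronies = [tap - min(reference_s, key=lambda beat: abs(tap - beat))
                    for tap in taps_s]
    return (statistics.mean(asynchronies),
            statistics.stdev(asynchronies))

reference = [i * 0.5 for i in range(8)]     # 120 bpm computer-generated rhythm
taps = [0.02, 0.55, 0.98, 1.62, 1.97, 2.60, 3.05, 3.49]
mean_async, sd_async = timing_profile(taps, reference)
print(f"lag {mean_async*1000:.0f} ms, irregularity {sd_async*1000:.0f} ms")
```

A consistently positive mean asynchrony would correspond to the "lagging at stops" observation above, while a large standard deviation would quantify rhythmic irregularity; cut-off values for grouping children would have to be established against healthy control data.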


Appendix

“music” AND “neurological pathway or processing OR brain pathway OR brain circuit OR brain structure” AND “auditory processing” AND “temporal processing OR temporal pathway”

Supplementary Table 1. The search strategy for systematic review.

Inclusion criteria:

Main focus is:
- Music processing, perception or listening
- Auditory processing
Neural basis of:
- Rhythm (or beat)
- Pitch
- Timbre
Aspects of music:
- Training
- Expertise
- Playing / improvisation
Disorders:
- Amusia
- ASD

Exclusion criteria:

Main focus is:
- Speech
- Emotion
- Bilingualism
Methods:
- Animal models
- Testing of models or methods
Disorders:
- All systematic (schizophrenia, dementia, Parkinson's, epilepsy)

Supplementary Table 2. Inclusion and exclusion criteria for the title/abstract screening.


References

Albouy, P., Mattout, J., Bouet, R., Maby, E., Sanchez, G., Aguera, P. E., ... & Tillmann, B. (2013). Impaired pitch perception and memory in congenital amusia: the deficit starts in the auditory cortex. Brain, 136(5), 1639-1661.

Albouy, P., Cousineau, M., Caclin, A., Tillmann, B., & Peretz, I. (2016). Impaired encoding of rapid pitch information underlies perception and memory deficits in congenital amusia. Scientific reports, 6.

Alluri, V., Toiviainen, P., Jääskeläinen, I. P., Glerean, E., Sams, M., & Brattico, E. (2012). Large-scale brain networks emerge from dynamic processing of musical timbre, key and rhythm. Neuroimage, 59(4), 3677-3689.

Angulo-Perkins, A., & Concha, L. (2014). Music perception: information flow within the human auditory cortices. In Neurobiology of Interval Timing (pp. 293-303). Springer New York.

Angulo-Perkins, A., Aubé, W., Peretz, I., Barrios, F. A., Armony, J. L., & Concha, L. (2014). Music listening engages specific cortical regions within the temporal lobes: Differences between musicians and non-musicians. Cortex, 59, 126-137.

Baum, S. H., Stevenson, R. A., & Wallace, M. T. (2015). Behavioral, perceptual, and neural alterations in sensory and multisensory function in autism spectrum disorder. Progress in Neurobiology, 134, 140-160.

Bendor, D. (2012). Does a pitch center exist in auditory cortex? Journal of Neurophysiology, 107(3), 743-746.

Bengtsson, S. L., Ullén, F., Ehrsson, H. H., Hashimoto, T., Kito, T., Naito, E., ... & Sadato, N. (2009). Listening to rhythms activates motor and premotor cortices. Cortex, 45(1), 62-71.

Bidelman, G. M. (2013). The role of the auditory brainstem in processing of musically relevant pitch. Frontiers in Psychology, 4, 1-13.

Boecker, H., Ceballos-Baumann, A., Bartenstein, P., Weindl, A., Siebner, H. R., Fassbender, T., ... & Conrad, B. (1999). Sensory processing in Parkinson's and Huntington's disease: Investigations with 3D H215O-PET. Brain, 122(9), 1651-1665.

Bouwer, F.L. (in press). What Do We Need to Hear a Beat? The Influence of Attention, Musical Abilities, and Accents on the Perception of Metrical Rhythm. Supervisor: Prof. H.J. Honing. Co-supervisor: Dr J.A. Grahn (University of Western Ontario).

Buonomano, D.V. (2000). Decoding temporal information: a model based on short-term synaptic plasticity. Journal of Neuroscience, 20, 1129-1141.

Campfens, S. F., Zandvliet, S. B., Meskers, C. G., Schouten, A. C., van Putten, M. J., & van der Kooij, H. (2015). Poor motor function is associated with reduced sensory processing after stroke. Experimental Brain Research, 233(4), 1339.

Catalan, M. J., Honda, M., Weeks, R. A., Cohen, L. G., & Hallett, M. (1998). The functional neuroanatomy of simple and complex sequential finger movements. Brain, 121, 253–264.

Danielsen, A., Otnaess, M. K., Jensen, J., Williams, S. C. R., & Østberg, B. C. (2014). Investigating repetition and change in musical rhythm by functional MRI. Neuroscience, 275, 469-476.

DePape, A. M. R., Hall, G. B., Tillmann, B., & Trainor, L. J. (2012). Auditory processing in high-functioning adolescents with autism spectrum disorder. PloS one, 7(9), e44084.

Diedrichsen, J., Criscimagna-Hemminger, S. E., & Shadmehr, R. (2007). Dissociating timing and coordination as functions of the cerebellum. Journal of Neuroscience, 27(23), 6291–6301.

Dunn, W., & Westman, K. (1995). The Sensory Profile. Unpublished manuscript, University of Kansas Medical Center, Kansas City.

Dunn, W., & Westman, K. (1997). The sensory profile: the performance of a national sample of children without disabilities. American Journal of Occupational Therapy, 51(1), 25-34.

Edagawa, K., & Kawasaki, M. (2017). Beta phase synchronization in the frontal-temporal-cerebellar network during auditory-to-motor rhythm learning. Scientific Reports, 7.

Epstein, D. (1995). Shaping time: Music, the brain, and performance. New York: MacMillan.


Ermer, J., & Dunn, W. (1997). The Sensory Profile: A Discriminant Analysis of Children With and Without Disabilities. The American Journal of Occupational Therapy, 52(4), 283—290.


Falter C. M., Noreika V., Wearden J. H., & Bailey A. J. (2012). More consistent, yet less sensitive: interval timing in autism spectrum disorders. Q. J. Exp. Psychol. 65, 2093–2107.

Fontaine, R. G., & Dodge, K. A. (2006). Real-Time Decision Making and Aggressive Behavior in Youth: A Heuristic Model of Response Evaluation and Decision (RED). Aggressive Behavior, 32(6), 604–624.

Foster, N. E., Ouimet, T., Tryfon, A., Doyle-Thomas, K., Anagnostou, E., & Hyde, K. L. (2016). Effects of age and attention on auditory global-local processing in children with autism spectrum disorder. Journal of Autism and Developmental Disorders, 46(4), 1415.

Friederici, A. D., Wang, Y., Herrmann, C. S., Maess, B., & Oertel, U. (2000). Localization of early syntactic processes in frontal and temporal cortical areas: a magnetoencephalographic study. Hum. Brain Mapp, 11, 1–11.

Fujioka, T., Mourad, N., & Trainor, L.J. (2011). Development of auditory-specific brain rhythm in infants. European Journal of Neuroscience, 33, 521-529.

Gerloff, C., Corwell, B., Chen, R., Hallett, M., & Cohen, L. G. (1998). The role of the human motor cortex in the control of complex and simple finger movement sequences. Brain, 121, 1695–1709.

Gilroy, A. M., MacPherson, B. R. & Ross, L. M. (2008). Atlas of anatomy. Thieme. p. 536.

Glover, G. H. (2011). Overview of Functional Magnetic Resonance Imaging. Neurosurgery Clinics of North America, 22(2), 133-139.

Grahn, J. A. (2012). Neural mechanisms of rhythm perception: current findings and future perspectives. Topics in Cognitive Science, 4(4), 585-606.

Grahn, J. A. & Brett, M. (2007). Rhythm perception in motor areas of the brain. Journal of Cognitive Neuroscience, 19(5), 893-906.

Grahn, J. A. & Rowe, J. B. (2009). Feeling the beat: Premotor and striatal interactions in musicians and non-musicians during beat processing. Journal of Neuroscience, 29(23), 7540–7548.

Grahn, J. A. & Rowe, J. B. (2013). Finding and feeling the musical beat: striatal dissociations between detection and prediction of regularity. Cerebral Cortex, 23(4), 913–921.

Griffiths, T. D., & Warren, J. D. (2004). What is an auditory object? Nature Reviews Neuroscience, 5(11), 887-892.

Grossberg, S. & Schmajuk, N. A. (1989). Neural dynamics of adaptive timing and temporal discrimination during associative learning. Neural Networks, 2, 79–102.

Hall, D. A., & Plack, C. J. (2009). Pitch processing sites in the human auditory brain. Cerebral Cortex, 19, 576–585.

Hardy, M. W., & LaGasse, A. B. (2013). Rhythm, movement, and autism: using rhythmic rehabilitation research as a model for autism. Frontiers in Integrative Neuroscience, 7.

Happé, F. G. (1996). Studying weak central coherence at low levels: children with autism do not succumb to visual illusions. A research note. Journal of Child Psychology and Psychiatry, 37(7), 873-877.

Hepper, P. G. (1991). An examination of fetal learning before and after birth. Irish Journal of Psychology, 12, 95–107.

Huettel, S. A., Song, A. W., & McCarthy, G. (2009). Functional Magnetic Resonance Imaging (2nd ed.). Massachusetts: Sinauer.

Hyde, K. L., Peretz, I., & Zatorre, R. J. (2008). Evidence for the role of the right auditory cortex in fine pitch resolution. Neuropsychologia 46, 632–639.

Janata, P., Birk, J. L., Van Horn, J. D., Leman, M., Tillmann, B., & Bharucha, J. J. (2002). The cortical topography of tonal structures underlying Western music. Science, 298(5601), 2167–2170.

Javitt, D. C., & Freedman, R. (2015). Sensory Processing Dysfunction in the Personal Experience and Neuronal Machinery of Schizophrenia. The American Journal of Psychiatry, 172(1), 17–31.

Kisilevsky, B.S., & Low, J.A. (1998). Human fetal behavior: 100 years of study. Dev. Rev., 18, 1–29.

Koelsch, S. (2006). Significance of Broca's area and ventral premotor cortex for music-syntactic processing. Cortex, 42(4), 518-520.
