
Tilburg University

Temporal and identity prediction in visual-auditory events

van Laarhoven, Thijs; Stekelenburg, J.J.; Vroomen, J.

Publication date: 2017

Document Version: Publisher's PDF, also known as Version of record

Link to publication in Tilburg University Research Portal

Citation for published version (APA):

van Laarhoven, T., Stekelenburg, J. J., & Vroomen, J. (2017). Temporal and identity prediction in visual-auditory events: Electrophysiological evidence from stimulus omissions. Poster session presented at the 18th International Multisensory Research Forum (IMRF), Nashville, United States.



Temporal and identity prediction in visual-auditory events: Electrophysiological evidence from stimulus omissions

Thijs van Laarhoven, Jeroen Stekelenburg & Jean Vroomen
Department of Cognitive Neuropsychology, Tilburg University



Introduction

A rare omission of a sound that is predictable by anticipatory visual information or self-generated motion induces an early negative omission response at around 45-100 ms (oN1) and subsequent mid- and late-latency omission responses (oN2, oP3) in the EEG during the period of silence in which the sound was expected.[1,2]

It was previously suggested that such omission responses are primarily driven by the identity of the anticipated sound.[3] Here, we examined the role of temporal prediction in conjunction with identity prediction in the evocation of the auditory oN1, oN2 and oP3.

A video of an actor performing a single hand clap (Figure 1), containing reliable anticipatory information about both the identity and onset of the sound, served as a reference condition. In two additional conditions, we varied either the auditory onset (relative to the visual onset) or the identity of the sound across trials in order to hamper temporal and identity predictions. Regular visual-auditory trials were interspersed with unpredictable sound omissions. Neural activity associated with visual-to-auditory predictions was acquired from these silent trials.
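To make the omission logic concrete, the sketch below (not part of the poster) shows how prediction-related activity could be isolated from the silent trials: epochs are cut from the continuous EEG time-locked to the moment the sound was expected, even though no sound was presented. The sampling rate, epoch limits, channel count and onset samples are illustrative assumptions.

```python
import numpy as np

SFREQ = 512                 # assumed sampling rate (Hz)
TMIN, TMAX = -0.25, 0.75    # assumed epoch limits around the expected sound onset (s)

def epoch_omissions(eeg, expected_onsets):
    """Cut epochs (trials x channels x samples) around each expected sound onset.

    eeg: continuous recording, channels x samples.
    expected_onsets: sample indices at which the (omitted) sound was predicted
    to occur, e.g. derived from the visual onset of the hand clap.
    """
    lo, hi = int(TMIN * SFREQ), int(TMAX * SFREQ)
    return np.stack([eeg[:, s + lo:s + hi] for s in expected_onsets])

# Placeholder continuous recording (64 channels, 60 s) and a few omission onsets.
rng = np.random.default_rng(1)
eeg = rng.normal(size=(64, 60 * SFREQ))
omission_epochs = epoch_omissions(eeg, expected_onsets=[5120, 10240, 20480])
print(omission_epochs.shape)   # (3, 64, 512) -> trials x channels x samples
```

Averaging such epochs across trials and participants would yield grand average omission-ERPs of the kind shown in the Results.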

Method

Participants

N = 27 (23 female, all neurotypical), mean age 19.93 years (SD = 2.40)

Stimuli

Hand clap video + sound of a hand clap or 100 different environmental sounds (e.g. a doorbell or a car horn)

Figure 1. Screen capture of the video used in all experimental conditions.

Experimental conditions

1. NATURAL: hand clap sound with natural timing
2. RANDOM-TIMING: hand clap sound presented -250 to +320 ms relative to visual onset
3. RANDOM-IDENTITY: 100 different environmental sounds with natural timing

88% regular visual-auditory trials (1232 per condition)
12% sound omission (silent) trials (168 per condition) [3]
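As an illustration of the trial structure above, the following sketch (not taken from the poster) builds a randomized trial list for one condition. The condition names, trial counts, timing jitter and omission rate come from the Method; the function name and the trial dictionary fields are assumptions.

```python
import random

# Per-condition trial counts taken from the Method above:
# 1232 regular visual-auditory trials (88%) and 168 sound omissions (12%).
N_REGULAR, N_OMISSION = 1232, 168

def build_trial_list(condition, seed=0):
    """Sketch of a randomized trial list for one experimental condition."""
    rng = random.Random(seed)
    trials = []
    for _ in range(N_REGULAR):
        if condition == "NATURAL":
            # Hand clap sound at its natural audiovisual timing.
            trial = {"sound": "handclap", "sound_onset_ms": 0}
        elif condition == "RANDOM-TIMING":
            # Hand clap sound jittered -250 to +320 ms relative to visual onset.
            trial = {"sound": "handclap",
                     "sound_onset_ms": rng.randint(-250, 320)}
        elif condition == "RANDOM-IDENTITY":
            # One of 100 environmental sounds, presented with natural timing.
            trial = {"sound": f"env_{rng.randrange(100):03d}",
                     "sound_onset_ms": 0}
        else:
            raise ValueError(condition)
        trial["omission"] = False
        trials.append(trial)
    # 12% unpredictable sound omissions (silent trials).
    trials += [{"sound": None, "sound_onset_ms": None, "omission": True}
               for _ in range(N_OMISSION)]
    rng.shuffle(trials)
    return trials

trials = build_trial_list("RANDOM-TIMING")
```

Shuffling the list keeps the position of the 168 silent trials unpredictable, as the design requires.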


Results

Figure 2. Direct comparison of the grand average omission-ERPs recorded at the regions of interest (ROIs) showing maximal activity in the denoted time windows: oN1 (45-100 ms) at the left and right temporal ROIs, oN2 (120-230 ms) at the frontal ROI, and oP3 (240-550 ms) at the frontal-central ROI. Omission responses were corrected for visual activity via subtraction of the visual-only waveform and collapsed over electrodes in each ROI.

Figure 3. Scalp potential maps of the grand average visual-corrected omission responses (Natural, Random-timing and Random-identity conditions) in the oN1 (45-100 ms), oN2 (120-230 ms) and oP3 (240-550 ms) time windows (scales: ± 0.65 μV for oN1, ± 0.80 μV for oN2 and oP3).
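The following sketch illustrates the analysis step described in the Figure 2 caption, assuming grand-average omission and visual-only waveforms are available as NumPy arrays (channels x samples). The visual correction by subtraction and the component time windows follow the caption; the sampling rate, epoch layout, placeholder data and ROI channel indices are assumptions, and a single illustrative ROI is used for all three components here, whereas the poster uses component-specific ROIs.

```python
import numpy as np

SFREQ = 512      # assumed sampling rate (Hz)
PRESTIM = 0.2    # assumed pre-stimulus interval included in each epoch (s)

def window_mean(erp, tmin_ms, tmax_ms):
    """Mean amplitude of an ERP (channels x samples) within a latency window."""
    lo = int(round((PRESTIM + tmin_ms / 1000.0) * SFREQ))
    hi = int(round((PRESTIM + tmax_ms / 1000.0) * SFREQ))
    return erp[:, lo:hi].mean(axis=1)

# Placeholder grand-average waveforms (64 channels x 1 s of data), time-locked to
# the expected sound onset; in the real analysis these come from the recorded EEG.
rng = np.random.default_rng(0)
omission_erp = rng.normal(size=(64, int(1.0 * SFREQ)))
visual_only_erp = rng.normal(size=(64, int(1.0 * SFREQ)))

# Correct for visual activity by subtracting the visual-only waveform (Figure 2).
corrected = omission_erp - visual_only_erp

# Illustrative ROI; the poster does not list the exact electrode groupings.
roi_channels = [14, 15, 21]

# Component windows from the poster: oN1 45-100 ms, oN2 120-230 ms, oP3 240-550 ms.
for name, (t0, t1) in {"oN1": (45, 100), "oN2": (120, 230), "oP3": (240, 550)}.items():
    amp = window_mean(corrected, t0, t1)[roi_channels].mean()
    print(f"{name}: {amp:.3f} (placeholder units)")
```

In the actual analysis, mean amplitudes would be computed per participant and condition and then compared statistically across the Natural, Random-timing and Random-identity conditions.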



Conclusions

Relative to a natural context with correct auditory timing and identity, the oN1 and subsequent oN2 and oP3 components were abolished when either the timing or the identity of the sound could not be predicted reliably from the video. This indicates that precise predictions of timing and identity are both essential elements for inducing an oN1, oN2 and oP3.


Full text available in Brain Research, Volume 1661, 15 April 2017, Pages 79–87: https://doi.org/10.1016/j.brainres.2017.02.014

Thijs van Laarhoven

t.j.t.m.vanlaarhoven@tilburguniversity.edu
linkedin.com/in/tvanlaarhoven

References

1. SanMiguel, I., Widmann, A., Bendixen, A., Trujillo-Barreto, N., & Schröger, E. (2013). Hearing silences: Human auditory processing relies on preactivation of sound-specific brain activity patterns. The Journal of Neuroscience, 33(20), 8633-8639. doi: 10.1523/JNEUROSCI.5821-12.2013

2. Stekelenburg, J. J., & Vroomen, J. (2015). Predictive coding of visual-auditory and motor-auditory events: An electrophysiological study. Brain Research, 1626, 88-96. doi: 10.1016/j.brainres.2015.01.036

3. SanMiguel, I., Saupe, K., & Schröger, E. (2013). I know what is missing here: Electrophysiological prediction error signals elicited by omissions of predicted "what" but not "when". Frontiers in Human Neuroscience, 7, 407. doi: 10.3389/fnhum.2013.00407

