Academic year: 2021

Decoding Linguistic Auditory Stimuli to Investigate Meditation and the Predictive Hierarchy

Ana Radanovic

University of Amsterdam

Supervisor: Dr. Ruben Laukkonen
Assessor: Dr. Marte Otten


Abstract

Meditation is offered as a miracle remedy for many psychological disorders, such as depression. What happens in the brain during meditation that could make this possible? How malleable is the human brain under our own volition? Here we present predictive processing theory as a potential explanation for the effects of meditation, with the aim of investigating such effects on the different levels of the linguistic processing hierarchy. We break the hierarchy down into three components: the emotional, the semantic, and the linguistic. Using electroencephalography (EEG), we measured brain responses to neutral words, negative words (emotional), pseudowords (semantic), and pure tones (linguistic) to reflect these components. The methods and results presented are from the pilot study aimed at validating the stimuli and the analysis methodology. Both event-related potential (ERP) component analysis and decoding were used, first to validate the expected neural responses and then to measure the ability to classify the brain responses to these stimulus types. Results from passive listening showed the expected ERP component dynamics for the chosen stimuli and successful classification between the linguistic, semantic, and emotional components of linguistic processing. With a few corrections, the stimuli and analysis pipeline are ready for the main experiment with expert meditators.


Introduction

With growing popularity, meditation is offered as a therapy for many clinical disorders, including depression and schizophrenia (Marchand, 2012). Some psychologists theorize that meditation could be effective as a therapy by causing a shift in perception, allowing a person to identify with the "self" and become more compassionate toward themselves or others (Shapiro et al., 2005). A shift in perception arising from meditative practice indicates that voluntarily modulating our perception is possible. Reports from expert meditators further support this notion and reveal remarkable effects of meditation on perceptual processes occurring at arguably lower levels of processing than emotional regulation. For example, instead of the typical experience of binocular rivalry, in which the dominant image switches when different images are presented to each eye, expert meditators report perceptual stability of one image for a prolonged time, or even incomplete dominance of both images, while meditating (Carter et al., 2005). This is an impressive feat that highlights the possible malleability of the brain and raises the question: how does meditation affect cognitive processing, and to what extent?

A theory that offers an understanding of the state and trait effects of meditation is predictive processing, a popular framework in cognitive neuroscience with wide explanatory power, although it is not yet widely implemented in meditation research (Pagnoni et al., 2019). The aim of the current project is to use electroencephalography (EEG) to understand the effects of meditation on predictions in the predictive hierarchy during linguistic processing. The present paper presents the methodology and results from the pilot study of the main project.¹ The introduction begins with an explanation of predictive processing theory and the types of meditation, followed by a description of, and the reasoning behind, the methodology and the chosen experimental paradigm.

¹ Due to COVID-19 restrictions, the experimental study was interrupted during the pilot stages, and thus data from only five participants was collected.

Predictive Processing

According to predictive processing theories, the brain constantly generates predictions about the world that are iteratively updated to uncover reliable causes of external sensations. Updates to these models are driven by prediction error (PE), computed from the mismatch between predictions and incoming bottom-up sensory input (Correia et al., 2015). PE is the information propagated through the hierarchy to higher cognitive areas, and weights are attributed to PE in order to determine whether the incoming signal is important, or whether it is ambiguous and noisy (Pagnoni et al., 2019; Friston, 2010).
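The core of this update scheme can be sketched in a few lines. The following is our own minimal, single-level illustration (the function name, learning rate, and values are not taken from the cited literature): the prior belief is nudged toward the sensory input in proportion to the precision assigned to the prediction error.

```python
# Minimal single-level sketch of a precision-weighted belief update. The
# function name, learning rate, and values are our own illustration, not a
# model from the cited literature.
def update_belief(mu, sensory_input, precision, learning_rate=0.1):
    prediction_error = sensory_input - mu        # bottom-up mismatch (PE)
    weighted_pe = precision * prediction_error   # attention as precision weighting
    return mu + learning_rate * weighted_pe

mu = 0.0                                         # prior belief
for _ in range(100):                             # repeated exposure to the input
    mu = update_belief(mu, sensory_input=1.0, precision=1.0)
print(round(mu, 3))                              # converges toward the input (1.0)

# With zero precision the PE is ignored and the prior is kept unchanged:
print(update_belief(0.5, 1.0, precision=0.0))    # 0.5
```

The precision parameter plays the role attributed to attention below: with high precision the input dominates, with low precision the prior is retained.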

The most recent formulation of predictive processing theory is the Free Energy Principle (FEP) proposed by Karl Friston (Friston, 2010; Friston & Stephan, 2007). According to the FEP, the goal of any biological system is to maintain homeostasis within an uncertain and changing external environment, and thus to minimize surprise (i.e., PEs that would threaten homeostasis). This can be done by updating the prior model to match the incoming sensory signal (perceptual inference) or by performing actions that bring the sensory signal in line with the prediction (active inference). At the lower levels of the hierarchy, representations most closely resemble the actual sensory input, without influence from prior experience or emotion; as signals are propagated to higher levels, the representations become more abstract and conceptual (Friston & Stephan, 2007). The mechanism thought to govern the precision and weighting of PEs, and therefore the reliance on either the incoming sensory signal or prior models, is equated with attention (Feldman & Friston, 2010; Friston, 2010).

Flexibility of generated predictions is important given our dynamic and uncertain external environment; however, the degree of flexibility varies. Some predictions that are generally held to be true, learned either from experience or across generations, are more stubborn and less influenced by PEs, such as the expectations that light originates from above and that objects fall downward (de Lange et al., 2018; Yon et al., 2018). Such stubborn predictions may be necessary for active inference and important for survival; however, some researchers believe that psychological disorders, such as depression, could be driven by these stubborn predictions (Clark et al., 2018; Edwards et al., 2012). Psychological disorders are associated with atypical neuromodulatory systems (Yon et al., 2018), and predictive processing proposes that these neuromodulatory systems control the weighting of PEs (Hacker et al., 2016). It is therefore worthwhile to explore the flexibility of predictions and the dynamics of weighting PEs, for a better understanding not only of the plasticity of the brain, but also of the possible mechanisms behind certain psychological disorders.

Meditation

Meditation has been described as the practice of engaging particular attentional sets for emotional and attentional regulation, with various end goals, including emotional balance, possibly leading to trait changes in self-experience (Lutz et al., 2008; Cahn & Polich, 2006). Meditation styles vary widely, but Lutz et al. (2008) provide two broad categories: Focused Attention (FA) and Open Monitoring (OM). FA meditation is a practice with a narrow focus of attention: the meditator focuses on a chosen object, such as the breath, and when attention strays from the object of focus, the meditator is instructed to shift it back. OM meditation, on the other hand, is a practice with a broader scope of attention: a passive, non-judgemental experience of thoughts, in which the meditator is instructed to notice thoughts that arise, let them go, and return to present awareness without rumination (Lutz et al., 2008). Overall, the scope of attention is the main distinction between these two meditation styles (Pagnoni, 2019).

Prior research on meditation provides promising evidence that predictive processes may be altered by the practice, from effects on higher-level predictions, such as positive reinforcement learning (Kirk & Montague, 2015), to arguably lower levels, such as image dominance in binocular rivalry (Carter et al., 2005). The ERP component thought to reflect, proportionally, the size of changes in PE is the mismatch negativity (MMN), which is elicited using oddball stimuli (Garrido et al., 2009; Winkler, 2007). Previous research found effects of meditation on the MMN, such that oddball stimuli elicited larger MMN amplitudes in expert meditators employing FA meditation (Fucci et al., 2018; Biedermann et al., 2016; Srinivasan & Baijal, 2007). These results suggest that FA meditation can affect the weighting of PE such that the brain applies higher precision weighting to incoming sensory stimuli (the object of focus during FA meditation) and incorporates the object into the prediction. With the object heavily weighted into the prior, the oddball stimuli elicit a larger MMN due to a large prediction error. Oddball stimuli do not, however, elicit the same MMN response in OM meditators, suggesting less reliance on priors and less weighting of incoming sensory input in OM meditation. This is logical in light of the predictive processing framework: FA ought to show higher precision weighting of sensory input than OM, due to attention being focused on the object, with both styles showing weaker top-down influences from cognitive priors (Pagnoni, 2019).

Linguistic processing is known to be hierarchical (Ding et al., 2016; Davis & Johnsrude, 2003), allowing for analysis of changes in the representations of stimuli at different levels of the processing hierarchy as a result of differential weighting of PEs in FA and OM meditation. Hence, the effects of the two meditation styles on the processing of auditory linguistic stimuli are the interest of the present project. In the next section, I discuss the methodology chosen for this type of analysis.

Utilizing Multivariate Pattern Analysis

To investigate the effects of meditation on processing in the hierarchy, we chose multivariate pattern analysis (MVPA), commonly called brain decoding. MVPA is frequently used in functional magnetic resonance imaging (fMRI) research to investigate cognitive processing. The technique is not yet commonplace for time-series neuroimaging data such as MEG/EEG, but it is proving to be a powerful tool alongside traditional univariate analyses (Grootswagers et al., 2017; Müller et al., 2008; Kamitani & Tong, 2005). Using machine learning, a classifier is trained on EEG data to determine a "decision boundary" that differentiates between brain responses to different stimuli. The classifier is then tested on new EEG data to decode between the different stimuli, having learned the hidden patterns in the brain responses (Herrmann et al., 2012). The mathematical procedures behind different decoding models vary, and classifiers can create linear or non-linear decision boundaries, but the underlying procedures are analogous (for a brief overview, see Lemm et al., 2010). The latency of the peak decoding accuracy reflects the stage of processing at which the ERP response carries the most important information for the representation of the stimuli (Cichy, Pantazis, & Oliva, 2014), and disparities in classification accuracy between stimuli may reflect how different the stimulus representations are. As such, MVPA allows for analysis of representations and how much they change along the predictive hierarchy (Heikel, Sassenhagen, & Fiebach, 2018).
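Time-resolved decoding of this kind can be sketched as follows. This is our own illustration on synthetic data (the array shapes, injected effect, and classifier choice are ours, not the study's pipeline): a linear classifier is trained at each time point of a (trials × channels × time) array and scored with cross-validated AUC, and decoding accuracy rises only after the simulated effect onset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Sketch of time-resolved decoding on synthetic "EEG" data (all parameters
# are our own illustrative choices, not the study's pipeline).
rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 80, 64, 20
X = rng.normal(size=(n_trials, n_channels, n_times))   # trials x channels x time
y = np.repeat([0, 1], n_trials // 2)                   # two stimulus classes
X[y == 1, :, 10:] += 0.8                               # class difference after "onset"

scores = []
for t in range(n_times):
    clf = LogisticRegression(max_iter=1000)
    # cross-validated AUC at this single time point (trials x channels)
    scores.append(cross_val_score(clf, X[:, :, t], y, cv=10,
                                  scoring="roc_auc").mean())

print(max(scores[:10]) < max(scores[10:]))  # decoding emerges after the effect
```

The resulting score-per-timepoint curve is the kind of trace shown in the decoding figures below, where the latency of the peak indexes when stimulus information is most available.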


Methods to improve decoding have been established to aid the classifier in identifying the subtle differences in neural responses and thereby improve decoding accuracy. These methods include averaging responses of trials of the same exemplar before decoding, increasing the number of stimulus presentations during the experiment, implementing particular cross-validation procedures based on the experimental design, and others (for a more thorough overview, see Grootswagers et al., 2017). In the pilot experiment, we employed two of these methods, with the goal of developing an experimental methodology and analysis pipeline that would provide the best decoding before we assess changes in decoding ability based on meditation style and expertise. Specifically, we were interested in the minimum number of stimulus presentations needed to obtain the highest decoding accuracy for our chosen stimuli, and in the effects of averaging trials of the same exemplar on decoding accuracy.
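The first of these methods, averaging trials of the same exemplar into "pseudo-trials", can be sketched as follows (the helper function and array shapes are our own, hypothetical choices):

```python
import numpy as np

# Sketch of pseudo-trial averaging (our own helper; shapes are hypothetical):
# trials of the same class are averaged in groups of k before decoding, which
# raises SNR at the cost of fewer training samples (Grootswagers et al., 2017).
def average_trials(X, y, k):
    """X: (trials, channels, times); y: class label per trial."""
    Xa, ya = [], []
    for label in np.unique(y):
        trials = X[y == label]
        for g in range(len(trials) // k):        # leftover trials are dropped
            Xa.append(trials[g * k:(g + 1) * k].mean(axis=0))
            ya.append(label)
    return np.array(Xa), np.array(ya)

X = np.random.default_rng(1).normal(size=(90, 64, 100))
y = np.repeat([0, 1, 2], 30)                     # 30 trials for each of 3 classes
X3, y3 = average_trials(X, y, k=3)
print(X3.shape)                                  # (30, 64, 100): 10 pseudo-trials per class
```

The trade-off this sketch makes explicit, fewer but cleaner samples, is exactly what the averaging analysis in the Results section probes.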

MVPA has an advantage over event-related potential (ERP) component analysis in that it is more sensitive to relevant information and subtle differences in neural responses, and can better pick up on oscillatory dynamics as well as phase-locked component dynamics (Correia et al., 2015). The method also relies less on a priori hypotheses about specific ERP component dynamics, whose identity can be ambiguous and/or highly task-dependent (Schacht & Sommer, 2009; Raettig & Kotz, 2008; Friedrich, 2006). ERP component analysis is used in the present experiment as a check for consistency between the measured ERP component dynamics and the trends seen in previous research for the chosen stimuli, as discussed in the next section.


Chosen Stimuli and Previous ERP Research

With linguistic processing occurring hierarchically (Davis & Johnsrude, 2003; Ding et al., 2016), we were interested in breaking the hierarchy down, for the present project, into three general components of linguistic processing. In order of descending levels of the hierarchy, these are the emotional, the semantic, and the linguistic components. To reflect them, the chosen stimuli were neutral words, negative emotion words (emotional component), pseudowords (semantic component), and pure tones (linguistic component). These were chosen in accordance with the previous ERP research discussed below, which demonstrates characterizable effects in ERP component dynamics for each stimulus type in distinct time windows. The time windows also correspond to levels in the processing hierarchy, with lower levels of processing occurring at earlier time points and higher levels at later time points.

Research has found effects of emotion-word processing as early as around 100 ms after stimulus onset (Gianotti et al., 2008), with the P200 (100-250 ms after stimulus onset) showing a larger effect for emotion words (Paulmann & Kotz, 2008). The most robust effects are late: the P300, varying in latency between 250 and 500 ms or more (Polich, 2007; Naumann et al., 1992), and the Late Positivity Component (LPC), occurring around 600 ms or later (Yang et al., 2019; Bayer, Sommer, & Schacht, 2010; Cuthbert et al., 2000). As such, we expect that brain responses to emotion words would differ most in the later time windows, corresponding to the highest levels of hierarchical processing.

The N400, originating around 300-600 ms after stimulus onset, is an ERP component typically associated with semantic processing or semantic predictability and is thought to reflect the effort required to retrieve lexical information (Duncan et al., 2009). The N400 is characterized by a larger amplitude and longer latency for pseudowords (Supp et al., 2004; Attias & Pratt, 1992). As such, we expect this component to reflect the time window for the semantic processing of words, occurring earlier than emotion processing and reflecting intermediate levels of hierarchical processing.

Non-linguistic stimuli, such as music and pure tones, show very different patterns of ERP component dynamics and can even show hemispheric specialization (Sidtis & Bryden, 1978). Pure tones can be distinguished from linguistic stimuli as early as 50ms (Okita et al., 1983), suggesting that differentiation between sound and speech occurs at the lowest levels of the predictive hierarchy.

The identified components and effects discussed above are largely task- and stimulus-dependent (Schacht & Sommer, 2009; Raettig & Kotz, 2008; Friedrich, 2006); moreover, these components do not occur in isolation, but within a dynamic and interactive whole-brain response to stimuli. Successful decoding of linguistic and/or auditory stimuli has been demonstrated in prior research: between category-specific information (Chan et al., 2011), between syntactically correct and incorrect sentences (Herrmann et al., 2012), and between vowel sounds and speakers (Formisano, 2005), among others (Correia et al., 2015; Ethofer et al., 2009). Our chosen stimulus types (neutral words, negative words, pseudowords, and pure tones) have not previously been decoded, yet their characterizable ERP component dynamics provide evidence of differential brain response patterns to each stimulus type; as such, we expect them to be decodable, however this classification manifests.

The Present Study

The main goal of this project is to investigate the effects of meditation on predictions in linguistic processing by decoding EEG responses to auditory stimuli in novice and expert FA and OM meditators. The results discussed in this paper are from the pilot study, run to verify the chosen stimuli and the decoding and analysis pipeline. The four groups of stimuli were chosen with the effects of meditation in mind to reflect, in order of descending levels of the processing hierarchy, the emotional (negative words), the semantic (pseudowords), and the linguistic (pure tones) components of auditory processing.

Hypotheses

When participants listen passively to the four auditory stimulus types (neutral words, negative words, pseudowords, and pure tones), we expect the measured EEG responses from these four categories to be decodable with above-chance accuracy. More specifically, to reflect specific components of linguistic processing, and in descending order of the processing hierarchy, we hypothesize decodability between negative versus neutral words (emotional) in the latest time windows; between words (negative and neutral) versus nonwords (pure tones and pseudowords), and words versus pseudowords (semantic), at intermediate time points; and between words versus pure tones (linguistic) at the earliest time points.

Methods

Participants and Ethics

Five Dutch-speaking participants were tested (3 males, 2 females; ages ranged from 18 to 29 years, with an average of 22.2 years). Participants reported normal bilateral hearing and no history of epilepsy or other neurological conditions. Participants either received course credit for their participation or volunteered. We did not ask participants to provide information about previous meditation experience. The experiment was approved by the research ethics committee of the Vrije Universiteit Medical Center.


Stimuli

Stimuli consisted of four categories of 15 unique Dutch words or tones each: negative words, neutral words, pseudowords, and pure tones.

Negative and Neutral

Negative and neutral words were taken from Moors et al. (2013), in which norms for 4,300 Dutch words were analyzed. Nine nouns and six verbs were chosen. Valence and arousal ratings ranged from 1 (low) to 7 (high); the average valence and arousal ratings were 1.66 and 5.31, respectively, for negative words, and 4.12 and 4.08 for neutral words. Words were chosen to have significantly different valence and arousal in order to ensure valence and arousal effects and thereby increase decoding ability. Negative and neutral words also differed in power (or dominance), with averages of 4.68 and 4.13, respectively. Negative and neutral words were controlled so as to not differ significantly in age of acquisition, frequency, log frequency, or phonological neighborhood size (CLEARPOND; Marian, Bartolotti, Chabal, & Shook, 2012; https://clearpond.northwestern.edu/dutchpond.php). The words were generated using Natural Reader (https://www.naturalreaders.com/), a text-to-speech synthesizer, at speed setting -3.

Pseudowords

Pseudowords were created from a large list of neutral words provided by Moors et al. (2013): two syllables from two random words were swapped in an attempt to conserve low-level properties. The results were fed into Natural Reader and then selected based on how "pseudo" they sounded to a native Dutch speaker. A word was considered "pseudo" if, on first presentation, it did not sound like any word known to the Dutch speaker. Pseudowords were controlled for bigram frequency relative to the neutral and negative words, using Wordgen (https://www.wouterduyck.be/?page_id=29).

Pure Tones

Pure tones were generated with the Operator instrument in Ableton Live 10 (ableton.com) using sine waves of randomized frequencies and intensities. Adjustments to the length of the pure tones were made using Audacity (audacity.com). Pure tones were matched to the words (negative, neutral, pseudo) on pitch, intensity, and length, as analyzed using Praat (Boersma & Weenink, 2020).
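For illustration, pure-tone synthesis of this kind can be sketched in a few lines. The actual stimuli were made in Ableton Live; the sampling rate, amplitude, and onset ramp below are our own illustrative choices.

```python
import numpy as np

# Sketch of pure-tone synthesis. The actual stimuli were made in Ableton Live;
# the sampling rate, amplitude, and ramp here are our own illustrative choices.
def pure_tone(freq_hz, duration_s, amplitude=0.5, sr=44100, ramp_s=0.01):
    n = int(round(sr * duration_s))
    t = np.arange(n) / sr
    tone = amplitude * np.sin(2 * np.pi * freq_hz * t)
    # short linear fade-in/out to avoid audible onset/offset clicks
    ramp = np.minimum(1.0, np.minimum(t, t[-1] - t) / ramp_s)
    return tone * ramp

tone = pure_tone(freq_hz=440.0, duration_s=1.15)   # matched to mean word length
print(len(tone))                                   # 50715 samples at 44.1 kHz
```

A short ramp matters here because an abrupt onset produces a broadband click, which would make the tones trivially distinguishable from the words at the earliest latencies.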

Design and Experimental Procedure

Every participant was given an information sheet and a consent form and was told that this was a pilot study for a meditation project. Participants were seated in a dark, soundproof room and told that they would hear words and sounds. They were instructed to sit up straight with their eyes closed, to passively attend to the sounds, and to redirect their attention back to the audio if they noticed themselves mind-wandering during the experimental session.

The order of stimulus presentation was pseudo-randomized such that no category or word was presented twice in a row. Each of the 15 stimuli in every category was presented six times in order to increase the signal-to-noise ratio (Chan et al., 2011), for a total of 360 stimulus presentations, with an interstimulus interval randomized between 1500 and 1700 ms. Stimuli were played over binaural speakers set to a volume level comfortable for the participant, determined by playing a few of the sounds they would hear. Auditory stimuli were presented using OpenSesame (Mathôt et al., 2012).
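The pseudo-randomization constraint can be implemented in several ways; one simple sketch (our own implementation, not the experiment's code) draws each next stimulus only from items whose category differs from the previous one, restarting if a greedy pass gets stuck:

```python
import random

# Sketch of the pseudo-randomization constraint (our own implementation):
# draw the next stimulus only from items whose category differs from the
# previous one, retrying from scratch if the greedy pass gets stuck.
def pseudo_randomize(stimuli, rng, max_tries=1000):
    """stimuli: list of (category, word) pairs, with repeats included."""
    for _ in range(max_tries):
        pool, order = list(stimuli), []
        while pool:
            valid = [s for s in pool if not order or s[0] != order[-1][0]]
            if not valid:
                break                      # dead end: restart with a new pass
            pick = rng.choice(valid)
            pool.remove(pick)
            order.append(pick)
        if not pool:
            return order
    raise RuntimeError("no valid order found")

rng = random.Random(4)
stimuli = [(cat, f"{cat}_{i}") for cat in ("neg", "neu", "pseudo", "tone")
           for i in range(15) for _ in range(6)]   # 4 categories x 15 words x 6 repeats
order = pseudo_randomize(stimuli, rng)
print(len(order))                                  # 360 presentations
```

Because a word can only repeat within its own category, the no-adjacent-category constraint automatically enforces the no-adjacent-word constraint as well.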


Responses were recorded in two blocks, with a five-minute break between them. In total, the experiment took approximately one hour, including set-up, the break, and cap removal.

EEG Data Acquisition

EEG was recorded at 512 Hz from 64 Ag/AgCl electrodes placed according to the 10-20 system, using the ActiveTwo system (BioSemi, http://www.biosemi.com). Reference electrodes were placed at the earlobes. Eye movements were measured using electrodes placed above both eyes and one below the right eye.

Preprocessing

All data were analyzed in a Python environment (Python Software Foundation, https://www.python.org/), and preprocessing was completed using custom-written scripts built on MNE functionality (https://github.com/dvanmoorselaar/DvM). A 0.1 Hz high-pass filter was applied using a zero-phase-shift Butterworth filter (MNE; Gramfort et al., 2013). Trials were epoched from 200 ms before to 2,000 ms after stimulus onset. To control for filter artifacts during preprocessing, epochs were extended by 500 ms at the start and end. Using an adapted automatic trial-rejection procedure (Fieldtrip toolbox; Oostenveld, Fries, Maris, & Schoffelen, 2011), artifact detection was performed on data band-pass filtered between 110 and 140 Hz. Variable z-score cutoffs for each participant were based on the within-subject variance of the z-scores (de Vries, van Driel, & Olivers, 2017). Between 8% and 41% of trials were removed per participant (M = 13%); one participant became agitated during the experiment, leading to noisy data and the removal of 41% of that participant's epochs. Although participants had their eyes closed for the duration of the experiment, independent component analysis (ICA) was performed using the Picard method (MNE; Gramfort et al., 2013) in case of eye-blink artifacts.


Stimulus onsets were adjusted prior to preprocessing because of silence at the start of the audio files. Each stimulus was analyzed by two researchers to determine the delay of sound onset; discrepancies in onset time between the two researchers were averaged.

Trigger onsets were also adjusted before preprocessing due to the OpenSesame audio presentation delay. Because the delay from OpenSesame is variable, we adjusted the trigger onsets by 47.24 ms, the average delay determined previously by Bridges et al. (2020).
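At the 512 Hz recording rate, this correction amounts to a shift of roughly 24 samples. The arithmetic (ours; the delay value is from the text, and expressing the correction in samples is only one possible implementation) is:

```python
# Our arithmetic for applying the published mean delay as a sample shift at
# the 512 Hz recording rate (one possible way to implement the correction):
delay_s = 0.04724        # mean OpenSesame audio delay (Bridges et al., 2020)
sr = 512                 # EEG sampling rate in Hz
delay_samples = round(delay_s * sr)
print(delay_samples)     # 24: triggers are shifted later by ~24 samples
```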

Results

ERP Component Analysis

As we were underpowered with only five participants, the main interest in conducting ERP component analysis was to visually inspect the ERP waveforms to confirm that the general trends of the ERP responses are consistent with previous literature. We expected early differential processing of pure tones, a larger N400 for pseudowords, and late effects in the LPC for negative words.

All ERP analyses were done in a Python environment using the MNE toolbox (Gramfort et al., 2013). Epochs were baseline-corrected using the 200 ms pre-stimulus period and averaged over participants. Analyses were completed using all electrodes, then frontal, then central electrodes. Non-parametric permutation cluster tests were used to test significance at each time point, with few assumptions about the data and automatic control of the family-wise error rate (Maris & Oostenveld, 2007). Analyses were pairwise (negative versus neutral, negative versus pseudo, negative versus pure, neutral versus pseudo, neutral versus pure, and pure versus pseudo), collapsing across participants.
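The permutation logic underlying such tests can be illustrated with a deliberately simplified, point-wise sketch (ours). The actual analysis used cluster-based permutation testing, which additionally corrects for multiple comparisons across time points (Maris & Oostenveld, 2007); this sketch only shows the label-shuffling null distribution at a single time point.

```python
import numpy as np

# Heavily simplified, point-wise permutation test (our sketch). The actual
# analysis used *cluster-based* permutation testing, which additionally
# corrects for multiple comparisons across time (Maris & Oostenveld, 2007).
def permutation_pvalue(a, b, n_perm=2000, rng=None):
    rng = rng or np.random.default_rng(0)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)         # shuffle condition labels
        diff = perm[:len(a)].mean() - perm[len(a):].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)          # two-sided p with correction

rng = np.random.default_rng(3)
a = rng.normal(1.5, 1.0, 30)   # e.g. condition A amplitudes at one time point
b = rng.normal(0.0, 1.0, 30)   # condition B amplitudes
print(permutation_pvalue(a, b) < 0.05)
```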


Figure 1 shows the grand-averaged ERP for all four stimulus types (negative, neutral, pseudo, and pure tones) over the frontal electrodes. The frontal electrodes were chosen because they showed the clearest ERP component dynamics for all four stimulus types. In the early time points, between 100 and 200 ms, there is a positive, then a negative, deflection in the pure-tone condition that deviates from the initial components of the other conditions. Around 600 ms there is a visibly large negative deflection in the pseudoword condition, indicating the N400, which was previously found to show its largest negative deflection for pseudowords.

The results of this analysis indicate that the ERP component dynamics for each stimulus type are consistent with previous literature: pure tones deviate from words early on (Okita et al., 1983), pseudowords have a large effect on the N400 (Supp et al., 2004; Attias & Pratt, 1992), and negative words show late effects in the LPC (Bayer, Sommer, & Schacht, 2010; Cuthbert et al., 2000). This indicates that we are eliciting the expected responses for each stimulus type.


Figure 1. ERP plot of frontal electrodes. ERP plot for all four stimulus types, grand-averaged over participants and trials for the frontal electrodes (Fp1, AF7, AF3, F1, F3, F5, F7, Fpz, Fp2, AF8, AF4, AFz, Fz, F2, F4, F6, and F8). The 0 time point indicates stimulus onset, and the grey bar indicates the average length of stimulus presentation (1.15 seconds).

Multivariate Decoding Analysis

The four stimulus types (negative, neutral, pseudo, and pure) were expected to be decodable, more specifically between the components of the linguistic hierarchy. The contrasts expected to be decodable with above-chance accuracy were negative versus neutral words, reflecting the emotional component; words (negative and neutral) versus pseudowords and words versus nonwords (pseudowords and pure tones), reflecting the semantic component; and words versus pure tones, reflecting the linguistic component. A four-way classification was used for the analysis of the minimum number of stimulus presentations needed for the highest decoding accuracy.


MVPA was conducted using a linear classifier, the backward decoding model (BDM), on each sampled time point, using 10-fold cross-validation. All 64 channels were used during training. Training and classification were done separately per participant, and the main analysis was collapsed across all participants. Decoding was done using single-trial data, and using averages of 2 and of 3 trials, in order to increase the signal-to-noise ratio (SNR; Grootswagers et al., 2017; Chan et al., 2011). Classification accuracy was quantified as the area under the receiver operating characteristic curve (AUC), which plots the true-positive rate against the false-positive rate; a score of 0.5 represents chance-level performance and a score of 1 a perfect prediction (Fyshe, 2019). Significant clusters of decoding were determined using non-parametric permutation cluster testing.

Emotional

The results in Figure 2C show that the decoding model could discriminate between negative and neutral words in the latest time points (1.9-2.0 seconds after stimulus onset), reflected in the significant cluster (p < 0.05) indicated by the blue line.

Semantic

As demonstrated in Figures 2A and 2B, the decoding model could discriminate between words and nonwords, with a cluster of significant decoding (p < 0.05) from 285 to 488 ms after stimulus onset. Significant decoding was not found between words and pseudowords.

Linguistic

The results in Figure 2D demonstrate that the decoding model could discriminate between words and pure tones, with a significant decoding cluster (p < 0.05) beginning in the earliest time points, from 74 to 574 ms after stimulus onset.


Figure 2. Decoding plots for the emotional, semantic, and linguistic components. Decoding accuracies (area under the curve, AUC) for all components (emotional, semantic, linguistic), computed on broadband EEG data using all 64 channels, from -200 ms to 2000 ms. The 0 ms time point indicates stimulus onset, and the grey bar indicates the average length of stimulus presentation (1.15 seconds). Significant clusters (p < 0.05) are indicated with a solid blue line. The dashed line at 0.50 indicates chance performance. A. Classifier performance for words (negative and neutral) versus pseudowords. B. Classifier performance for words versus nonwords (pseudowords and pure tones). C. Classifier performance for negative versus neutral words. D. Classifier performance for words versus pure tones.

Decoding Methods

Averaging trials is typically done to improve the signal-to-noise ratio (SNR; Chan et al., 2011; Grootswagers et al., 2017) and to decrease the required computing power (Grootswagers et al., 2017); however, it did not help in the present experiment. As shown in Figure 3A, the period of significant decoding decreased and changed location after averaging trials. Decoding accuracy before stimulus onset also increased, and the data became noisier.

We were also interested in the minimum number of repeated stimulus presentations needed to obtain the highest decoding accuracy. As depicted in Figure 3B, using a four-way classification, the highest average decoding accuracy lies in the range of 100-140 trials per exemplar. However, the lack of a plateau in decoding performance suggests that peak decoding accuracy was not reached.


Figure 3. Results from Decoding Methods: Averaging and Increasing Trials. The figures depict the results of two decoding methods: averaging trials and increasing the number of stimulus presentations. A. Results from averaging 0, 2, and 3 trials over the same exemplar when decoding words versus pure tones, collapsed across participants. Decoding was performed on broadband EEG data across all 64 channels, and accuracy was calculated as the area under the ROC curve (AUC). Periods of significance (p < 0.05) after non-parametric cluster testing are indicated with a blue line. Dashed lines at 0.50 indicate chance performance. The 0 ms timepoint indicates stimulus onset. B. Results from downscaling the number of trials used per exemplar when decoding between all four stimulus groups (negative, neutral, pseudowords, and pure tones). The dashed line signifies chance decoding (25%). The average decoding accuracy was calculated by first decoding between the four stimulus types using all trials (after removing epochs during pre-processing) individually for each participant, using actual decoding accuracy (percentage of correct classifications). This was then repeated with fewer trials per exemplar (downscaling), until only 10 trials per exemplar remained. Decoding accuracies were then averaged across the entire epoch (-200 ms to 2000 ms) to obtain the average decoding accuracy, collapsed across participants.

Discussion

Meditation requires the engagement of particular attentional sets (Lutz et al., 2008), and the weighting of prediction errors is thought to be modulated by attention (Feldman & Friston, 2010; Friston, 2010). Meditation therefore has the capability to affect the propagation of PEs and the representations of concepts at different levels of processing. Previous research has shown that meditation changes prediction errors during auditory processing (Fucci et al., 2018; Biedermann et al., 2016; Srinivasan & Baijal, 2007); however, the use of predictive processing theory to explain the state and trait effects of meditation is still in its infancy. We sought to investigate meditation's effects on the predictive hierarchy further, using the linguistic processing hierarchy as a foundation and breaking it down into three components representing, in descending order of the hierarchy, the emotional, semantic, and linguistic components of processing. The stimuli, namely neutral words, negative words (emotion), pseudowords (semantic), and pure tones (linguistic), were chosen to reflect these components, with differentiated neural responses occurring at time points corresponding to levels in the processing hierarchy. The main goal of this pilot study was to validate the stimuli as well as the decoding analysis pipeline and methodology.

The results of the visual analysis of the ERPs revealed that the chosen stimuli elicited the neural responses expected from previous ERP research. The pure tones deviated from the words early on (Okita et al., 1983), the pseudowords elicited the largest N400 (Supp et al., 2004; Attias & Pratt, 1992), and the negative words elicited the greatest effects in the LPC (Bayer, Sommer, & Schacht, 2010; Cuthbert et al., 2000). That said, the ERP component dynamics were not the main concern of this project, as we wanted to utilize MVPA to better capture whole-brain dynamics and representations.

Our decoding results supported our hypothesis that not only are the chosen stimuli decodable, but that the linguistic, semantic, and emotional components of linguistic processing can be decoded in time windows reflecting their level in the processing hierarchy. Firstly, significant decoding between words and pure tones, the "linguistic" component, was found in the earliest time window, corresponding to the lowest levels of the hierarchy. Secondly, significant decoding between negative and neutral words, the "emotional" component, was found in the latest time windows, corresponding to the highest levels of the hierarchy. Thirdly, significant decoding for the semantic component, between words and non-words, was found later than for the pure tones and earlier than for negative versus neutral words, corresponding to intermediate levels of the hierarchy. However, this significant decoding window was found between words and non-words but not between words and pseudowords, suggesting that the significance may have been driven by the pure tones in the non-words group. Nonetheless, the decoding between the stimulus groups and the components of linguistic processing was successful.

Implications for the Main Experiment

The main experiment will use these stimuli in conditions where expert meditators employ FA and OM meditation. Both FA and OM meditation practices are known to reduce emotional reactivity (Lutz et al., 2008); as such, we expect that when these stimuli are tested with expert meditators, decoding between negative and neutral words, the emotional component, will not reach significance. OM practice, however, also requires disengagement from thoughts, such that any thought that appears should be let go without rumination. This suggests a lack of meaning attached to the thought; in the case of our experiment, it would imply a disengagement from the semantic meaning of each word. If this occurs, we can expect that decoding between words and non-words, or words and pseudowords, will also not reach significance.


The effect of FA and OM meditation on the linguistic component of processing is exploratory: some expert meditators have previously been shown to manipulate processing at the lowest levels of the hierarchy in binocular rivalry (Carter et al., 2005), but our particular experimental paradigm may confound effects at this level. The pure tones, used to reflect linguistic component processing at the lowest levels of the hierarchy, are oddball stimuli with only a 25% chance of presentation, which may elicit the MMN effects seen in previous research (Feldman & Friston, 2010; Friston, 2010). The MMN effect was not found in OM meditation conditions; however, breaking down the processing hierarchy at its lowest levels requires not only time but also meditation expertise, as the subjects in the study conducted by Carter et al. (2005) included Tibetan Buddhist monks with up to 54 years of experience. While the altered experiences of binocular rivalry in these monks were inconsistent, they did nonetheless occur. We may find that some meditators are able to reach a meditative state during the experiment in which the linguistic component of processing is not decodable. If the effect is not found when collapsed across the OM meditation condition, decoding analysis of individual meditators may reveal it.

When the meditation conditions are included in the experiment, a powerful method of extended analysis after decoding, for investigating meditation's effects on predictions, is the temporal generalization matrix (King & Dehaene, 2014). With this method, a classifier trained at a timepoint t is tested against every other timepoint in the EEG epoch. The accuracies are then plotted in a Generalization Across Time (GAT) matrix with training times on the x-axis and testing times on the y-axis. This method of analysis can provide information about representations over time. For example, visual linguistic stimuli that tend to be easily decodable at an early timepoint could be decoded well on the basis of ERP responses to particular configurations or patterns of white and black pixels on the screen. A classifier trained at this early timepoint is then tested on data at later timepoints in the epoch. If the decoding accuracy remains high at timepoints where the stimulus is no longer presented on the screen, this suggests that the classifier is performing well based on semantic features of the stimulus rather than its low-level properties (Fyshe, 2020). This type of analysis is a powerful tool in neurocognitive language studies, as it can provide information about the presence of semantic representations at different time points, located anywhere in the brain, and addresses the issues of ERP component identity ambiguity and overlap (Heikel, Sassenhagen, & Fiebach, 2018; King & Dehaene, 2014).

The analysis of the GAT matrix is qualitative: the shapes of the significant decoding clusters in the matrix provide the information about the presence of representations at different timepoints, such that, for example, a perfect square reflects a representation sustained over time. A review of GAT patterns and their corresponding interpretations can be found in King and Dehaene (2014). GAT analysis with meditation conditions may reveal interesting information about the dynamic changes in representations produced by the differences between FA and OM practices. We can expect, for example, that when using GAT decoding for words versus pseudowords, semantic representations of words will be present later in time for FA meditators, depicted in the matrix as classifiers trained at early timepoints performing well at later timepoints. We may not see the same results with OM meditators. Overall, decoding is a very powerful method for this experiment, with the ability to investigate the effects of meditation on representations in the hierarchy from multiple perspectives.
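A minimal numpy sketch of the temporal generalization procedure follows, assuming simulated epochs with a sustained class difference; the mean-difference classifier and all parameters are illustrative stand-ins for the real pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated epochs with a class difference that is *sustained* from t = 20 on
n_trials, n_ch, n_t = 100, 8, 40
X = rng.normal(size=(n_trials, n_ch, n_t))
y = np.repeat([0, 1], n_trials // 2)
pattern = rng.normal(size=n_ch)              # fixed spatial pattern of the effect
X[y == 1, :, 20:] += 0.6 * pattern[:, None]

train = rng.permutation(n_trials)[:50]
test = np.setdiff1d(np.arange(n_trials), train)

# Temporal generalization: train at each timepoint, test at every timepoint
gat = np.zeros((n_t, n_t))
for t_tr in range(n_t):
    w = (X[train][y[train] == 1, :, t_tr].mean(0)
         - X[train][y[train] == 0, :, t_tr].mean(0))
    thr = X[train][:, :, t_tr].mean(0) @ w   # midpoint-like decision threshold
    for t_te in range(n_t):
        pred = (X[test][:, :, t_te] @ w > thr).astype(int)
        gat[t_tr, t_te] = (pred == y[test]).mean()

# A sustained representation appears as a square of high accuracy (t >= 20)
print(gat[25:, 25:].mean() > gat[:15, :15].mean())
```

Plotting `gat` as a heatmap reproduces the square-versus-diagonal patterns that King and Dehaene (2014) interpret as sustained versus transient representations.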


Although we reached significant decoding with only five participants, there are a few limitations of the pilot study whose correction may lead to higher decoding accuracy overall. One limitation is the delay of audio presentation from OpenSesame. Although we attempted to account for this by adjusting the trigger onset times before pre-processing, we used the average delay value of 47.24 ms, while the delay from OpenSesame has a variance of at least 8.56 ms (Bridges et al., 2020). Machine classifiers are sensitive to millisecond changes in the ERP signal, so this variance could have negatively affected decoding ability. Future studies could use a different stimulus presentation program, such as MATLAB (www.mathworks.com/products/matlab), or load the audio onto an external audio chip before presentation.
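The mean-delay correction described above can be sketched as shifting every trigger's sample index forward by the average delay; the sampling rate and event table below are hypothetical values chosen for illustration.

```python
import numpy as np

# Assumed values for illustration: the mean audio onset delay reported above
# and a hypothetical EEG sampling rate (the real recording rate may differ)
delay_s = 0.04724
sfreq = 512.0

# Minimal event table: [sample index, previous value, trigger code]
events = np.array([[1000, 0, 1],
                   [2536, 0, 2],
                   [4102, 0, 3]])

# Shift every trigger forward by the mean delay so epochs lock to the true
# audio onset; note this removes only the *mean* delay, not the
# trial-to-trial jitter that the Discussion identifies as the problem
corrected = events.copy()
corrected[:, 0] += int(round(delay_s * sfreq))

print(corrected[:, 0])  # each onset moved 24 samples (~47 ms) later
```

Because a single constant is added, the residual jitter around the mean is untouched, which is exactly why a presentation program with lower timing variance (or hardware audio triggering) is preferable.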

Another limitation could be the negative words chosen. Prior research has shown differential processing of emotion words and emotion-laden words (Knickerbocker & Altarriba, 2013). Emotion words are words such as "love", "hate", and "happy", while emotion-laden words contextually imply emotion, such as "cancer" and "death". Our stimuli include both, for example "haten" (to hate) and "bloedbad" (massacre), which could have caused inconsistency in the responses and corresponding representations of the negative word exemplars. A future study could control for this by using only emotion words or only emotion-laden words.

We also attempted to improve decoding accuracy using trial averaging (Grootswagers et al., 2017; Chan et al., 2011), without success. The downscaling results also showed that we did not include enough trials to reach peak decoding accuracy. Together, these findings suggest that we did not include enough presentations of each stimulus, and that averaging reduced the number of trials available to the classifier too far. With fewer trials there is less data for the classifier to be trained on, so information is lost or noisy, making classification more difficult. In similar experimental paradigms, researchers used 5-10 trial repetitions for the highest decoding accuracy (Correia et al., 2015; Schaefer et al., 2011). The current experiment used 12 trial repetitions (six repetitions in each block), but many epochs were dropped. The future experiment could therefore further increase the number of stimulus presentations and repeat the downscaling analysis to obtain the ideal number of presentations for the highest decoding accuracy, and also re-examine the results of averaging with the increased number of trials.

Lastly, although it did not reach significance, all of our decoding plots showed above-chance decoding before stimulus onset. This could be a simple issue of baseline correction before decoding, or may indicate that more participants are needed. Nonetheless, since the decoding before stimulus onset did not reach significance, it is not a critical concern for our results.
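Baseline correction before decoding can be sketched as subtracting each trial's pre-stimulus mean, per channel; the sampling rate, epoch length, and offset below are illustrative assumptions, not the recording parameters of this study.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative epochs (trials, channels, samples): -200 ms to 2000 ms at an
# assumed 100 Hz, with a shared DC offset standing in for slow drift
sfreq, tmin = 100.0, -0.2
epochs = rng.normal(size=(30, 64, 220)) + 5.0

# Subtract each trial's and channel's mean over the -200 ms to 0 ms window
n_base = int(round((0.0 - tmin) * sfreq))   # 20 pre-stimulus samples
baseline = epochs[:, :, :n_base].mean(axis=2, keepdims=True)
corrected = epochs - baseline

print(abs(corrected[:, :, :n_base].mean()))  # pre-stimulus mean is ~0
```

Centering the pre-stimulus window at zero in this way removes trial-specific offsets that a classifier could otherwise exploit before stimulus onset.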

Conclusion

In this paper, I have shown that the experimental methodology and analysis pipeline, with a few corrections, will be valid for the main experiment testing expert meditators. Visual ERP component analysis revealed the expected neural responses to the stimuli, and the classifier was able to decode between the emotional, linguistic, and semantic components of linguistic processing. The use of decoding in this experimental paradigm will be powerful in revealing the impact of meditation and meditation styles on processing at different levels of the predictive hierarchy.


References

Attias, J., & Pratt, H. (1992). Auditory event related potentials during lexical categorization in the oddball paradigm. Brain and Language, 43(2), 230–239. https://doi.org/10.1016/0093-934X(92)90129-3

Bayer, M., Sommer, W., & Schacht, A. (2010). Reading emotional words within sentences: The impact of arousal and valence on event-related potentials. International Journal of Psychophysiology, 78(3), 299–307. https://doi.org/10.1016/j.ijpsycho.2010.09.004

Biedermann, B., De Lissa, P., Mahajan, Y., Polito, V., Badcock, N., Connors, M. H., ... & McArthur, G. (2016). Meditation and auditory attention: An ERP study of meditators and non-meditators. International Journal of Psychophysiology, 109, 63-70.

Boersma, Paul & Weenink, David (2020). Praat: doing phonetics by computer [Computer program]. Version 6.1.09, retrieved 26 January 2020 from http://www.praat.org/

Bridges, D., Pitiot, A., MacAskill, M. R., & Peirce, J. W. (2020). The timing mega-study: Comparing a range of experiment generators, both lab-based and online [Preprint]. PsyArXiv.

https://doi.org/10.31234/osf.io/d6nu5

Cahn, B. R., & Polich, J. (2006). Meditation states and traits: EEG, ERP, and neuroimaging studies. Psychological bulletin, 132(2), 180.

Carter, O. L., Presti, D. E., Callistemon, C., Ungerer, Y., Liu, G. B., & Pettigrew, J. D. (2005). Meditation alters perceptual rivalry in Tibetan Buddhist monks. Current Biology, 15(11), R412-R413.

Chan, A. M., Halgren, E., Marinkovic, K., & Cash, S. S. (2011). Decoding word and category-specific spatiotemporal representations from MEG and EEG. NeuroImage, 54(4), 3028–3039. https://doi.org/10.1016/j.neuroimage.2010.10.073


Cichy, R. M., Pantazis, D., & Oliva, A. (2014). Resolving human object recognition in space and time. Nature neuroscience, 17(3), 455.

Clark, J.E., Watson, S., Friston, K.J., 2018. What is mood? A computational perspective. Psychol. Med. 48, 2277–2284. doi:10.1017/S0033291718000430

Correia, J. M., Jansma, B., Hausfeld, L., Kikkert, S., & Bonte, M. (2015). EEG decoding of spoken words in bilingual listeners: From words to language invariant semantic-conceptual

representations. Frontiers in Psychology, 6(FEB), 1–10.

https://doi.org/10.3389/fpsyg.2015.00071

Cuthbert, B. N., H. T. Schupp, M. M. Bradley, N. Birbaumer, and P. J. Lang. 2000. Brain potentials in affective picture processing: covariation with autonomic arousal and affective report. Biological Psychology 52(2). 95–111.

Davis, M. H., & Johnsrude, I. S. (2003). Hierarchical processing in spoken language comprehension. Journal of Neuroscience, 23(8), 3423-3431.

De Lange, F. P., Heilbron, M., & Kok, P. (2018). How do expectations shape perception?. Trends in cognitive sciences, 22(9), 764-779.

De Vries, I. E., van Driel, J., & Olivers, C. N. (2017). Posterior α EEG dynamics dissociate current from future goals in working memory-guided visual search. Journal of Neuroscience, 37(6), 1591-1603.

Ding, N., Melloni, L., Zhang, H., Tian, X., & Poeppel, D. (2016). Cortical tracking of hierarchical linguistic structures in connected speech. Nature Neuroscience, 19, 158–164. https://doi.org/10.1038/nn.4186

Duncan, C. C., Barry, R. J., Connolly, J. F., Fischer, C., Michie, P. T., Näätänen, R., Polich, J., Reinvang, I., & Van Petten, C. (2009). Event-related potentials in clinical research: Guidelines for eliciting, recording, and quantifying mismatch negativity, P300, and N400. Clinical Neurophysiology, 120(11), 1883–1908. https://doi.org/10.1016/j.clinph.2009.07.045

Edwards, M.J., Adams, R.A., Brown, H., Pareés, I., Friston, K.J., 2012. A Bayesian account of “hysteria”. Brain 135, 3495–3512. doi:10.1093/brain/aws129

Ethofer, T., Van De Ville, D., Scherer, K., & Vuilleumier, P. (2009). Decoding of emotional information in voice-sensitive cortices. Current biology, 19(12), 1028-1033.

Feldman, H., & Friston, K. (2010). Attention, uncertainty, and free-energy. Frontiers in human neuroscience, 4, 215.

Formisano, E., De Martino, F., Bonte, M., & Goebel, R. (2008). “Who” Is Saying “What”? Brain-Based Decoding of Human Voice and Speech. Science, 322(5903), 970–973.

https://doi.org/10.1126/science.1164318

Friedrich, C. K., Eulitz, C., & Lahiri, A. (2006). Not every pseudoword disrupts word recognition. Behavioral and Brain Functions, 2(1), 36. https://doi.org/10.1186/1744-9081-2-36.

Friston, K. (2010). The free-energy principle: a unified brain theory?. Nature reviews neuroscience, 11(2), 127-138.

Friston, K. J., & Stephan, K. E. (2007). Free-energy and the brain. Synthese, 159(3), 417–458.

Fucci, E., Abdoun, O., Caclin, A., Francis, A., Dunne, J. D., Ricard, M., ... & Lutz, A. (2018). Differential effects of non-dual and focused attention meditations on the formation of automatic perceptual habits in expert practitioners. Neuropsychologia, 119, 92–100.

Fyshe, A. (2020). Studying language in context using the temporal generalization method. Philosophical Transactions of the Royal Society B: Biological Sciences, 375(1791), 20180531. https://doi.org/10.1098/rstb.2018.0531


Garrido, M. I., Kilner, J. M., Stephan, K. E., & Friston, K. J. (2009). The mismatch negativity: a review of underlying mechanisms. Clinical neurophysiology, 120(3), 453-463.

Gianotti, L. R. R., Faber, P. L., Schuler, M., Pascual-Marqui, R. D., Kochi, K., & Lehmann, D. (2008). First valence, then arousal: The temporal dynamics of brain electric activity evoked by emotional stimuli. Brain Topography, 20(3), 143–156. https://doi.org/10.1007/s10548-007-0041-2.

Gramfort, A., Luessi, M., Larson, E., Engemann, D. A., Strohmeier, D., Brodbeck, C., ... &

Hämäläinen, M. S. (2014). MNE software for processing MEG and EEG data. Neuroimage, 86, 446-460.

Grootswagers, T., Wardle, S. G., & Carlson, T. A. (2017). Decoding Dynamic Brain Patterns from Evoked Responses: A Tutorial on Multivariate Pattern Analysis Applied to Time Series Neuroimaging Data. Journal of Cognitive Neuroscience, 29(4), 677–697.

https://doi.org/10.1162/jocn_a_01068


Haker, H., Schneebeli, M., & Stephan, K. E. (2016). Can Bayesian theories of autism spectrum disorder help improve clinical practice?. Frontiers in psychiatry, 7, 107.

Heikel, E., Sassenhagen, J., & Fiebach, C. J. (2018). Decoding semantic predictions from EEG prior to word onset. bioRxiv, 393066.

Herrmann, B., Maess, B., Kalberlah, C., Haynes, J.-D., & Friederici, A. D. (2012). Auditory perception and syntactic cognition: Brain activity-based decoding within and across subjects: Brain-based decoding within and across subjects. European Journal of Neuroscience, 35(9), 1488–1496. https://doi.org/10.1111/j.1460-9568.2012.08053.x


Jiang, Z., Li, W., Liu, Y., Luo, Y., Luu, P., & Tucker, D. M. (2014). When affective word valence meets linguistic polarity: Behavioral and ERP evidence. Journal of Neurolinguistics, 28, 19–30. https://doi.org/10.1016/j.jneuroling.2013.11.001

Kamitani, Y., & Tong, F. (2005). Decoding the visual and subjective contents of the human brain. Nature neuroscience, 8(5), 679-685.

King, J.-R., & Dehaene, S. (2014). Characterizing the dynamics of mental representations: The temporal generalization method. Trends in Cognitive Sciences, 18(4), 203–210.

https://doi.org/10.1016/j.tics.2014.01.002

Kirk, U., & Montague, P. R. (2015). Mindfulness meditation modulates reward prediction errors in a passive conditioning task. Frontiers in psychology, 6, 90.

Knickerbocker, H., & Altarriba, J. (2013). Differential repetition blindness with emotion and emotion-laden word types. Visual Cognition, 21(5), 599-627.

Lemm, S., Blankertz, B., Dickhaus, T., & Müller, K.-R. (2011). Introduction to machine learning for brain imaging. NeuroImage, 56(2), 387–399. https://doi.org/10.1016/j.neuroimage.2010.11.004

Lutz, A., Slagter, H. A., Dunne, J. D., & Davidson, R. J. (2008). Attention regulation and monitoring in meditation. Trends in Cognitive Sciences, 12(4), 163–169. https://doi.org/10.1016/j.tics.2008.01.005

Marchand, W. R. (2012). Mindfulness-based stress reduction, mindfulness-based cognitive therapy, and Zen meditation for depression, anxiety, pain, and psychological distress. Journal of Psychiatric Practice, 18(4), 233–252.


Marian, V., Bartolotti, J., Chabal, S., & Shook, A. (2012). CLEARPOND: Cross-linguistic easy-access resource for phonological and orthographic neighborhood densities. PloS one, 7(8), e43230.

Maris, E., & Oostenveld, R. (2007). Nonparametric statistical testing of EEG-and MEG-data. Journal of neuroscience methods, 164(1), 177-190.

Mathôt, S., Schreij, D., & Theeuwes, J. (2012). OpenSesame: An open-source, graphical experiment builder for the social sciences. Behavior Research Methods, 44(2), 314-324.

doi:10.3758/s13428-011-0168-7

Moors, A., De Houwer, J., Hermans, D., Wanmaker, S., van Schie, K., Van Harmelen, A.-L., De Schryver, M., De Winne, J., & Brysbaert, M. (2013). Norms of valence, arousal, dominance, and age of acquisition for 4,300 Dutch words. Behavior Research Methods, 45(1), 169–177.

https://doi.org/10.3758/s13428-012-0243-8

Müller, K. R., Tangermann, M., Dornhege, G., Krauledat, M., Curio, G., & Blankertz, B. (2008). Machine learning for real-time single-trial EEG-analysis: from brain–computer interfacing to mental state monitoring. Journal of neuroscience methods, 167(1), 82-90.

Naumann, E., Bartussek, D., Diedrich, O., & Laufer, M. E. (1992). Assessing cognitive and affective information processing functions of the brain by means of the late positive complex of the event-related potential. Journal of Psychophysiology, 6(4), 285–298.

Okita, T., Konishi, K., & Inamori, R. (1983). Attention-related negative brain potential for speech words and pure tones. Biological Psychology, 16(1-2), 29-47.

Oostenveld, R., Fries, P., Maris, E., & Schoffelen, J. M. (2011). FieldTrip: open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Computational intelligence and neuroscience, 2011.


Pagnoni, G. (2019). The contemplative exercise through the lenses of predictive processing: A promising approach. In Progress in Brain Research (Vol. 244, pp. 299–322). Elsevier. https://doi.org/10.1016/bs.pbr.2018.10.022

Paulmann, S., & Kotz, S. A. (2008). Early emotional prosody perception based on different speaker voices. Neuroreport, 19(2), 209-213.

Polich, J. (2007). Updating P300: an integrative theory of P3a and P3b. Clinical neurophysiology, 118(10), 2128-2148.

Raettig, T., & Kotz, S. A. (2008). Auditory processing of different types of pseudo-words: An event-related fMRI study. NeuroImage, 39(3), 1420–1428.

https://doi.org/10.1016/j.neuroimage.2007.09.030

Schacht, A., & Sommer, W. (2009). Time course and task dependence of emotion effects in word processing. Cognitive, Affective, & Behavioral Neuroscience, 9(1), 28-43.

Schaefer, R. S., Farquhar, J., Blokland, Y., Sadakata, M., & Desain, P. (2011). Name that tune: Decoding music from the listening brain. NeuroImage, 56(2), 843–849.

https://doi.org/10.1016/j.neuroimage.2010.05.084

Shapiro, S. L., Carlson, L. E., Astin, J. A., & Freedman, B. (2006). Mechanisms of mindfulness. Journal of clinical psychology, 62(3), 373-386.

Sidtis, J. J., & Bryden, M. P. (1978). Asymmetrical perception of language and music: Evidence for independent processing strategies. Neuropsychologia, 16(5), 627-632.

Srinivasan, N., & Baijal, S. (2007). Concentrative meditation enhances preattentive processing: a mismatch negativity study. Neuroreport, 18(16), 1709-1712

Supp GG, Schlögl A, Gunter TC, Bernard M, Pfurtschneller G, Petsche H: Lexical memory search during N400: Cortical couplings in auditory comprehension. NeuroReport 2004, 15:1209-1213.


Winkler, I. (2007). Interpreting the mismatch negativity. Journal of Psychophysiology, 21(3-4), 147.

Yang, H., Laforge, G., Stojanoski, B., Nichols, E. S., McRae, K., & Köhler, S. (2019). Late positive complex in event-related potentials tracks memory signals when they are decision relevant. Scientific Reports, 9(1), 1–15.

Yon, D., de Lange, F. P., & Press, C. (2019). The Predictive Brain as a Stubborn Scientist. Trends in Cognitive Sciences, 23(1), 6–8. https://doi.org/10.1016/j.tics.2018.10.003
