
Internship report

Andreas Sebastian Wolters

UvA ID: 11 11 97 64

Start date: 9th of February, 2016

End date: 11th of October, 2016

Credits: 32

A pilot study to examine the lateralisation and discriminability of the neural correlates of facial expressions

Supervisor #1: Efraïm Salari

Supervisor #2: Dr. Mathijs Raemaekers

Co-assessor: Dr. Max Keuken

Department of Neurology and Neurosurgery,

Rudolf Magnus Institute of Neuroscience,

University Medical Center Utrecht

TABLE OF CONTENTS

1. INTRODUCTION
1.1 LOCKED-IN SYNDROME AND THE NEED FOR A BRAIN-COMPUTER INTERFACE (BCI)
1.2 THE CURRENT STATE OF BCIS
1.3 WILFUL FACIAL EXPRESSIONS AND BCIS
1.4 THE NEURAL CORRELATES OF FACIAL EXPRESSIONS
1.5 FACIAL ASYMMETRIES
1.6 LATERALISATION
1.7 THIS STUDY
1.8 RESEARCH QUESTIONS
1.9 OUTLINE
2. GENERAL METHODS
2.1 PARTICIPANTS
2.2 FACIAL EXPRESSIONS
2.3 TRAINING
2.4 EXPERIMENTAL DESIGN
2.5 TASK
2.6 FMRI SET UP
2.7 PRE-PROCESSING OF FMRI DATA
2.8 CORTICAL RECONSTRUCTION AND GENERATION OF MASKS
2.9 TASK EXECUTION ANALYSIS
3. LATERALISATION
3.1 LATERALISATION: METHODS
3.2 LATERALISATION: RESULTS
3.3 LATERALISATION: DISCUSSION
4. CLASSIFICATION
4.1 CLASSIFICATION: METHODS
4.2 CLASSIFICATION: RESULTS
4.3 CLASSIFICATION: DISCUSSION
5. FACIAL ASYMMETRIES
5.1 FACIAL ASYMMETRIES: METHODS
5.2 FACIAL ASYMMETRIES: RESULTS
5.3 FACIAL ASYMMETRIES: DISCUSSION
6. GENERAL DISCUSSION
6.1 LIMITATIONS OF THIS STUDY
6.2 FUTURE DIRECTIONS
7. CONCLUSIONS
8. ACKNOWLEDGEMENTS
9. REFERENCES
10. SUPPLEMENTARY MATERIALS


A pilot study to examine the lateralisation and discriminability of the neural correlates of facial expressions

A.S. Wolters1, E. Salari2, M. Raemaekers2, M.C. Keuken3, N.C. Ramsey2

1 Master of Science 'Brain and Cognitive Sciences', Institute of Interdisciplinary Sciences, University of Amsterdam, Amsterdam, The Netherlands

2 Department of Neurology and Neurosurgery, Rudolf Magnus Institute of Neuroscience, University Medical Center Utrecht, Utrecht, The Netherlands

3 Faculty of Social and Behavioral Sciences, Developmental Psychology, University of Amsterdam, Amsterdam, The Netherlands

Abstract

For patients affected by total Locked-In Syndrome, a paralysis of all wilful muscle function, any form of communication is impossible (Patterson & Grabois, 1986). A brain-computer interface is a system that continuously monitors recordings of brain activity and recognises a target set of signals as a system input; this can enable these patients to communicate again through computer-assisted spelling (Birbaumer, 2006). The communication speeds of the current systems are, however, slow (Nicolas-Alonso & Gomez-Gil, 2012). For this study, it was proposed that a brain-computer interface based on wilful facial expressions could enable users to quickly communicate an emotional state. It is, however, not known whether the neural activation patterns that are correlated with wilful facial expressions are lateralised; this has often been hypothesised, as subjects display systematically asymmetric movements during facial expressions, with the left face half being more expressive (Sackeim & Gur, 1978). This pilot study attempts to assess (1) whether the neural correlates of wilful facial expressions are lateralised and (2) whether these activation patterns can be classified significantly above chance level, indicating a potential application in a brain-computer interface. Three healthy participants (two females, aged 21-28) took part in this pilot study. Subjects were instructed to display one of four facial expressions whilst brain activity was recorded by a 7-Tesla magnetic resonance imaging scanner. It was consistently shown that the spread of the correlated neural patterns over the motor and somatosensory cortex is significantly larger in the right hemisphere (with a combined lateralisation index of -0.4357); other measures of lateralisation showed bilateral results. Classification accuracies significantly above chance level were found in all participants; the mean classification accuracy for the optimal parameter settings was 75%, localised to the motor and somatosensory cortices.

In conclusion, this study has brought forward strong indications that (1) the neural correlates of wilful facial expression are right-lateralised over the motor and somatosensory cortex and (2) that these neural correlates can be classified with high accuracies, making further research into brain-computer interfaces based on wilful facial expressions an auspicious line of research.


1. Introduction

1.1 Locked-In Syndrome and the need for a brain-computer interface (BCI)

Locked-In Syndrome describes the paralysis of intentional, or wilful, muscle functioning; it can arise from a variety of conditions, such as amyotrophic lateral sclerosis, brain stem stroke or spinal cord injury (Patterson & Grabois, 1986). If a patient is incapable of carrying out any wilful muscle contraction whatsoever, including eye movements, the Locked-In Syndrome is described as total (tLIS). For tLIS patients, communication is impossible (Patterson & Grabois, 1986).

A BCI is a system that can monitor and recognise certain sets of brain signals; it allows a computer program to be controlled solely by signals stemming from the cortex (Wolpaw et al., 2002). To achieve this, signals are acquired and then processed to efficiently recognise a target set of brain signals. Target signals are recognised within the temporal stream of brain recordings; these can then be used to steer an effector device, be that an electric wheelchair, robotic arm or computer cursor (Ortiz-Rosario & Adeli, 2013). In more abstract terms, a BCI creates an artificial output channel for the nervous system, which can, for example, enable patients to communicate again (Birbaumer, 2006).

Signals are commonly acquired through electrophysiological measurements (such as electroencephalography [EEG] or electrocorticography [ECoG]; Leuthardt et al., 2009) or neuroimaging techniques (e.g. functional near-infrared spectroscopy [fNIRS]; Hong et al., 2015). EEG measurements entail temporarily placing a grid of electrodes on the scalp to measure electrical activity stemming from the neurons' ionic currents (Buzsáki et al., 2012). Whilst vast amounts of temporal data can be collected (acquisition rates of up to 2000 Hertz are feasible), the spatial resolution of an EEG suffers from what is known as the inverse problem. This problem entails that signals cannot be localised unambiguously, as some signals cancel each other out before reaching the electrodes (Grech et al., 2008); furthermore, electrical activity is diffused by the scalp (Nuwer, 1988).

ECoG entails implanting a grid of electrodes directly on an area of the cortical surface; whilst this does not fully circumvent the inverse problem, source localisation of ECoG signals is vastly improved over EEG systems (Nakasato et al., 1994), allowing recordings of near-surface cortical areas in high spatial resolution (Leuthardt et al., 2009). Measuring signals via ECoG grids also offers high temporal resolution, similar to that of an EEG (Leuthardt et al., 2009). Furthermore, the frequency range that can be reliably recorded with an ECoG is significantly wider than that of an EEG, which is important for studying a variety of functions; movements, for example, are accompanied by high frequency activity (Pfurtscheller et al., 1993; Ball et al., 2008).

fNIRS, which leverages light in the near-infrared range to measure the ratio of deoxygenated versus oxygenated haemoglobin, offers very accurate source localisation, but suffers from a temporal delay in the signal that is being acquired. Changes in cerebral blood flow are recorded, which are known to occur with a delay of multiple seconds after the correlated activity took place (Buccino et al., 2016). Hence, fNIRS is often combined with EEG systems in BCI applications (e.g. Fazli et al., 2011). In summary, due to the combination of high temporal and spatial resolution, as well as the ability to uniquely pick up correlates of motor function, using ECoG has been described as the "ideal trade-off for practical implementation of BCIs" (Leuthardt et al., 2009, p. 5).

1.2 The current state of BCIs

Many recent attempts at creating BCIs for communication are based on computer-assisted spelling (Birbaumer, 2006); one way of implementing these systems is to use digital spelling tables. A cursor runs through all rows first. When the cursor reaches the row that contains the desired letter, the user has to generate the signal that the BCI is tuned to (e.g. imagining clenching the fist) in order to select that row. It is known that, if no movement is imagined or planned, a synchronised electrical activity pattern around nine to eleven Hertz (the mu-wave) can be observed over the brain areas involved in producing movements. When movement is carried out, a desynchronisation of that mu-wave occurs, alongside a synchronisation in the gamma-wave power band (Pfurtscheller et al., 1993). These signals are then picked up in the brain recordings by the BCI and a selection of that row is executed (e.g. Leuthardt et al., 2009). The letter is then selected by going through the columns in a similar fashion (e.g. Yeom & Sim, 2008). Most of the previous applications were EEG-based systems; attempts to leverage ECoG grids for BCIs have only been made recently. These studies showed promising results, even though the speed of communication remains a problem for all types of BCI systems (e.g. Brunner et al., 2011). Whilst many advancements have recently been made to improve that communication speed, it is still significantly lower than naturally occurring communication (Nicolas-Alonso & Gomez-Gil, 2012).
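To make the row-column scanning mechanism concrete, the sketch below is a hypothetical illustration (not code from any of the cited systems); the detect() callback stands in for whatever detector the BCI is tuned to, e.g. one triggered by mu-wave desynchronisation.

```python
from typing import Callable, List

def select_letter(table: List[List[str]], detect: Callable[[], bool]) -> str:
    """Row-column scanning: a row is chosen on the first detection,
    then a column within that row on the second detection."""
    selected_row = None
    while selected_row is None:
        for row in range(len(table)):          # cursor highlights each row in turn
            if detect():                       # user generates the target brain signal
                selected_row = row
                break
    while True:
        for col in range(len(table[selected_row])):  # cursor highlights each column
            if detect():
                return table[selected_row][col]
```

One selection therefore requires two detections, which is part of why spelling-based communication is slow.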

1.3 Wilful facial expressions and BCIs

It can be assumed that there is a specific need for communication of emotional states in tLIS patients; this can currently be achieved via spelling out an emotional reaction, which, as outlined above, suffers from low communication speed. It is, however, conceivable that, to quickly communicate an emotional state, patients could intend to smile and the system would spell out 'I am happy', which would drastically speed up emotional communication. This is why, in this study, it is attempted to understand if the neural correlates of wilful (defined as intentionally produced, or posed) facial expressions could serve as an input signal for a BCI.

Emotional states have been assessed as BCI inputs in a variety of studies. Phenomena like valence and arousal (e.g. Heger et al., 2014) or moods like anger, disgust or envy (e.g. Kassam et al., 2013) are usually decoded, most commonly via EEG or fNIRS (cf. Liberati et al., 2015 for a meta-analysis). Emotional states are, however, problematic as direct BCI inputs, as emotions are hard to control; patients are also unlikely to intend to communicate all their emotional states. In summary, the neural correlates of wilful facial expressions could allow patients to selectively communicate intended emotional reactions; data of brain activity during movement is also easily recorded using ECoG (e.g. Brunner et al., 2011).

In order to use the neural correlates of a certain function as an input to a BCI, the cognitive state of the participant needs to be classifiable based on neural recordings alone (cf. Iacoviello et al., 2015). Classification can be achieved through a variety of algorithms (Lotte et al., 2007). The target function needs to be classifiable with extremely high accuracy to create a BCI system that supports patients without eliciting frustration; classification accuracies of at least 90% are needed (Huggins et al., 2011).

To our knowledge, no attempts have been made to classify wilful facial expressions based on their neural correlates. Non-emotional facial movements have, however, been classified with extremely high accuracy levels before; mouth movements of the lips, tongue, jaw or larynx were classifiable in the range of 90% and above based on their neural correlates as measured in a high-field functional magnetic resonance imaging (fMRI) scanner (Bleichner et al., 2015). Classification accuracies using fMRI data are likely to be reproducible with ECoG recordings, as localised increases in blood-oxygen level dependent (BOLD) activity in fMRI data have been shown to correlate with increases in the high-gamma power band in ECoG data (Hermes et al., 2011). Hence, it can be hypothesised that wilful facial expressions are highly discriminable and have the potential to be applied as an input to a BCI.

1.4 The neural correlates of facial expressions

This leads to the question of which brain areas can be assumed to show activation patterns that correlate with the occurrence of wilful facial expressions. To our knowledge, only one neuroimaging study has been carried out that attempts to understand the neural correlates of facial expressions. In this low-resolution positron emission tomography study it was shown that, during an emotionally-induced facial movement (either smiling or laughing), blood flow increased to the supplementary motor area (SMA) and the left putamen, but not the primary motor cortex. When participants were, however, asked to intentionally mimic a smile or laughter, blood flow increases were found in the face area of the bilateral primary motor cortex and the SMA (Iwase et al., 2002). This is in line with results from anatomical studies (cf. Rinn, 1984), in which it was shown that facial expressions are produced through different neural pathways, depending on whether an expression was made wilfully or as an impulse reaction to express a subject's emotionality. A double dissociation has been shown to exist between the wilful and emotional facial motor pathways (Rinn, 1984). Wilful facial expressions are thought to originate in the prefrontal cortex, with signals being relayed through the motor cortices to reach the motor nucleus after passing through the corticobulbar projections of the pyramidal tract. Impulsive, or emotionally-induced, facial expressions, however, arise from the extrapyramidal system, which is thought to encompass mostly subcortical structures (Morecraft et al., 2004). Hence, it can be assumed that the facial area of the primary motor cortex carries motor-related signals correlating with the occurrence of wilful facial expressions; this area is the main region of interest (ROI) for an ECoG-based BCI signal acquisition.

1.5 Facial asymmetries

There are, however, indications that the neural correlates of facial expressions are lateralised to the right hemisphere (i.e. that more pronounced neural activation patterns can be observed in the right hemisphere; Borod et al., 1983). This is due to the finding that human facial expressions are rated as more expressive in composite images that artificially duplicate and combine the left face half (displaying that expression), when compared to the equivalent composite image of the right face half (Sackeim & Gur, 1978). These differences in subjective ratings have not only been shown for emotionally-induced facial expressions; the effect was also shown to occur during wilful expressions (Borod et al., 1983). This effect is not specific to humans; it was also shown in chimpanzees in both subjective ratings and computer-based movement analysis, an approach to quantify the amount of facial movement (Fernandez-Carriba et al., 2002).

Movement analysis has also been applied to examine the asymmetry of human facial expressions. Nicholls et al. (2004) used a 3D camera to record participants wilfully displaying either happiness or sadness. They then overlaid a neutral baseline image and assessed the movement of pixels. Both expressions resulted in more movement on the left hemiface, with more pronounced differences during sadness. A further study by Desai (2009) used 2D video recordings of subjects expressing either happiness, sadness, anger, fear, surprise or disgust. They computed facial movement by measuring entropy, which was defined as the sum of pixel intensity differences across two subsequent frames. More movement was found in the left hemiface of all male participants; mixed results were obtained for female participants. In conclusion, the effect of asymmetry was relatively consistently shown in both subjective ratings and movement analyses of human facial expressions.

1.6 Lateralisation

It has often been argued that this facial asymmetry suggests right hemispheric dominance (or lateralisation), as the facial nerves are contralaterally innervated (cf. Borod et al., 1983). Neuroscientific evidence for this lateralisation effect in facial expressions has, to our knowledge, not been brought forward.

Another reason why brain activity related to facial expressions is thought to be lateralised is that facial asymmetries cannot be consistently shown in left-handed subjects (Rubin & Rubin, 1980). It is known that 27% of left-handers show a complete reversal of hemispheric specialisation, i.e. language functions being lateralised to the right, amongst other reversals (Knecht et al., 2000); hence, it has been argued that the inconsistencies of facial asymmetries in left-handed subjects are an indicator for lateralisation, as a causal relationship between these inconsistencies and a potential lateralisation to the right hemisphere can be hypothesised (cf. Rubin & Rubin, 1980).

1.7 This study

In this study high-field fMRI recordings were taken to measure brain activity during wilful facial expressions in both hemispheres. It was attempted to establish whether the neural correlates of wilful facial expressions are lateralised; this was followed up by an examination of facial asymmetries in the same participants. It was also attempted to classify facial expressions based on their neural correlates. If classification accuracies above chance level can be found, then these are likely to be achievable via ECoG as well, as signals have been shown to be correlated between localised increase in BOLD activity and increases in the high-gamma power band (Hermes et al., 2011); hence, this study can give an indication of (1) which hemisphere of the sensorimotor cortices is likely to carry the majority of the correlated activation patterns and (2) whether the neural correlates of wilful facial expressions are likely to be discriminable enough to be a potential input to a BCI system.

1.8 Research questions

Q1. Can right-lateralised activation patterns over the sensorimotor cortices be observed during wilful facial expression?

Q2. Can these activation patterns be classified significantly above chance level?

1.9 Outline

In section two of this report the general methods are outlined. Subsequently, the methods, results and discussion specific to lateralisation (section three), classification (section four) and facial asymmetries (section five) are described. This is followed up by the general discussion, conclusions and acknowledgements in sections six, seven and eight, respectively.

2. General methods

2.1 Participants

Three healthy participants, aged between 21 and 28 years, were recruited; two of these were female. All of them completed the Edinburgh Handedness Inventory (Oldfield, 1971) and were shown to be fully right-handed. This was a requirement for participation, as previous research failed to consistently show facial asymmetries in left-handed subjects (Rubin & Rubin, 1980). All participants were native Dutch speakers.

Participants were in good health, with no history of neurological or psychiatric disorders. Written informed consent was given by the participants prior to the experiment. The study was approved by the ethical committee of the University Medical Center Utrecht and was in agreement with the declaration of Helsinki (2008).

2.2 Facial expressions

In this study, participants were asked to reproduce the facial expressions of happiness, sadness, surprise and disgust. These expressions were chosen as they were shown to be easily reproducible wilfully (Field & Walden, 1982), to ensure consistent reproductions by the participants; keeping the variance in execution low is important for classification, as less variance results in more overlap in the exemplars that are derived from the training data (see 4.1). Furthermore, these facial expressions were shown to be highly discriminable in recordings of facial electromyography (EMG; Hamedi et al., 2011) and by video-based analysis (Taner et al., 2014), making it easier to assess whether participants carried out the task without mistakes (see Task execution analysis, section 2.9).

2.3 Training

During the testing and development phase of the task it was found that there are fundamental differences in how subjects display each of the four facial expressions. Furthermore, some subjects exhibited significant movement around the shoulders and neck, even when instructed to move as little as possible; this might cause artefacts in the fMRI data. A training program was hence developed to thoroughly introduce participants to the correct execution of the facial expressions.

The training program was segmented into two phases; the first phase contained explanations about how to carry out each facial expression, which was followed by a self-directed phase. During the explanatory phase participants received instructions on the detailed movements for each of the facial expressions, which were taken from the book 'The Art of Pantomime', written by Charles Aubert and published in 1927. The instructions were translated from English to Dutch. Images of the facial expressions to be carried out were displayed alongside the descriptions; these were taken from the Cohn-Kanade AU-coded facial expression database (Lucey et al., 2010); see figure 1.

After this explanatory phase, participants were presented with an interface which allowed them to see a camera image of themselves next to the example images (see figure 2); they were instructed to spend ten minutes trying to mimic the example images as closely as possible. After five of these ten minutes had elapsed, the experimenter entered the room to assess (1) whether the facial expressions were carried out as instructed and (2) whether any avoidable shoulder, neck or head movement was present; further instructions were provided, if necessary.

Figure 1. Example of the explanation phase of the training programme.

Figure 2. Example of the self-directed phase of the training programme.


2.4 Experimental design

This study attempts to test three hypotheses. Firstly, it was examined whether the hypothesised right-lateralised activation patterns over the sensorimotor cortices (cf. Borod et al., 1983) can be observed. Secondly, it was tested whether the neural correlates of wilful facial expressions can be classified with accuracy levels significantly above chance. These two research questions were followed up by an analysis of whether facial asymmetries can be reproduced using subjective ratings (e.g. Sackeim & Gur, 1978) and movement analyses (e.g. Nicholls et al., 2004), to draw a link between the occurrence of lateralisation and facial asymmetries.

2.5 Task

2.5.1 Task for asymmetry recording

After the training was completed, subjects were asked for their approval to be recorded via video camera for the next stage (all participants agreed); these video recordings were used to generate the images for the facial asymmetry analysis (for both subjective ratings and movement analysis).

Stimuli were displayed for 3000 milliseconds, followed by a fixation cross that was displayed for 6000 milliseconds (see figure 3). The stimulus duration was chosen with respect to the fMRI set up and the ability to carry out classification with the neural correlates (see sections 2.5.2 and 4). Subjects were instructed to display each facial expression for the duration of the stimulus presentation and to relax their facial muscles whilst the fixation cross was displayed. Stimuli were displayed either in text form or as an image to be imitated; this was introduced to be able to exclude reading- or imitation-specific activity in the fMRI experiment (cf. Hauk et al., 2004). The images were taken from the Cohn-Kanade AU-coded facial expression database (Lucey et al., 2010).

Forty trials were carried out, containing ten of each facial expression; of these ten, five were presented in text form, with the other five being presented as images to be imitated. The stimulus presentation was randomised and counterbalanced.

Figure 3. Diagram of the task for asymmetry recording.

2.5.2 Task for fMRI data collection

The task for the fMRI phase followed a similar structure to the task for the asymmetry recording; the trial duration was, however, altered to accommodate a slow event-related paradigm, which is necessary for classification. Using this paradigm, the data of each trial carries no traces of hemodynamic responses to previous trials (cf. Mumford et al., 2012). One trial equalled eight scans with a repetition time of 2.1 seconds, which amounts to roughly 16.8 seconds per trial; see figure 4. Multiple runs with different amounts of trials were carried out; see section 2.6.


Figure 4. Diagram of the task during fMRI data collection.

2.6 fMRI set up

The fMRI recordings were collected using a 7-Tesla Philips Achieva MRI system with a 32-channel head coil; the scanner is situated in the University Medical Center Utrecht. Functional images were acquired using an echo-planar imaging sequence (TR/TE = 2100/25 milliseconds, flip angle = 70°, 41 slices with no gap, acquisition matrix 128 × 128, voxel size 1.809 millimetres isotropic, interleaved slice acquisition); these images were acquired in anterior-posterior orientation. A high-resolution T1-weighted image was acquired for anatomical reference (3D TFE, TR/TE = 7/2.99 milliseconds; flip angle = 8°; voxel size 0.984 × 0.984 × 1 millimetre).

Anatomical scans were collected first from each participant. After that, the first run of functional data was collected, lasting 641 dynamics (or eighty trials), during which participants were instructed to carry out the facial expressions (active run). During the second run of collecting functional data, which lasted 321 dynamics (or forty trials), participants were instructed to attend to the stimuli but not carry out the facial expressions (inactive run). Inactive runs were implemented so that perception- and language-related processes (cf. Fadiga et al., 2005) can be isolated from motor processes and hence excluded during further analysis. For the third run, 641 dynamics were collected (or eighty trials); participants were instructed to carry out the facial expressions. Not all participants completed all runs due to (1) technical issues with the scanner (the gradient coil overheated for participant three), (2) experimental delays due to testing and (3) this study being a pilot (the inactive run was only introduced for participants two and three); see table 1 for details.

Participant   Anatomical scans   Run 1   Run 2   Run 3
#1            ✓                  ✓       ⤫       ⤫
#2            ✓                  ✓       ✓       ⤫
#3            ✓                  ✓       ✓       Overheated

Table 1. List of participants and which runs were completed by whom; 'Overheated' stands for a run that had to be aborted as the gradient coil overheated.

2.7 Pre-processing of fMRI data

The pre-processing steps described below, unless noted otherwise, were carried out using SPM12 (available under http://www.fil.ion.ucl.ac.uk/spm/software/spm12). As the first pre-processing step, functional scans were re-aligned to the first scan of the first run to reduce motion artefacts. These realigned functional scans were then slice-time corrected to correct for differences in acquisition timing over slices within one dynamic. The T1 image was then corrected for field inhomogeneities by dividing the T1-weighted image by the proton density image (Van de Moortele et al., 2009). Cortical reconstruction and volumetric segmentation were performed with the FreeSurfer image analysis suite, which is documented and freely available for download online (http://surfer.nmr.mgh.harvard.edu/). This also provided labels of cortical regions (Desikan et al., 2006), which are necessary to localise activity (see 2.8 for more details). After these steps, different pre-processing steps were used for participant one and the remaining participants.

2.7.1 Spatial normalisation (for participant one)

The data of participant one showed signal dropouts across all functional scans, with a more pronounced dropout towards the occipital cortices, leading to an inaccurate co-registration of functional and anatomical scans. Hence, for this subject, a different pre-processing procedure was used, during which the functional scans were normalised to the anatomical scan (described in this section). Better results were expected due to the higher degrees of freedom used during the normalisation process. For the remaining participants the collected data showed no such signal dropout; hence, co-registration was used (see 2.7.2).

For participant one, the anatomical scan was first segmented, i.e. separate files containing only the voxels within grey matter, white matter and CSF were generated. These files are required to generate a template, which can be used as a target for the normalisation. To generate this target, the segmented files were recombined into a single template file using FSL (available under http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/). Subsequently, all functional scans of participant one were normalised (as a form of image registration) to the template generated from the anatomical scan.

2.7.2 Co-registration (for participants two and three)

For participants two and three, after the cortical reconstruction and volumetric segmentation were performed, functional scans were co-registered to match the skull-stripped and proton density inhomogeneity-corrected anatomical scan, so that they are in the same space as the masks and can be visualised.

2.7.3 Further pre-processing (common to all participants)

Each session was detrended, i.e. the first-, second- and third-order trends per voxel were removed. Normalisation over time was carried out subsequently, i.e. the mean BOLD response was subtracted from each voxel's time course, which was then divided by its standard deviation (SD).
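As an illustration of this step, the following sketch (assuming the data of one session is held as a two-dimensional time-by-voxels array; this is not the in-house pipeline itself) removes polynomial trends and normalises each voxel over time:

```python
import numpy as np

def detrend_and_normalise(data: np.ndarray, order: int = 3) -> np.ndarray:
    """data: (time, voxels) array for one session."""
    t = np.arange(data.shape[0])
    # Fit and subtract a polynomial (up to 3rd order) per voxel.
    coeffs = np.polynomial.polynomial.polyfit(t, data, deg=order)
    trend = np.polynomial.polynomial.polyval(t, coeffs).T
    residual = data - trend
    # Normalise over time: subtract the mean, divide by the SD, per voxel.
    return (residual - residual.mean(axis=0)) / residual.std(axis=0)
```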

2.8 Cortical reconstruction and generation of masks

Anatomically, the facial motor area is known to be at the inferior end of the pre- and postcentral gyrus, with the distinction between motor and somatosensory areas being less clear anatomically than in other functionally-defined areas of the motor and somatosensory cortices (Woolsey et al., 1979). Starting with the labels provided by the FreeSurfer cortical reconstruction (Desikan et al., 2006), four masks were generated so that different regions of interest could be analysed independently. These masks were defined anatomically, stretching over parts of the frontal cortex, more specifically the precentral gyrus (commonly associated with motor functions) and the postcentral gyrus (commonly associated with somatosensory functions). ROIall contains all voxels in the grey matter of the entire scanned area, ROIpre contains all voxels within the grey matter of the precentral gyrus, ROIpost contains all voxels within the grey matter of the postcentral gyrus and ROIprepost contains all voxels within the grey matter of the pre- and postcentral gyrus. Eight further masks were produced that capture the voxels in either the right or the left hemisphere within each of these ROIs.

2.9 Task execution analysis

Video recordings of the mouth area of the participants were taken whilst they were in the fMRI scanner. These were inspected visually, and trials were manually excluded if either (1) no facial movement was present during stimulus presentation or (2) the wrong facial movement was present in the mouth area. Due to space constraints in the scanner, video recordings could not be taken of the upper facial areas. Facial EMG measures were taken, but were later discarded as the electrodes loosened during the scanning process.

3. Lateralisation

3.1 Lateralisation: Methods

3.1.1 General linear model (GLM) analysis

A first-level GLM analysis was carried out using SPM12; this regression analysis was conducted on a single-subject level and contained the factors that best model the BOLD response after being convolved with a hemodynamic response function (Calhoun et al., 2004).

The GLM analysis was carried out for every finished run; see table 1 (section 2.6) for an overview. This analysis resulted in four T-maps, one for each condition, containing a T-value for every voxel; this value represents how closely that voxel matched the expected activation profile in the form of the convolved GLM regressor. These maps were visualised on the anatomical scans and then inspected; for the active runs, it was expected to find significant activation over the facial area of the pre- and postcentral gyrus. For the inactive runs, it was hypothesised that no such activation patterns should be visible; activation patterns were also expected towards the occipital cortices in both active and inactive runs due to the visual stimulus presentation method of this experiment.

3.1.2 Exclusion of inactivity and stimuli type specificity

Certain voxels have been removed prior to the lateralisation analysis, based on two exclusion criteria. Firstly, any voxel that was significantly activated (i.e. T-value above 4.8, the significance threshold of the GLM analysis) during an inactive run (only collected for participants two and three) was excluded, in order to remove potential biases due to lateralisation effects of motor processes related to perception and processing (cf. Fadiga et al., 2005).

Secondly, any voxel that was significantly activated only during one of the types of stimuli, i.e. image or text, was also removed prior to the lateralisation analysis. This was carried out to remove potential biases due to lateralisation effects of reading- and imitation-related activation patterns over the motor cortex (cf. Hauk et al., 2004).

3.1.3 Lateralisation and lateralisation index

A common approach to measuring lateralisation is the lateralisation index, an index that ranges from -1 (fully right-lateralised) to +1 (fully left-lateralised); see formula 1. No standard way of reporting is apparent, as +1 sometimes describes the reverse dominance through the use of an alternative formula (Jansen et al., 2006). Generally, though, activation patterns resulting in a value between -0.2 and +0.2 are defined as bilateral, with any value below -0.2 being defined as right-lateralised and any value above +0.2 being defined as left-lateralised (Seghier, 2008). For this study, the range between 0.1 and 0.2 in either direction was defined as a trend towards that lateralisation.

LI = (QLH - QRH) / (QLH + QRH)

Formula 1. Formula for the lateralisation index; QLH stands for the quantitative measure of the left hemisphere, QRH stands for the quantitative measure of the right hemisphere and LI stands for 'lateralisation index'.

3.1.2.1 Measure of extent

The lateralisation index can be calculated using different quantitative measures. One of these is the measure of extent (MoE), which describes the procedure of counting all voxels with a T-value that exceeds a set threshold (in this case set to 4.8, as per the GLM analysis) in each hemisphere of a given area (the pre- and postcentral gyrus in this study); hence, inferences can be made about the spread of significant activation, not about how strongly each voxel correlates with the GLM (Seghier, 2008). This method was chosen as its results were shown to correlate with the Wada test (Binder et al., 1996), which is a highly reliable measure of lateralisation (Baxendale, 2009) but is invasive, as a barbiturate has to be injected into the blood stream of the participant (Binder et al., 1996).
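A minimal sketch of the MoE computation and the resulting lateralisation index (formula 1), assuming a 3D array of T-values and boolean masks for each hemisphere of the ROI; the thresholds match those defined in 3.1.3:

```python
import numpy as np

def moe_lateralisation_index(tmap, left_mask, right_mask, threshold=4.8):
    """Count suprathreshold voxels per hemisphere and apply formula 1."""
    q_lh = int(np.count_nonzero(tmap[left_mask] > threshold))
    q_rh = int(np.count_nonzero(tmap[right_mask] > threshold))
    return q_lh, q_rh, (q_lh - q_rh) / (q_lh + q_rh)

def interpret_li(li):
    """Interpret the index using the thresholds adopted in this study."""
    if li <= -0.2:
        return "right-lateralised"
    if li >= 0.2:
        return "left-lateralised"
    if li <= -0.1:
        return "bilateral, trend towards right"
    if li >= 0.1:
        return "bilateral, trend towards left"
    return "bilateral"
```

Applied to participant one's happiness condition (22 versus 60 suprathreshold voxels), this yields LI = -0.463, matching table 2.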

3.1.2.2 Measure of magnitude of signal change

Another quantitative measure used to describe lateralisation is the measure of magnitude of signal change (MMoSC). For this measure, the mean magnitude of signal change (in the form of β-values) of the 10% highest-activated voxels in each hemisphere is calculated and compared between hemispheres. It is crucial to keep the regions of interest roughly the same size between the hemispheres so that similar voxel quantities are compared (Jansen et al., 2006). This measure was included as it tests a different aspect of lateralisation, examining the strength of activation rather than its spread (Jansen et al., 2006).

3.2 Lateralisation: Results

3.2.1 Task execution analysis

No trials were excluded based on the inspection of the video recordings of participants’ mouth activity during fMRI data collection, as all participants performed the task well.


3.2.2 GLM model

The results of the GLM model, in the form of T-maps, were visualised on each participant's anatomical scan and visually inspected; only T-values exceeding the threshold of 4.8, as defined per GLM model, are displayed (see figure 5). For participants two and three, results were also visualised for the inactive run (see figure 6), which was not carried out with participant one.

For the active runs, as can be seen in figure 5, activation was found mostly over frontal, motor, somatosensory and occipital areas. For the inactive runs, as can be seen in figure 6, activation was found mostly over non-surface occipital areas.

Figure 5. Visualisation of activation that significantly correlates with the GLM of the active runs in (a) participant one, (b) participant two and (c) participant three. Happiness is marked in blue, sadness in green, surprise in red and disgust in yellow. The pre- and postcentral gyrus is highlighted in white.


Figure 6. Visualisation of activation that significantly correlates with the GLM of the inactive runs in (a) participant two and (b) participant three; inactive data was not collected from participant one. Happiness is marked in blue, sadness in green, surprise in red and disgust in yellow. The pre- and postcentral gyrus is highlighted in white.

3.2.3 MoE

The overall results from all three participants can be classified as 'right-lateralised', with only two out of twelve results classified as 'bilateral' with a trend towards right lateralisation; see table 2.

Participant   Condition   Active voxels (left hemisphere)   Active voxels (right hemisphere)   Lateralisation index
#1            Happiness   22                                60                                 -0.463
#1            Sadness     7                                 14                                 -0.333
#1            Surprise    19                                41                                 -0.367
#1            Disgust     18                                35                                 -0.321
#1            Combined    66                                150                                -0.389
#2            Happiness   49                                271                                -0.694
#2            Sadness     64                                291                                -0.639
#2            Surprise    33                                217                                -0.736
#2            Disgust     46                                239                                -0.677
#2            Combined    192                               1018                               -0.683
#3            Happiness   81                                153                                -0.308
#3            Sadness     74                                104                                -0.169
#3            Surprise    66                                118                                -0.283
#3            Disgust     68                                91                                 -0.145
#3            Combined    289                               466                                -0.234

Table 2. Table of MoE per participant and condition; 'Combined' stands for the summation of the values across all conditions in one participant.


3.2.4 MMoSC

Overall, results from two of the three participants have to be classified as 'bilateral' with a trend towards right lateralisation, whereas the activation patterns of one participant have to be classified as 'bilateral'; see table 3.

Participant   Condition   Mean signal magnitude (left hemisphere)   Mean signal magnitude (right hemisphere)   Lateralisation index
#1            Happiness   36.172                                    45.461                                     -0.114
#1            Sadness     33.838                                    42.864                                     -0.118
#1            Surprise    32.715                                    46.020                                     -0.169
#1            Disgust     36.418                                    48.089                                     -0.138
#1            Combined    34.768                                    45.608                                     -0.135
#2            Happiness   21.596                                    26.080                                     -0.094
#2            Sadness     20.164                                    24.666                                     -0.100
#2            Surprise    19.081                                    27.457                                     -0.180
#2            Disgust     20.085                                    26.700                                     -0.140
#2            Combined    20.231                                    26.226                                     -0.129
#3            Happiness   28.109                                    30.678                                     -0.044
#3            Sadness     27.770                                    26.902                                     +0.016
#3            Surprise    27.877                                    28.755                                     -0.016
#3            Disgust     28.054                                    26.914                                     +0.021
#3            Combined    27.952                                    28.312                                     -0.006

Table 3. Table of MMoSC per participant and condition; 'Combined' stands for the mean of the values across all conditions in one participant.

3.3 Lateralisation: Discussion

It was consistently shown in the measures of extent that the correlated neural patterns over the sensorimotor cortices are right-lateralised; only two out of twelve measures (16.67%) would be classified as ‘bilateral’, but showed a trend towards right lateralisation. The remaining ten measures are classified as ‘right-lateralised’ (83.33%). No such clear picture emerges from measuring the magnitude of signal change. Of the twelve measures, seven can be classified as ‘bilateral’ with a trend towards right lateralisation (58.3%), with five being classified as bilateral.

In conclusion, wilful facial expressions are consistently accompanied by neural correlates that show wider activation patterns in the right hemisphere of the sensorimotor cortices than in the left; this is in line with the hypothesis. This finding cannot be consistently repeated in the measures of the magnitude of signal change; all measures point towards bilateral results, although the majority of the measures show a trend towards right lateralisation. This suggests that there is a bigger area of activity in the right hemisphere during wilful facial expressions than in the left hemisphere, with no clear support (besides a trend towards the right hemisphere) for stronger activation values in either of the two hemispheres.


4. Classification

4.1 Classification: Methods

In this study, it was attempted to predict, solely based on the recorded neural correlates, which of the four facial expressions was carried out by a subject in a trial. A number of different parameters were iterated to find the parameter set that yields the highest mean classification accuracy. Combinations of two classification algorithms (pattern correlation and support vector machine), four procedures for feature selection (either using all voxels in a given ROI, or using analysis of variance, T-test statistics or T-maps for finding significantly task-related voxels), three alpha levels (0.05, 0.01 and Bonferroni-corrected) and four regions of interest (ROIall, ROIpre, ROIpost and ROIprepost) were tested.

4.1.2 Classification algorithms

Two classification algorithms were applied: Pattern Correlation (PC) and Support Vector Machine (SVM).

4.1.2.1 Pattern correlation (PC)

During the training phase of a PC algorithm, or template-matching procedure, spatial templates of each condition are generated by averaging the BOLD responses that occur during each trial of that condition, resulting in four templates. These templates are based on the data of half of the trials, i.e. all trials with odd identifiers (see 4.1.7). Not all voxels are included; the templates are based on a specific number of voxels that is defined through feature selection (see 4.1.4). The trial to be classified is then assigned to the template it shows the highest correlation with, measured by the Pearson correlation coefficient (Cover & Hart, 1967).
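A minimal sketch of this template-matching procedure, assuming trials are stored as a trials-by-voxels matrix restricted to the selected voxels (the variable names are hypothetical, not the in-house implementation):

```python
import numpy as np

def train_templates(trials, labels):
    """trials: (n_trials, n_voxels) array; returns {condition: mean pattern}."""
    return {lab: trials[labels == lab].mean(axis=0)
            for lab in np.unique(labels)}

def classify_trial(trial, templates):
    """Assign the trial to the template with the highest Pearson correlation."""
    corrs = {lab: np.corrcoef(trial, tmpl)[0, 1]
             for lab, tmpl in templates.items()}
    return max(corrs, key=corrs.get)
```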

4.1.2.2 Support Vector Machine (SVM)

An SVM is a type of supervised machine-learning algorithm used for classification and regression analyses. An SVM can be used on one-of-two-categories discrimination tasks. A geometric representation of all data points is generated as points in a space; iterations are run during which the parameters of a hyperplane are altered until it optimally separates the exemplar points of each class. Optimality is defined as the hyperplane that shows the largest distance (or functional margin) between the members of each class (Hsu et al., 2003). A linear basis function kernel (sometimes also described as an SVM with no kernel) was used, meaning that the hyperplane takes on a linear form (Hsu et al., 2003).

An in-house script was used to extend the capabilities of an SVM to a discrimination task with more than two categories; for this, multiple SVM algorithms are trained to discriminate, in each iteration, one class against one other class. In this case, with four classes, six pairwise SVM classifiers are needed to be able to discriminate all classes from each other (Weston & Watkins, 1999).
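For illustration, scikit-learn's built-in one-vs-one scheme reproduces this pairwise setup; the sketch below uses placeholder random data and is not the study's in-house script:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
train_trials = rng.normal(size=(40, 130))   # placeholder feature matrix
train_labels = np.repeat([0, 1, 2, 3], 10)  # four expression classes
test_trials = rng.normal(size=(40, 130))

# Linear kernel; SVC trains one SVM per pair of classes internally.
clf = SVC(kernel="linear", decision_function_shape="ovo")
clf.fit(train_trials, train_labels)
predicted = clf.predict(test_trials)
```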

4.1.2.3 Chance level and Markov Chain Monte Carlo (MCMC)

As the classification was a four-choice discrimination task, one can assume that the chance level is around 25%. The data set used for classification in this study (split into a training and a classification set, see 4.1.7) is, however, too small to assume this chance level (cf. Combrisson & Jerbi, 2015). To determine the chance level more accurately, and to be able to state whether a classification result is significantly higher than what is expected to occur by chance, an MCMC algorithm was carried out. During this procedure, the per-trial classification labels (the output of the classification algorithm) were swapped around in a random fashion and the resulting accuracies were recorded; this was repeated 1000 times. The results were expected to be normally distributed around a mean of about 25%, describing the chance level more accurately. An instance of the MCMC algorithm was run after every classification attempt. Significance of the measured classification accuracy was then reported in the form of a P-value, describing the probability of obtaining the observed classification accuracy under the distribution generated by the MCMC (e.g. Madel & Ellis, 2005).
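The label-swapping procedure can be sketched as a standard permutation test (a hypothetical implementation; the in-house version may differ in detail):

```python
import numpy as np

def permutation_p_value(true_labels, predicted_labels, n_perm=1000, seed=0):
    """Estimate the empirical chance level and a p-value for the observed accuracy."""
    rng = np.random.default_rng(seed)
    observed = np.mean(predicted_labels == true_labels)
    null = np.empty(n_perm)
    for i in range(n_perm):
        shuffled = rng.permutation(predicted_labels)  # swap labels randomly
        null[i] = np.mean(shuffled == true_labels)
    # Probability of reaching the observed accuracy under the null distribution.
    p = (np.sum(null >= observed) + 1) / (n_perm + 1)
    return observed, null.mean(), p
```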

4.1.3 Choice of active and rest dynamics

Each trial consisted of eight dynamics, i.e. eight full scans were acquired per trial. For the analysis, it had to be defined which of these should be regarded as active dynamics, i.e. which are thought to contain the peak of the activation underlying the facial expressions. The activity values of the active dynamics are used to generate the spatial-temporal templates. The choice of which dynamics are defined as active was made through an examination of the raw data for each participant.

First, significant voxels were selected through a T-test feature selection (see 4.1.4), under the assumption that the active period can be expected at dynamics two, three and four. The raw data was then separated into trials and the mean activation was calculated across the selected voxels for each dynamic. For each trial, the highest peak of mean activation was isolated. It was then analysed, per participant, which dynamic contains the majority of peaks and hence is the dynamic most likely to contain the active segment of the data; the mean and median values were reported.
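A sketch of this peak analysis, assuming one run is stored as a (trials × dynamics) by voxels array restricted to the selected voxels:

```python
import numpy as np

def peak_dynamics(data, n_dynamics=8):
    """data: (n_trials * n_dynamics, n_voxels) time series of one run.
    Returns mean and median of the (1-based) peak dynamic across trials."""
    trials = data.reshape(-1, n_dynamics, data.shape[-1])
    mean_course = trials.mean(axis=2)          # (n_trials, n_dynamics)
    peaks = mean_course.argmax(axis=1) + 1     # 1-based dynamic index
    return peaks.mean(), np.median(peaks)
```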

4.1.4 Feature selection

Feature selection is carried out to reduce the number of voxels that the classification is based on; hence, it is used to run the classification using only the voxels that are likely to be discriminable. Different feature selection methods are feasible (cf. Wagner et al., 2005); in this study, feature selection methods based on a T-test, an ANOVA and T-map statistics, as well as no specific feature selection, were used. Voxels that showed activity during visual processing alone, or activity specific to only one of the stimuli types, were removed prior to the feature selection (see 4.1.4.1).

4.1.4.1 Exclusion of inactivity and stimuli-type specificity

Since it is unknown to what extent the motor cortex is active during the reading and processing of action-indicating words or during the interpretation of images containing facial expressions, any activity that is not motor-related was removed, similar to the voxel removal prior to the lateralisation analyses. Firstly, any voxel that was significantly activated (i.e. T-value above 4.8, as per the GLM model) during an inactive run (only collected for participants two and three) was excluded, to ensure that only those voxels that are involved in producing the facial expressions are used for classification. Furthermore, any voxel that showed significant activation for only one of the types of stimuli, i.e. image or text, was also removed from feature selection. This was introduced to be able to exclude reading- or imitation-specific activity (see Hauk et al., 2004).

4.1.4.2 All voxels in ROI

This is the equivalent of no specific feature selection. All voxels in the given ROI were used as an input to the classifier algorithm.

4.1.4.3 Analysis of Variance (ANOVA)

Carrying out a voxel-wise ANOVA makes it possible to isolate the voxels that show little within-task variability whilst simultaneously showing high between-task variability; it can be assumed that voxels with these characteristics are likely to be highly discriminable (Wagner et al., 2005). For each voxel, per trial, the peak BOLD activity level was calculated. Then all trials belonging to one condition (i.e. happiness, sadness, surprise or disgust) were grouped together and an ANOVA was carried out to examine whether a significant difference in activity levels can be found between categories. Voxels were then selected based on the values of an F-test (Othman, 2014).
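A minimal sketch of this selection step, assuming per-trial peak values have already been computed; scipy's one-way ANOVA and the a priori voxel count (cf. 4.1.6) stand in for whatever routine and threshold were actually used:

```python
import numpy as np
from scipy.stats import f_oneway

def anova_select(peaks, labels, n_keep=150):
    """peaks: (n_trials, n_voxels) per-trial peak BOLD activity.
    Returns the indices of the n_keep voxels with the largest F-values."""
    groups = [peaks[labels == lab] for lab in np.unique(labels)]
    f_vals, _ = f_oneway(*groups, axis=0)      # voxel-wise one-way ANOVA
    return np.argsort(f_vals)[-n_keep:]
```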

4.1.4.4 T-Test

For the T-test feature selection, the parameter settings for the active and rest dynamics were used (see 4.1.3). A voxel-wise T-test was then carried out; voxels were selected based on the resulting T-values, a high value of which represents a voxel that is significantly more active during the active dynamics than during the rest dynamics (Wagner et al., 2005).

4.1.4.5 T-Maps

The T-maps are generated during the process of analysing the data via the GLM (carried out in SPM12; see 3.1.1) and follow the same fundamental principle as the voxel-wise T-test outlined in 4.1.4.4, except that the GLM takes into account more of the variability in hemodynamic characteristics. The GLM defines a priori expectations of activation and convolves those with a canonical hemodynamic response function. This is, in principle, a more refined statistical analysis than the definition of active and rest dynamics. During the T-map feature selection, the T-maps for each condition (i.e. happiness, sadness, surprise and disgust) were analysed and the voxels with the highest T-values were isolated and used as an input to the classifier algorithm (Wagner et al., 2005).

4.1.5 ROIs

Classification was attempted with ROIall, ROIpre, ROIpost and ROIprepost (see 2.8 for further details).

4.1.6 Alpha levels for feature selection

Alpha levels describe which threshold is used for selecting voxels; either the top five percent, the top one percent, or a Bonferroni-corrected method was applied. Bonferroni correction is the strictest selection method, as the alpha level is divided by the overall number of voxels. Another option was to define the number of voxels to be selected a priori (e.g. the 200 most active voxels); a threshold is then not required.

It was expected that a stricter selection method leads to improved classification accuracies, as only the most highly active voxels would be used for classification. However, if too few voxels are selected, no accurate classification is possible; therefore, multiple alpha levels were tested.

4.1.7 Cross-validation

The data set was split into two equal halves, one set for training the classifier algorithm with the other set being used to test classification accuracies. All trials with an odd identifier were used to train the classification algorithm, whereas all trials with an even identifier were used as target data for the measures of classification accuracy.

4.1.8 Parameter search

In the results section, the classification accuracies are reported first, based on the set of parameters that resulted in the highest mean accuracy across participants (see 4.2.1). To identify this optimal parameter constellation, the effect of manipulating the parameters on classification accuracy was examined; ROIs, classification algorithm, alpha levels and feature selection method were iterated. Classification was run with iterations of each parameter, which resulted in 300 classification results in total (see Table 2, Supplementary Materials); the effect of parameter choice on classification accuracy was tested by dependent T-tests.

4.1.9 Examining differences in classification accuracy between participants

Further analyses were aimed at understanding the differences in classification accuracy between participants, should these occur. Firstly, in case higher accuracies can be found when areas outside ROIprepost are included, it was examined which voxel clusters were recruited for these classification results (see 4.2.4.1).

Secondly, it was analysed whether a link can be observed between the sum of absolute movement correction carried out during the pre-processing of the fMRI data for each participant and the classification accuracies; the assumption is that, when more movement correction had to be carried out, more interpolation of the data is needed and less of the original values remain. SPM12 corrects movements with six degrees of freedom, i.e. by translating the brain scans (along x, y and z) and rotating them (yaw, pitch and roll, which are Euler angles). The corrections along x, y and z are reported as the sum of absolute correction; the total distance was calculated for each trial (see formula 2), summed and then reported in millimetres.

d(t) = √[(x(t) − x(t−1))² + (y(t) − y(t−1))² + (z(t) − z(t−1))²]

Formula 2. Formula used to calculate the distance of movement correction; 't' stands for a point in time and x, y and z for the movements along each respective axis.
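Applied to SPM12 realignment parameters, formula 2 can be sketched as follows (assuming the six columns are ordered translations first; the exact file layout is an assumption):

```python
import numpy as np

def summed_translation(params):
    """params: (n_scans, 6) realignment parameters, translations in columns 0-2."""
    xyz = params[:, :3]                        # translations in millimetres
    step = np.diff(xyz, axis=0)                # movement between successive scans
    dist = np.sqrt((step ** 2).sum(axis=1))    # formula 2 per time point
    return dist.sum()                          # total correction in millimetres
```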

This is not possible for rotation, as points across the brain will be moved differently depending on how close they are situated to the rotation axes; hence, the absolute correction has been reported for yaw, pitch and roll, in radians; see table 5 for the results (see 4.2.4.2).

Thirdly, it was tested whether the fit between the raw data and the GLM differs between participants, assuming that a worse-fitting GLM would negatively impact classification accuracies. A GLM with a single regressor was established, in which no discrimination was made between conditions. For each trial, the Euclidean distance between the peak of the raw data and the peak of the model was taken; this was reported as the mean and SD for each participant (see 4.2.4.3).

4.1.10 Comparison of classification accuracy between hemispheres

A comparison between the classification accuracies in left and right pre- and postcentral gyrus has been carried out; the parameters were held constant from the reported highest mean classification results (4.2.1).

4.2 Classification: Results

4.2.1 Highest mean classification accuracies

To allow comparability, parameters were set to the constellation that resulted in the highest mean classification accuracy over the pre- and postcentral gyrus (see table 1, supplementary materials, for the full set of results). Using an SVM, ROIpost, the T-test statistics feature selection method and Bonferroni-corrected alpha levels, classification accuracies of 67.5% were observed in participant one (p = 0.001; see figure 8), 90% in participant two (p = 0.001; see figure 9) and 67.5% in participant three (p = 0.001; see figure 10); this results in a mean classification accuracy of 75%.


Figure 8. (a) Confusion matrix of the classification results for participant one, masked to ROIpost, generated with an SVM, T-test statistics feature selection and Bonferroni-corrected alpha levels. Ten trials correctly classified per condition equals a 100% accuracy. The overall accuracy was 67.5%. (b) Bar plot of the outcomes of the 1000-fold Monte Carlo repetition, which resulted in a mean of 25% with a standard deviation of 8%. The classification result is significantly different from this distribution, with p = 0.001; the vertical blue line marks the classification accuracy within the distribution. (c) Visualisation of the voxels used for feature selection (n = 130).


Figure 9. (a) Confusion matrix of the classification results for participant two, masked to ROIpost, generated with an SVM, T-test statistics feature selection and Bonferroni-corrected alpha levels. Ten trials correctly classified per condition equals a 100% accuracy. The overall accuracy was 90%. (b) Bar plot of the outcomes of the 1000-fold Monte Carlo repetition, which resulted in a mean of 25% with a standard deviation of 9%. The classification result is significantly different from this distribution, with p = 0.001; the vertical blue line marks the classification accuracy within the distribution. (c) Visualisation of the voxels used for feature selection (n = 150).


Figure 10. (a) Confusion matrix of the classification results for participant three, masked to ROIpost, generated with an SVM, T-test feature selection and Bonferroni-corrected alpha levels. Ten trials correctly classified per condition equals a 100% accuracy. The overall accuracy was 67.5%. (b) Bar plot of the outcomes of the 1000-fold Monte Carlo repetition, which resulted in a mean of 25% with a standard deviation of 8%. The classification result is significantly different from this distribution, with p = 0.001; the vertical blue line marks the classification accuracy within the distribution. (c) Visualisation of the voxels used for feature selection (n = 110).

4.2.2 Determining the active trial choice

As detailed in section 4.1.3, the raw fMRI data was analysed to assess at which dynamic the majority of the peaks are found; see table 4 for the results. Based on these results, dynamics two and three were defined as active dynamics; five and six were defined as rest dynamics.

Participant   Mean of dynamic that contained the peak   Median of dynamic that contained the peak
#1            3.713                                     3
#2            2.413                                     2
#3            2.8                                       3

Table 4. Table showing the mean and median values of which dynamic contained the peak of activation.


4.2.3 Influence of parameters

Besides finding the parameter settings resulting in the highest mean classification accuracy, it was also tested whether the choice of parameters has a statistically significant influence on the accuracy levels. A variety of combinations of classification algorithms, mask choices, feature selection methods and alpha levels was tested, resulting in 300 classification results (see Table 2, Supplementary Materials for the complete set).

4.2.3.1 Influence of classification algorithm

There was a significant difference between the accuracies elicited by PC (M = 59.28%, SD = 12.21%) and SVM (M = 69.82%, SD = 13.05%); t(298) = -7.3404, p < 0.001, which indicates that the SVM is the superior classification algorithm; see figure 11 for a graph of the data.

Figure 11. Bar plot of the comparison of classification accuracies between PC and SVM; the red crosses and vertical lines mark the error bars, based on the standard deviation. The horizontal blue line represents the expected chance level of 25%. Asterisks indicate statistically significant differences.

4.2.3.2 Influence of ROI choice

It was also tested whether the choice of ROI has an effect on classification accuracy, to understand whether the most discriminable information is located over the pre- and postcentral gyrus (i.e. in ROIprepost, ROIpre and ROIpost) or in all scanned areas, i.e. ROIall. These masks were selected for two reasons: firstly, all masks related to the pre- and postcentral gyrus, which contains the facial motor area, were selected a priori due to their relevance for a potential application in an ECoG-based BCI (see section 2.8). Furthermore, ROIall was chosen to get an indication of whether classification accuracies improve or deteriorate when areas other than the sensorimotor cortices are included.

Using ROIall across all parameters led to a mean classification accuracy of 66.2% (SD = 15.3%), ROIprepost showed a mean classification accuracy of 65% (SD = 12.9%), with 63.4% (SD = 11.8%) being the mean classification accuracy for ROIpre. Classification with the ROIpost mask reached a mean classification accuracy of 63.9% (SD = 14.3%).

Significance testing via T-test isolated two significant differences; ROIpre showed significantly lower classification accuracies than ROIall (t(71) = 3.291, p = 0.002) and ROIpost (t(71) = 2.457, p = 0.017); see figure 12 for a graph of the data.
