
Automated Identification and Measurement of Suppressed Emotions using Emotion Recognition Software

By: Jochem Jansen (6151612)

Abstract

The aim of this study is to investigate whether the facial emotion recognition software FaceReader by Noldus can reliably detect suppressed emotions in facial expressions, as a step toward an objective tool for measuring suppressed emotions. Subjects were shown images from the International Affective Picture System (IAPS) with the greatest positive and negative valence scores. To create a situation in which natural facial emotion expression was suppressed, subjects were instructed to consciously suppress their natural emotional response and express an emotion of the opposite valence instead. The recorded facial expressions did not match the expected characteristics of suppressed emotion expression. Most surprisingly, the positive expression Happy was expressed more strongly than Disgust in response to negative stimuli (t(36)=0.964, p=0.015), and Disgust was expressed more strongly during positive stimuli (t(36)=1.999, p=0.045). The instructions to suppress natural emotional expression resulted in a difference in overall expression of Disgust in response to positive stimuli (t(206)=15.875, p<0.001) and negative stimuli (t(171)=29.735, p<0.001). All expected characteristics of normal and suppressed emotional expression appeared to be false; therefore the method used in this study cannot be used to detect expressive suppression. Further research into stimulus-induced emotional facial expression is needed to better understand expressiveness before a suppression detection tool can be devised.

Introduction

Emotion expression

Considering facial expression as a representation of emotion started with the research of Charles Darwin in his book The Expression of the Emotions in Man and Animals (1872), where he categorized and grouped expressions based on the muscle actions associated with them. More recent research on human facial emotion recognition was pioneered by Paul Ekman in the early 1970s (Bettadapura, 2012). Ekman and colleagues identified six basic emotions that are universal and distinct from each other, namely happiness, sadness, anger, fear, disgust and surprise (Ekman, 1972, 1992, 1993, 1994, 1999; Ekman & Friesen, 1971). In recent years, the question has arisen whether these basic emotions are truly universal and cross-cultural (Gendron, Roberson, van der Vyver, & Barrett, 2014; Jack, Garrod, Yu, Caldara, & Schyns, 2012; Jack, Blais, Scheepers, Schyns, & Caldara, 2009).


However, these critiques focus mainly on the mental representation and recognition of the expression of these emotions, not on their existence. Therefore, current research assumes that these emotions exist, while their cross-cultural nature (both expressive and comprehensive) requires further investigation.

Another contribution of Ekman to the field of facial emotion research is the development of the Facial Action Coding System (FACS). This system categorizes facial muscle movements as Action Units (AUs), combinations of which determine emotional expressions (Bettadapura, 2012; Ekman & Friesen, 1976, 1978). Although the first attempt at automated facial emotion recognition, by Suwa and colleagues in 1978 (Samal & Iyengar, 1992), used a different approach, since the introduction of the FACS most automated facial emotion recognition approaches have been based on it or its derivatives (Bettadapura, 2012; Fasel & Luettin, 2002; Lewinski, den Uyl, & Butler, 2014; Sebe et al., 2007; Zeng, Pantic, Roisman, & Huang, 2009). Several facial emotion recognition software packages are currently available, the most prominent being Nviso (nViso SA, Lausanne, Switzerland), Affdex (Affectiva Inc., Waltham, USA) and FaceReader (Noldus Information Technology, Wageningen, The Netherlands) (Danner, Sidorkina, Joechl, & Duerrschmid, 2014). In this investigation FaceReader by Noldus was used, since it is widely used, validated software capable of real-time processing of videos and images (Benţa et al., 2009; Bijlstra & Dotsch, 2010; Chiu, Chou, Wu, & Liaw, 2014; Danner et al., 2014; Lewinski, den Uyl, et al., 2014).

Facial emotion expression suppression

Facial expression research using automated emotion recognition software commonly investigates the most prominent emotion during the presentation of a stimulus within a timeframe of seconds (Benţa et al., 2009; Bijlstra & Dotsch, 2010; Blank & Lewinski, 2015; Danner et al., 2014; Lewinski, 2015; Lewinski, den Uyl, et al., 2014; Lewinski, Fransen, & Tan, 2014). However, in a natural context, people do not always express their predominantly felt emotion overtly or for an extended period of time. For social reasons it is sometimes beneficial to adjust one's emotional reaction (Butler et al., 2003; Gross, 2002; Ochsner & Gross, 2005; Ohira et al., 2006). In these cases, the initial emotional expression is suppressed or masked by another expression. Ekman and Frank (1992) discovered this effect, named it squelching and described it as follows: "As an expression emerges, the person seems to become aware of what is beginning to show and interrupts the expression, sometimes also covering it with another expression." In more recent studies this effect became known as (emotional/expressive) suppression. A variety of studies have identified expressive suppression as having negative consequences for the individual. First, it has been suggested that suppressing a negative emotion leaves the experience of the emotion intact, whereas suppressing a positive emotion diminishes the experience of the emotion (John & Gross, 2004; Ohira et al., 2006). Furthermore, suppressing emotions adds to the cognitive load, leaving less capacity for other processes such as memory (Gross, 2002; John & Gross, 2004). This seems to be linked to the social consequences of expressive suppression: people who use suppression are less responsive in social situations, leading to a lower sense of rapport and hindering the forming and building of social relationships (Butler et al., 2003; Butler, Lee, & Gross, 2007). Gross and John (2003) developed a questionnaire that measures the overall usage of emotional suppression as an emotion regulation strategy, the Emotion Regulation Questionnaire (ERQ).


This questionnaire measures the usage of suppression as an emotion regulation technique on a 7-point scale. Even though some attempts have been made to measure this suppression objectively, these have been too expensive and unspecific to be of use (Ohira et al., 2006). As a result, there is presently no usable objective measurement method for spontaneous expressive suppression as a response to a stimulus. The aim of this research is to investigate whether facial emotion recognition software (FaceReader) can reliably detect suppressed emotions in facial expressions, as a step toward an objective tool for measuring suppressed emotions. It will do so by investigating whether there are differences in the magnitude and timing of Happy and Disgusted expressions in response to positive and negative stimuli when emotion expression is suppressed versus expressed naturally.

Methods

Participants

Of the 20 participants, one was excluded from analysis because backlighting reflecting off a pair of glasses into the camera distorted the measurement. Exclusion criteria for recruitment were based on factors that might interfere with a proper facial expression reading, such as facial paralysis, (partial) blindness, face-obscuring objects such as glasses or hairstyles, and diagnosed mental disorders that would interfere with normal emotional responsiveness. The remaining 19 participants consisted of 12 females and 7 males aged 20-42 (mean = 24.8, SD = 5.4). The subjects were recruited through social media from the general population of Mexico City by Neuromarketing SA de CV. All signed an informed consent form and received no compensation.

Software

The software used to gather facial emotion expression data was FaceReader by Noldus, developed by VicarVision; the version used in this research was FaceReader 6.1 (2015). This software was chosen because its reliability has been validated and it is widely used in both academic research and commercial applications, making it interesting and valuable to test its capabilities (Lewinski, 2015). The reported accuracy of recognizing the intended emotion expression from a still image is 84% and upwards (Bijlstra & Dotsch, 2010; Lewinski, Fransen, et al., 2014; Noldus, 2011). For determining the valence of the facial emotion response to emotion-eliciting images, the software is about as good as, and sometimes better than, human observers (Benţa et al., 2009). It processes 15 frames per second and can identify a neutral facial expression, the six basic emotions as defined by Ekman (1992), and other factors such as gaze direction, emotional valence, age and gender. The software first identifies the face and then fits a mesh of 500 reference points to key areas based on the Active Appearance Model. The initial configuration of these reference points is used as the (neutral) baseline. For each captured frame of the facial expression, the deviation of the reference points from baseline is measured. The emotional expression measurement is determined by feeding these deviations into an artificial neural network that has been validated with 10,000 manually annotated images (Noldus, 2011). Stimulus presentation and synchronization with the data were achieved using the Stimulus Presentation Tool by Noldus that was provided with the software.
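To make the pipeline described above concrete, the sketch below mimics its structure in Python. This is not FaceReader's actual implementation; the function name, the placeholder model object and the label order are illustrative assumptions.

```python
import numpy as np

def classify_expression(landmarks, baseline, model):
    """Conceptual sketch of the described pipeline (not FaceReader's real
    code): deviations of the tracked reference points from the neutral
    baseline are fed to a trained classifier that returns a 0-1 intensity
    score per expression category."""
    # Per-point deviation from the neutral baseline configuration
    deviation = (landmarks - baseline).flatten()
    # 'model' is a stand-in for the trained neural network
    scores = model.predict(deviation[np.newaxis, :])[0]
    labels = ["neutral", "happy", "sad", "angry",
              "surprised", "scared", "disgusted"]
    return dict(zip(labels, scores))
```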


Design and procedure

After reading general information about the study and signing the informed consent form, subjects were taken to the experimental setup, where an introduction and instructions were shown on the computer. To elicit the desired emotions, the widely validated International Affective Picture System (IAPS) dataset (Lang, Bradley, & Cuthbert, 1997) was used. The images of the IAPS are scored on valence and arousal. The study had a 2x2 design with valence (positive/negative) x suppression (suppression/no-suppression), so the selected images were intended to elicit either a neutral (medium valence, low arousal), happy (high valence, high arousal) or disgusted (low valence, high arousal) response.

In the first condition, participants were asked to suppress their natural emotional expression when presented with negative stimuli and to express a positive emotion and smile instead. In the second condition, participants were asked to suppress their natural emotional expression when presented with positive images and to express a negative emotion and look disgusted instead. The neutral images were used as a control condition and baseline benchmark. All subjects performed both conditions in counterbalanced order, either starting with suppression for negative-valence images (version A) or starting with suppression for positive-valence images (version B).

The conditions were separated into sections, each of which started with an instruction and practice block presented using PsychoPy v1.18.03 (Peirce, 2009). These blocks explained the experiment by mentioning that images meant to elicit emotional responses would be shown. This was followed by the instruction to express the opposite emotion, by smiling/looking disgusted when presented with a disgusting/happy image, and to react naturally when presented with a neutral or positive/disgusting image. This instruction was repeated explicitly several times. After the instructions, participants were given the option to read them again or to start the practice rounds. The practice rounds consisted of 3 series of 3 images (neutral, positive and negative) and used the same timing as the real experimental blocks (6 seconds of stimulus followed by 5 seconds of rest) (see Figure 1). Each series ended with the image of the to-be-suppressed valence, after which a feedback screen asked whether the participant had suppressed the natural emotion by expressing the opposite one. The instruction block ended with a repetition of the instructions, a notification that the experiment was about to start, and the automated opening of the FaceReader Stimulus Presentation (Noldus) software for the actual experimental block.

The two experimental blocks each consisted of 8 neutral, 8 positive and 8 negative images.


However, due to technical difficulties with the software and hardware, an average of only 16.6 (SD = 1.0) of the 24 images per block was actually shown, in randomized order. The images were shown for a period of 6 seconds to reach the expressive maximum (Benţa et al., 2009), followed by a 5-second resting cross image to prevent spillover of expression (Williams et al., 2005). Due to the workings of the stimulus presentation software, a loading bar with the text "loading next stimulus" appeared between stimuli for a variable number of seconds, adding to the resting time between stimuli. This should not affect the data, since the normal resting time was designed to be sufficient to return to baseline expression. The participants did not receive any feedback on their performance.

The participants were informed both verbally and through an information sheet that their faces would be filmed. Subjects who could manage without their glasses took them off. The images were shown on a 15.6 inch (39.6 cm) laptop screen in an isolated room with no windows. Following the recommendations of the user manual (Noldus, 2011), the main light source was a white light behind the computer, with a white wall reflecting the light, to obtain a good measurement and avoid shadows and lighting variation between subjects. The recommended camera, a Microsoft LifeCam Studio, was used to capture the video.

Data and measurements

The data used are those of the six emotions measured: happy, sad, surprised, angry, scared and disgusted. The frame rate was 15 frames per second, so each frame lasted 1/15 s, approximately 67 ms. The FaceReader software scored each emotion's expressiveness with a value between 0 and 1. This is not a proportional score but an absolute score of how well the facial state matched one of the emotion expression templates (Noldus, 2011). The data output included a marker of which stimulus was presented, used to determine the start of stimulus presentation and to differentiate between stimuli.
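As an illustration of how such frame-by-frame output can be handled, the sketch below segments a log into per-stimulus blocks. The column names and the tabular export format are assumptions, not FaceReader's documented output.

```python
import pandas as pd

FPS = 15                 # FaceReader frame rate used in this study
FRAME_MS = 1000 / FPS    # each frame spans roughly 67 ms

# Assumed layout: one row per frame, one 0-1 score column per emotion,
# plus a 'stimulus' marker column (hypothetical names).
EMOTIONS = ["happy", "sad", "surprised", "angry", "scared", "disgusted"]

def segments_per_stimulus(log: pd.DataFrame, n_frames: int = 150):
    """Split the frame log into per-stimulus segments, keeping the first
    n_frames (10 s at 15 fps) of each presentation."""
    return {
        stimulus: seg[EMOTIONS].head(n_frames).reset_index(drop=True)
        for stimulus, seg in log.groupby("stimulus")
    }
```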

Analysis

Exploration

A set of exploratory analyses using RStudio version 0.99.902 (RStudio Team, 2015) and Microsoft Excel (2013) was performed to understand the dataset and to see to what extent it matched the expected characteristics. As mentioned before, suppressing an emotional expression is expected to require additional cognitive processing and thus to take longer than expressing a natural, unsuppressed reaction to an image. It was therefore expected that the initial phase of the facial expressive reaction would be similar during suppression of expression and during natural expression, regarding both timing and expression valence. However, when suppressing an emotion, it was expected that after a short period of time the initial emotional expression would subside and the forced expression would arise. Figure 2 is a representation of the expected measurements.


Important aspects of this hypothesized progression of emotional expression are 1) the type of emotional expression (natural vs. unnatural), 2) the magnitude of the emotion, and 3) the timing. The analyses were performed using the first 150 frames (10 seconds) of each measurement per stimulus, because the number of measured frames fluctuated slightly.

1) An exploration of these three aspects was performed by plotting the averaged score per emotion over time for the original 2x2 design of the experiment (positive/negative valence x suppression/no-suppression); a minimal plotting sketch follows this list.

2) To get an insight into the magnitude of the expression measure, the mean expression measures per group were used.

3) Finally, the timing of onset of an expression was determined using the slope of the averaged progression measure.
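The following sketch illustrates the first exploratory step under stated assumptions: it averages one emotion's score per frame across the stimulus segments of a condition and plots the result against time. The function and variable names are hypothetical; the study itself used RStudio and Excel.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_timeline(segments, emotion, label, fps=15):
    """Average one emotion's per-frame scores over all stimulus segments
    of a condition and plot the resulting timeline in seconds."""
    stacked = np.stack([seg[emotion].to_numpy() for seg in segments])
    mean_per_frame = stacked.mean(axis=0)        # average over stimuli
    t = np.arange(len(mean_per_frame)) / fps     # frame index -> seconds
    plt.plot(t, mean_per_frame, label=label)

# Usage idea: one call per cell of the 2x2 design, then
# plt.legend(); plt.xlabel("time (s)"); plt.ylabel("mean score"); plt.show()
```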

A significant slope value was determined in the following manner. First, the average scores per frame were transformed into z-scores, using the mean and standard deviation of all data from stimuli with the corresponding valence. Next, the slope was calculated by subtracting the value of the previous frame from the selected frame's value. If the slope was 0.1 or greater for three or more consecutive frames, the first frame of that run was taken as the moment of onset of an emotion. So, for example, the slope for the third frame of the Happy expression during positive stimuli in the suppression condition is computed as follows:

slope(frame 3) = z-score(frame 3) - z-score(frame 2).
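A minimal sketch of this onset rule, assuming the per-frame averages are already computed; the threshold constants come from the text, while the function name and the exact frame-indexing convention are assumptions.

```python
import numpy as np

ONSET_SLOPE = 0.1   # minimum frame-to-frame rise (in z-score units)
MIN_RUN = 3         # minimum number of consecutive frames above threshold

def onset_frame(avg_scores, valence_mean, valence_sd):
    """Return the (0-based) frame index where expression onset begins,
    or None if no qualifying run of slopes is found."""
    z = (np.asarray(avg_scores) - valence_mean) / valence_sd
    slopes = np.diff(z)              # slopes[i] = z[i + 1] - z[i]
    run = 0
    for i, rising in enumerate(slopes >= ONSET_SLOPE):
        run = run + 1 if rising else 0
        if run == MIN_RUN:
            # run started at slope index i - MIN_RUN + 1; that slope
            # describes the rise into frame i - MIN_RUN + 2
            return i - MIN_RUN + 2
    return None
```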

The threshold values were chosen because the outcome of this method best fit the expression progression graphs. Only the emotions Happy and Disgusted were investigated, because they showed high activation relative to the other emotions and matched the instructions. To test whether the instructions had an effect on the expression measurements, an unpaired t-test was performed using the per-stimulus means of the measurements per emotion.

The mean activation per emotion expression over the complete duration of the stimulus was taken per type of stimulus; for instance, 173 negative stimuli were shown in total across all 19 subjects, so n=173. The suppression instructions were tested against the no-suppression instructions for the Happy and Disgusted expressions in response to positive and negative stimuli.
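A sketch of this comparison using SciPy, under the segment representation introduced earlier; the grouping of segments by instruction condition is assumed to be done upstream.

```python
from scipy import stats

def instruction_effect(segments_suppression, segments_no_suppression, emotion):
    """Unpaired t-test on per-stimulus mean activation of one emotion,
    suppression vs. no-suppression instructions. Each segment is one
    stimulus presentation, so n equals the number of stimuli shown."""
    a = [seg[emotion].mean() for seg in segments_suppression]
    b = [seg[emotion].mean() for seg in segments_no_suppression]
    return stats.ttest_ind(a, b)
```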



Results

Figure 3 shows the overall average values of the measures per stimulus type and instruction. As can be seen, the most expressed emotions were Happy and Disgusted, followed by Angry. This confirms that Happy and Disgusted were expressed more than the other emotions, as intended by the experimental design.

Furthermore, it is clear that expressiveness was greater during negative stimuli than during positive stimuli. In particular, the Happy expression was significantly more present during negative stimuli (t(36)=0.9645, p=0.015), whereas Disgusted was expressed more in response to positive stimuli (t(36)=1.9995, p=0.045). This basic structure of expressiveness per emotion can also be seen in the timelines per emotion (Figure 4). Comparing the obtained timeline graphs with the expected timeline (Figure 2), two characteristics seem to visually match. The first is the observation that Happy does seem to be expressed during positive stimuli without suppression, and lacks significant expression during suppression. Secondly, the observed jump in Happy expression during negative stimuli with a suppression instruction corresponds to the predicted result.

All other characteristics of the predicted progression of facial expression described in the Analysis section seem to differ. In fact, the characteristics that fit the prediction lose their value when the opposite emotional expression during these conditions is taken into account. During positive stimuli with no-suppression instructions, Disgusted still seems to be the more strongly expressed emotion. Furthermore, during negative stimuli with no-suppression instructions, not only is Happy expressed much more strongly than Disgusted, it seems to be expressed even more than under suppression instructions.


Figure 4: Timelines of emotional expression measurements for positive and negative stimuli, with and without suppression instructions


Figure 5 shows the moment of onset of the emotions Happy and Disgust in the different conditions. The expected timing, as visualized in Figure 2, predicts that in the suppression conditions the emotion matching the stimulus valence will be expressed shortly before the instructed emotional expression. The non-suppression conditions were expected to show only the onset of the emotion matching the stimulus valence, at approximately the same time as in the suppression condition. As predicted, during negative stimuli the Disgust expression is measured first, and at almost the same time in both conditions. What is unexpected is that the Happy measure has an earlier onset during non-suppression. During positive stimuli, more characteristics deviate from expectation: the onset of the Happy emotion is earlier with suppression instructions, and during non-suppression Disgust seems to have an earlier onset than Happy. Both are unexpected results.

Effect of instructions

A comparison of means between the suppression and no-suppression instructions is given in Table 1, which shows an overview of the results of the unpaired t-tests.

As Table 1 shows, the different instructions only had an effect on the Disgust expression. This indicates that the instructions did have an effect on the subjects' expressiveness, but that this effect was limited to the Disgust expression for both stimulus types.

However, when visually comparing the timelines of Disgust during positive and negative stimuli, the suppressed Disgust expression does not show the later onset that was expected (see Figure 6).

                       Positive stimuli                          Negative stimuli
                       Suppression      No suppression           Suppression      No suppression
Happy    mean          0.036778         0.039166                 0.149464         0.136016
         SD            0.007745         0.016471                 0.088993         0.063598
         t-test        t(206)=1.3207, p=0.1881                   t(171)=1.1445, p=0.254
Disgust  mean          0.053857         0.023249                 0.02817          0.045145
         SD            0.019701         0.003532                 0.00441          0.002967
         t-test        t(206)=15.8755, p<0.0001                  t(171)=29.7352, p<0.0001

Table 1: Unpaired t-tests per emotion, stimulus type and suppression instruction


Differences per stimulus

The stimuli selected from the IAPS were those with the highest positive and negative valence scores. Given the established reliability of the IAPS database, and because the valence scores of the chosen images were close together, the images were expected to have an approximately comparable emotional effect on the subjects. However, on personal inspection, some of the images seemed somewhat outdated and could therefore be subjectively perceived as less effective in evoking an emotional reaction. The database used was put together in 2005, and it is possible that the sociocultural interpretation of the images has changed since then. Furthermore, some images could be more effective than others in actually eliciting facial expression. One reason could be that some pictures were marked as inducing high negative valence through a more complicated value system (a child holding a gun against his head) and others through more obvious disgust (an open, swollen face wound). However, one-way ANOVAs on the Disgusted and Happy measurements found no difference between the stimuli, for either the positive stimuli (Happy: F(15,195)=0.649, p=0.831; Disgusted: F(15,195)=0.807, p=0.668) or the negative stimuli (Happy: F(15,161)=0.950, p=0.510; Disgusted: F(15,161)=0.572, p=0.893).

Differences per subject

Some people are naturally very expressive and others much less so; this was true not only during normal daily interaction but also during the experiment. To check for overall expressiveness, an ANOVA was performed on the means of the emotion Happy (as a representation of all emotional scores) over all stimuli presented to each subject. A clear difference in overall expressiveness between subjects was found (F(18,650)=10.756, p=4x10^-27).
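Both of these checks are one-way ANOVAs over a grouping factor (stimulus or subject); a minimal SciPy sketch, with hypothetical names, is:

```python
from scipy import stats

def expressiveness_anova(groups):
    """One-way ANOVA across groups. Each element of 'groups' is the list
    of per-presentation mean scores for one stimulus (differences per
    stimulus) or for one subject (differences per subject)."""
    return stats.f_oneway(*groups)

# e.g. expressiveness_anova(happy_means_by_subject) -> F statistic and p-value
```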

Conclusions and discussion

In this paper, an attempt has been made to test whether FaceReader can be used to identify suppressed emotions by looking at a set of expected features of suppressed emotional expression. Almost none of the observed measurements were as expected, so the investigated metrics do not seem useful for identifying whether an expressed emotion was suppressed. The only criterion that yielded expected results is the mean activation of Disgust over the different conditions: it was higher with normal expression instructions during negative stimuli (t(171)=29.73, p<0.001), and lower with normal expression instructions during positive stimuli (t(206)=15.88, p<0.001). However, the results regarding factors such as the mean activation of Happy, the timing of onset of expressions, and the expected dominant expression suggest either that suppression has no significant impact or that its effects are altogether different from what was expected.

In short, either the expected model of suppressive expression is incorrect or incomplete, or the experiment or analysis is flawed. Below, we discuss some of the surprising results, look at possible flaws in the investigation, and explore suggestions for future directions.


Negative stimuli seem to elicit a very strong Happy expression

When subjects were presented with negative stimuli, the expected Disgust expression was not the most strongly expressed emotion. In both the suppression and no-suppression conditions, high levels of Happy were measured upon presentation of a negative stimulus. This expression pattern was expected to be associated with the suppression instructions, because subjects were asked to express a positive emotion; however, the Happy measurement under no-suppression instructions was not significantly different. Furthermore, the expression of Happy seemed to be stronger during negative stimuli than during positive ones (t(36)=0.9645, p=0.015). Even though it seems counterintuitive to see a positive expression as a result of a negative stimulus, the results suggest this to be the case. When analyzing the videos, it was striking that people started smiling in response to a negative stimulus even when there was no suppression instruction for negative stimuli. Even when they clearly had not forgotten the suppression instructions for positive stimuli in that block, this reaction seemed to be quite common. This visual evidence suggests a possible explanation for the effect. First of all, Happy seemed to be an easy expression to show in a forced manner, whereas people sometimes struggled to show a forced disgusted expression. However, this does not explain why the natural expression of Happy during positive stimuli was lower than its unsuppressed expression during negative stimuli. Visual inspection of the video data provides a possible answer: during negative stimuli, the natural expression did not seem to be disgust but more of a grimace, during which the eyebrows and the corners of the mouth went up, resembling a Happy expression. This could account for the apparently high Happy expressiveness during negative stimuli. Why this happens is unclear, and although it is a very interesting effect, it makes finding suppressed emotions much more difficult when it is unclear what the natural emotional reaction would be.

Disgust as a reaction to positive images

When presented with positive stimuli, the most dominant expression measured overall was Disgust (t(36)=1.9995, p=0.045); overall, Disgust was expressed about as much as Happy during positive stimuli. Visual observation of the video data, together with verbal feedback received from the participants in an informal chat afterwards, suggests a possible explanation. The set of images with the highest positive valence scores in the IAPS included sexually explicit images. Even though participants were made aware of this beforehand and agreed to be exposed to this type of image, the response was regularly one of shock verging on disgust rather than of happiness.

Measurement of timing of onset expression

The method to determine the onset of a new expression was chosen based on a basic understanding of the data in combination with visual inspection. Taking the slope based on z-scores was an attempt to obtain a quantifiable, comparable measure of one aspect of the measurements: the timing of changes in expressiveness. The significance threshold of a slope of 0.1 for three or more consecutive frames was chosen based on its observed coherence with the data. However, a more reliable method to determine the onset of an emotion might be necessary.


Such a method could be obtained by determining data behavior during onset of expression through, for instance, pattern analysis or machine learning on a bigger dataset. As regards the expected difference in onset timing between suppressed and unsuppressed expression, the method used in this paper seems to be ineffective and thus inconclusive. However, based on visual inspection of the timeline plots, there does not seem to be a difference; this holds for both the video and the data observations.

Emotion elicitation using pictures

The IAPS is a well-established database for eliciting emotions. However, the question remains whether the reported emotional responses on which this database is built also translate into facial emotional expression. During the experiment it was quite striking that even though subjects reported in a questionnaire afterwards that the images were perceived as shocking (Likert scale score: 5.1/7, SD=1.7), very little expression was shown. The English version of the questionnaire can be found in Appendix 1; this data was not used for further analyses due to lack of time. At the end of the original experiment the subjects were shown a commercial intended to elicit a fear response, and during this viewing expressiveness was notably more prominent. It therefore seems that the selected images may simply have failed to elicit a facial emotional response. A comparison of the effectiveness of images versus videos in eliciting emotions could be useful for further research using facial emotion recognition software on elicited emotions. Furthermore, it is very likely that people are less expressive in an experimental setting than in a real-life setting, although no significant difference was found when comparing self-reports, on a 7-point Likert scale, of how expressive subjects were in normal life versus during the experiment (F(1,38)=3.299, p=0.077). During informal verbal feedback after the experiment, however, a large proportion of the subjects indicated that they felt watched and might therefore have had a less expressive face.

Other emotions as an expressive category of interest

In this investigation only Happy and Disgusted were examined, due to their opposing valence and their correspondence with the intended stimuli. However, there may be extra information in the remaining four expression categories. Since expressiveness was rather unexpected, as mentioned above, including the other expression measures could reveal differences between suppression and natural expression.

Future directions

The goal of this investigation was to test whether automated facial emotion recognition software can be used to measure a suppressed expression. Studies on the effects of suppression and a theoretical framework for cognitive load during suppression exist; however, this study seems to be the first to try to measure the facial expressiveness of this suppression. Using validated facial emotion recognition software to measure the suppressed emotion might have skipped a step. It could be useful to first perform a study in which human observers rank and describe suppressed emotional expression. This could yield more insight into how expression suppression progresses and create a reference and/or training set for automated software analysis.


Still images simply do not seem to have the desired effect on expressiveness. Video could be a more effective way to elicit expressive behavior due to its richer and more natural sensory input. In this investigation image stimuli were chosen because of the time-sensitive nature of suppression; however, it might be possible to use video stimuli without losing this time sensitivity. For instance, facial expressive measures could be combined with galvanic skin response or pupil dilation to indicate the moment of a significant change in the subject's cognitive state, which could be used as the 'start' moment of the stimulus. Provided the stimulus has a large enough emotional impact to elicit an expression, it might also induce other physiological responses, which could be used in a multimodal approach to the validation of facial emotion recognition software for the detection of suppressed emotions. For this, a standardized video database such as LIRIS-ACCEDE (Chamaret, Chen, Baveye, & Dellandréa, 2015) could be used.

Overall, automated facial emotion recognition software is promising but not yet ready to reliably measure elicited and suppressed emotions. The big question remains whether it will ever surpass human performance in interpreting emotional facial expressions as long as its algorithms are based on human feedback.

Literature

Benţa, K. I., van Kuilenburg, H., Eligio, U. X., den Uyl, M., Cremene, M., Hoszu, A., & Creţ, O. (2009). Evaluation of a system for real-time valence assessment of spontaneous facial expressions. Distributed Environments Adaptability, Semantics and Security Issues, International Romanian-French Workshop, Cluj-Napoca, Romania, 17–18. Retrieved from http://users.utcluj.ro/~benta/FRW09_Benta.pdf

Bettadapura, V. (2012). Automatic analysis of facial expressions: The state of the art. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(12), 1424–1445. doi:10.1109/34.895976

Bijlstra, G., & Dotsch, R. (2010). FaceReader 4 emotion classification performance on images from the Radboud Faces Database. Radboud University Nijmegen.

Blank, L., & Lewinski, P. (2015). Expressions of speakers in banks' YouTube videos predict video's popularity over time.

Butler, E. A., Egloff, B., Wlhelm, F. H., Smith, N. C., Erickson, E. A., & Gross, J. J. (2003). The social consequences of expressive suppression. Emotion, 3(1), 48–67. doi:10.1037/1528-3542.3.1.48

Butler, E. A., Lee, T. L., & Gross, J. J. (2007). Emotion regulation and culture: Are the social consequences of emotion suppression culture-specific? Emotion, 7(1), 30–48. doi:10.1037/1528-3542.7.1.30

Chamaret, C., Chen, L., Baveye, Y., & Dellandréa, E. (2015). LIRIS-ACCEDE: A video database for affective content analysis. IEEE Transactions on Affective Computing, 1–14.


Chiu, M.-H., Chou, C.-C., Wu, W.-L., & Liaw, H. (2014). The role of facial microexpression state (FMES) change in the process of conceptual conflict. British Journal of Educational Technology, 45(3), 471–486. doi:10.1111/bjet.12126

Danner, L., Sidorkina, L., Joechl, M., & Duerrschmid, K. (2014). Make a face! Implicit and explicit measurement of facial expressions elicited by orange juices using face reading technology. Food Quality and Preference, 32, 167–172. doi:10.1016/j.foodqual.2013.01.004

Darwin, C. (1872). The expression of the emotions in man and animals. New York: Philosophical Library.

Ekman, P. (1972). Universals and cultural differences in facial expressions of emotion. Nebraska Symposium on Motivation. doi:10.1037/0022-3514.53.4.712

Ekman, P. (1992). An argument for basic emotions. Cognition & Emotion. doi:10.1080/02699939208411068

Ekman, P. (1993). Facial expression and emotion. American Psychologist.

Ekman, P. (1994). All emotions are basic. The Nature of Emotion.

Ekman, P. (1999). Basic emotions. In T. Dalgleish & M. Power (Eds.), Handbook of Cognition and Emotion. doi:10.1002/0470013494.ch3

Ekman, P., & Friesen, W. V. (1976). Measuring facial movement. Environmental Psychology and Nonverbal Behavior, 1(1), 56–75.

Ekman, P., & Friesen, W. V. (1971). Constants across cultures in the face and emotion. Journal of Personality and Social Psychology. doi:10.1037/h0030377

Ekman, P., & Friesen, W. (1978). Facial action coding system: A technique for the measurement of facial movement. Consulting Psychologists Press.

Fasel, B., & Luettin, J. (2002). Automatic Facial Expression Analysis: A Survey. Pattern Recognition, 36, 259–275.

Gendron, M., Roberson, D., van der Vyver, J. M., & Barrett, L. F. (2014). Perceptions of emotion from facial expressions are not culturally universal: Evidence from a remote culture. Emotion, 14(2), 251–262. doi:10.1037/a0036052

Gross, J. J. (2002). Emotion regulation: Affective, cognitive, and social consequences. Psychophysiology, 39(3), 281–291. doi:10.1017/S0048577201393198

Gross, J. J., & John, O. P. (2003). Individual differences in two emotion regulation processes: implications for affect, relationships, and well-being. Journal of Personality and Social Psychology, 85(2), 348–362. doi:10.1037/0022-3514.85.2.348

Jack, R. E., Blais, C., Scheepers, C., Schyns, P. G., & Caldara, R. (2009). Cultural confusions show that facial expressions are not universal. Current Biology, 19(18), 1543–1548. doi:10.1016/j.cub.2009.07.051

Jack, R. E., Caldara, R., & Schyns, P. G. (2012). Internal representations reveal cultural diversity in expectations of facial expressions of emotion. Journal of Experimental Psychology: General, 141(1), 19–25. doi:10.1037/a0023463


Jack, R. E., Garrod, O. G. B., Yu, H., Caldara, R., & Schyns, P. G. (2012). Facial expressions of emotion are not culturally universal. Proceedings of the National Academy of Sciences, 109(19), 7241–7244. doi:10.1073/pnas.1200155109

John, O. P., & Gross, J. J. (2004). Healthy and unhealthy emotion regulation: Personality processes, individual differences, and life span development.

Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (1997). International Affective Picture System (IAPS): Technical manual and affective ratings. NIMH Center for the Study of Emotion and Attention, 39–58.

Lewinski, P. (2015). Automated facial coding software outperforms people in recognizing neutral faces as neutral from standardized datasets. Frontiers in Psychology, 6, 1–6. doi:10.3389/fpsyg.2015.01386

Lewinski, P., den Uyl, T. M., & Butler, C. (2014). Automated facial coding: Validation of basic emotions and FACS AUs in FaceReader. Journal of Neuroscience, Psychology, and Economics, 7(4), 227–236. doi:10.1037/npe0000028

Lewinski, P., Fransen, M. L., & Tan, E. S. H. (2014). Predicting advertising effectiveness by facial expressions in response to amusing persuasive stimuli. Journal of Neuroscience, Psychology, and Economics, 7(1), 1–14. doi:10.1037/npe0000012

Noldus. (2011). FaceReader reference manual. Wageningen, The Netherlands: Noldus Information Technology.

Ochsner, K. N., & Gross, J. J. (2005). The cognitive control of emotion. Trends in Cognitive Sciences, 9(5), 242–9. doi:10.1016/j.tics.2005.03.010

Ohira, H., Nomura, M., Ichikawa, N., Isowa, T., Iidaka, T., Sato, A., … Yamada, J. (2006). Association of neural and physiological responses during voluntary emotion suppression. NeuroImage, 29(3), 721–733. doi:10.1016/j.neuroimage.2005.08.047

Peirce, J. W. (2009). Generating stimuli for neuroscience using PsychoPy. Frontiers in Neuroinformatics, 2, 1–8. doi:10.3389/neuro.11.010.2008

RStudio Team (2015). RStudio: Integrated Development for R. RStudio, Inc., Boston, MA URL http://www.rstudio.com/

Samal, A., & Iyengar, P. A. (1992). Automatic recognition and analysis of human faces and facial expressions: A survey. Pattern Recognition, 25(1), 65–77. doi:10.1016/0031-3203(92)90007-6

Sebe, N., Lew, M. S., Sun, Y., Cohen, I., Gevers, T., & Huang, T. S. (2007). Authentic facial expression analysis. Image and Vision Computing, 25(12), 1856–1863. doi:10.1016/j.imavis.2005.12.021

Williams, L. M., Das, P., Liddell, B., Olivieri, G., Peduto, A., Brammer, M. J., & Gordon, E. (2005). BOLD, sweat and fears: fMRI and skin conductance distinguish facial fear signals. NeuroReport, 16(1), 49–52. doi:10.1097/00001756-200501190-00012


Zeng, Z., Pantic, M., Roisman, G. I., & Huang, T. S. (2009). A survey of affect recognition methods: Audio, visual, and spontaneous expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(1), 39–58. doi:10.1109/TPAMI.2008.52

Appendix

Appendix 1: Questionnaire on experience of emotional expression (English version)

It was difficult to show the opposite emotion to the one the image evoked

1---2---3---4---5---6---7

Totally disagree          Neutral          Totally agree

During the complete experiment I showed a lot of emotional expression in my face

1---2---3---4---5---6---7

Totally disagree          Neutral          Totally agree

I found the images very shocking

1---2---3---4---5---6---7

Totally disagree          Neutral          Totally agree

I usually express a lot of facial emotions

1---2---3---4---5---6---7

Totally disagree          Neutral          Totally agree
