
Matching facial expressions during retrieval of happy autobiographical memories

Student: Jolien Bergman (11595272)
Mentor: Tess den Uyl, PhD
Examiner: Jonas van Nijnatten, MSc
Word count: Abstract (… words), Report (… words)
26 June 2020


Abstract

Mental health issues are a worldwide problem, driving an increasing need for online therapies. Nowadays, computerized cognitive behavioral therapy (cCBT) is successfully used as an online therapy. A successful innovation in online therapy has been the use of embodied conversational agents (ECAs): computer-programmed characters capable of having conversations with human beings. Nevertheless, little is known about effective expressions of ECAs, and research into facial expressions is required to develop them further. The present study investigated facial emotional expression during the retrieval of different emotional memories, since earlier research found facial expressions matching the emotion of the autobiographical memory. However, one important factor expected to influence emotional expression is the social context. Therefore, in this experiment the retrieval of autobiographical memories was cued with either text, a video of someone expressing a neutral emotion, or a video of someone expressing an emotion congruent with the emotion of the memory. The study was an online experiment that used a software program called FaceReader to analyze facial expressions. Matching facial expressions were found during the retrieval of happy autobiographical memories. No effect of the social context on the facial expressions was found.

Key words: facial expressions, autobiographical memories, emotion, FaceReader Online, social context


Introduction

Everyone knows what certain emotions feel like and how to recognize specific emotional expressions. However, even though emotions have been studied for a very long time, defining them has remained an elusive goal. Charles Darwin, the father of modern emotion science, was the first to focus on the physical expression of emotions, in 1872, since this is measurable and appeared to be innate. It took almost 100 years until evidence showed that emotions are universal and inborn (Ekman & Friesen, 1971), which illustrates the complexity of researching emotions. Several years later, Ekman and Friesen developed the Facial Action Coding System (FACS), a system that describes facial movements using action units (Ekman & Friesen, 1978). This was the start of measuring emotions by analyzing facial movements, and it was followed by extensive research over the years and an increased interest in facial expression research.

One field that benefits from this research is mental health care, specifically online therapies. Mental health issues are a worldwide problem, and the need for and interest in online therapies have grown considerably over the last decade (Stasiak et al., 2016).

Most of the online therapy programs that have been developed over the years are forms of computerized cognitive behavioral therapy (cCBT), based on cognitive behavioral therapy (CBT) (Proudfoot et al., 2004). Young people in particular are a promising target audience for online therapy, since they are familiar with the online environment. Additionally, research has shown that online therapy is very useful as an early intervention for children and adolescents suffering from depression and anxiety (National Institute for Health and Clinical Excellence, 2009). Nevertheless, there is a lot of room for improvement and innovation in this ever-growing branch of therapy.

One example of an innovation that is already successfully used in online therapy is the embodied conversational agent (ECA), which originates from artificial intelligence. ECAs are computer-generated characters that show many of the same properties as humans and are capable of having conversations with human beings (Isbister & Doyle, 2004). ECAs are already used in several online therapy programs, for example to support smokers in quitting (Grolleman et al., 2006). Research has shown that using ECAs instead of textual communication in online therapy has several advantages, such as greater trust and commitment from the patient (Kiesler & Sproull, 1997). Despite the promising role of ECAs in online therapies, information on how best to develop and present them is limited (Kramer et al., 2020). Optimization of the presentation of ECAs is essential, but little is known about their effective expressions (Kramer et al., 2020). Facial expression research could contribute to this.


One way to study facial expressions is to have people recall autobiographical memories and measure their expressions. Autobiographical memories are memories of episodes of one's personal life (Conway & Williams, 2008). Subjective experience is an essential part of autobiographical memory: it ensures that during the retrieval of autobiographical memories people sometimes re-experience the emotions and feelings they experienced during the original event (Comblain et al., 2005).

One study of facial expressions during memory retrieval is that of El Haj, Antoine, and Nandrino (2016), who found that during the retrieval of episodic autobiographical memories, participants expressed emotions matching the emotion of the memory. This research was done with textual cues only.

However, people often express emotions during conversations in social contexts. It is therefore interesting to examine the effect of a social context, or the feeling of a social context, on the facial expression of emotions, specifically the feeling of a social context created online. Since tasks with congruent emotions seem to influence the expression of emotion (Olszanowski, Wróbel, & Hess, 2019), it is also interesting to compare social contexts that are emotionally congruent as well as incongruent with the emotion of the memory. The main focus of this study was therefore to research to what extent specific emotional memories retrieved in different social contexts elicit matching facial expressions. Furthermore, this study examined how facial expressions are influenced by the emotional state of participants at the time of retrieval. Taking this possible influence into account could help make online therapy programs more efficient. It is hypothesized that specific emotional memories elicit matching facial expressions, since matching emotional facial expressions were found during episodic autobiographical retrieval (El Haj, Antoine, & Nandrino, 2016). In addition, it is thought that a social context, especially a congruent one, might influence facial expressions, since it comes closer to real-life conversations. Research has shown that congruency can modulate the processing of facial expressions in early stages of face processing in the brain (Diéguez-Risco et al., 2015), and congruent body posture facilitates the retrieval of autobiographical memories (Dijkstra, Kaschak, & Zwaan, 2007).

To test this hypothesis, we used FaceReader Online (Noldus), a software program that analyzes facial expressions online. The experiment consisted of three conditions in which participants had to retrieve different emotional memories. Each condition comprised three emotional categories (happy, sad, and neutral), and two memories had to be retrieved for each category. The conditions were designed to create different experiences of social context: one consisted of textual instructions, one of a video of someone giving the instruction with a neutral facial expression, and one of a video of someone giving the instruction with an emotional expression congruent with the emotional category of the memory to be retrieved.


It is expected that matching happy facial expressions will be present during the retrieval of happy memories, meaning that the amount of happy facial expression will be higher than the amounts of sad and neutral facial expression. Likewise, the amount of sad facial expression is expected to be highest for the sad memories, and the amount of neutral facial expression is expected to be highest for the neutral memories. Further, it is expected that participants will show stronger facial expressions in the condition where the instructor shows congruent expressions. The strongest effect of the congruent social context is expected for the happy facial expressions. This would be in line with the view on emotional mimicry, which states that emotional facial expressions are only mimicked if the emotion shows empathy and facilitates affiliation (Hess & Fischer, 2014).

Methods

Participants

In this study, 13 females and 6 males participated, with ages ranging from 18 to 57 years (M ± SD: 30.53 ± 13.60).

Design

The experiment took place entirely online; participants performed it on their own computer with a camera, each in their own surroundings. Before starting the task, participants were asked to read and sign an informed consent form. After this, they answered demographic questions and mood state questions. The mood state questions consisted of six statements about their emotional state: “I feel calm”, “I am tense”, “I am confused”, “I feel relaxed”, “I feel satisfied”, and “I am concerned”. Participants had to choose the answer that best corresponded to their emotional state at that moment (1 = “not at all”, 2 = “a little”, 3 = “quite a lot”, 4 = “very much”). Then, they received instructions and performed a test to make sure their facial expressions could be measured with FaceReader Online. After completing this test, the task started.

For every participant, the task consisted of the retrieval of two memories for each emotion (happy, sad, and neutral), resulting in a total of six memories. The order in which memories of the different emotions were requested was randomized across participants using Qualtrics, although the two memories of one emotion were always retrieved consecutively. Participants were instructed to retrieve a memory matching the emotion (60 s) and then to share this memory (60 s). The instructions differed between the emotions: for the neutral emotion, participants received one instruction to retrieve a memory of a regular morning and another to retrieve a memory of a regular evening; for the happy emotion, they received instructions to retrieve memories of days they felt happy; and for the sad emotion, of days they felt sad.


Additionally, participants received the retrieval instructions in different ways that were intended to create different experiences of social context. The instructions were presented either via text, via a video of someone showing a neutral facial expression (social), or via a video of someone showing an emotional facial expression congruent with the emotion of the memory (social plus). Each participant received all six instructions via one of these three ways.

After each shared memory, participants rated the emotional value of their shared memory (1 = “very negative”, 2 = “negative”, 3 = “neutral”, 4 = “positive”, 5 = “very positive”). After the retrieval and sharing of the six memories, participants answered the same mood state questions as at the beginning of the experiment. The experiment ended with a personality questionnaire and some questions about the quality of the experiment.

FaceReader Online

During the whole experiment, FaceReader measured the facial expressions of the participants online. The system measures up to 500 facial key points, including the points that enclose the face and the points marking the characteristics of the facial features (mouth, eyes, lips, etc.). In this way, it builds a virtual mask of the face, from which it can measure the expression of the following emotions: happy, sad, angry, surprised, scared, disgusted, and neutral (Uyl & Kuilenburg, 2005). These emotions are also known as the “basic emotions” (Ekman, 1970). Not only does FaceReader recognize this set of emotions, it can also detect 15 minimal facial actions from the Facial Action Coding System (FACS) (Ekman, Friesen & Hager, 2002). Furthermore, FaceReader analyzes valence and arousal. Valence represents the emotional status of the participant and is calculated by subtracting the intensity of the strongest negative emotion (sad, angry, scared, or disgusted) from the intensity of the happy emotion. Arousal indicates how active (+1) or inactive (0) the participant is, based on the activation of 20 Action Units from the FACS.
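As an illustration, the valence computation described above can be sketched in a few lines of Python; the dictionary of per-frame emotion intensities and the function name are hypothetical and not part of FaceReader's actual interface:

```python
# Minimal sketch of the valence computation described above.
# Assumes per-frame emotion intensities in [0, 1]; the variable
# names are illustrative, not FaceReader's API.

def valence(intensities: dict) -> float:
    """Happy intensity minus the strongest negative emotion."""
    negatives = ("sad", "angry", "scared", "disgusted")
    strongest_negative = max(intensities[e] for e in negatives)
    return intensities["happy"] - strongest_negative

frame = {"happy": 0.10, "sad": 0.02, "angry": 0.01,
         "scared": 0.00, "disgusted": 0.03, "neutral": 0.70}
print(valence(frame))  # 0.10 - 0.03 = 0.07
```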

For the statistical analysis, only the facial expressions measured during the sharing of the six memories were used. Moreover, only the amounts of happy, sad, and neutral expression measured by FaceReader were used in this study.

Statistical analysis

For every participant, the mean facial expression measured during the sharing of the two memories of the same emotion was calculated, as well as the mean valence during the sharing of all six memories. The values of the negative mood state questions (“I am tense”, “I am confused”, “I am concerned”) were inverted, and the mean of the answers to the mood state questions was determined for every participant.
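A minimal sketch of this per-participant mood score, assuming the standard reversal rule for a 1-4 Likert scale (the report does not state the exact inversion rule):

```python
import numpy as np

# Sketch of the mood-state preprocessing described above.
# On a 1-4 scale, a negative item is assumed to be inverted
# as 5 - score (standard Likert reversal; an assumption here).

answers = {"I feel calm": 3, "I am tense": 2, "I am confused": 1,
           "I feel relaxed": 3, "I feel satisfied": 3, "I am concerned": 2}
negative_items = ("I am tense", "I am confused", "I am concerned")

scores = [5 - v if k in negative_items else v for k, v in answers.items()]
mood_state = np.mean(scores)  # one mood score per participant
print(mood_state)
```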


To analyze whether the facial expressions matched the emotion of the memory, a within-subject analysis was performed using the Friedman test, because the normality assumption, checked with the Shapiro-Wilk test, was violated. The Wilcoxon signed-rank test was performed as a post-hoc analysis. The means of the emotional categories (happy, sad, and neutral) for one emotion measured by FaceReader were compared within participants.
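The report does not specify the statistical software used; a minimal sketch of this within-subject analysis with SciPy, on invented placeholder data, could look as follows:

```python
import numpy as np
from scipy.stats import shapiro, friedmanchisquare, wilcoxon

# Sketch of the within-subject analysis described above. The arrays
# are invented placeholders: one value per participant, e.g. mean
# happy expression during happy, sad, and neutral memories.
rng = np.random.default_rng(0)
happy_mem = rng.beta(2, 15, size=19)
sad_mem = rng.beta(1, 30, size=19)
neutral_mem = rng.beta(1, 20, size=19)

print(shapiro(happy_mem))                         # normality check
print(friedmanchisquare(happy_mem, sad_mem, neutral_mem))
print(wilcoxon(happy_mem, neutral_mem))           # post-hoc pair
print(wilcoxon(happy_mem, sad_mem))
```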

The effect of the different social contexts on the facial expression of emotions was determined with a between-subject analysis using the Kruskal-Wallis test, after checking the normality assumption with the Shapiro-Wilk test. The means of one emotional category and its matching emotion measured by FaceReader were compared between participants.
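Analogously, the between-subject comparison can be sketched with SciPy's Kruskal-Wallis test; the group values below are invented:

```python
from scipy.stats import kruskal

# Sketch of the between-subject comparison: mean happy expression
# during happy memories, grouped by condition (values are invented).
text_group = [0.08, 0.12, 0.05, 0.14, 0.02, 0.11]
social_group = [0.09, 0.03, 0.15, 0.07, 0.10, 0.06]
social_plus_group = [0.13, 0.04, 0.08, 0.16, 0.05, 0.09, 0.12]

print(kruskal(text_group, social_group, social_plus_group))
```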

Finally, the correlation between the valence of the participants' expressions during the task and their current mood state was analyzed by calculating Spearman's correlation.
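A corresponding sketch of the correlation analysis, again on invented per-participant values:

```python
from scipy.stats import spearmanr

# Sketch of the correlation analysis: one mean mood score and one
# mean facial valence per participant (values are invented).
mood_state = [3.2, 3.5, 2.8, 3.9, 3.1, 3.6, 2.9, 3.4]
mean_valence = [-0.05, -0.12, -0.20, 0.01, -0.08, -0.15, -0.03, -0.10]

rho, p = spearmanr(mood_state, mean_valence)
print(rho, p)
```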

Results

Different emotional memories and facial expressions

To determine whether the amount of facial expression of an emotion matched the emotion of the memory, the facial expressions of these emotions (happy, sad, and neutral) were compared for every type of emotional memory. The amount of happy facial expression differed significantly (χ² = 20.6, p < 0.001) between the happy memories (M ± SD: 0.10 ± 0.14), the sad memories (M ± SD: 0.02 ± 0.03), and the neutral memories (M ± SD: 0.04 ± 0.08). Post-hoc analysis showed that the amount of happy facial expression during happy memories differed significantly from that during neutral memories (p < 0.01) and sad memories (p < 0.001). No significant difference in the amount of sad facial expression (χ² = 2.95, p = 0.229) or neutral facial expression (χ² = 4.53, p = 0.104) during the retrieval of happy, sad, or neutral memories was found.


Fig. 1. Facial expressions during retrieval of memories eliciting different emotions. (A) Means of happy facial expression during the sharing of the memories. The amount during happy memories differed significantly from that during neutral memories (p < 0.01) and sad memories (p < 0.001). (B) Means of sad facial expression during the sharing of the memories. (C) Means of neutral facial expression during the sharing of the memories.

The amounts of facial expression during the sad and neutral memories showed no significant difference from the amount during the happy memories, or from each other.


Different social contexts and facial expressions

For the analysis of the effect of different social contexts, the amount of facial expression of an emotion during its matching emotional memory was compared between participants, and thus between the three social contexts (text, social, and social plus). This analysis showed no difference in the amount of happy facial expression (χ² = 0.13, p = 0.937), sad facial expression (χ² = 0.01, p = 0.995), or neutral facial expression (χ² = 5.61, p = 0.060).


Fig. 2. Facial expressions in different social contexts. (A) Mean happy facial expression during the sharing of happy memories in the different conditions. (B) Mean sad facial expression during the sharing of sad memories in the different conditions. (C) Mean neutral facial expression during the sharing of neutral memories in the different conditions. There was no significant difference between the social contexts in the amount of happy expression (χ² = 0.13, p = 0.937), sad expression (χ² = 0.01, p = 0.995), or neutral expression (χ² = 5.61, p = 0.060).

Mood state and facial expressions

To analyze whether the valence of the facial expressions was correlated with the mood state of the participants on the day of the experiment, the mean of the answers to the mood state questions and the mean valence of the facial expressions during the sharing of all six memories were compared. No correlation between the valence of the facial expressions (M ± SD: -0.10 ± 0.12) and the mood state (M ± SD: 3.36 ± 0.49) was found (rs = -0.079, p = 0.7481).

Fig. 3. Correlation between reported mood state and valence of facial expressions. This figure shows the correlation between the mood state reported by the participants and the valence of their facial expressions during the sharing of memories. The measured correlation was not significant (rs = -0.079, p = 0.7481).


Discussion

In the present study, facial expressions were measured during the retrieval of autobiographical memories. During the retrieval of happy memories, participants showed more happy facial expression than during neutral and sad memories. However, the amounts of sad and neutral facial expression did not differ between happy, sad, and neutral memories. Additionally, there was no difference in the amount of happy, sad, or neutral expression between the different social contexts. Lastly, there was no correlation between the mood state of the participants and the valence of their facial expressions.

Even though the present study found matching happy facial expression during the retrieval of happy memories, the absence of other matching facial expressions contrasts with the results of El Haj, Antoine, and Nandrino (2016), who did find matching emotional expressions during the retrieval of happy, sad, and neutral memories. An explanation could be the small sample size of this study, combined with the inclusion of outliers in the statistical analysis. Outliers were not removed because it was unclear whether the same values would have been outliers in a dataset with a larger sample size; at the same time, keeping outliers strongly influences a dataset with such a small sample size. Another explanation could be that the retrieval of sad memories is unpleasant and therefore harder, which could result in less intense sad facial expressions and more neutral facial expressions. Moreover, the amount of neutral facial expression was high in every emotional category, which could contribute to the absence of an effect for the neutral facial expressions. Another factor that could have affected the intensity of the facial expressions is that the order in which the emotional categories had to be retrieved was randomized: when a sad memory had to be retrieved directly after a happy memory, the amount of sad expression may have been lower than when it had to be retrieved after a neutral memory. Also, in the social condition, in which the expression of the person in the video was not supposed to match the emotion of the memory, the video showed a neutral expression. For the neutral memories, this condition was therefore actually congruent rather than incongruent.

The lack of an effect of the different social contexts contrasts with the view on emotional mimicry (Hess & Fischer, 2014), because no effect was found of a video in which someone expressed the same emotion as the memory to be retrieved. This result could be due to the fact that this was an online experiment, which may have weakened the effect of the social contexts. In future research, it would be interesting to create a social context that comes closer to reality; to do this efficiently, it would be advisable to first investigate which aspects of the visual presentation are important for a sense of reality. Another aspect worth taking into account in future experiments is that neutral facial expressions were highly present in every emotional category and can affect the measurements. Working with difference scores, in which the neutral facial expression is subtracted from the other emotional facial expressions, could resolve this issue.

In conclusion, the happy facial expression was the only expression that increased when someone was sharing a happy memory. No effect of social context or mood state on facial expression was found. However, it is possible that changing several factors of the experiment would result in stronger effects of all conditions on facial expression.


List of references

Comblain, C., D’Argembeau, A., & Van der Linden, M. (2005). Phenomenal characteristics of autobiographical memories for emotional and neutral events in older and younger adults. Experimental Aging Research, 31(2), 173–189.

Diéguez-Risco, T., Aguado, L., Albert, J., & Hinojosa, J. A. (2015). Judging emotional congruency: Explicit attention to situational context modulates processing of facial expressions of emotion. Biological Psychology, 112, 27–38. https://doi.org/10.1016/j.biopsycho.2015.09.012

Díez-Álamo, A. M., Díez, E., Alonso, M. A., & Fernandez, A. (2019). Absence of posture-dependent and posture-congruent memory effects on the recall of action sentences. PLOS ONE, 14(12), e0226297. https://doi.org/10.1371/journal.pone.0226297

Ekman, P. (1970). Universal facial expressions of emotion. California Mental Health, 8(4), 151–158.

Ekman, P., & Friesen, W. V. (1971). Constants across cultures in the face and emotion. Journal of Personality and Social Psychology, 17(2), 124–129. https://doi.org/10.1037/h0030377

Ekman, P., & Friesen, W. V. (1976). Measuring facial movement. Environmental Psychology and Nonverbal Behavior, 1(1), 56–75. https://doi.org/10.1007/bf01115465

Ekman, P., & Friesen, W. V. (1978). Facial Action Coding System. Palo Alto, CA: Consulting Psychologists Press.

Ekman, P., Friesen, W. V., & Hager, J. C. (2002). Facial Action Coding System: Manual and investigator’s guide. Salt Lake City, UT: Research Nexus.

El Haj, M., Antoine, P., & Nandrino, J. L. (2016). More emotional facial expressions during episodic than during semantic autobiographical retrieval. Cognitive, Affective, & Behavioral Neuroscience, 16(2), 374–381. https://doi.org/10.3758/s13415-015-0397-9

Hess, U., & Fischer, A. H. (2014). Emotional mimicry: Why and when we mimic emotions. Social and Personality Psychology Compass, 8(2), 45–57.

Holland, A. C., & Kensinger, E. A. (2010). Emotion and autobiographical memory. Physics of Life Reviews, 7(1), 88–131.


Grolleman, J., Van Dijk, B., Nijholt, A., & Van Ernst, A. (2006). Break the habit! Designing an e-therapy intervention using a virtual coach in aid of smoking cessation. Lecture Notes in Computer Science. https://doi.org/10.1007/11755494_19

Isbister, K., & Doyle, P. (2004). The blind men and the elephant revisited: Evaluating interdisciplinary ECA research. In Z. Ruttkay & C. Pelachaud (Eds.), From brows to trust: Evaluating embodied conversational agents (pp. 3–26). Dordrecht, Netherlands: Springer.

Kiesler, S., & Sproull, L. (1997). Social responses to “social” computers. In B. Friedman (Ed.), Human values and the design of technology. CSLI Publications.

Kramer, L. L., Ter Stal, S., Mulder, B. C., De Vet, E., & Van Velsen, L. (2020). Developing embodied conversational agents for coaching people in a healthy lifestyle: Scoping review. Journal of Medical Internet Research. https://doi.org/10.2196/14058

National Institute for Health and Clinical Excellence (NICE). (2009). Depression in adults: The treatment and management of depression in adults. London: NICE.

Olszanowski, M., Wróbel, M., & Hess, U. (2019). Mimicking and sharing emotions: A re-examination of the link between facial mimicry and emotional contagion. Cognition and Emotion, 34(2), 367–376. https://doi.org/10.1080/02699931.2019.1611543

Proudfoot, J., Ryden, C., Everitt, B., Shapiro, D. A., Goldberg, D., Mann, A., … Gray, J. A. (2004). Clinical efficacy of computerised cognitive-behavioural therapy for anxiety and depression in primary care: Randomised controlled trial. British Journal of Psychiatry, 185(1), 46–54. https://doi.org/10.1192/bjp.185.1.46

Righart, R., & De Gelder, B. (2008). Recognition of facial expressions is influenced by emotional scene gist. Cognitive, Affective and Behavioral Neuroscience. https://doi.org/10.3758/CABN.8.3.264

Stasiak, K., Fleming, T., Lucassen, M. F. G., Shepherd, M. J., Whittaker, R., & Merry, S. N. (2016). Computer-Based and Online Therapy for Depression and Anxiety in Children and Adolescents. Journal of Child and Adolescent Psychopharmacology, 26(3), 235–245. https://doi.org/10.1089/cap.2015.0029

Uyl, D. M. J., & Kuilenburg, V. H. (2005). The FaceReader: Online facial expression recognition. Proceedings of Measuring Behavior 2005, 589–590.


Williams, H. L., Conway, M. A., & Cohen, G. (2008). Autobiographical memory. In G. Cohen & M. A. Conway (Eds.), Memory in the real world (3rd ed., pp. 21–90). London: Psychology Press.
