Categorical perception of facial expressions in adults and pre-verbal infants: Evidence with new stimuli

Research internship final report
Yongqi Cong
Supervised by: Disa Sauter, Caroline Junge
University of Amsterdam
July 2014


Abstract

Research shows that adults perceive emotional expressions categorically: discrimination is faster and more accurate for expressions from different categories than for expressions from the same category. Some argue that this process is shaped by language. We propose to test emotion perception in the absence of language by examining pre-verbal infants. We expected both adults and 7-month-old infants to show categorical perception. Only one existing study has examined infants' categorical perception of emotions, using black-and-white images. Using new coloured image morphs displaying facial expressions blending fear and happiness, we first replicated the categorical perception effect in adults (Experiments 1 and 2). We then tested 7-month-olds on their processing of novel stimuli after habituation, using a looking-time paradigm measured with eye-tracking (Experiment 3). Infants were first habituated to one emotional expression and then presented with that image again, paired with a new expression that either did or did not cross the emotion-category boundary (based on adult categorization data). The physical difference between the novel and habituated expressions was held constant regardless of the category of the new expression. Infants showed a novelty preference for between-category expressions, but not for within-category ones. This suggests that categorical perception of emotions is already present in 7-month-old infants, prior to the acquisition of verbal labels.


Categorical perception of facial expressions in adults and pre-verbal infants: Evidence with new stimuli

Categorical perception (CP) occurs when continuous perceptual stimuli are perceived as distinct categories; equal-sized physical differences between stimuli are judged as larger or smaller depending on whether the stimuli are from the same or different categories (Harnad, 1987). This phenomenon has been observed in multiple perceptual classes, including speech sounds and colours. For example, adults do not perceive a colour continuum as a continuous change, but rather perceive changes from one colour to another at boundary points (Bornstein & Korda, 1984). Consequently, categorical perception results in a greater sensitivity to physical changes crossing the boundary between two perceptual categories. In the example of colour, research has shown that people are relatively insensitive to changes in wavelength occurring within a region that is perceived as one colour, and are more sensitive to changes of the same magnitude that occur across the boundary between two colours (Bornstein & Korda, 1984).

In recent years, researchers have also begun to examine multi-dimensional stimuli, such as facial expressions. Cross-cultural research has shown that the same facial movements are used to display emotions universally (Ekman & Friesen, 1971). Major facial expressions such as happiness, sadness, fear, anger and disgust can also be easily recognized and distinguished from each other across cultures (see Ekman, 1994 for an overview). Evidence for CP of emotions has been found in many studies. For example, using computer-generated line drawings of faces displaying different emotions, Etcoff and Magee (1992) tested subjects' identification of morphed expressions along a particular emotion continuum and their discrimination of pairs of these stimuli. Participants gave one verbal label to expressions on one side of the continuum (e.g. happiness) and another verbal label to expressions on the other side (e.g. fear); there seemed to be a perceptual boundary in between. Faces within the same emotion category (i.e. on the same side of the boundary) were discriminated more poorly than faces from different categories that differed by an equal physical amount. These results were later replicated with photographic-quality images (e.g. Calder et al., 1996; de Gelder, Teunisse & Benson, 1997; Young et al., 1997). These findings provided evidence for categorical perception of emotion.

A central question in understanding the CP of emotion is whether the process is based on learned verbal labels acquired during language development or on biologically hard-wired abilities for expression recognition. Some theorists argue that language plays a central role in the CP of emotions. Several empirical studies have found that CP of emotional facial expressions can be affected by the words people use to label the emotions. For example, Fugate, Gouzoules and Barrett (2010) found that participants were more likely to show CP when reading chimpanzees' facial expressions if they first explicitly learned the categories with a label. They argued that structural information in the face alone is insufficient for CP to occur. Other studies have shown that emotional facial expressions can be encoded differently depending on the accessibility of their verbal labels, and that CP is only observed when verbal labels are available (Gendron, Lindquist, Barsalou & Barrett, 2012; Roberson & Davidoff, 2000).

However, conflicting evidence from other studies suggests that verbal labels are not necessary for the CP of emotion. For example, Sauter, Le Guen and Haun (2011) tested native speakers of Yucatec Maya, whose language makes no lexical distinction between anger and disgust. Despite the lack of verbal labels, Yucatec Maya speakers perceived these emotion expressions categorically in a delayed matching task, to the same extent as a comparison group whose language did have different words for the two emotions. Here, words do not seem to be necessary for CP to occur. This view is also supported by ERP studies that identified a perceptual origin of CP in the brain (e.g. Campanella, Quinet, Bruyer, Crommelinck & Guerit, 2002; Leppänen, Richmond, Vogel-Farley, Moulson, & Nelson, 2009). For example, Campanella et al. (2002) tested CP of morphed facial expressions and observed a modulation of the P3b wave: the amplitude of the responses for discrimination of between-category pairs was higher than that for within-category pairs. These findings suggest that CP of emotional expressions is based on biologically evolved mechanisms independent of language.

A more direct way to examine whether language is necessary for the CP of emotion is to test a population without any verbal labels for the emotional expressions: pre-verbal infants. Research on colour perception has shown that infants as young as 4 months old have categorical perception (Bornstein, Kessen & Weiskopf, 1976; Franklin & Davies, 2004). If categorical perception is a domain-general mechanism, we would expect similar effects for facial expressions. This has indeed been shown by Kotsoni, de Haan and Johnson (2001). They tested 7-month-old infants with a morphed continuum of facial expressions created from prototypical expressions of happiness and fear, comparing infants' discrimination of pairs of expressions from the same or different categories. A combined familiarization and visual-preference paradigm was used: infants were first presented with one expression repeatedly (fixed familiarization phase), and then presented with the familiar expression paired with a new expression (visual preference paradigm). Previous research shows that infants prefer looking at novel stimuli after familiarization with other objects (e.g. Fantz, 1964). If infants have CP of emotions, they should show a stronger novelty preference for the new picture only when it is from a different category than the familiar expression. This is indeed what was found in their study, but only in one direction: infants showed a novelty preference for a fearful expression after being habituated to a happy expression, but not the other way around. The researchers argued that this was due to a negativity bias, an asymmetry in attention favouring negative stimuli: fearful expressions are more readily detected and attended to longer when presented alongside happy expressions (Nelson & Dolgin, 1985; Yang, Zald & Blake, 2007). According to the findings of Kotsoni et al. (2001), this negativity bias overruled the CP effects.

The current study

The current study has two goals. The first is to replicate the CP of emotion effects found in previous studies using new coloured stimuli; all previous studies on CP of emotion have used black-and-white images, which have limited ecological validity. The second is to attempt to replicate the CP of emotion effects in pre-verbal infants. Specifically, we are interested in the asymmetry observed by Kotsoni et al. (2001), who found a CP effect in only one direction, namely a novelty preference for a new cross-category expression after habituation to happiness but not to fear. Using the same stimuli to test both adults and infants allows us to validate the stimuli and increases the interpretability of the infant results. For these purposes, we conducted three experiments. In Experiment 1, we located the perceptual boundary between the two emotion categories. In Experiment 2, we validated the stimuli and attempted to replicate the CP effects of emotion in adults. In Experiment 3, we used a paradigm similar to that of Kotsoni et al. (2001) to test CP effects in 7-month-old infants.

EXPERIMENT 1

Method

Stimuli

Prototypical facial expressions were obtained from the Amsterdam Dynamic Facial Expression Set (ADFES). Two female models were selected (F02 and F03) and, consistent with Kotsoni et al. (2001), two expressions were used (fear and happiness). The prototypical facial expressions were morphed using the Sqirlz 2.1 (Xiberpic.com) morphing software to create a continuum, and intermediate exemplars were taken at every 10% distance. Each morph displays a blend of the two emotions in different proportions (see Figure 1 for examples at 20% distance; a list of all stimuli used can be found in the Appendix).

Figure 1. Morph exemplars two morph steps (20%) apart, for Face A and Face B: 100% fear/0% happiness, 80% fear/20% happiness, 60% fear/40% happiness, 40% fear/60% happiness, 20% fear/80% happiness, and 0% fear/100% happiness.

The morphs were then edited in Photoshop to make them look more natural. Several small changes were made, including removing shadows in the background and editing blurred parts of the shoulders, teeth, and hair. Any change that was made was applied to all morphs on the continuum.
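The morphing itself was done in Sqirlz, which also warps facial geometry between the prototypes. As a rough illustration of the continuum's structure only, the sketch below cross-fades two pre-aligned prototype images in 10% steps; the filenames happy.png and fear.png are hypothetical.

```python
# Cross-fade sketch of the 11-step fear-happiness continuum.
# NOTE: this is only an illustration; the actual stimuli were made with
# Sqirlz morphing software, which warps facial geometry rather than
# merely blending pixels. Filenames are hypothetical.
from PIL import Image

happy = Image.open("happy.png").convert("RGB")  # 100% happiness prototype
fear = Image.open("fear.png").convert("RGB")    # 100% fear prototype

for step in range(11):            # morphs 1..11
    w_fear = step / 10.0          # 0.0, 0.1, ..., 1.0 proportion of fear
    morph = Image.blend(happy, fear, alpha=w_fear)
    morph.save(f"morph_{step + 1:02d}.png")
```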

Participants

Participants were recruited online via social media. A total of 62 people participated (31 female); the mean age was 30 years. All participated on a voluntary basis.

Design and Procedure

The experiment was an online test administered using Qualtrics. Participants first saw a welcome screen with a brief description of the test and a digital consent form. After they agreed to participate by clicking "continue", they saw the instruction screen, which told them how to respond to the questions. After clicking "continue" again, they were asked to enter their age and gender before proceeding to the test phase.

During the test, participants were presented with one image at a time and were asked which emotion was displayed. They had two options, “happiness” and “fear”. Half of the participants were presented with these options in the reversed order. After they had made a choice, they clicked “continue” to move on to the next trial. Participants always saw the image, the question, and the options on the same screen. They could not move on to the next trial if they did not give an answer.

There were 11 morph exemplars (including the prototypical expressions) for each of the two faces; every morph was presented four times, making a total of 88 trials per participant. The experiment was divided into four blocks, each consisting of 22 unique trials in random order (the two faces were shown interchangeably). The online test took about 15 minutes to complete.
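As a sketch of this trial structure (not the actual Qualtrics implementation), the 88 trials can be generated as four independently shuffled blocks of the 22 unique face-morph combinations:

```python
# Sketch of the Experiment 1 trial structure: each of the 11 morphs of
# both faces appears once per block; 4 shuffled blocks give 88 trials.
import random

faces = ["A", "B"]
morphs = range(1, 12)  # morph steps 1..11

blocks = []
for _ in range(4):
    block = [(face, morph) for face in faces for morph in morphs]
    random.shuffle(block)  # random order within each block
    blocks.append(block)

trials = [trial for block in blocks for trial in block]
assert len(trials) == 88  # 11 morphs x 2 faces x 4 presentations
```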

Results

For each morph, the proportion of trials on which "happiness" was chosen as the answer was calculated, across all participants and all trials.1 No data were excluded.

1 Because this is a two-option forced-choice task, calculating the proportion of "fear" responses is the same analysis; results are therefore reported only for the "happiness" option.


Figure 2A. The proportion of times "happiness" was chosen as the answer, as opposed to "fear", across all trials (face A).

Figure 2B. The proportion of times "happiness" was chosen as the answer, as opposed to "fear", across all trials (face B).

As illustrated in Figure 2A, for face A, morphs 1, 2, 3 and 4 were perceived most of the time (100%, 99%, 99% and 82%, respectively) as expressions of happiness. Morphs 7, 8, 9, 10 and 11 were perceived most of the time as expressions of fear (9%, 2%, 1%, 0% and 0% of responses judged them as happiness). Morphs 5 and 6 were more ambiguous: morph 5 was perceived 63% of the time as happiness, and morph 6 only 31% of the time.


For face B (Figure 2B), there was a similar pattern. Morphs 1, 2, 3 and 4 were perceived most of the time (100%, 100%, 99% and 84%, respectively) as happiness, while morphs 6, 7, 8, 9, 10 and 11 were perceived most of the time as fear (14%, 2%, 1%, 1%, 0% and 0% of responses considered these morphs to be happiness). Morph 5 was less clear-cut than the other morphs; it was judged as happiness 36% of the time. In other words, morph 5 was perceived by most people as an expression of fear.

Discussion

We found that for face A, morphs 1-5 were perceived as happiness and morphs 6-11 as fear; for face B, morphs 1-4 were perceived as happiness and morphs 5-11 as fear. Considering the series of morphs as a continuum of two emotions transitioning into each other, there should be a point where participants start to judge the expression as one emotion instead of the other. This transition point is where the majority judgement flips from one emotion to the other (i.e., crosses 50%). It can be seen as a boundary separating the two emotion categories: all expressions to the left of the boundary should be more likely to be perceived as happiness, and all expressions to the right should be more likely to be perceived as fear.

For face A, the 50/50 point lies between morph 5 and morph 6. This means that morph 5 and all happier morphs (i.e. morphs 1, 2, 3, 4) were seen as happiness expressions, while morph 6 and all more fearful morphs (i.e. morphs 7, 8, 9, 10, 11) were seen as fear expressions. This boundary is, however, not very clear-cut, because morph 5 was perceived as happiness only 63% of the time, which is just slightly above 50%. Compared to the morph adjacent to the boundary for face B (morph 4), which was perceived as happiness 84% of the time, assigning morph 5 to the happiness category might not be very well justified.

For face B, there is a clearer picture. The 50/50 point lies between morph 4 and morph 5: morph 4 was perceived by 84% of participants as happiness, while morph 5 was perceived by 36% as happiness. Here there is a clear distinction between morphs to the left of the mid-point and those to the right in terms of their likelihood of being judged as one emotion instead of the other. We can conclude with relative confidence that morphs 1, 2, 3 and 4 are seen as expressions of happiness and morphs 5, 6, 7, 8, 9, 10 and 11 are seen as expressions of fear.
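The boundary-finding logic described above amounts to locating the adjacent pair of morphs where the identification curve crosses 50%. A minimal sketch, using the face B identification rates reported above:

```python
# Locate the category boundary: the adjacent pair of morphs where the
# proportion of "happiness" responses crosses the 50% criterion.
# Values are the face B identification rates reported above.
prop_happy_B = [1.00, 1.00, 0.99, 0.84, 0.36, 0.14, 0.02, 0.01, 0.01, 0.00, 0.00]

def boundary_pair(props, criterion=0.5):
    """Return the 1-indexed pair of adjacent morphs straddling the criterion."""
    for i in range(len(props) - 1):
        if props[i] >= criterion > props[i + 1]:
            return (i + 1, i + 2)
    return None

print(boundary_pair(prop_happy_B))  # (4, 5): boundary between morphs 4 and 5
```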

EXPERIMENT 2

To test whether categorical perception would occur with these stimuli, we examined whether people's ability to discriminate between facial expressions differs depending on whether the expressions are from the same emotion category or from different ones. If categorical perception occurs, we would expect greater sensitivity to a change that crosses a category boundary than to one that occurs within a category. Simply put, it should be easier to tell apart two expressions displaying two different emotions than two expressions displaying the same emotion to different extents. We tested this hypothesis using an X-AB task.

Method

Participants

Participants were 34 students from the University of Amsterdam (14 male, mean age = 23.3 years). Some received partial course credit in return; others participated on a voluntary basis. Three participants were excluded from the analysis: one had an accuracy rate more than two standard deviations below the mean (performing at about chance level), and two others had reaction times more than two standard deviations above the mean; they were excluded to avoid biasing the analysis. The final sample therefore consisted of 31 people.


Stimuli and apparatus

The stimuli used in this experiment were identical to those used in Experiment 1. Tests were run on lab PCs controlled by E-Prime (Schneider, Eschmann, & Zuccolotto, 2002). Responses were entered using standard keyboards.

Study Design

Discrimination between two facial expressions was tested using an X-AB task. Participants were first briefly presented with stimulus X, followed by stimuli A and B side by side; the task was to indicate whether X was the same as A or B. Stimulus X was either a predominantly happy or a predominantly fearful face (based on the results of Experiment 1). The target image was paired with a distractor image, a morph one step (i.e. 10% physical difference) away from the target. The distractor was either a within-category expression (e.g. a happy face paired with another face that is slightly happier) or a between-category expression (e.g. a happy face paired with a slightly less happy face that crossed the category boundary and was therefore perceived as a fearful expression). Participants had to decide which of the two stimuli was the image they had just seen (i.e. stimulus X).

There were six possible types of AB pairs: morphs 3 and 4, 4 and 5, 5 and 6, 6 and 7, 7 and 8, and 8 and 9. Each member of a pair was equally likely to be X. Whether a pair was a between-category or within-category pair was determined by the category boundaries established in Experiment 1. For face A, the pair of morphs 5 and 6 was a between-category pair; all other pairs were within-category pairs. For face B, the pair of morphs 4 and 5 was a between-category pair; all other pairs were within-category pairs.
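For clarity, the following sketch (not part of the original E-Prime script) enumerates the six AB pair types and labels each as between- or within-category using the boundaries from Experiment 1:

```python
# Enumerate the six X-AB pair types and label each as between- or
# within-category, given the boundaries established in Experiment 1.
boundary = {"A": (5, 6), "B": (4, 5)}
ab_pairs = [(3, 4), (4, 5), (5, 6), (6, 7), (7, 8), (8, 9)]

for face in ("A", "B"):
    for pair in ab_pairs:
        label = "between" if pair == boundary[face] else "within"
        print(f"face {face}, morphs {pair}: {label}-category")
```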

All factors were manipulated within subjects. Response accuracy and reaction time were measured as dependent variables.

Procedure

Participants signed consent forms and read instructions on the screen. The experiment started with 6 practice trials to make sure participants understood the task. The stimuli used for the practice trials were morphs of another face from the ADFES database that was not used in the experimental trials.

On each trial, a target stimulus (X) was presented in the center of the screen for 1000ms, followed by a pair of stimuli, one of which was identical to the target. Participants were then asked to indicate which member of the pair, the one to the left or the one to the right, was the one they had just seen. They entered their response with a key press. No feedback was given on the correctness of the responses. The inter-trial interval was self-paced.

Each type of pair was presented 8 times, with each member having an equal likelihood of being the target. The positions of the two pictures on the screen (left or right) were counterbalanced. Each participant received 96 experimental trials (8 repetitions x 6 trial types x 2 faces). The order of the trials was randomized. Participants were instructed to respond as quickly and as accurately as possible.

Results

The most widely used criterion for categorical perception is better discrimination of pairs of stimuli from different perceptual categories than of pairs from the same category, given the same amount of physical change (Calder et al., 1996). Better discrimination can be operationalized as higher accuracy or faster responses; we examined both criteria. Analyses for face A and face B were done separately, given that they had different category boundaries.

Individual trials were removed from the analysis if the reaction time was more than 2 standard deviations from the mean. This resulted in 65 trials being removed for face A and 72 trials for face B, leaving a total of 3125 trials for the analysis.
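A minimal sketch of this exclusion rule, assuming reaction times are available as a flat array per face:

```python
# Drop trials whose reaction time lies more than 2 standard deviations
# from the mean (applied separately for face A and face B trials).
import numpy as np

def exclude_outlier_trials(rts, n_sd=2.0):
    rts = np.asarray(rts, dtype=float)
    keep = np.abs(rts - rts.mean()) <= n_sd * rts.std()
    return rts[keep]

# Example with made-up reaction times (ms); the 3000 ms trial is dropped.
print(exclude_outlier_trials([650, 700, 720, 680, 710, 690, 3000]))
```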

Face A.

Accuracy. Following the analysis procedure of Calder et al. (1996), we compared accuracy for the morph pair crossing the category boundary (morphs 5 and 6) to the average accuracy for pairs lying well within each category on either side (morphs 3 and 4 on the happiness side, and morphs 8 and 9 on the fear side). This forms a comparison of between-category trials to within-category trials.2 A paired-samples t-test was performed; the group mean difference did not reach statistical significance, t(30) = -1.33, p > 0.1. The same analysis was performed comparing the between-category trials (morph pair 5 and 6) to the average of all the other within-category trials (morph pairs 3 and 4, 4 and 5, 6 and 7, 7 and 8, and 8 and 9). The results again showed no group difference, t(30) = -1.45, p > 0.1.

2 The procedure of comparing the morph pair crossing the boundary to the pairs furthest from the boundary on each side (morphs 3 and 4, and 8 and 9) was borrowed from Calder et al. (1996). Additional t-tests were conducted comparing the between-category pair to each individual within-category pair; none of the results was significant.

Reaction time. The same trial-type comparisons were conducted for reaction time. First, the between-category trials were compared to the within-category trials at the tails of the continuum; then, the between-category trials were compared to the average of all within-category trials. Neither test yielded significant results: t(30) = -1.44, p > 0.1 and t(30) = 0.3, p > 0.1.

For face A, discrimination of the between-category morph pair did not differ from that of the within-category morph pairs in terms of either accuracy or reaction time. A categorical perception effect thus did not occur with these stimuli.
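The statistical comparison used throughout these results is a paired-samples t-test on per-participant condition means. A sketch with placeholder data (the variable names and values are illustrative, not the study's data):

```python
# Paired-samples t-test on per-participant mean accuracy: the boundary
# (between-category) pair vs. the within-category pairs, following
# Calder et al. (1996). Data below are random placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
acc_between = rng.uniform(0.70, 1.00, size=31)  # boundary pair, per participant
acc_within = rng.uniform(0.60, 0.95, size=31)   # mean of within-category pairs

t, p = stats.ttest_rel(acc_between, acc_within)
print(f"t({acc_between.size - 1}) = {t:.2f}, p = {p:.3f}")
```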

Face B.

Accuracy. Similarly, we compared accuracy on between-category trials (morph pair 4 and 5) to accuracy on the within-category trials at the tails of the continuum (morph pair 3 and 4 in the happiness category and morph pair 8 and 9 in the fear category). A paired-samples t-test showed significantly better performance on between-category trials than on within-category trials, t(30) = 4.21, p < 0.001. The same pattern was observed when comparing the between-category trials with the average of all within-category trials, t(30) = 4.01, p < 0.001. This means that participants were better at discriminating two expressions from each other when they were from different emotion categories. The results are illustrated in Figure 3.

Reaction time. The improved performance on between-category trials was also reflected in the reaction times. A paired-samples t-test comparing reaction times on between-category trials to the average reaction time on the within-category trials at the tails of the continuum showed a significant difference, t(30) = -3.158, p < 0.01: participants were faster responding to between-category trials (M = 1399.3 ms, SD = 252.8) than to within-category trials (M = 1588.5 ms, SD = 4503.5). The same pattern held when the between-category trials were compared to the average of all within-category trials (M = 1582 ms, SD = 327.8), t(30) = -2.73, p < 0.05. The results are illustrated in Figure 3.

Figure 3. Performance comparison across trial types for Face B.

Discussion

We have replicated the categorical perception effect with one of the two models we used, namely Face B. For this model, participants showed greater sensitivity to changes that occurred across emotion categories than to changes within the same category. Better discrimination of between-category pairs of expressions compared to within-category pairs was reflected in both response accuracy and reaction time.

We failed to find a categorical perception effect with Face A. This can likely be explained by the results of the recognition task in Experiment 1: the two morphs closest to the perceptual mid-point of the continuum, morphs 5 and 6, were rather ambiguous, in that assignment of these morphs to a specific prototype category was not very consistent. Morph 5 was judged as an expression of happiness about 63% of the time, only slightly over half. In contrast, the morph closest to the boundary for face B (morph 4) was perceived as happiness 84% of the time. The lack of a clear distinction between expressions of different emotions makes it difficult to assess categorical perception. Face A might therefore not qualify as an appropriate stimulus, and for this reason it was disregarded in Experiment 3.

EXPERIMENT 3

Method

Participants

Fourteen infants between the ages of 6.7 and 7.9 months (M = 7.3) were tested; six of them were girls. Two additional infants were tested but excluded from the final analysis: one had a side bias (more than 80% of the time spent looking at only one side of the screen), and the other had too much missing data to include in the analysis.3 All babies came from non-depressed families.

3 Another 18 infants were tested using face A before the adult identification data were available. Due to the limited duration of this project, we started recruiting infants before we were able to confirm the validity of the stimuli. This had some implications for the results, which are discussed in the General Discussion.

Stimuli and apparatus

The stimuli used were the same as in Experiments 1 and 2, except that we only used morphs two steps apart from each other (20% physical distance), i.e. morphs 3, 5, 7 and 9; Kotsoni et al. (2001) also used morph comparisons two steps apart. The experiment was programmed and presented in SR Research Experiment Builder. Eye movements were tracked with an EyeLink 2000 (SR Research Ltd., Mississauga, Canada).

Study design

Previous research shows that infants prefer looking at novel stimuli after habituation to or familiarization with other objects (e.g. Fantz, 1964). We made use of this novelty preference to test infants' discrimination between two expressions after habituation to one of them. Our procedure was very similar to that of Kotsoni et al. (2001).

Half of the infants were in the experimental condition. They were first shown a fearful expression (morph 5) repeatedly until they habituated. The habituation criterion was met when the average looking time on three consecutive trials was less than 50% of the average looking time of the three longest trials (a sketch of this criterion is given at the end of this section). Infants then moved on to the test phase, where morph 5 was shown again on one side of the screen, paired with either morph 3 (between-category pair) or morph 7 (within-category pair) on the other side. Each pair type was presented twice, making four test trials in total. The position of each member of a pair was counterbalanced, and the order of the four trials was pseudo-randomized so that the first two trials consisted of one between-category pair and one within-category pair. The proportion of time spent looking at the novel expression (PropNew) was the dependent variable. If infants show CP of emotion, the relative looking time to the new expression should be longer for between-category pairs, whereas there should be no difference for within-category pairs.

The other half of the infants were in the "within" condition. They were habituated to morph 7, which adults perceived as an expression of fear. Morph 7 was then paired with either morph 5 or morph 9 in the test phase. Since morphs 5 and 9 were both perceived as expressions of fear by adults, we did not expect any difference in the relative looking time to the novel stimuli in this condition.

(18)

To avoid side bias and to keep information load constant across trials, the habituation trials also consisted of two images shown side by side, just like the test trials; the difference was that the habituation trials consisted of two identical images.
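A minimal sketch of the habituation criterion described above, assuming it is evaluated after each trial over the three most recent trials:

```python
# Habituation check: mean looking time over the three most recent trials
# must fall below 50% of the mean of the three longest trials so far.
def is_habituated(looking_times_ms, window=3, criterion=0.5):
    if len(looking_times_ms) < window:
        return False
    recent = sum(looking_times_ms[-window:]) / window
    longest = sorted(looking_times_ms, reverse=True)[:window]
    return recent < criterion * (sum(longest) / window)

# Example: long looks early on, short looks later -> habituated.
print(is_habituated([18000, 16000, 17000, 9000, 7000, 6000]))  # True
```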

Procedure

Infants came to the lab with at least one caregiver, who signed a consent form before the experiment started. The caregiver then sat in front of the computer, about 50-60 cm from the screen, with the baby either in a Maxi-Cosi on their lap or directly on their lap.

The experiment started with a three-point calibration, a procedure to adjust the eye-tracker to the infant's eyes. After the eye-tracker was calibrated, the experimental trials began. Infants first saw a pre-test image, to make sure their attention was focused on the screen. The habituation trials then started. The habituation phase ended when the habituation criterion was met, or after 20 trials, and the test phase followed immediately. After the four test trials, the pre-test image was shown again, to check whether the child's visual attention was still on the screen. The experiment ended with a thank-you note, to indicate to the parents that the experiment had finished.

Every trial started with an animated fixation cue in the center of the screen (an expanding bull's eye) accompanied by sounds. Once the infant's eyes fixated on the screen, the stimuli were presented via a button press on the experimenter's computer. A trial ended, and the next trial began, when the infant looked away from the screen for more than 1500 ms, or after 20 s. A trial was recycled (i.e. presented again) if the infant looked at the screen for less than 1000 ms before looking away, or did not look at the screen at all within 4000 ms of the trial starting. These criteria applied to both the habituation and the test phase.

A short animated attention-getter (lasting about 3 seconds) was presented every four trials to keep the infants' attention on the screen. The eye-tracker tracked the gaze of the right eye throughout the experiment. The test took about 5-10 minutes.


Results

All infants met the habituation criterion within 20 trials; the average number of trials to habituation was 9.2. This is very close to the number of fixed familiarization trials (10) reported by Kotsoni et al. (2001).

We created two interest areas (IAs) for the pair of expressions on the screen, each covering one member of the image pair with a margin of about 1 cm. For each trial, we calculated the total looking time at the relevant areas (i.e. at the two expressions) by summing the looking times within each IA. We then divided the looking time at the IA covering the new expression by the total looking time of that trial, and averaged this value over the two presentations of the same trial type. This yields the proportion of time spent looking at the novel stimulus relative to the familiar one (PropNew).
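A minimal sketch of the PropNew computation, assuming per-trial looking times (in ms) for the novel and familiar interest areas:

```python
# PropNew: looking time at the novel expression's interest area divided
# by the total looking time at both areas, averaged over the two
# presentations of each trial type.
def prop_new(novel_ms, familiar_ms):
    return novel_ms / (novel_ms + familiar_ms)

def mean_prop_new(presentations):
    """presentations: [(novel_ms, familiar_ms), ...] for one trial type."""
    values = [prop_new(n, f) for n, f in presentations]
    return sum(values) / len(values)

# Example with made-up looking times for one between-category pair:
print(mean_prop_new([(5200, 3100), (4700, 3300)]))  # ~0.61
```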

For the experimental condition, PropNew was calculated separately for the between-category and within-category pairs, and a paired-samples t-test was conducted to compare them. The results showed a significant difference, t(6) = -3.47, p < 0.05: infants spent a higher proportion of their looking time on the novel expression when it was from a different category (M = 0.63, SD = 0.12) than when it was from the same category (M = 0.49, SD = 0.07) as the habituated expression. The results are illustrated in the left part of Figure 4.

The same analysis was done for the within condition. PropNew was calculated for the pair of morphs 7 and 5 and the pair of morphs 7 and 9, in which the novel expression was morph 5 and morph 9, respectively. A paired-samples t-test comparing the two PropNew values again found a significant difference, t(6) = 3.05, p < 0.05: infants looked longer at the novel expression when it was morph 9 (M = 0.69, SD = 0.12) than when it was morph 5 (M = 0.50, SD = 0.14). The results are illustrated in the right part of Figure 4.

Figure 4. Proportion of time spent looking at the novel expression in the test phase. In the experimental condition, morph 5 was the habituated (familiar) face; in the within condition, morph 7 was the habituated face.

Discussion

We predicted that if infants show CP of emotion with these stimuli, it would be easier for them to discriminate pairs of facial expressions from different emotion categories, which would be reflected in an increased novelty preference. In the present experiment, we observed that after habituation to one facial expression, infants spent more time looking at a new expression when it came from a different category than the habituated expression. The increased novelty preference for a cross-category expression reflects better discriminability. These results can be seen as evidence for CP of emotion in pre-verbal infants.

A surprising finding in this experiment was the asymmetrical preferential looking at the novel expressions in the within-category pairs. After habituation to a fearful expression (morph 7), babies spent significantly more time looking at a novel expression when it was to the right of the familiar face on the continuum (i.e., a more fearful expression) than when it was to the left (i.e., a less fearful expression), even though both were from the same (adult) emotion category, namely fear. Previous research has shown that babies tend to show a spontaneous and persistent interest in looking at fearful expressions, even following habituation to fear (e.g. Nelson, Morse & Leavitt, 1979; Kotsoni et al., 2001). It is likely that this preference for more fearful faces also holds when the comparisons are within the same emotion category. Morphs 9 and 5 were both perceived as expressions of fear, but morph 9 was much closer to the prototypical fear expression, making it the more fearful of the two. If the fear-preference hypothesis is true, we would indeed expect a stronger novelty preference when the habituated face was paired with morph 9 than with morph 5. This fear preference could be tested in additional experiments comparing all types of morph pairs within the fear category, to see whether the preference is always present.

One issue to note is that the sample size in this experiment was very small. Though our findings add important information to the existing literature, they are likely underpowered, and the reliability of the results should be tested further with a larger sample.

General Discussion

We have replicated the CP of emotion effects in Experiments 1 and 2 with a recognition task and a discrimination task: adults sorted morphed facial expressions into distinct emotion categories and were better at discriminating pairs of expressions when they were from different categories. We generalized the effects to new stimuli that had not been validated before. Specifically, by using coloured image morphs (instead of black-and-white ones), we increased the ecological validity of the tests. We can conclude that CP of emotional facial expressions is a rather robust effect.


To address the question of whether language is necessary for the CP of emotion, we tested 7-month-old infants in Experiment 3 using a habituation paradigm. Infants were first habituated to an expression of fear, and the habituated face was then paired with a new expression. We observed a novelty preference when the new expression was from a different emotion category (i.e., happiness) but not when it was from the same emotion category (i.e., fear). This indicates better discrimination of between-category pairs of stimuli. Kotsoni et al. (2001) used a similar experimental paradigm to test CP of emotion in infants; they found CP only for a fearful face after habituation to a happy face, not the other way around. Our study extends and complements their findings. Additionally, we found that when the habituated fearful face was paired with one of two other expressions displaying different degrees of fear, there was a stronger novelty preference for the more fearful expression, suggesting that infants have a consistent preference for looking at fearful expressions. In the experimental condition, however, we found CP of emotion effects but no negativity bias: after infants were habituated to a fearful expression, they had a stronger novelty preference for a happy expression than for a more fearful expression at an equal physical distance. This suggests that the CP effects were stronger than the negativity bias, a finding at odds with that of Kotsoni et al. (2001).

One limitation of this project is the small number of subjects in Experiment 3. Given the limited time frame, we started testing babies before we had the data from the adult recognition task. With the goal of finishing the project on time, we used stimuli for the infant testing whose category boundaries had not yet been validated with adults. This rendered some of the experimental manipulations invalid. The significant results in Experiment 3 also had very low power. A lesson for the future is that in such a research design, the stimuli should always be validated first.

For future research, additional experiments should be run with more conditions to test CP at the other end of the happiness-fear continuum. The findings of our experiments also provide promising grounds for further research on CP effects with other emotions, but what is needed first is a replication of these findings with a bigger sample.


References

Bornstein, M. H., Kessen, W., & Weiskopf, S. (1976). Color vision and hue categorization in young human infants. Journal of Experimental Psychology: Human Perception and Performance, 2(1), 115.

Bornstein, M. H., & Korda, N. O. (1984). Discrimination and matching within and between hues measured by reaction times: Some implications for categorical perception and levels of information processing. Psychological Research, 46, 207-222.

Calder, A. J., Young, A. W., Perrett, D. I., Etcoff, N. L., & Rowland, D. (1996). Categorical perception of morphed facial expressions. Visual Cognition, 3(2), 81-118.

Campanella, S., Quinet, P., Bruyer, R., Crommelinck, M., & Guerit, J. M. (2002). Categorical perception of happiness and fear facial expressions: An ERP study. Journal of Cognitive Neuroscience, 14(2), 210-217.

Ekman, P., & Friesen, W. V. (1971). Constants across cultures in the face and emotion. Journal of Personality and Social Psychology, 17, 124-129.

Etcoff, N. L., & Magee, J. J. (1992). Categorical perception of facial expressions. Cognition, 44, 227-240.

Fantz, R. L. (1964). Visual experience in infants: Decreased attention to familiar patterns relative to novel ones. Science, 146(3644), 668-670.

Franklin, A., & Davies, I. R. L. (2004). New evidence for infant colour categories. British Journal of Developmental Psychology, 22, 349-377.

Fugate, J. M. B., Gouzoules, H., & Barrett, L. F. (2010). Reading chimpanzee faces: Evidence for the role of verbal labels in categorical perception of emotion. Emotion, 10(4), 544-554.

de Gelder, B., Teunisse, J. P., & Benson, P. J. (1997). Categorical perception of facial expressions: Categories and their internal structure. Cognition and Emotion, 11(1), 1-23.

Gendron, M., Lindquist, K. A., Barsalou, L., & Barrett, L. F. (2012). Emotion words shape emotion percepts. Emotion, 12(2), 314-325.

Harnad, S. (Ed.) (1987). Categorical perception: The groundwork of cognition. Cambridge: Cambridge University Press.

Kotsoni, E., de Haan, M., & Johnson, M. H. (2001). Categorical perception of facial expressions by 7-month-old infants. Perception, 30(9), 1115-1125.

Leppänen, J. M., Richmond, J., Vogel-Farley, V. K., Moulson, M. C., & Nelson, C. A. (2009). Categorical representation of facial expressions in the infant brain. Infancy, 14(3), 346-362.

Nelson, C. A., & Dolgin, K. G. (1985). The generalized discrimination of facial expressions by seven-month-old infants. Child Development, 56(1), 58-61.

Nelson, C. A., Morse, P. A., & Leavitt, L. A. (1979). Recognition of facial expressions by seven-month-old infants. Child Development, 1239-1242.

Roberson, D., & Davidoff, J. (2000). The categorical perception of colors and facial expressions: The effect of verbal interference. Memory & Cognition, 28(6), 977-986.

Sauter, D. A., Le Guen, O., & Haun, D. B. M. (2011). Categorical perception of emotional facial expressions does not require lexical categories. Emotion, 11(6), 1479-1483.

Schneider, W., Eschmann, A., & Zuccolotto, A. (2002). E-Prime user's guide. Pittsburgh, PA: Psychology Software Tools, Inc.

Yang, E., Zald, D. H., & Blake, R. (2007). Fearful expressions gain preferential access to awareness during continuous flash suppression. Emotion, 7(4), 882-886.

Young, A. W., Rowland, D., Calder, A. J., Etcoff, N. L., Seth, A., & Perrett, D. I. (1997). Facial expression megamix: Tests of dimensional and category accounts of emotion recognition. Cognition, 63(3), 271-313.


Appendix

All stimuli used in Experiments 1 and 2 (the same eleven morph steps were used for Model A and Model B)

Morph        1    2    3    4    5    6    7    8    9    10   11
% happiness  100  90   80   70   60   50   40   30   20   10   0
% fear       0    10   20   30   40   50   60   70   80   90   100
