
Telling friend from foe: Listeners’ ability to identify in-group and out-group members from laughter.

Marie Ritter

Graduate School Psychology, Universiteit van Amsterdam

INTERNSHIP REPORT

Supervisor: Dr. D. A. Sauter

Department of Social Psychology, Universiteit van Amsterdam

Credits: 18 ec

Ethics committee reference code: 2014-SP-3736

Author Note

Marie Ritter, Student No. 11118962, Universiteit van Amsterdam, Amsterdam. I want to thank Dora Matzke and Johnny van Doorn for their help with aspects of the Bayesian analysis.

Address: Burgdammer Mühlenberg 8, 28717 Bremen, Germany. E-mail: marie.ritter@student.uva.nl


Abstract

Background: Emotions of in-group members, that is, people from a similar cultural or social background, are recognized more accurately than emotions of out-group members. This so-called in-group advantage of emotion recognition has been explained by a higher motivation to recognize the emotions of peers. However, this presupposes that people can identify in-group members from their emotional expressions. Method: This study investigated whether people are able to perceive group membership from the emotional vocalization of laughter. Specifically, a within-participant design with a six-way forced-choice paradigm was implemented in an online study to experimentally test whether Dutch participants could categorize laughter segments by nationality. Results: Both frequentist and Bayesian analyses show that participants cannot discern group membership from laughter better than chance. While none of the groups was identified above chance level, in-group and distant out-group (e.g., Namibian) laughter was identified slightly better than close out-group (e.g., French) laughter.

Implications: Differences in emotion expression modes (facial expression vs. nonverbal vocalization) are considered to explain the inability to identify group membership from laughter. Lastly, it is discussed that the findings challenge an aspect of motivational explanations of the in-group advantage of emotion recognition.

Keywords: in-group advantage, group membership, laughter, nonverbal


Telling friend from foe: Listeners’ ability to identify in-group and out-group members from laughter.

Is she a friend or foe? Does she belong to a cultural or social group similar to my own? In short, is she part of my in-group or an out-group? The answers define many factors in our social behavior and relationships, such as how we construct our identity (e.g., Brewer, 1991) or interact with others. For example, we generally attend more closely to faces of fellow group members (Byatt & Rhodes, 2004) and process in-group members’ faces more holistically (Chun, Park, Park, & Kim, 2012). We are better at recognizing perceived in-group members (Hehman, Mania, & Gaertner, 2010) and more accurate when identifying what an in-group member is feeling from nonverbal

expressions (Elfenbein & Ambady, 2002a). This effect is known as the in-group

advantage of emotion recognition (for a meta-analysis see Elfenbein & Ambady, 2002b).

The in-group advantage even affects situations in which we only believe someone belongs to our in-group: In a study by Thibault, Bourgeois, and Hess (2006),

participants were asked to identify the emotion on faces that were either labeled to belong to the participants’ in-group or an out-group. The data suggest that when participants thought that they were looking at an in-group member, they were better at labeling the depicted emotion. This would support the motivational account which explains the in-group advantage of emotion recognition as follows: If we perceive someone as an in-group member, we are more motivated to find out what they are feeling. In turn, this means that if people are better at recognizing emotions from in-group members, they should be able to tell who belongs to their group from an expression alone.

In this study, we aim to experimentally test whether people can discern group membership from nonverbal expressions, specifically from the nonverbal vocalization laughter. To investigate whether people are able to do so is the main objective of this project (H1). Furthermore, we aim to test whether there are covariates that can influence the ability to identify group membership from laughter (H2), such as familiarity with foreign laughter.

Identifying group members (H1)

If you meet someone new, there are many indicators that provide information about your new acquaintance’s group membership. For example, she might wear a suit, she could look older, or she could wear a headscarf or have a different skin color. These are all indicators that she could be from a similar cultural or social group as you are. Moreover, all these indicators are something you can immediately see.

Visual indicators. In the described situation group membership is very salient, similar to situations is which it is explicitly stated (e.g., Thibault et al., 2006) or when experimental stimuli contain very distinct group features (e.g., pictures of different ethnicities; Cassidy, Quinn, & Humphreys, 2011). Other times group membership is not as straightforward: Marsh, Elfenbein, and Ambady (2003) presented American

participants with pictures of American–Japanese (American citizens with Japanese heritage) and Japanese (Japanese citizens with Japanese heritage) people who posed with either neutral or emotional expressions. Participants were asked to categorize the pictures according to whether they thought the person was American–Japanese or Japanese. People performed above chance level for neutral expressions but they

succeeded much better in the task when the person was posing an emotional expression. This suggests that emotional expressions may contain information about group

membership akin to an accent in speech (e.g., Clopper & Pisoni, 2004b). Indeed, studies suggest that while people generally converge on the prototype expressions of emotions (e.g., Ekman & Friesen, 1978), they also show culturally specific differences in how they express emotions on their face, forming an emotion dialect (Elfenbein, Beaupré,

Lévesque, & Hess, 2007). If there are such subtle differences in emotional expressions, like an emotion dialect, people might be able to use these differences as information about group membership, as is suggested by Marsh et al. (2003). However, these results are concerned with the visual domain; while they can inform research in other domains, studies directly concerned with nonverbal vocalizations need to be consulted.

Nonverbal vocalizations. Nonverbal vocalizations play a crucial role in forming and maintaining relationships (Dale, Fusaroli, Duran, & Richardson, 2013); thus, it could be expected that they carry important social information, such as group information. This has been observed in the vocalizations of primates and apes, which vary with individual characteristics (e.g., age or hierarchy; Fischer, Kitchen, Seyfarth, & Cheney, 2004). Moreover, it was found that chimpanzees adjust their calls to distinguish themselves from groups living close by (Crockford, Herbinger, Vigilant, & Boesch, 2004) and that these calls are meaningful to listening individuals (Herbinger, Papworth, Boesch, & Zuberbühler, 2009). Two studies provide more insight into human

vocalizations: Walton and Orlikoff (1994) found that people can identify the ethnicity of a speaker 60% of the time from “a” sounds alone. Bryant et al. (2016) found that

human laughter can communicate information about social relationships such that listeners can identify whether people laughing together are friends or strangers. This suggests that human nonverbal vocalizations might carry group information.

This was tested by Sauter (2013, Experiment 1): Dutch participants listened to emotional vocalizations of amusement, relief, triumph, and sensual pleasure from three different countries (Namibia, Britain, and the Netherlands). Participants were first asked to classify the emotion that was vocalized and then to identify whether the person was from the Netherlands (the in-group), another European country (the close out-group), or a country outside Europe (the distant out-group). In the emotion recognition task, the experiment found the in-group advantage. In the group classification however,

participants identified none of the groups better than expected by chance.

This shows that while some studies suggest that people might be able to identify group membership from nonverbal vocalizations, this was not confirmed in a direct test. To further probe this matter, possible covariates should be investigated, such as familiarity, which might influence the ability to identify group membership from laughter.

Familiarity: a covariate (H2)

There is only little evidence on the impact of familiarity on group identification from nonverbal vocalizations. In contrast, many studies investigating language suggest that a person who is familiar with a certain dialect is better at identifying whether a speaker is from a certain region or social group (Kerswill & Williams, 2002). For example, Clopper and Pisoni (2004a) showed that participants who had lived in many different US states were better at telling from which state a speaker came, compared to participants who had lived in one state most of their lives. Baker, Eddington, and Nay (2009) replicated the effect with a Utahan accent and additionally showed that

participants who were from a state close to Utah, a close out-group, were almost as good as the Utahans, the in-group, at identifying a Utahan accent. People from more distant states, the distant out-group, performed worse.

While the literature on dialects in language suggests that being familiar with a group might improve performance when classifying people according to vocalizations, the study by Sauter (2013, Experiment 1) does not confirm that this is the case for nonverbal vocalizations. Participants did not perform better than chance for any of the groups, with performance in in-group and close out-group conditions actually being slightly lower than in distant out-group conditions.

In sum, many studies on either visual indicators or language suggest that (a) people might be able to identify group membership from nonverbal vocalizations, and (b) people who are more familiar with another group are better at identifying that group from their nonverbal vocalizations. Yet, neither of these claims was confirmed in the direct investigation by Sauter (2013). However, this study has some limitations that should be addressed. First, the study included vocalizations of multiple emotions. While this was necessary to test the in-group advantage of emotion recognition, it might have increased task difficulty in the group classification task. Second, the study only included one nationality per group, so that participants perhaps performed badly because they could not easily distinguish the in-group from the close out-group, confusing the two with each other.

The current study

In the following study, we aim to stringently test whether people can identify in- and out-group members from laughter alone. As in Sauter (2013), we represent the groups as nationalities because national identity is a salient and reliable group dimension (Smith, 1991). As mentioned before, studies on dialect recognition in language imply that familiarity plays a role in perceiving group membership from acoustic cues (e.g., Baker et al., 2009). Hence, we distinguish between in-group, close out-group, and distant out-group (Sauter, 2013). Lastly, we remedy the mentioned limitations of Sauter (2013) by including more nationalities and focusing specifically on the vocalization of laughter.

Laughter. Laughter, as one of the most extensively researched vocalizations (e.g., Owren & Amoss, 2014), is one of the most variable acoustic expressions of humans (Rothgänger, Hauser, Cappellini, & Guidotti, 1998). It is often implicated in social situations (Scott, Lavan, Chen, & McGettigan, 2014); in fact, laughter is about 30 times more likely in social compared to solitary situations (Provine, 2004). Moreover, laughter is not necessarily a response to humor, but “mutual playfulness, in-group feeling and positive emotional tone—not comedy mark the social settings of most naturally

occurring laughter” (Provine, 1996, p. 41). Dezecache and Dunbar (2004) even suggest that laughter is an extended form of grooming with which social bonds can be

strengthened with multiple individuals at a time. Recently, a study showed that laughter can function as a signal of affiliation and coalition also to others that are “listening in” (Bryant et al., 2016): People can tell whether two people are friends or strangers by listening to them laughing together.

In sum, laughter seems to be a signal that would typically be produced in social situations in which group membership information plays an important role. We therefore expect that if people are able to identify group membership from nonverbal vocalizations, laughter would be an ideal candidate.

Summary of hypotheses. In examining the question of whether people are able to identify group membership from laughter, we made the following predictions: We expect that participants can identify group membership from laughter (H1) and

distinguish either (a) between the separate nationalities, or (b) whether a laughing person belongs to the in-group vs close out-group vs distant out-group (H1.2). Under the null hypothesis (H0), we expect participants to perform at chance level when identifying nationality from laughter. Additionally, we explore the hypothesis that higher

familiarity with other groups’ laughter is associated with better performance (H2).

Methods

Design and procedure

The study had a within-participant design with six conditions (the six nationalities of the laughter tracks: Dutch, English, French, US–American, Japanese, Namibian) with four trials each. The 24 experimental trials employed a six-way forced-choice paradigm and included one clip of laughter each. In each trial, participants listened to a short clip of laughter. Afterwards, they were asked to indicate from which of the nationalities they thought the laughing person came. Each stimulus was presented once, in a random order that was fixed across participants.

Before the experimental trials, participants answered questions regarding demographics (age, sex, and level of education)1. Additionally, participants were asked to how many countries they had travelled; this was taken as a proxy for familiarity with laughter in other countries, to investigate whether it would be associated with performance on the test (H2). Lastly, as an exploratory measure, participants were asked how well they expected to perform in the experimental trials2.

1There was no effect of participants’ sex or level of education, which is why neither is reported further.

Participants were asked whether they consented for their anonymous answers to be analyzed for scientific purposes, but were also given the option of participating without allowing analysis of their data. Moreover, participants had to confirm that they were at least 18 years old.

Upon completion of the study, participants were given feedback on how well they had done in the form of a total score of correct answers. The study was approved by the University of Amsterdam Department of Psychology ethics committee.

Stimuli

The 24 laughter tracks that were used were taken from the following studies: the Dutch, English, and Namibian laughter tracks stem from a previous investigation by Sauter (2013); the US American laughter tracks were obtained from the study by Simon-Thomas, Keltner, Sauter, Sinicropi-Yao, and Abramson (2009); and the Japanese laughter tracks were taken from an ongoing project by Sauter, Scott, and Tanaka (2017). Lastly, the French laughter tracks have not been validated but were recorded under similar conditions as the other tracks.

Participants

The study was online on the website of a Dutch popular science magazine (quest.nl)3 from June 12th to 26th, 2014, and was publicly accessible. The study used an opportunistic sample, collecting as many responses as possible in the available time. A total of 1500 participants responded. Participants were excluded due to (a) not giving explicit consent for their test data to be used for scientific purposes (264 participants), (b) errors in the data log (5 participants), (c) being under 18 years old (75 participants), or (d) not having answered all categorization questions (342 participants).

2Performance expectation did not show any effect and is therefore not reported on further.

3It is reasonable to assume that people on this Dutch-language site were either Dutch or sufficiently


The remaining 814 participants (527 women, 287 men) had a mean age of 30.87 years (range: 18–75 years). More detailed sample characteristics can be seen in Table 1.

Table 1
Sample characteristics for the 814 participants after exclusions.

Variable                                                     %
Education
  University degree (WOa)                                 23.5
  Bachelor degree from a professional college (HBOa)      37.9
  Another degree at a professional college (MBOa)         17.69
  High-school education or equivalent (HAVO, VWO, VMBOa)  19.8
  Basic or lower education                                 0.8
Travel experienceb
  One to five                                             20.1
  Six to ten                                              39.3
  11 to 20                                                33.5
  21 or more                                               7.0

aDegrees according to the Dutch education system. bAnswers to the question: To how many countries have you traveled so far?

Results

Analyses were performed with the statistics programs R (R Core Team, 2013) for the frequentist analysis, and JASP (JASP Team, 2017) for the Bayesian analysis.

Variables

The directly available data from the study consisted of sample characteristics (age, sex, and education), expected performance scores, a measure of familiarity with laughter in other countries (travel experience), and the answers in each trial. Initially, the

answers were summed according to a confusion matrix by country as seen in Table 2. Additionally, a confusion matrix was compiled in which answers were divided into the in-group, close out-group, and distant out-group. This can be seen in the appendix (Table B1).

Hu scores. From the confusion matrix, Hu scores were calculated (Wagner, 1993). Hu scores correct for


Table 2
Confusion matrix of answer proportions in %.

                          Judgment
Stimulus   Neth    Fra    Eng    USA    Jap    Nam
Neth      26.32  20.76  17.29  13.76  13.73   8.14
Fra       19.44  18.52  14.47   8.51  23.80  15.26
Eng       18.46  20.64  18.86  14.96   8.94  18.15
USA        7.63  17.20  13.45  17.60  30.28  13.85
Jap       12.75  21.93  15.14  10.84  25.80  13.54
Nam       15.14  10.29  13.88  27.95   6.70  26.04

Note. Neth = Netherlands, Fra = France, Eng = England, Jap = Japan, Nam = Namibia.

disproportionate use of one response alternative. Moreover, Hu scores correct for

disproportionate presentation of one stimulus type (e.g., presentation of 12 close out-group stimuli and 4 in-group stimuli).

The raw Hu scores range from 0 to 1, with 0 indicating that not a single classification was made correctly and 1 indicating perfect accuracy. Because the Hu scores are proportion measures, the scores are arcsine transformed for further analysis to stabilize variance and normalize the data (Wagner, 1993). To obtain one general measure of performance for each participant, the Hu scores were averaged across conditions. This

will be called the mean Hu score. Lastly, for some analyses the difference between Hu score and chance level was used. These measures will be called difference scores. Chance levels are calculated by dividing the number of correct answers for each trial by the number of answer options (six). For example, when Dutch laughter was presented, there was only one correct answer out of six options; the chance level is therefore 1/6. When analyzing close out-group and distant out-group trials together, the following applies: when participants are presented with laughter of the close out-group, there are three correct answers (French, English, US-American) out of six options, so the chance level is 3/6 = 1/2. When participants are presented with laughter of the distant out-group, there are two correct answers (Japanese, Namibian) out of six options, so the chance level is 2/6 = 1/3.
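The Hu score and chance-level computations described above can be sketched in a few lines. This is a minimal illustration following Wagner's (1993) definition of the unbiased hit rate; the function names are ours and the example matrix is hypothetical, not data from this study:

```python
import numpy as np

def hu_scores(confusion):
    """Unbiased hit rates (Wagner, 1993) from a count confusion matrix.

    confusion[i, j] = number of times stimulus category i received
    judgment j. The score for category i is hits^2 divided by the
    product of the row and column totals, which corrects both for a
    disproportionate use of one response alternative and for a
    disproportionate presentation of one stimulus type.
    """
    confusion = np.asarray(confusion, dtype=float)
    hits = np.diag(confusion)
    row_totals = confusion.sum(axis=1)  # stimuli presented per category
    col_totals = confusion.sum(axis=0)  # responses given per category
    return hits ** 2 / (row_totals * col_totals)

def arcsine_transform(p):
    """Variance-stabilizing transform for proportion scores (Wagner, 1993)."""
    return np.arcsin(np.sqrt(p))

# Chance levels: number of correct options divided by six response options
chance = {"in-group": 1 / 6, "close out-group": 3 / 6, "distant out-group": 2 / 6}

# Hypothetical 2x2 example: 4 trials per category, 3 hits each
example = hu_scores([[3, 1], [1, 3]])  # each score is 3^2 / (4 * 4) = 0.5625
```

Note how the Hu score penalizes a response bias: a participant who always answers "Dutch" would get every Dutch trial right, but the inflated column total shrinks the score accordingly.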

Checks for normality. All variables were checked for normality with Shapiro-Wilk tests, which indicated that all scores were nonnormally distributed (ps < .001). Moreover, the scores included many outliers, as can be seen in Figure 1. Visual inspection showed that the scores were heavily positively skewed, with a large number of smaller scores and only few high scores. Because the variables were not normally distributed, the nonparametric equivalent of the t-test, the Wilcoxon signed-rank test, was used. For tests other than the t-test (ANOVA, regressions), the parametric versions were used, as they are known to be robust against normality violations (e.g., Norman, 2010).
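This test-selection logic can be illustrated as follows. Note this is a sketch in Python with SciPy on simulated, positively skewed scores; the original analyses were run in R and JASP, and the beta-distributed values here are hypothetical stand-ins for the observed Hu scores:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2014)

# Simulated arcsine-transformed Hu scores for 814 participants,
# positively skewed like the observed data (hypothetical values)
scores = stats.beta.rvs(a=1.2, b=8.0, size=814, random_state=rng)

# Shapiro-Wilk: a small p-value indicates a violation of normality
w_stat, p_normal = stats.shapiro(scores)

# Because normality is violated, the one-sample comparison against
# chance uses the Wilcoxon signed-rank test on the differences
chance_level = np.arcsin(np.sqrt(1 / 6))
v_stat, p_value = stats.wilcoxon(scores - chance_level)
```

With a sample of this size, even modest skew makes the Shapiro-Wilk test reject normality decisively, which is why the rank-based test is the safer choice for the one-sample comparisons.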

Figure 1. Boxplot of arcsine transformed Hu scores for each of the six conditions. The dashed line indicates the chance level of 1/6. Neth = Netherlands, Fra = France, Eng = England, Jap = Japan, Nam = Namibia.


Group identification (H1)

In order to directly test the null hypothesis and to accept or reject it with known certainty, all of the described tests were run using both frequentist analyses and the Bayesian alternative.

We used the same Hu scores in both analyses. Like any other measure in a statistical test, Hu scores are estimated with uncertainty. If this uncertainty is not taken into account in the model, it could result in a bias towards the null hypothesis. However, it is reasonable to assume that this bias was rather small, seeing as the estimation uncertainty of the scores should be low: each Hu score was estimated using 24 observations per subject.

As already mentioned, the nonparametric version of the t-test was used in both analyses. The nonparametric Bayesian one-sample t-test was run using a computer program by van Doorn, Marsman, and Wagenmakers (2017). The test estimated the effect size δ, which is the difference between scores and chance level. The test uses a prior of δ ∼ Cauchy(0, 1) (Rouder, Speckman, Sun, Morey, & Iverson, 2009).

Overall Hu scores. To check the overall performance of participants, the mean Hu scores were compared to the chance level (1/6). Participants performed significantly worse than chance (median of mean Hu scores: 0.077, p < .001, r = −0.81). The Bayesian test showed overwhelming evidence for the alternative hypothesis. The effect size was estimated to have a median of −1.026, with a Bayesian 95% confidence interval of [−1.126, −0.932]. This means that the scores were confidently below chance level. The Bayes factor for the alternative hypothesis exceeds 1000, which means that, given the data, the alternative hypothesis is over 1000 times more likely than the null hypothesis. The prior and posterior distributions can be seen in Figure 2.
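The Bayes factor underlying Figure 2 follows the Savage–Dickey density ratio, which can be made concrete with a toy computation. The posterior below is a stand-in normal density using the reported median effect size and an assumed spread, not the actual posterior of the nonparametric test, which would be estimated from MCMC samples:

```python
from scipy import stats

# Prior on the effect size delta: Cauchy(0, 1), as in Rouder et al. (2009)
prior = stats.cauchy(loc=0.0, scale=1.0)

# Stand-in posterior: a normal density centered on the reported median
# effect size (-1.026); the scale of 0.05 is an assumption for illustration
posterior = stats.norm(loc=-1.026, scale=0.05)

# Savage-Dickey density ratio at the test point delta = 0:
# BF10 = prior density at 0 divided by posterior density at 0
bf10 = prior.pdf(0.0) / posterior.pdf(0.0)
```

Because the posterior mass lies far below zero, its density at the test point is vanishingly small, so the ratio is enormous, which matches the reported Bayes factor in excess of 1000.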

Group-specific Hu scores. In order to investigate whether there were any differences between the countries, group-specific Hu scores were computed. In this way, it could be tested whether participants


Figure 2. Prior and posterior distribution, with Bayesian confidence interval, of the effect size δ. A score of zero represents performance at chance level. The prior distribution (dashed line) shows which distribution of the score is expected under H0 before seeing any data (performance at chance level). The posterior distribution (solid line) shows the distribution that is expected given the data. The point of interest (zero) is marked with grey dots on both distributions. The density of the prior distribution at that point is divided by the density of the posterior distribution at that point to obtain the Bayes factor BF10.

might have been able to detect single groups better than chance or better than other groups.

Comparison to chance. The Hu scores were separately compared to chance level. Multiple Wilcoxon signed-rank tests and their Bayesian equivalents were run as described for the overall scores. Table 3 shows the medians and the chance levels to which they are compared for each comparison, as well as the effect sizes and Bayes factors. All comparisons showed that the Hu scores were significantly below chance; Bayes factors showed that the alternative hypothesis, with scores different from chance, was over 1000 times more likely given the data.

Comparison between groups. In order to investigate whether there were differences in performance between conditions, two one-way repeated-measures ANOVAs were run comparing performance according to (a) countries and (b) in-group,


Table 3
Comparisons of group scores with chance level for the Wilcoxon signed-rank test and its Bayesian equivalent.

                           Hu scores
Group              Mediana  Chance levela  Effect sizeb
Netherlands         0.062      0.167         -0.48
France              0.042                    -0.74
England             0.050                    -0.65
USA                 0.050                    -0.69
Japan               0.062                    -0.57
Namibia             0.062                    -0.46
In-Group            0.062      0.167         -0.48
Close Out-Group     0.234      0.524         -0.48
Distant Out-Group   0.125      0.340         -0.84

Note. All tests were significant at an α-level of .001 and Bonferroni corrected for the number of comparisons. Bayes factors in favor of the alternative hypothesis all exceeded 1000. aScores are arcsine transformed and rounded to three digits. bOnly applicable to frequentist tests.

close out-group, and distant out-group. The within-subject independent variable was condition or group, and the dependent variable was the difference score. The difference scores were chosen for this analysis to account for the fact that some groups had a higher chance level of being selected. For example, the raw chance level for a correct answer to the in-group is 1/6, whereas for the close out-group it is 1/2. In the Bayesian analyses, the alternative model, which allowed differences between conditions, was tested against a null model which did not allow for differences. As in the t-test, the prior was specified as a Cauchy distribution.

There were differences in performance between the separate countries. As Mauchly’s test indicated that the sphericity assumption was violated4 (W = 0.87, p < .001), the test was Greenhouse-Geisser corrected; FGG(4.76, 3869.07) = 32.69, p < .001. The pattern is illustrated in Figure 1. The Bayesian analysis also showed that participants performed differently in the separate country conditions; BF10 > 1000.

4The ANOVA was rerun using a multilevel approach comparing a null model with a model including

condition by means of a log-likelihood ratio. This analysis does not assume sphericity of the data and produced qualitatively similar results, χ2(1) = 155.05, p < .001.


Follow-up analyses compared the conditions among each other. A detailed comparison table can be seen in the appendix (Table C1). Conditions that belonged to the same group (e.g., Japan and Namibia both belonging to the distant out-group) did not differ. As can also be seen in Figure 1, performance in the close out-group conditions was lower compared to the distant out-group conditions and the in-group (the Netherlands). Yet, in no condition did participants perform better than chance.

This pattern was confirmed in the following analysis, in which the close out-group conditions as well as the distant out-group conditions were collapsed. As Mauchly’s test indicated a violation of the sphericity assumption (W = 0.87, p < .001, ε = .89), Greenhouse-Geisser corrected scores are reported. There was a significant difference5: FGG(1.78, 1447.14) = 984.15, p < .001. The Bayes analysis supported this result; BF10 > 1000.

As illustrated in Figure 3, participants performed worse in the close out-group conditions compared to the in-group (V = 29690, p < .001; BF10 > 1000) and the distant out-group conditions (V = 104370, p < .001; BF10 > 1000). Moreover, participants performed better in in-group compared to out-group conditions; V = 292800, p < .001; BF10 > 1000. Yet, in none of the conditions did participants perform better than chance.

Familiarity (H2)

A linear model was estimated to check whether the number of countries that a participant had visited could predict mean performance in the experiment. In the Bayesian analysis, the JASP program uses multivariate generalizations of Cauchy priors on standardized effects, with a prior width of 0.5 (see Rouder, Morey, Speckman, & Province, 2012). The results showed that familiarity was not associated with

5The analysis was rerun with the multilevel approach which showed qualitatively similar results;


Figure 3. Difference scores for performance in the separate groups: in-group, close out-group, and distant out-group. A higher score represents a better performance. In = In-group, Close = Close out-group, Distant = Distant out-group.

performance (F (3, 810) = 1.081, p = .38; BF01= 7.606)6.

A further exploratory analysis was conducted: Dutch participants have more likely traveled to countries of the close out-group, such as France or England, than to countries that are farther away, such as Namibia or Japan. Therefore, it could be assumed that the measure applies to familiarity with the close out-group alone. Hence, performance in close out-group trials could be higher for participants with high familiarity scores. However, there was no significant association (F (3, 810) = 1.94, p = .12; BF01 = 9.991)7.
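The familiarity analysis amounts to testing whether mean performance differs across the four travel-experience bins. A sketch with simulated scores follows; the group sizes are derived from the percentages in Table 1, but the score values themselves are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=7)

# Simulated mean Hu scores per travel-experience bin
# ("1-5", "6-10", "11-20", "21+" countries visited); sizes sum to 814
group_sizes = (164, 320, 273, 57)
groups = [stats.beta.rvs(a=1.2, b=8.0, size=n, random_state=rng)
          for n in group_sizes]

# One-way between-groups F-test on mean performance, analogous to the
# reported F(3, 810) model with a categorical familiarity predictor
f_stat, p_value = stats.f_oneway(*groups)
```

With four groups and 814 participants, the degrees of freedom are 3 and 810, matching the reported test.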

Discussion

This study investigated whether (a) participants could identify a person’s nationality from her laughter (H1), and (b) a participant’s familiarity with laughter in other countries could predict performance in the group identification task (H2). Both frequentist and Bayesian analyses concerned with H1 confidently showed that participants did not perform above chance level, both for average performance and for the

6Note that the Bayes factor in favor of the null hypothesis BF01 is reported here, instead of the Bayes factor in favor of the alternative hypothesis BF10; BF10 = 0.00013.

7The Bayes factor in favor of the alternative BF


separate conditions. Conditions differed in that participants performed worse in close out-group trials compared to performance in in-group and distant out-group trials. Performance was similar for the in-group and distant out-group but still not above chance level. Both frequentist and Bayesian analyses regarding H2 showed that there was no association between familiarity and performance in the classification task.

These results support the findings of Sauter (2013), in that people cannot judge group membership from nonverbal vocalizations such as laughter. At the same time, this study contrasts with literature showing people’s ability to judge group membership from facial expressions (e.g., Marsh et al., 2003) or language dialects (Kerswill & Williams, 2002). This raises the question of why these domains differ in how informative they are about group membership.

Difference to facial expressions and language dialects

It seems intuitively clear that people can judge which group a person might belong to from visual cues, such as clothing or ethnicity; people often explicitly use visual cues to associate themselves with a certain group or identity (e.g., Green, 2001), for example, when dressing formally for work. Another visual indicator—emotional expressions—has been found to contain the already mentioned emotion dialect (Elfenbein et al., 2007). A person might therefore be able to detect which emotional dialect another person communicates and thereby judge group membership.

For language, similar characteristics have been found. Language is strongly connected to social identity (Giles & Viladot, 1994) and differs sufficiently between groups so that people can use it as information in group classifications (Kerswill & Williams, 2002). Indeed, people seem to give more weight to the dialect somebody speaks than to their visual appearance (Rakić, Steffens, & Mummendey, 2011).

In contrast, no study to our knowledge has so far associated laughter with group identity or found a “laughter dialect”. This might seem puzzling given that chimpanzees are able to communicate group information with calls (Crockford et al., 2004).


However, it should be noted that the chimpanzee groups only adjusted their group calls when another group was living close by, that is, when the need to distinguish themselves arose. This need might not be present for humans, who can choose from a variety of other signals to communicate group information. Lastly, it should be noted that in many studies investigating group classification from emotional expressions, participants are presented with a binary decision (Marsh et al., 2003): Does this person belong to your in-group or to another group? It might be that the different accents in emotional expressions are just enough to signal a certain familiarity with, or distinctness from, one’s own group. This might explain why participants performed slightly better when listening to laughter from their in-group or a distant out-group, although they did not perform above chance in these conditions. Laughter from the close out-group was neither very familiar nor very distinct, which might have led to greater confusion with the other conditions.

Implications for the motivational account

If people cannot reliably judge group membership from laughter, this challenges an aspect of the motivational account. If a higher motivation to recognize in-group members’ emotions drives the in-group advantage in emotion recognition, people should first be able to identify their fellow group members. As already shown by Sauter (2013), people do not seem to be able to judge group membership from vocalizations for which an in-group advantage in emotion recognition is found. This study supports that conclusion.

Nonetheless, this does not mean that the motivational account is altogether invalid. It is possible that these motivational processes do not operate when judging nonverbal vocalizations of emotions. Alternatively, one could argue that people are able to distinguish groups based on laughter but cannot report on this classification. Other studies might therefore be needed to investigate possible unconscious classifications.


Limitations

While this study aimed to provide a stringent test of the hypotheses, it has some limitations. First, it might be argued that the task of classifying laughter by nationality was too difficult, as countries with similar cultural backgrounds might show only small differences. However, participants could not reliably judge group membership even when an answer was counted as correct if it merely belonged to the right larger group. For example, when an English laugh was judged to be US-American, it was counted as a correct close out-group judgment. Even at this reduced difficulty, performance did not exceed chance level.

Second, this study did not explicitly confirm that participants regarded the Netherlands as their in-group. However, given that the study was run on a Dutch-language popular science magazine website, it is reasonable to assume that most participants were either Dutch or sufficiently acculturated that they chose to visit this website rather than a website from their home country.

Third, the proxy for familiarity, the number of countries someone had visited, might not have captured the latent variable, which could be an alternative explanation for the finding that participants’ familiarity was not associated with performance in the group classification. Perhaps participants had visited other countries but did not come into extensive contact with the local population.

Conclusion

This study showed that people cannot reliably judge group membership from laughter. This finding challenges an aspect of the motivational account of the in-group advantage in emotion recognition, but it also inspires questions for further research: Can people perhaps nonconsciously judge group membership from laughter without being able to report on it? Why is group identification possible from visual indicators and language, but not from nonverbal vocalizations?


References

Baker, W., Eddington, D., & Nay, L. (2009). Dialect recognition: The effects of region of origin and amount of experience. American Speech, 84, 48–71.

doi:10.1215/00031283-2009-004

Brewer, M. B. (1991). The social self: On being the same and different at the same time.

Personality and Social Psychology Bulletin, 17, 475–482.

Bryant, G. A., Fessler, D. M. T., Fusaroli, R., Clint, E., Aarøe, L., Apicella, C. L., . . . Zhou, Y. (2016). Detecting affiliation in colaughter across 24 societies. PNAS, 113, 4682–4687. doi:10.1073/pnas.1524993113

Byatt, G. & Rhodes, G. (2004). Identification of own-race and other-race faces:

Implications for the representation of race in face space. Psychonomic Bulletin &

Review, 11 (4), 735–741.

Cassidy, K. D., Quinn, K. A., & Humphreys, G. W. (2011). The influence of

ingroup/outgroup categorization on same- and other-race face processing: The moderating role of inter- versus intra-racial context. Journal of Experimental

Social Psychology, 47, 811–817. doi:10.1016/j.jesp.2011.02.017

Chun, J.-W., Park, H.-J., Park, I.-H., & Kim, J.-J. (2012). Common and differential brain responses in men and women to nonverbal emotional vocalizations by the same and opposite sex. Neuroscience Letters, 514, 157–161.

doi:10.1016/j.neulet.2012.03.038

Clopper, C. G. & Pisoni, D. B. (2004a). Homebodies and army brats: Some effects of early linguistic experience and residential history on dialect categorization.

Language Variation and Change, 16, 31–48. doi:10.1017/S0954394504161036

Clopper, C. G. & Pisoni, D. B. (2004b). Some acoustic cues for the perceptual categorization of American English regional dialects. Journal of Phonetics, 32, 111–140. doi:10.1016/S0095-4470(03)00009-3


Crockford, C., Herbinger, I., Vigilant, L., & Boesch, C. (2004). Wild chimpanzees produce group-specific calls: A case for vocal learning? Ethology, 110, 221–243.

Dale, R., Fusaroli, R., Duran, N. D., & Richardson, D. C. (2013). The self-organization of human interaction. In B. H. Ross (Ed.), Psychology of learning and motivation: Vol. 59 (pp. 43–96). Salt Lake City, UT: Academic Press. doi:10.1016/B978-0-12-407187-2.00002-2

Dezecache, G. & Dunbar, R. (2012). Sharing the joke: The size of natural laughter groups. Evolution and Human Behavior, 33, 775–779. doi:10.1016/j.evolhumbehav.2012.07.002

Ekman, P. & Friesen, W. V. (1978). Facial action coding system. Palo Alto, CA: Consulting Psychologists Press.

Elfenbein, H. A. & Ambady, N. (2002a). Is there an in-group advantage in emotion recognition? Psychological Bulletin, 128, 243–249.

doi:10.1037//0033-2909.128.2.243

Elfenbein, H. A. & Ambady, N. (2002b). On the universality and cultural specificity of emotion recognition: A meta-analysis. Psychological Bulletin, 128, 203–235. doi:10.1037//0033-2909.128.2.203

Elfenbein, H. A., Beaupré, M., Lévesque, M., & Hess, U. (2007). Toward a dialect theory: Cultural differences in the expression and recognition of posed facial expressions. Emotion, 7, 131–146. doi:10.1037/1528-3542.7.1.131

Fischer, J., Kitchen, D. M., Seyfarth, R. M., & Cheney, D. L. (2004). Baboon loud calls advertise male quality: Acoustic features and their relation to rank, age, and exhaustion. Behavioral Ecology and Sociobiology, 56, 140–148.

doi:10.1007/s00265-003-0739-4

Giles, H. & Viladot, A. (1994). Ethnolinguistic differentiation in Catalonia. Multilingua: Journal of Cross-Cultural and Interlanguage Communication, 13, 301–312.


Green, E. (2001). Suiting ourselves: Women professors using clothes to signal authority, belonging and personal style. In Through the wardrobe: Women’s relationships with

their clothes. New York, NY: Berg.

Hehman, E., Mania, E. W., & Gaertner, S. L. (2010). Where the division lies: Common ingroup identity moderates the cross-race facial-recognition effect. Journal of

Experimental Social Psychology, 46, 445–448. doi:10.1016/j.jesp.2009.11.008

Herbinger, I., Papworth, S., Boesch, C., & Zuberbühler, K. (2009). Vocal, gestural and locomotor responses of wild chimpanzees to familiar and unfamiliar intruders: A playback study. Animal Behaviour, 78, 1389–1396.

JASP Team. (2017). JASP (Version 0.8.1.1) [Computer software]. Retrieved from https://jasp-stats.org/

Kerswill, P. & Williams, A. (2002). Dialect recognition and speech community focusing in new and old towns in England: The effects of dialect levelling, demography and social networks. In Handbook of perceptual dialectology (Vol. 2, pp. 173–205). Amsterdam, The Netherlands: Benjamin.

Marsh, A. A., Elfenbein, H. A., & Ambady, N. (2003). Nonverbal “accents”: Cultural differences in facial expressions of emotion. Psychological Science, 14, 373–376. doi:10.1111/1467-9280.24461

Norman, G. (2010). Likert scales, levels of measurement and the “laws” of statistics.

Advances in Health Science Education, 15, 625–632. doi:10.1007/s10459-010-9222-y

Owren, M. J. & Amoss, R. T. (2014). Spontaneous human laughter. In M. M. Tugade, M. N. Shiota, & L. D. Kirby (Eds.), Handbook of positive emotions (pp. 159–178). New York, NY: Guilford.

Provine, R. R. (1996). Laughter. American Scientist, 84, 38–45.

Provine, R. R. (2004). Laughing, tickling, and the evolution of speech and self. Current Directions in Psychological Science, 13, 215–218.


R Core Team. (2013). R: a language and environment for statistical computing. ISBN 3-900051-07-0. R Foundation for Statistical Computing. Vienna, Austria.

Retrieved from http://www.R-project.org/

Rakić, T., Steffens, M. C., & Mummendey, A. (2011). Blinded by the accent! The minor role of looks in ethnic categorization. Journal of Personality and Social Psychology, 100, 16–29. doi:10.1037/a0021522

Rothgänger, H., Hauser, G., Cappellini, A. C., & Guidotti, A. (1998). Analysis of laughter and speech sounds in Italian and German students. Naturwissenschaften, 85, 394–402.

Rouder, J. N., Morey, R. D., Speckman, P. L., & Province, J. M. (2012). Default Bayes factors for ANOVA designs. Journal of Mathematical Psychology, 56, 356–374. doi:10.1016/j.jmp.2012.08.001

Rouder, J. N., Speckman, P. L., Sun, D., Morey, R. D., & Iverson, G. (2009). Bayesian t tests for accepting and rejecting the null hypothesis. Psychonomic Bulletin & Review, 16, 225–237. doi:10.3758/PBR.16.2.225

Sauter, D. A. (2013). The role of motivation and cultural dialects in the in-group advantage for emotional vocalizations. Frontiers in Psychology, 4, Art. 814. doi:10.3389/fpsyg.2013.00814

Sauter, D. A., Scott, S. K., & Tanaka, A. (2017). The perception of amused and polite laughter across cultures: A balanced cross-cultural study with science museum visitors. Manuscript in preparation.

Scott, S. K., Lavan, N., Chen, S., & McGettigan, C. (2014). The social life of laughter.

Trends in Cognitive Science, 18, 618–620. doi:10.1016/j.tics.2014.09.002

Simon-Thomas, E. R., Keltner, D. J., Sauter, D., Sinicropi-Yao, L., & Abramson, A. (2009). The voice conveys specific emotions: Evidence from vocal burst displays. Emotion, 9, 838–846. doi:10.1037/a0017810


Thibault, P., Bourgeois, P., & Hess, U. (2006). The effect of group-identification on emotion recognition: The case of cats and basketball players. Journal of

Experimental Social Psychology, 42, 676–683. doi:10.1016/j.jesp.2005.10.006

van Doorn, J., Marsman, M., & Wagenmakers, E.-J. (2017). Nonparametric Bayesian hypothesis testing through data augmentation. Manuscript in preparation.

Wagner, H. L. (1993). On measuring performance in category judgment studies of nonverbal behavior. Journal of Nonverbal Behavior, 17, 3–28.

Walton, J. H. & Orlikoff, R. F. (1994). Speaker race identification from acoustic cues in the vocal signal. Journal of Speech and Hearing Research, 37, 738–745.


Appendix A

Results on relationship between performance and age

Age and the mean Hu scores were correlated with the nonparametric test of Kendall’s τ and its Bayesian equivalent. Age was significantly associated with performance, τ = .059, p = .01. The Bayes factor of BF10 = 1.119 showed that the alternative hypothesis is about as likely as the null hypothesis. Indeed, Figure A1 shows that the line of best fit is very flat and that the correlation might be driven by some of the outliers.


Appendix B

Additional confusion matrices.

Table B1

Confusion matrix of answer proportions in % for the separate in-group, close out-group, and distant out-group.

                    Judgment
Stimulus    In       Close    Distant
In          26.25    51.75    21.75
Close       15.16    48.08    36.75
Distant     14.00    50.00    36.00

Note. Answers for the close out-group and distant out-group were added to the cell when participants chose a country from the correct group, even if the selected nationality was not the correct origin of the laughter. For example, if the laughter was Namibian but the participant selected Japanese, a count was added to the distant out-group/distant out-group cell.


Appendix C

Pairwise comparisons

Table C1

Comparison matrix for pairwise comparisons of performance in each condition with the Wilcoxon signed-rank test and its Bayesian equivalent.

                            Conditions
Analysis      Neth    Fra    Eng    USA    Jap      Nam
Frequentist   a,b,c   a,d    b,e    c,f    d,e,f    d,e,f
Bayesian      a,b,c   a,d    b,e    c,f    d,e,f    d,e,f

Note. A letter appears in those cells that are significantly different or have a sufficiently high Bayes factor. All p < .001, all BF10 > 100. Neth = Netherlands, Fra = France, Eng = England, USA = United States, Jap = Japan, Nam = Namibia.
