
Evaluating Over-Confidence using Facial Expression Analysis in Interviewing Contexts

Andreas Maroulis

Student number 11417757

September 2018

https://we.tl/t-8hjDT8RARK

Thesis submitted in partial fulfilment for the degree of

Master of Business Administration,

Concentration Big Data and Business Analytics

Amsterdam Business School


Abstract

Overconfidence can often be mistaken for confidence. One way in which both surface is through non-verbal behaviour. In this study we explore the potential of using automatic facial expression coding to detect overconfidence. We use FaceReader to automatically code facial expressions in a dataset of 118 university students who each gave a 10-minute pitch to a committee rating their hirability. The facial expression of disgust was most consistently linked to our measures. Further ways to analyse facial expressions are discussed. The paper ends with a description of the potential of facial expression analysis in organizations.


Acknowledgements

I would like to thank my supervisor, Dr. Richard Ronay, and his PhD student Samuel Mayoral for providing me with a dataset and guidance in this work; my colleagues Amogh Gudi and Peter Lewinski for assisting with pre-processing the data; and all the staff at VicarVision whose work has produced the validated FaceReader product used in this thesis. Finally, I would like to thank my managing director, Tim den Uyl, for giving me time to complete this work, as well as Lesley Swensen, whose invaluable work has made this MBA program possible.


Important Note

This thesis uses excerpts from work that the author submitted as part of the final paper for the Leading People Strategically course.

Furthermore, this thesis is an extension of the work done by Ronay, Oostrom, Lehmann-Willenbrock, Mayoral & Rusch (2018) and uses one of the datasets analysed in that manuscript. Thus, descriptions of the datasets and variables are similar to those in that work.

Finally, all FaceReader coded files, fused SPSS data, syntax and output files are included in the submission of this thesis and can be found at this link https://we.tl/t-8hjDT8RARK (also on the cover page).


Table of Contents

Abstract
Acknowledgements
Important Note
Table of Contents
1 Introduction
1.1 Background
1.2 Facial Expressions and Perceptions of People
1.3 Scope and Objectives
2 Method
2.1 The study
2.2 Measures
2.2.1 Measurements from Ronay et al. (2018)
2.2.2 Measurement of Emotional Expressions
2.3 Data Preparation / Pre-Processing
2.3.1 Deciding on a threshold of frames coded for facial expressions
2.3.2 Creating a summary emotion expression score
2.3.3 Deciding on a threshold of expressivity
2.4 The Final Dataset
2.5 Analysis
3 Results
3.1 Correlations
3.2 Regressions
4 Discussion
4.1 Summary
4.2 Limitations and future directions
4.3 Implications of facial expression analysis for business applications
5 References
6 Appendix – Extra Tables


1 Introduction

1.1 Background

In a competitive job market, decision makers often resort to an implicit or explicit judgment of whether a person is a good fit for an organization, sometimes based on little more than intuition. One way this intuition surfaces is through the belief that a candidate may be a future leader, which in turn is often associated with the candidate's level of confidence (Hogan, Curphy, & Hogan, 1994; Kirkpatrick & Locke, 1991; Magee & Frasier, 2014). A troublesome fact, however, is that distinguishing confidence from overconfidence when no ground-truth data is available is quite difficult, as both seem to surface in similar ways in nonverbal channels. Author Malcolm Gladwell has dramatically stated that "Incompetence annoys me. Overconfidence terrifies me." While a bit pointed, he very effectively identifies the implications of trusting someone to execute a task they cannot do simply because one mistook their confidence for ability. The ability to separate confidence from overconfidence would thus be greatly valued by organizations trying to optimize their hiring practices.

Traditionally, hiring procedures have relied on hard metrics such as a resume or a list of achievements (university distinctions, transcripts, etc.) to filter potential candidates for a position. Cover and motivation letters, reference letters, intelligence tests and in-person interviews often offer crucial information in further determining the potential value of an individual to an organization. Human resource departments are trying to improve their hiring practices by gaining access to larger pools of desired talent to match organizational needs. In fact, there is increased use of social media in both the recruiting and the job-seeking process (El Ouirdi, 2016). The workforce of the future is mobile and global, furthering the need for a wider global recruiting net and the right tools to support the recruiting process. Nevertheless, a larger candidate pool requires greater resources and may create added complexity in picking the right candidate. After all, intuitive impressions may strongly bias how we perceive a person (Porter & ten Brinke, 2009). Nonverbal behaviour plays an important role in the formation of first impressions, especially when the amount of available information is low (Knapp & Hall, 2009). There is thus a great need for innovative, yet standardized, practices to assist in hiring new talent.

Furthermore, each generation appears to develop different values with respect to work-life balance. Research on millennials further suggests that organizations need to reconsider their hiring practices and their way of determining what their organization will look like in the future (Canedo, Graen, Grace, & Johnson, 2017). This requires not only recruiting for the skills necessary for an organization's continuing success, but also recruiting people whose working preferences and style match those of the organization.

The need for operational efficiency and the desire to quantify previously immeasurable traits, such as confidence, have pushed HR departments to find new hiring methods, often leveraging technological advancements. One new trend is the use of sensing technologies in interview settings (in-person or digital interviews). Research has shown that vocal and non-verbal behaviour can be used to create psychological profiles (Chamorro-Premuzic, Akhtar, Winsborough, & Sherman, 2017). Traits such as excitement, friendliness and engagement can readily be measured by means of facial expression analysis. Recent reports argue that traditional IQ testing is limited, whereas emotional intelligence is crucial yet often overlooked in a potential employee. In 2017, people analytics and sentiment analysis topped lists of approaches for predicting employee engagement. It is thus apparent that a candidate's non-verbal behaviour can be useful both in characterizing a person's psychology and, potentially, in quantifying his or her emotional intelligence.

Advances in the field of computer vision show promise in automating the process of behaviour coding (Bartlett et al., 2006; De la Torre et al., 2015; Lewinski, den Uyl, & Butler, 2014). There are a number of benefits to automated behaviour coding solutions. First, they drastically reduce the time needed for behaviour coding, while remaining as unobtrusive as traditional behaviour coding. Second, once validated, they eliminate the need for reliability checks between coders, because an automated solution will always output the same result. Finally, they are not susceptible to the fatigue effects inherent in long behavioural coding tasks. These solutions often model facial expressions quite reliably, based on categories of emotion (e.g. happy, sad, and angry) and/or on the Facial Action Coding System (FACS; Ekman, Friesen & Hager, 2002). Beyan, Capozzi, Becchio, & Murino (2017) have even attempted to automatically determine leadership potential from facial feature extraction. Promising external validity results (Lewinski, den Uyl, & Butler, 2014) have led to increased use of such solutions in research and commercial applications, and have even paved the way for new fields such as affective computing (Picard, 1995).

Overconfidence is typically harder to detect with certainty when it is expressed nonverbally (Judge, Bono, Ilies, & Gerhardt, 2002). Nevertheless, given our ability to quantify behaviour quickly and objectively, we will use automatic facial expression coding to evaluate whether overconfidence can be identified from a person's facial expressions. How can knowing a person's facial expressions help in the interview process? Figure 1 depicts the flow of a patented system (Shaburov & Monastyrshin, 2017) that automatically detects facial expressions in humans and links them to future performance metrics.


Figure 1: Procedure flow from Shaburov & Monastyrshin (2017)

1.2 Facial Expressions and Perceptions of People

When studying facial expressions, we can make inferences about a person's psychology but also identify the impressions that person creates. With respect to the perception of facial expressions, Trichas, Schyns, Lord & Hall (2017) showed that specific expressions can have a great effect on whether or not a person is perceived to have leadership potential. Stewart, Waller & Schubert (2009) found that observers' perceptions of politicians are often influenced by the politicians' displays of emotion. We already see how facial expressions alone can influence people's perceptions of us. In front of an interviewer, for example, an interviewee may appear favourable just by displaying the right facial expressions! We must not underestimate the importance of context in the display of emotion. Torres & Gregory (2018) suggest that the right interview questions can be used to evaluate expressivity.

Measuring facial expression behaviour, in this case, could thus both hurt and help a candidate. Showing positive displays of emotion may create likeability, but in the wrong context it may appear out of place.

When trying to understand a person's psychology from the facial expressions he or she shows, the research is less conclusive. Measuring the synchronicity and mimicry between the expressions of the interviewer and the interviewee can be indicative of emotional intelligence: good mimicry of facial expressions is correlated with empathy and emotional intelligence. Another area of research relevant to this paper links facial expression behaviour to personality (Biel, Teijeiro-Mosquera, & Gatica-Perez, 2012; Gavrilescu & Vizireanu, 2017; Schmid Mast, Gatica-Perez, Frauendorfer, Nguyen, & Choudhury, 2015). While the research is not yet conclusive, under certain contexts facial expressions seem to be somewhat reliable indicators of personality (Big Five or 16PF characteristics). Such insights can be useful if an organization is seeking a certain type of personality to match a position or the organizational culture.

It appears that measuring facial expression behaviour in interview settings is feasible and can provide additional useful insight to assist organizations in finding matches with potential candidates.

Torres & Gregory (2018) found that interviewee aesthetics play an important role in hiring decisions. In other words, how you look influences your chances of getting hired. This can be helpful for candidate preparation, but it could also serve as a reminder to interviewers to avoid appearance-based biases. Trichas et al. (2017) showed that leadership impressions mediated the effects of facial emotions on trait ratings. Nguyen, Frauendorfer, Schmid Mast, and Gatica-Perez (2014) found that interviewer visual cues were predictive of a candidate's hirability. Furthermore, they found that hirability impressions were formed based on the interaction during the interview rather than on questionnaire data; in other words, an interview matters more than a questionnaire to an interviewer in determining a candidate's desirability. Finally, Gavrilescu & Vizireanu (2017) found that facial expression reactions to certain emotion-inducing stimuli are indicative of personality traits.

As with much of the research in social psychology, can we truly link a behaviour to guaranteed future success in the workplace?

1.3 Scope and Objectives

As mentioned before, in this paper we try to objectively quantify expressed and perceived overconfidence. For this study we define overconfidence as "excessive certainty in the correctness of one's knowledge" (Moore & Healy, 2008; Moore & Schatz, in press; Moore & Swift, 2010). Continuing the work of Ronay et al. (2018), we have three main objectives:

• To see whether facial expressions can be used to objectively measure overconfidence. We code for facial expressions and analyse an existing dataset of video interviews collected at a large Dutch university.

• To see whether facial expression analysis can be used to correct potentially wrong perceptions of a candidate, using the same coded dataset.

• To evaluate the potential of using automatic facial coding in interviewing procedures. Based on insights from the analysis as well as the literature, we propose future directions for the use of facial expression software in hiring procedures.


2 Method

2.1 The study

One hundred and eighteen (118) students (21 male; Mage = 20,05, SD = 1,98) at a large Dutch university participated in a simulated job talk study by Ronay, Oostrom, Lehmann-Willenbrock, Mayoral & Rusch (2018). "Participants were asked to give a simulated job talk in front of a live committee, with the purpose of inducing stress (Kirschbaum, Pirke, & Hellhammer, 1993)" (Ronay et al., 2018). While the original dataset comprised 140 students, video availability and quality reduced our dataset to 118 students.

"Participants gave a ten-minute presentation, intended to convince a committee (consisting of two trained research assistants who were blind to our hypotheses) that they were the best candidate for a hypothetical leadership position" (Ronay et al., 2018).

Prior to their presentation, participants completed an adapted version of the General Knowledge Questionnaire (GKQ; Michailova, 2010; Ronay et al., 2017) and rated their confidence in how they had performed on the GKQ.

2.2 Measures

2.2.1 Measurements from Ronay et al. (2018)

Presentation Quality: The two-person committee rated participants on verbal (structure, speech, understandability, main points, voice, and persuasion) and non-verbal (eye contact, posture, gestures, use of space, calm, and enthusiasm) quality. These ratings are not included in the current dataset and analysis, but results from previous studies will be referenced in this thesis.

Affect: To provide a measure of affect, participants completed the Positive and Negative Affect Scale (PANAS; Watson, Clark, & Tellegen, 1988) twice: before learning they would have to deliver a job talk, and again after the talk concluded. These scores are not included in the current dataset and analysis, but results from previous studies will be referenced in this thesis.

Overconfidence: This was measured by regressing participants' confidence scores (i.e., mean confidence ratings) onto their accuracy (i.e., percentage of correctly answered items) on an adapted version of the GKQ and saving the standardized residual scores (Anderson et al., 2012; Cohen, Cohen, West, & Aiken, 2003; Cronbach & Furby, 1970; DuBois, 1957; John & Robins, 1994).
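The residual-score computation lends itself to a short sketch. A minimal Python version (standing in for the SPSS syntax actually used; the variable names and example values are hypothetical) would be:

```python
# Sketch: overconfidence as the standardized residual of confidence
# regressed on accuracy, following the Anderson et al. (2012) approach.
import numpy as np

def overconfidence_scores(confidence: np.ndarray, accuracy: np.ndarray) -> np.ndarray:
    """Regress mean confidence ratings onto GKQ accuracy and return the
    standardized residuals; positive values indicate overconfidence."""
    slope, intercept = np.polyfit(accuracy, confidence, deg=1)
    residuals = confidence - (slope * accuracy + intercept)
    return (residuals - residuals.mean()) / residuals.std(ddof=1)

# Hypothetical example: confidence ratings vs. percentage correct on the GKQ.
conf = np.array([80.0, 65.0, 90.0, 55.0, 70.0])
acc = np.array([60.0, 62.0, 58.0, 50.0, 71.0])
print(overconfidence_scores(conf, acc))
```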

Perceived measures: 306 Amazon Mechanical Turk workers (56% men; Mage = 37.77, SD = 12.52, range 17 to 79) viewed the first 30 seconds of each participant's talk (muted) and rated the participants on perceived overconfidence, confidence, competence, potential and hirability on a 7-point scale.

Competence Manipulation: 725 Amazon Mechanical Turk workers (52% men; Mage = 35.96, SD = 11.45, range 18 to 74) rated the same 30-second clips, but were also given a randomly assigned manipulation of the participant's competence: each clip was paired with a resume portraying the candidate as either a highly competent or a less competent individual.

2.2.2 Measurement of Emotional Expressions

Each frame of the 118 videos was coded on a 0-to-1 facial expression intensity scale using the Noldus FaceReader software (Noldus, 2016). FaceReader is a computer vision solution with a 3-step process: (1) detecting a face in a video using the Viola-Jones algorithm (Viola & Jones, 2004), (2) modelling the face using a 500-point Active Appearance Model (Cootes & Taylor, 2004), and (3) using a neural network to estimate seven (7) possible outputs, each corresponding to a different facial expression. The facial expressions coded were those associated with the emotions of happiness, sadness, anger, surprise, fear and disgust (Figure 2), as a wide range of literature suggests these are universally expressed when uninhibited by display rules (Ekman et al., 1987).

Figure 2: Facial Expressions Coded (FACS Manual; Ekman, Friesen & Hager, 2002)

Furthermore, these expressions are the most commonly available and validated across the majority of automatic facial expression coding solutions (Bartlett et al., 2006; De la Torre et al., 2015; Lewinski, den Uyl, & Butler, 2014). Finally, a seventh expression, neutral, was also coded for activation and intensity. Therefore, for each video we had facial expression values for the entire length of a participant's recording (Figure 3).


Figure 3: FaceReader Expression Output
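To make the data structure concrete, the sketch below shows how one participant's frame-by-frame output could be loaded for inspection. The file name and column layout are assumptions for illustration; actual FaceReader export formats vary.

```python
# Sketch: inspecting a per-frame expression log for one participant.
import pandas as pd

EMOTIONS = ["Neutral", "Happy", "Sad", "Angry", "Surprised", "Scared", "Disgusted"]

# Hypothetical export: one row per video frame, one 0-1 intensity
# column per expression.
frames = pd.read_csv("participant_001_expressions.csv")
print(frames[EMOTIONS].describe())
```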

2.3 Data Preparation / Pre-Processing

2.3.1 Deciding on a threshold of frames coded for facial expressions

While FaceReader is validated software for automatically analysing facial expressions from video (Lewinski, den Uyl, & Butler, 2014), it requires good lighting and camera-angle conditions to perform optimally. As the original dataset was not recorded with these requirements in mind, a percentage of the frames from each participant's recording could not be modelled and subsequently coded for facial expressions. Figure 4 shows examples of 4 participants that FaceReader successfully modelled.

Figure 4: AAM fitting well on participant videos

The dataset already contained scores for overconfidence, perceived confidence, perceived overconfidence, perceived competence, perceived potential and perceived hirability. Our aim was to relate facial expressiveness to these measures. We thus sought an optimal threshold for the percentage of frames successfully modelled that would retain as many participants as possible from the original dataset while not greatly affecting the distributions of the aforementioned target variables. Table 1 shows the mean and standard deviation of frames successfully analysed for different threshold values (70%, 80%, 90%).


Table 1

Descriptive Statistics for number of frames analysed at different threshold levels

Threshold    N    Mean        Std. Deviation
90%          29   14642,69    1990,15
80%          48   14324,00    1623,18
70%          68   13515,71    2021,70
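The screening behind Table 1 can be expressed as a short filter. The sketch below assumes a hypothetical per-participant bookkeeping file with `coded_frames` and `total_frames` columns; it is illustrative, not the thesis's actual workflow.

```python
# Sketch: keep a participant only if at least `threshold` of their
# frames were successfully modelled, then summarize the survivors.
import pandas as pd

def screen(df: pd.DataFrame, threshold: float) -> pd.DataFrame:
    return df[df["coded_frames"] / df["total_frames"] >= threshold]

participants = pd.read_csv("per_participant_frame_counts.csv")  # hypothetical
for t in (0.90, 0.80, 0.70):
    kept = screen(participants, t)
    print(f"{t:.0%}: N={len(kept)}, mean={kept['coded_frames'].mean():.2f}, "
          f"sd={kept['coded_frames'].std(ddof=1):.2f}")
```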

As we can see, the mean and standard deviation of the number of frames analysed do not change greatly as the threshold drops to 70%. It is important to note that the videos were recorded at 25 fps, which means that 13515,71 frames correspond to 13515,71 / 25 ≈ 540 seconds of video. The difference in frames analysed between the 90% and 70% thresholds is equivalent to about 20 seconds' worth of analysed frames, on average. As the videos are approximately 10 minutes long, we consider 20 seconds an acceptable amount of missed footage.

To further establish that the 70% threshold is acceptable, we also investigated the means and standard deviations of the overconfidence, perceived confidence, perceived overconfidence, perceived competence, perceived potential and perceived hirability scores.

Table 2 - Descriptive Statistics for original dataset scores at different threshold levels

                      Threshold 90% (N = 29)          Threshold 80% (N = 49)          Threshold 70% (N = 68)
                      Min    Max   Mean  St. Dev.     Min    Max   Mean  St. Dev.     Min    Max   Mean  St. Dev.
OverConfidence        -1,57  2,72  0,07  1,012        -1,57  2,72  0,06  0,939        -1,99  2,72  0,04  0,934
Per. Overconfidence    2,68  4,81  3,82  0,502         2,68  4,81  3,79  0,471         2,39  4,81  3,73  0,482
Per. Confidence        3,55  6,38  4,91  0,679         3,55  6,38  4,83  0,609         3,26  6,38  4,78  0,609
Per. Competence        4,35  5,52  4,89  0,325         3,90  5,52  4,83  0,367         3,90  5,52  4,84  0,339
Per. Potential         4,30  5,57  4,86  0,363         3,43  5,57  4,80  0,430         3,43  5,57  4,80  0,410
Per. Hirability        3,53  4,75  4,19  0,338         3,14  4,75  4,14  0,365         3,14  4,75  4,14  0,351

Table 2 also indicates that the lower threshold of 70% does not greatly affect the distributions of these variables.

Finally, comparing sample sizes, we notice a large difference between thresholds. To have higher power in our analysis, and to use over 50% of the original video collection, we decided to continue our analysis using the 70% threshold.

2.3.2 Creating a summary emotion expression score

While a detailed frame-by-frame score for each facial expression is useful for analysing reactions to stimuli at particular moments in a recording, it is not very practical for the purposes of this study. To make the emotion expression data comparable to the existing data structure, where each participant has one score for overconfidence and for each perceived measure, it is more practical to have one summary score per expression per participant. Similar to Lewinski, Fransen, & Tan (2014), we aggregate the frame-by-frame data into one summary metric per expression by averaging the scores across frames.
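A minimal sketch of this aggregation (in Python rather than the tooling actually used; the column names are assumptions):

```python
# Sketch: collapse frame-by-frame intensities into one mean score per
# expression per participant.
import pandas as pd

EMOTIONS = ["Neutral", "Happy", "Sad", "Angry", "Surprised", "Scared", "Disgusted"]

def summary_scores(frames: pd.DataFrame) -> pd.Series:
    """frames: one row per video frame, one 0-1 column per expression."""
    return frames[EMOTIONS].mean()
```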

2.3.3 Deciding on a Threshold of expressivity

Summarizing the data in this way may drive expressivity scores down, however, especially for participants who present these facial expressions infrequently. We therefore tried to create a summary metric per expression that better represents the intensity of infrequently occurring facial expressions. Using a tool by Lewinski & Gudi (2014) and replicating the work of Lewinski et al. (2014), we experimented with different thresholds for selecting the most expressive frames per emotion per participant video. For each video, the facial expression scores for each emotion are put in ascending order, creating a relative frequency distribution per emotion. We then experimented with disregarding the bottom 70%, 80%, 90% and 95% of each emotion distribution, thus looking at the top 30%, 20%, 10% and 5% of values per emotion for each video. Table 3 shows the mean score and standard deviation for each percentile per emotion.

Table 3 – Mean and St Deviation of Expressions, by Percentile of frames analyzed

             0th Percentile    70th Percentile   80th Percentile   90th Percentile   95th Percentile
             Mean   St Dev     Mean   St Dev     Mean   St Dev     Mean   St Dev     Mean   St Dev
Neutral      0,48   0,111      0,68   0,109      0,72   0,104      0,76   0,096      0,79   0,087
Happy        0,29   0,149      0,58   0,205      0,66   0,192      0,76   0,157      0,83   0,119
Sad          0,09   0,045      0,18   0,079      0,21   0,088      0,26   0,103      0,32   0,117
Angry        0,03   0,014      0,06   0,027      0,07   0,033      0,09   0,045      0,11   0,060
Surprised    0,13   0,069      0,25   0,120      0,28   0,133      0,34   0,149      0,39   0,160
Scared       0,08   0,044      0,16   0,091      0,19   0,105      0,24   0,127      0,28   0,144
Disgusted    0,01   0,006      0,03   0,013      0,03   0,015      0,04   0,020      0,06   0,027
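The thresholding itself reduces to a per-emotion percentile cut. A sketch under the same assumed column layout as above:

```python
# Sketch: for each expression, average only the frames above the q-th
# percentile, so rare but intense displays are not washed out by long
# low-intensity stretches.
import pandas as pd

EMOTIONS = ["Neutral", "Happy", "Sad", "Angry", "Surprised", "Scared", "Disgusted"]

def top_fraction_scores(frames: pd.DataFrame, q: float = 0.95) -> pd.Series:
    """q = 0.95 keeps the top 5% of frames per emotion."""
    scores = {}
    for emo in EMOTIONS:
        cutoff = frames[emo].quantile(q)
        scores[emo] = frames.loc[frames[emo] >= cutoff, emo].mean()
    return pd.Series(scores)
```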

As expected, the higher the percentile, the higher the mean value per expressed emotion. Interestingly, not only does the mean increase as we select a higher threshold, but (in most cases) so does the variability of the distribution (standard deviation; Figure 5).

Figure 5 – Standard Deviation of emotion expression for each percentile of frames

Based on this, we selected the 95th percentile values as the basis for our analysis. Furthermore, we ran the same analyses for the remaining thresholds with similar results (see Appendix), making us more confident in our selection.

2.4 The Final Dataset

After aggregating and pre-processing the data, our dataset contains 68 participants. It includes scores for overconfidence, perceived confidence, perceived overconfidence, perceived competence, perceived potential and perceived hirability for each participant, as well as scores for the same perceived measures under a high-competence manipulation and a low-competence manipulation. Finally, each participant has a summary score for the expressions of happiness, sadness, anger, surprise, fear, disgust and neutral, computed from the top 5% of values of each emotion. The result is a dataset with 68 records and 25 features.


2.5 Analysis

We treated each of the 7 expressions as a possible predictor of overconfidence, perceived confidence, perceived overconfidence, perceived competence, perceived potential and perceived hirability. We ran Shapiro-Wilk tests to evaluate whether the emotional expression variables are normally distributed. We then correlated the 7 expressions with each of our dependent variables. We repeated these correlations two more times using the perceived measures under the high- and low-competence manipulations. Finally, we regressed each of our dependent variables on all 7 expressions.
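The pipeline can be sketched as follows. Spearman's rho is assumed from the ρ notation in the result tables, and scipy/statsmodels stand in for the SPSS syntax files included with the thesis; the column names are assumptions.

```python
# Sketch: normality checks, expression-by-measure correlations, and one
# regression per dependent measure on all 7 expressions.
import pandas as pd
from scipy import stats
import statsmodels.api as sm

EMOTIONS = ["Neutral", "Happy", "Sad", "Angry", "Surprised", "Scared", "Disgusted"]
DVS = ["OverConfidence", "PerOverconfidence", "PerConfidence",
       "PerCompetence", "PerPotential", "PerHirability"]

def analyse(df: pd.DataFrame) -> None:
    for emo in EMOTIONS:  # Shapiro-Wilk normality test per predictor
        w, p = stats.shapiro(df[emo])
        print(f"Shapiro-Wilk {emo}: W={w:.3f}, p={p:.3f}")
    for dv in DVS:        # correlation of each expression with each measure
        for emo in EMOTIONS:
            rho, p = stats.spearmanr(df[emo], df[dv])
            print(f"{dv} ~ {emo}: rho={rho:.3f}, p={p:.3f}")
    for dv in DVS:        # all-expression regression per dependent measure
        model = sm.OLS(df[dv], sm.add_constant(df[EMOTIONS])).fit()
        print(dv, f"R2={model.rsquared:.3f}, F p={model.f_pvalue:.3f}")
```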


3 Results

The Shapiro-Wilk tests revealed that all expressions except sadness are non-normally distributed (p < 0.05). Log transformations of the non-normal variables were computed. Examination of the normal Q-Q plots of the emotional expression variables allowed us to continue our analysis without the transformations.

3.1 Correlations

Table 4 shows the correlations of the emotion expressions with our dependent variables. Disgust is significantly correlated with OverConfidence (ρ = 0,246, p < 0,05) and Perceived OverConfidence (ρ = 0,241, p < 0,05). Furthermore, the anger expression is significantly correlated with Perceived Confidence (ρ = 0,240, p < 0,05). Finally, the neutral expression is significantly correlated with Perceived Potential (ρ = 0,249, p < 0,05).

Table 4 – Correlation of Expressions against Dependent measures

                          Neutral   Happy    Sad      Angry    Surprised  Scared   Disgusted
OverConfidence      ρ      0,064   -0,011    0,026    0,083    -0,119    -0,110    0,246*
                    p      0,603    0,926    0,834    0,500     0,335     0,371    0,043
Per. Overconfidence ρ      0,215    0,015    0,086    0,207    -0,044     0,112    0,241*
                    p      0,078    0,902    0,487    0,090     0,723     0,365    0,048
Per. Confidence     ρ      0,226    0,037    0,159    0,240*   -0,094    -0,055    0,226
                    p      0,063    0,766    0,195    0,049     0,448     0,658    0,064
Per. Competence     ρ      0,199   -0,054    0,089   -0,012    -0,139    -0,065    0,088
                    p      0,104    0,660    0,470    0,924     0,257     0,597    0,475
Per. Potential      ρ      0,249*  -0,109    0,053    0,072    -0,127    -0,015    0,061
                    p      0,041    0,378    0,668    0,562     0,302     0,906    0,621
Per. Hirability     ρ      0,146   -0,023    0,041    0,038    -0,113    -0,015    0,096
                    p      0,236    0,852    0,742    0,758     0,359     0,907    0,435

* p < 0,05

Table 5 shows the correlations of the emotion expressions with our dependent variables under the high-competence manipulation. Disgust is again significantly correlated with Perceived OverConfidence (ρ = 0,248, p < 0,05), and also with Perceived Confidence (ρ = 0,282, p < 0,05). Furthermore, the anger expression is again significantly correlated with Perceived Confidence (ρ = 0,285, p < 0,05). Table 6 shows the correlations of the emotion expressions with our dependent variables under the low-competence manipulation. As we see, there are no significant correlations for any pair of variables.

Table 5 – Correlation of Expressions against Dependent measures – High Competence Manipulation

                          Neutral   Happy    Sad      Angry    Surprised  Scared   Disgusted
Per. Overconfidence ρ      0,157    0,024    0,078    0,183    -0,068     0,199    0,248*
                    p      0,200    0,848    0,526    0,135     0,582     0,104    0,041
Per. Confidence     ρ      0,187   -0,021    0,220    0,285*   -0,139     0,057    0,282*
                    p      0,128    0,865    0,072    0,018     0,259     0,646    0,020
Per. Competence     ρ      0,220   -0,149    0,214    0,166    -0,150    -0,006    0,049
                    p      0,072    0,225    0,080    0,177     0,223     0,960    0,689
Per. Potential      ρ      0,218   -0,197    0,162    0,175    -0,197     0,046    0,062
                    p      0,074    0,107    0,186    0,153     0,107     0,712    0,614
Per. Hirability     ρ      0,214   -0,155    0,178    0,191    -0,155     0,079    0,083
                    p      0,080    0,207    0,147    0,119     0,208     0,523    0,503

* p < 0,05

Table 6 – Correlation of Expressions against Dependent measures – Low Competence Manipulation

                          Neutral   Happy    Sad      Angry    Surprised  Scared   Disgusted
Per. Overconfidence ρ      0,230   -0,008    0,108    0,201     0,002     0,000    0,169
                    p      0,059    0,947    0,381    0,101     0,989     0,998    0,167
Per. Confidence     ρ      0,207    0,066    0,105    0,175    -0,002    -0,157    0,131
                    p      0,090    0,590    0,393    0,152     0,989     0,202    0,287
Per. Competence     ρ      0,071    0,015   -0,027   -0,129    -0,027    -0,111    0,130
                    p      0,566    0,903    0,828    0,295     0,830     0,368    0,291
Per. Potential      ρ      0,193   -0,008   -0,016   -0,017    -0,002    -0,074    0,075
                    p      0,114    0,945    0,896    0,891     0,987     0,551    0,545
Per. Hirability     ρ      0,024    0,046   -0,048   -0,111    -0,002    -0,094    0,119
                    p      0,846    0,710    0,695    0,367     0,986     0,444    0,334

3.2 Regressions

Regressing overconfidence on the 7 expressions gave no significant results (R2 = 0.074, SE = 0.95, F(7, 60) = 0.684, p > 0.05). Similarly, regressing perceived confidence, perceived overconfidence, perceived competence, perceived potential and perceived hirability on the 7 expressions also gave no significant results.

Competence Manipulation: Regressing Perceived Confidence on the 7 expressions under the high-competence manipulation revealed significant results (R2 = 0.234, SE = 0.595, F(7, 60) = 2.615, p < 0.05). Table 7 shows the beta coefficients of this regression. Furthermore, regressing Perceived Overconfidence on the 7 expressions under the high-competence manipulation revealed near-significant results (R2 = 0.196, SE = 0.504, F(7, 60) = 2.095, p = 0.058). Dependent variables under the low-competence condition regressed on the 7 expressions showed no significant results.

Table 7 - Beta Coefficient of Perceived Confidence (High Competence Condition) regressed on 7 expressions

Model         B       Std. Error   Beta     t       Sig.
(Constant)    1,829   1,286                 1,422   0,160
Neutral       1,009   0,932        0,136    1,083   0,283
Happy         1,256   0,758        0,231    1,657   0,103
Sad           1,411   0,675        0,257    2,091   0,041
Angry         3,239   1,374        0,304    2,358   0,022
Surprised     0,121   0,519        0,030    0,233   0,816
Scared        0,770   0,534        0,172    1,442   0,155
Disgusted     5,543   3,075        0,229    1,803   0,076

Note: B = unstandardized coefficient; Beta = standardized coefficient.


4 Discussion

4.1 Summary

The first noticeable result is that disgust is weakly correlated with overconfidence as well as perceived overconfidence. While there is no prior work on how expressing disgust affects perceived confidence or overconfidence, Allen, Frank, Schwarzkopf, Fardo, Winston, Hauser, & Rees (2016) found that exposure to disgusted faces unconsciously boosted their participants' confidence in solving a difficult task, regardless of actual performance. They attribute this to increased alertness triggered by a disgust face. This may explain why Mechanical Turk workers perceived participants who expressed disgust as more confident, perhaps responding similarly to the participants in Allen et al.'s (2016) experiment. Furthermore, given the relationship between overconfidence and the expression of disgust, there could be a reverse effect: overconfidence may be expressed via a disgust face, as opposed to a disgust face triggering alertness and, as a result, overconfidence.

The second significant relationship is that of anger to perceived confidence. Tiedens (2001) concluded that we are more likely to attribute status to people who express anger rather than sadness. Analysing President Clinton's reactions to accusations regarding the Monica Lewinsky scandal, Tiedens notes that he received more support when he reacted with anger instead of sadness, thereby seeming more confident in his innocence. Perhaps, in our experiment, anger had a similar effect of making participants seem confident and adamant in their self-presentation, even though there was no accusation for them to react to.

We are surprised that happiness was not associated in any way with perceived confidence or overconfidence, especially given that the experiment called for participants to present themselves as if interviewing for a position. Past studies have reported that interviewees who smiled were more likely to be offered jobs (Forbes & Jackson, 1980; Imada & Hakel, 1977; McGovern & Tinsley, 1978). Similarly, in these studies, people with more neutral expressions were less likely to be offered jobs, but we see no such evidence in our analysis. In fact, the neutral expression is weakly correlated with perceived potential. The evidence on happy and neutral expressions thus seems contradictory and inconclusive.

The results of the competence manipulation may indicate that a predisposition about competence affects how one interprets a person's expressions with respect to their confidence. However, given that the results of the high-competence manipulation are very similar to those of the no-manipulation condition, it is hard to argue that a predisposition about competence affects the way we interpret a candidate's facial expressions.


4.2 Limitations and future directions

While this work is exploratory and has shown some interesting trends between measures of confidence and facial expressions, it has some inherent limitations.

All the perceived measures are based on a 30-second clip of each participant's video, whereas our analysis is performed on each participant's whole 10-minute video. As there was no clear way to identify from the data exactly which 30-second clip was used in the Mechanical Turk data collection, we decided to explore the data as is. Nevertheless, given the interesting trends in the current data, it may be worth identifying the 30-second clips and re-running a similar analysis.

The data was not collected with automatic facial expression coding in mind. As a result, data was lost due to poor recording quality, and only 68 of 118 videos were used in our analysis. A better recording setup would allow a stricter threshold of 80% successfully modelled frames to retain a number of videos comparable to the 68 retained here at the 70% threshold. This would raise the facial expression coding quality overall and provide a richer dataset to analyse in the future.

Ronay et al. (2018) collected data on the quality of the speech that was not analysed as part of this study (e.g. eye contact, posture, enthusiasm). Ekman & Friesen (1974) suggest that some nonverbal channels may be more revealing than others of one's true intentions, making it an appealing proposition to rank facial expressiveness against the other non-verbal measures, albeit none were significantly related to overconfidence.

Ronay et al. (2018) also collected PANAS data and saw a notable change in affect from before the participants' pitch to after. This could be further explored by looking at our participants' facial expressions in a similarly temporal fashion. Given the manner in which the expression data was aggregated (i.e. one summary metric per expression per participant), a different approach would be needed. One approach could be analysing the expression data as a time series. Another could be aggregating the expression data from the first 30 seconds of video and comparing it to the aggregated data of the last 30 seconds. Time and data quality restrictions made both of these methods difficult.

One potential way forward is to analyse the content of each participant's speech and relate it to their facial expressions. This could provide context for facial expressions shown at particular moments and answer questions about the congruency of non-verbal and verbal behaviour, e.g. does what the candidate is saying match their facial expressions?

Overall, this study has further explored facial expressions in the context of expressed and perceived overconfidence. While facial expressions do not seem to be a dependable way to objectively quantify overconfidence, we must be cautious about how the expressiveness of an individual might affect our perception of their confidence and overconfidence.

4.3 Implications of facial expression analysis for business applications

Nguyen & Gatica-Perez (2015) note that automatically extracted nonverbal cues do in fact predict hirability. Our results were not conclusive on whether facial expression analysis can be used to determine overconfidence. Nevertheless, let us briefly explore how automatic behaviour detection might be used in the future.

4.3.1 The potential of facial expression analysis

Facial expression as a candidate-enhancement factor

In this scenario we treat a candidate's expressivity as a useful factor differentiating them from other candidates. As previous research has shown, smiling can give the impression of leadership potential. In some studies, candidates who smiled more, kept more eye contact, nodded more, and produced more facial expressions were more likely to be hired (Forbes & Jackson, 1980; Parsons & Liden, 1984; Anderson & Shackleton, 1990). We thus consider an objective measure of how expressive a candidate is to be an added bonus to the candidate's profile.

Facial expression as a candidate-normalizing factor

Interviewer ratings of candidates are affected by many factors. Behrend, Toaddy, Thompson, & Sharek (2012) suggest that removing cues such as attractiveness may help objectify the interviewing process. Tools that objectively quantify biases in interviews may assist in an optimized, normalized and objective interview process (Levashina, Hartwell, Morgeson, & Campion, 2014). Research has shown that interviewer ratings and applicant reactions are lower in asynchronous, technology-mediated interviews. We must be wary of this and standardize our selection process so as to avoid unfair selection based on different technologies and different interviewers. To normalize reactions to a candidate, we can take an interviewer's rating and adjust it for the candidate's expressivity. Expressive behaviour that may bias raters in favour of candidates can be corrected for, so that other hiring factors are weighed more strongly. In this case, facial expression is used as a control parameter, similar to a laboratory experiment.
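One simple way to implement such a correction, sketched below under the assumption that expressivity enters as a single summary score (this is an illustration, not a procedure from the thesis), is to residualize interviewer ratings on expressivity, mirroring the residual approach used for overconfidence:

```python
# Sketch: remove the component of interviewer ratings that is linearly
# predictable from candidate expressivity, leaving an adjusted rating.
import numpy as np

def expressivity_adjusted(ratings: np.ndarray, expressivity: np.ndarray) -> np.ndarray:
    slope, intercept = np.polyfit(expressivity, ratings, deg=1)
    return ratings - (slope * expressivity + intercept)
```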

Facial expression as a candidate-prescription factor

Perhaps an organization wishes to hire a candidate with a specific expressivity profile. For example, a role may require frequent presentations and public speaking engagements. In this context, a minimum expressivity threshold may be useful in selecting the right candidate.

The aforementioned ways of using facial expressiveness to assist in the fair evaluation of a candidate are a good start for using automated facial expression technology in the HRM process. Nevertheless, the examples above should be viewed with caution, as soft metrics that complement hard candidate data such as resumes and transcripts.

Chapman & Webster (2003) interviewed HR staff and managers, who identified several goals for the use of technology in human resources, listed below (percentage of managers advocating each goal in parentheses):

- Efficiency (44.8%)
- Enable new assessment tools (41.1%)
- Reduce costs (31%)
- Standardize systems (27.6%)
- Promote organizational image (15.5%)
- Increase applicant convenience (15.5%)

Using automatic facial expression coding can support a more informed and efficient hiring process by filtering candidates more successfully. It can be viewed as a new assessment tool and a way to standardize the evaluation of candidates.

4.3.2 Ethical Implications

Nevertheless, as with any new tool or technology, we must be wary of over-reliance on it. The ease of use of a technology can lead to simplistic generalizations about a candidate. As identified by Stone et al. (2015), we must find a balance between the efficiency and effectiveness of HR technology tools, and they must be used as support tools assisting experienced hiring managers.

Finally, in a world where sensitivity around personal data is heightened while the collection of data grows ever easier, we must consider the following issues raised by Chamorro-Premuzic, Akhtar, Winsborough, & Sherman (2017):

a) Organizations must set clear boundaries around data ownership and sharing: how long can interview data be used, and who can view it?

b) Organizations must allow a candidate to clearly consent to this process: the process must be clear and transparent.

c) Organizations should use data ethically and in a non-discriminatory manner: given that facial expressions can be used to infer personality characteristics, we must be wary of the purpose of collecting such information. Furthermore, biases against less expressive people can be avoided by correcting for expressivity.


4.3.3 Conclusion

Organizations often face the problem of finding the right candidates both in terms of skills and in terms of fit. One way to assist with both is by evaluating facial expressions in interview settings. Personality insights from facial expressions can be used to evaluate job and organization fit, whereas expressiveness can be used to correct impressions of candidates, making the hiring process standardized across candidates and interviewers. In this paper, we empirically tried to evaluate whether overconfidence can be determined from facial expression analysis. Computer vision solutions are already mature enough to measure expressiveness from a regular RGB camera, making the technology broadly accessible.

5 References

Allen, M., Frank, D., Schwarzkopf, D. S., Fardo, F., Winston, J. S., Hauser, T. U., & Rees, G. (2016). Unexpected arousal modulates the influence of sensory noise on confidence. Elife, 5, e18103.

Anderson, C., Brion, S., Moore, D. M., & Kennedy, J. A. (2012). A status-enhancement account of overconfidence. Journal of Personality and Social Psychology, 103, 718–735.

Anderson, N., & Shackleton, V. (1990). Decision making in the graduate selection interview: A field study. Journal of Occupational Psychology.

Barsade, S. G., & Gibson, D. E. (2007). Why does affect matter in organizations?. Academy of management perspectives, 21(1), 36-59.

Bartlett, M. S., Littlewort, G., Frank, M. G., Lainscsek, C., Fasel, I. R., & Movellan, J. R. (2006). Automatic recognition of facial actions in spontaneous expressions. Journal of multimedia, 1(6), 22-35.

Behrend, T., Toaddy, S., Thompson, L. F., & Sharek, D. J. (2012). The effects of avatar appearance on interviewer ratings in virtual employment interviews. Computers in Human Behavior, 28(6), 2128-2133.

Berry, D. S., & McArthur, L. Z. (1986). Perceiving character in faces: The impact of age-related craniofacial changes on social perception. Psychological Bulletin, 100, 3–18.

Beyan, C., Capozzi, F., Becchio, C., & Murino, V. (2017, November). Multi-task learning of social psychology assessments and nonverbal features for automatic leadership identification. In Proceedings of the 19th ACM International Conference on Multimodal Interaction (pp. 451-455). ACM.

Biel, J. I., Teijeiro-Mosquera, L., & Gatica-Perez, D. (2012, October). FaceTube: Predicting personality from facial expressions of emotion in online conversational video. In Proceedings of the 14th ACM International Conference on Multimodal Interaction (pp. 53-56). ACM.

Blacksmith, N., Willford, J. C., & Behrend, T. S. (2016). Technology in the employment interview: A meta-analysis and future research agenda. Personnel Assessment and Decisions, 2(1), 2.

Burgoon, J. K. (1994). Nonverbal signals. In M. L. Knapp & G. R. Miller (Ed.), Handbook of interpersonal communication(2nd ed., pp. 344-390). Beverly Hills, CA: Sage.

Canedo, J. C., Graen, G., Grace, M., & Johnson, R. D. (2017). Navigating the new workplace: Technology, millennials, and accelerating HR innovation. AIS Transactions on Human-Computer Interaction, 9(3), 243-260.

Carli, L.L., LaFleur, S.J., & Loeber, C.C. (1995). Nonverbal behavior, gender, and influence. Journal of Personality and Social Psychology, 68, 1030–1041.


Chamorro-Premuzic, T., Akhtar, R., Winsborough, D., & Sherman, R. A. (2017). The datafication of talent: how technology is advancing the science of human potential at work. Current Opinion in Behavioral Sciences, 18, 13-16.

Chapman, D. S., & Webster, J. (2003). The use of technologies in the recruiting, screening, and selection processes for job candidates. International journal of selection and assessment, 11(2-3), 113-120.

Cohen J, Cohen P, West SG, Aiken LS (2003) Applied multiple regression/correlation analysis for the behavioral sciences, 3rd ed. (Hillsdale, NJ: Erlbaum).

Cootes, T. F., & Taylor, C. J. (2004). Statistical models of appearance for computer vision. Imaging Science and Biomedical Engineering (University of Manchester, Manchester, UK).

Cronbach, L. J., & Furby, L. (1970). How we should measure "change": Or should we? Psychological Bulletin, 74, 68-80.

DePaulo, B. M. (1992). Nonverbal behavior and self-presentation. Psychological Bulletin, 111, 203-243.

DuBois PH (1957) Multivariate correlational analysis (New York, NY: Harper).

De la Torre, F., Chu, W.-S., Xiong, X., Vicente, F., Ding, X., & Cohn, J. F. (2015). Intraface. IEEE International Conference on Automatic Face and Gesture Recognition (FG), Ljubljana, Slovenia.

Dumitrescu, D., Gidengil, E., & Stolle, D. (2015). Candidate confidence and electoral appeal: An experimental study of the effect of nonverbal confidence on voter evaluations. Political Science Research and Methods, 3(1), 43-52.

Driskell, J.E., Olmstead, B., & Salas, E. (1993). Task cues, dominance cues, and influence in task groups. Journal of Applied Psychology, 78, 51–60.

Ekman, P. (2009). Telling Lies: Clues to Deceit in the Marketplace, Politics and Marriage. W. W. Norton & Co.

Ekman, P., Davidson, R. J., & Friesen, W. V. (1990). The Duchenne smile: Emotional expression and brain physiology: II. Journal of Personality and Social Psychology, 58, 342-353.

Ekman, P., Friesen W.V., & Hager J. C. (2002). Facial action coding system: The manual. Salt Lake City, UT: Research Nexus.

Ekman P, Friesen WV (1969) The repertoire of nonverbal behavior: Categories, origins, usage, and coding. Semiotica, 1:49-98.

Ekman P, Friesen WV (1974) Detecting deception from the body or face. J. Pers. Soc. Psychol. 29: 288- 298.

Ekman, P., Friesen, W. V., O'sullivan, M., Chan, A., Diacoyanni-Tarlatzis, I., Heider, K., & Scherer, K. (1987). Universals and cultural differences in the judgments of facial expressions of emotion. Journal of personality and social psychology, 53(4), 712.


El Ouirdi, M. (2016). The use of social media in recruitment and job seeking (Doctoral dissertation, Department of Management, Faculty of Applied Economics, Universiteit Antwerpen, Belgium).

Epitropaki, O., Sy, T., Martin, R., Tram-Quon, S., & Topakas, A. (2013). Implicit leadership and followership theories "in the wild": Taking stock of information processing approaches to leadership and followership in organizational settings. The Leadership Quarterly, 24, 858–881.

Forbes, R., & Jackson, P. (1980). Non-verbal behaviour and the outcome of selection interviews. Journal of Occupational Psychology.

Friesen, W. V., & Ekman, P. (1983). EMFACS-7: Emotional facial action coding system. San Francisco, CA: University of California, San Francisco.

Goffman, E. (1959). The presentation of self in everyday life. Garden City, NY: Doubleday.

Imada, A., & Hakel, M. (1977). Influence of nonverbal communication and rater proximity on impressions and decisions in simulated employment interviews. Journal of Applied Psychology.

John, O. P., & Robins, R. W. (1994). Accuracy and bias in self-perception: Individual differences in self-enhancement and the role of narcissism. Journal of Personality and Social Psychology, 66, 206-219.

Judge, T. A., Bono, J. E., Ilies, R., & Gerhardt, M. W. (2002). Personality and leadership: A qualitative and quantitative review. Journal of Applied Psychology, 87, 765-780.

Keating, C. F. (2016). The life and times of nonverbal communication theory and research: Past, present, future. In D. Matsumoto, H. C. Hwang, & M. G. Frank (Eds.), American Psychological Association Handbook of Nonverbal Communication(pp. 17-42). Washington, DC: APA Publications.

Keller, T. (1999). Images of the familiar: Individual differences and implicit leadership theories. The Leadership Quarterly, 10, 589–607.

Kilmann, R. H., Saxton, M. J., & Serpa, R. (1985). Gaining control of the corporate culture. Jossey-Bass Inc Pub.

Knapp, M., & Hall, J. (2009). Nonverbal communication in human interaction. Wadsworth, Cengage.

Krumhuber, E., Manstead, A., & Kappas, A. (2006). Temporal aspects of facial displays in person and expression perception: The effects of smile dynamics, head-tilt, and gender. Journal of Nonverbal Behavior, 31, 39–56.

Kirschbaum C, Pirke, KM, Hellhammer DH (1993) The “Trier Social Stress Test”—a tool for investigating psychobiological stress responses in a laboratory setting. Neuropsychobiology 28:76-81.

Kuipers, M. C. M. (2017). Implement e-HRM successfully?: A study into the criteria to successfully implement e-HRM (Master's thesis, University of Twente).

Laustsen, L. (2014). Decomposing the relationship between candidates’ facial appearance and electoral success. Political Behavior, 36, 777-791.

Lawson, C., Lenz, G. S., Baker, A., & Myers, M. (2010). Looking like a winner: Candidate appearance and electoral success in new democracies. World Politics, 62, 561-593.


Levashina, J., Hartwell, C. J., Morgeson, F. P., & Campion, M. A. (2014). The structured employment interview: Narrative and quantitative review of the research literature. Personnel Psychology, 67(1), 241-293.

Lewinski, P., Fransen, M. L., Tan, E.S.H. (2014). Predicting advertising effectiveness by facial expressions in response to amusing persuasive stimuli. Journal of Neuroscience, Psychology, and Economics, (1), 1-14. doi: 10.1037/npe0000012

Lewinski, P., & Gudi, A. (2014). FaceReader™ Output Analyzer 6.0. Licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. Based on a work at http://dare.uva.nl/document/522510

Lewinski, P., den Uyl, T. M., & Butler, C. (2014). Automated facial coding: Validation of basic emotions and FACS AUs in FaceReader. Journal of Neuroscience, Psychology, and Economics, 7(4), 227.

Locke, C. C., & Anderson, C. (2015). The downside of looking like a leader: Power, nonverbal confidence, and participative decision-making. Journal of Experimental Social Psychology, 58, 42-47.

McDuff, D., Mahmoud, A., Mavadati, M., Amr, M., Turcot, J., & Kaliouby, R. E. (2016, May). AFFDEX SDK: A cross-platform real-time multi-face expression recognition toolkit. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (pp. 3723-3726). ACM.

McGovern, T. V., & Tinsley, H. E. (1978) Interviewer evaluations of interviewee nonverbal behavior. Journal of Vocational Behavior, 13, 163-171.

McHugo, Gregory J., John T. Lanzetta, Denis G. Sullivan, Roger D. Masters, and Basil G. Englis. (1985). ‘Emotional Reactions to a Political Leader’s Expressive Displays’. Journal of Personality and Social Psychology 49(6):1513–29.

Michailova, J. (2010). Development of the overconfidence measurement instrument for the economic experiment. MPRA paper 26384, Christian Albrechts University of Kiel, Germany.

Montepare, J. M., & Dobish, H. (2003). The contribution of emotion perceptions and their overgeneralizations to trait impressions. Journal of Nonverbal Behavior, 27, 237–254.

Motowidlo, S. J., Carter, G. W., Dunnette, M. D., Tippins, N., Werner, S., Burnett, J. R., & Vaughan, M. J. (1992). Studies of the structured behavioral interview. Journal of Applied Psychology, 77(5), 571.

Naim, I., Tanveer, M. I., Gildea, D., & Hoque, M. E. (2015, May). Automated prediction and analysis of job interview performance: The role of what you say and how you say it. In Automatic Face and Gesture Recognition (FG), 2015 11th IEEE International Conference and Workshops on (Vol. 1, pp. 1-6). IEEE.


Nguyen, L. S., Frauendorfer, D., Mast, M. S., & Gatica-Perez, D. (2014). Hire me: Computational inference of hirability in employment interviews based on nonverbal behavior. IEEE transactions on multimedia, 16(4), 1018-1031.

Nguyen, L. S., & Gatica-Perez, D. (2015). I would hire you in a minute: Thin slices of nonverbal behavior in job interviews. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction (pp. 51-58). ACM.

Noldus (2016). FaceReader: Tool for automatic analysis of facial expression: Version 7.0. Wageningen, the Netherlands: Noldus Information Technology B.V.

Parsons, C., & Liden, R. (1984). Interviewer perceptions of applicant qualifications: A multivariate field study of demographic characteristics and nonverbal cues. Journal of Applied Psychology.

Porter, S., & ten Brinke, L. (2009). Dangerous decisions: A theoretical framework for understanding how judges assess credibility in the courtroom. Legal and Criminological Psychology, 14(1), 119-134.

Rafaeli, A., & Sutton, R. I. (1987). Expression of emotion as part of the work role. Academy of management review, 12(1), 23-37.

Ronay, R., Oostrom, J.K., Lehmann-Willenbrock, N., Mayoral, S., Rusch, H. (2018). Playing the Trump Card: Why We Select Overconfident Leaders and Why It Matters. Manuscript submitted for publication.

Ronay, R., Oostrom, J. K., Lehmann-Willenbrock, N., & Van Vugt, M. (2017). Pride before the fall: Overconfidence predicts escalation of public commitment. Journal of Experimental Social Psychology, 69, 13-22.

Roulin, N., Bangerter, A., & Levashina, J. (2015). Honest and deceptive impression management in the employment interview: Can it be detected and how does it impact evaluations? Personnel Psychology, 68(2), 395-444.

Ruben, M. A., Hall, J. A., & Schmid Mast, M. (2015). Smiling in a job interview: When less is more. The Journal of social psychology, 155(2), 107-126.

Schmid Mast, M., Gatica-Perez, D., Frauendorfer, D., Nguyen, L., & Choudhury, T. (2015). Social sensing for psychology: Automated interpersonal behavior assessment. Current Directions in Psychological Science, 24(2), 154-160.

Shaburov, V., & Monastyrshin, Y. (2017). U.S. Patent No. 9,747,573. Washington, DC: U.S. Patent and Trademark Office.

Sieverding, M. (2009). ‘Be Cool!’: Emotional costs of hiding feelings in a job interview. International Journal of Selection and Assessment, 17(4), 391-401.

Stewart, Patrick A., Bridget M. Waller, and James N. Schubert. (2009). ‘Presidential Speechmaking Style: Emotional Response to Micro-expressions of Facial Affect’. Motivation and Emotion 33(2):125–35


Stone, D. L., Deadrick, D. L., Lukaszewski, K. M., & Johnson, R. (2015). The influence of technology on the future of human resource management. Human Resource Management Review, 25(2), 216-231.

Sullivan, Denis G., and Roger D. Masters. (1988). ‘“Happy Warriors”: Leaders’ Facial Displays, Viewers’ Emotions, and Political Support’. American Journal of Political Science 32(2):345– 68.

Tenney, E., Meikle, N., Hunsaker, D., Moore, D. A., & Anderson, C. (2018). Is overconfidence a social liability? The effect of verbal versus nonverbal expressions of confidence.

Tiedens, L. Z. (2001). Anger and advancement versus sadness and subjugation: the effect of negative emotion expressions on social status conferral. Journal of personality and social psychology, 80(1), 86.

Torres, E. N., & Gregory, A. (2018). Hiring manager’s evaluations of asynchronous video interviews: The role of candidate competencies, aesthetics, and resume placement. International Journal of Hospitality Management, 75, 86-93.

Trichas, S., Schyns, B., Lord, R., & Hall, R. (2017). “Facing” leaders: Facial expression and leadership perception. The Leadership Quarterly, 28(2), 317-333.

Viola, P., & Jones, M. J. (2004). Robust real-time face detection. International Journal of Computer Vision, 57(2), 137-154.

Watson D, Clark LA, Tellegen A (1988) Development and validation of brief measures of positive and negative affect: The PANAS scales. J. Pers. Soc. Psychol. 54:1063-1070.


6 Appendix – Extra Tables

0th Percentile Correlations

                          Neutral   Happy    Sad      Angry    Surprised  Scared   Disgusted
OverConfidence      ρ      0,010    0,074    0,104    0,127    -0,248*   -0,117    0,222
                    p      0,935    0,548    0,399    0,301     0,041     0,344    0,069
Per. Overconfidence ρ      0,149   -0,057    0,098    0,068    -0,101    -0,010    0,199
                    p      0,226    0,644    0,425    0,583     0,413     0,934    0,103
Per. Confidence     ρ      0,177   -0,101    0,183    0,111    -0,156    -0,104    0,123
                    p      0,149    0,412    0,135    0,366     0,203     0,397    0,317
Per. Competence     ρ      0,172   -0,123    0,124    0,037    -0,150    -0,029   -0,007
                    p      0,160    0,317    0,316    0,765     0,223     0,813    0,953
Per. Potential      ρ      0,266*  -0,194    0,090    0,079    -0,125     0,036   -0,061
                    p      0,028    0,113    0,464    0,519     0,310     0,773    0,621
Per. Hirability     ρ      0,154   -0,127    0,104    0,074    -0,135     0,036    0,000
                    p      0,210    0,302    0,401    0,546     0,272     0,774    0,998

* p < 0,05

70th Percentile Correlations

                          Neutral   Happy    Sad      Angry    Surprised  Scared   Disgusted
OverConfidence      ρ     -0,101   -0,013    0,028    0,104    -0,035     0,143    0,264*
                    p      0,411    0,916    0,819    0,401     0,776     0,243    0,030
Per. Overconfidence ρ      0,191   -0,083    0,187    0,235    -0,304*   -0,031    0,206
                    p      0,119    0,502    0,128    0,054     0,012     0,805    0,092
Per. Confidence     ρ      0,162   -0,030   -0,019    0,102    -0,128     0,006    0,124
                    p      0,187    0,809    0,875    0,406     0,300     0,959    0,315
Per. Competence     ρ      0,150    0,030   -0,148   -0,017     0,061    -0,219    0,064
                    p      0,221    0,805    0,228    0,892     0,621     0,073    0,602
Per. Potential      ρ      0,143   -0,028   -0,105    0,100     0,045    -0,096    0,043
                    p      0,244    0,821    0,393    0,418     0,714     0,438    0,728
Per. Hirability     ρ      0,179   -0,056   -0,069    0,078    -0,007    -0,064    0,086
                    p      0,144    0,647    0,574    0,526     0,954     0,603    0,487

* p < 0,05

80th Percentile Correlations

                          Neutral   Happy    Sad      Angry    Surprised  Scared   Disgusted
OverConfidence      ρ      0,066    0,030    0,071    0,133    -0,192    -0,111    0,243*
                    p      0,592    0,808    0,567    0,279     0,117     0,370    0,046
Per. Overconfidence ρ      0,226    0,039    0,087    0,138    -0,068     0,063    0,235
                    p      0,064    0,754    0,478    0,262     0,581     0,608    0,053
Per. Confidence     ρ      0,237    0,020    0,157    0,180    -0,114    -0,071    0,185
                    p      0,052    0,869    0,200    0,142     0,355     0,567    0,132
Per. Competence     ρ      0,204   -0,049    0,091    0,008    -0,151    -0,050    0,024
                    p      0,095    0,693    0,462    0,950     0,220     0,686    0,844
Per. Potential      ρ      0,277*  -0,118    0,055    0,060    -0,134     0,009   -0,021
                    p      0,022    0,337    0,656    0,625     0,276     0,944    0,864
Per. Hirability     ρ      0,166   -0,046    0,052    0,041    -0,128     0,012    0,032
                    p      0,175    0,709    0,676    0,737     0,299     0,924    0,793

* p < 0,05

90th Percentile Correlations

                          Neutral   Happy    Sad      Angry    Surprised  Scared   Disgusted
OverConfidence      ρ      0,065   -0,002    0,050    0,113    -0,158    -0,113    0,250*
                    p      0,599    0,988    0,685    0,359     0,199     0,359    0,040
Per. Overconfidence ρ      0,220    0,041    0,083    0,172    -0,057     0,086    0,242*
                    p      0,072    0,742    0,501    0,161     0,647     0,486    0,047
Per. Confidence     ρ      0,229    0,053    0,154    0,209    -0,102    -0,067    0,210
                    p      0,060    0,670    0,210    0,088     0,408     0,589    0,086
Per. Competence     ρ      0,194   -0,033    0,090   -0,006    -0,143    -0,065    0,055
                    p      0,112    0,790    0,467    0,960     0,244     0,597    0,655
Per. Potential      ρ      0,256*  -0,091    0,053    0,059    -0,131    -0,010    0,018
                    p      0,035    0,461    0,666    0,632     0,287     0,938    0,885
Per. Hirability     ρ      0,150   -0,011    0,043    0,033    -0,119    -0,008    0,062
                    p      0,223    0,927    0,728    0,792     0,332     0,946    0,613

* p < 0,05
