Academic year: 2021
Verbal Deception Detection in an Airport Security Context:

Using the Verifiability Approach to Detect False Intent

Yaloe van der Toolen

Bachelor thesis Clinical Psychology
Supervisor: Bennett Kleinberg
Student number: 10410260
Word count abstract: 149
Word count paper: 4919


TABLE OF CONTENTS

Abstract
Introduction
The cognitive approach to deception detection
The Verifiability Approach to deception detection
The detection of false intent
The current study
Method
Participants
Procedure
Data analysis
Results
Manipulation check
Motivation
Testing of hypotheses
Exploratory analyses
Discussion
References


ABSTRACT

While deception research has traditionally focused on the detection of lies about past events, the detection of false intent could be useful for the screening of airplane passengers. The current study examined whether the Verifiability Approach (VA) can identify false intent based on written statements in an online airport security context. The VA assumes that truth tellers provide more verifiable details than liars. This study asked participants about their upcoming travel plans. Written statements were collected from truth tellers (n = 12), liars who did not prepare a cover story (n = 25), and liars who did prepare a cover story (n = 16). Results showed no differences in the number of verifiable details between the groups. However, 75% of the truth tellers and 72% of the liars could be identified correctly. This study concludes with recommendations for future research on the detection of false intent.

INTRODUCTION

On October 31, 2015, Russian Metrojet Flight 9268 crashed in Egypt, killing all 224 people on board. The Russian Federal Security Service subsequently stated that a soda can containing one kilogram of explosives might have caused the crash (Alkhshali & Brumfield, 2016). With more incidents like this in mind, explosive detection experts now warn of other disguised devices, stating that visual technology could be defeated by creative terrorists, since X-rays are "only as good as the operator" (Kriel & Cruickshank, 2016).

Incidents like the one above suggest that airports should not rely solely on visual technology, and that screening the passengers themselves is also necessary for the detection of threats in an airport context. However, although on-airport screening procedures already exist, their efficiency is not yet sufficient. In line with this, Ormerod and Dando (2014) state that, to date, no valid methods have been implemented for determining which passengers are suspicious and need to be interviewed by aviation security personnel. For example, although security agents participating in the Screening Passengers by Observation Technique program referred almost 250,000 people for a secondary screening, fewer than 1% of these referrals led to an actual arrest (Weinberger, 2010). This indicates that new ways to guarantee safety are needed. Several aviation security experts now argue that additional screenings of passengers and the detection of threats should take place well before people get to the airport (Baum, 2016; Hetter & Liebermann, 2016). One way to do this would be to ask passengers in advance about their intentions for their next flight in a written, online interview, since discriminating between true and false intent about future activities could help prevent criminal acts (Vrij & Granhag, 2014). Therefore, the objective of this study was to investigate whether it is possible to identify false intent based on passengers' written statements in an online airport security context.

The cognitive approach to deception detection

The detection of false intent can be regarded as a subfield of deception detection (Warmelink, Vrij, Mann & Granhag, 2013). Traditionally, this field of legal psychology has focused on people lying about past actions (Granhag & Strömwall, 2004; Granhag, 2010). Studies showed that people who have to identify liars without training succeed only 54% of the time, just above chance level (Bond & DePaulo, 2006). These low detection rates are often caused by people's beliefs that specific indicators, such as avoiding eye contact or showing visible emotions like fear, suggest that someone is lying. However, displaying such psychophysiological signs is not uniquely related to deception; for example, truth tellers can also be nervous when trying to convince someone (Vrij, Granhag & Porter, 2010).

To circumvent these human shortcomings in deception detection, researchers have developed techniques to facilitate the identification of deceit. One important finding is that people are better at identifying liars when judging on the basis of verbal, instead of visual, media (Bond & DePaulo, 2006). Hence, it is not surprising that many researchers have focused on deception detection based on verbal content, and for this reason, verbal methods are also used in the current study.

An influential verbal approach in deception detection is the cognitive approach, which is based on the assumption that lying is cognitively more demanding than telling the truth (Zuckerman, DePaulo & Rosenthal, 1981). Several observations support this assumption. First, formulating a lie may be cognitively demanding, since it requires the invention of a story. In addition, liars need to monitor their narrative for plausibility and take care not to introduce inconsistencies (Vrij, 2014). Besides this, liars have to suppress the truth while telling their story, which also increases cognitive load (Spence et al., 2001). Moreover, it is often stated that liars do not take their credibility for granted, causing them to monitor their own demeanor in order to appear truthful, and to constantly examine the investigator's reaction (Kassin, Appleby & Torkildson-Perillo, 2010; Buller & Burgoon, 1996). Finally, in several deception detection studies in which participants were asked about the cognitive load they had experienced during lying, participants indicated that they regarded lying as cognitively more demanding than telling the truth (e.g. Vrij, Mann & Fisher, 2006; Caso, Gnisci, Vrij & Mann, 2005).

Central to the cognitive approach is the idea that these existing differences in cognitive load can be enhanced by interventions based on cognitive principles, since doing so evokes extra cues for deception detection (Vrij, Fisher & Blank, 2015). Vrij et al. (2015) describe several techniques for imposing additional cognitive load. One is to instruct interviewees to carry out two tasks simultaneously, or to tell their story in reverse order (e.g. Debey, Verschuere & Crombez, 2012; Vrij, Leal, Mann & Fisher, 2012). A second method is to ask unexpected questions. While liars can prepare themselves for anticipated questions, they cannot do so for unexpected questions, forcing them to formulate a plausible answer without preparation. Spatial and temporal questions, and questions about planning, are often regarded as unexpected (Vrij et al., 2015). The cognitive approach yields superior accuracy in the detection of truths and lies (71%) compared to a standard approach (56%; Vrij et al., 2015). In this study, we focus on unexpected questions.

The Verifiability Approach to deception detection

A second method in deception detection that has yielded promising results is the Verifiability Approach (VA). This approach (Nahari, Vrij & Fisher, 2014a) detects whether someone is lying by scrutinizing the number of verifiable details in people's statements. A detail is considered verifiable when it is "(i) documented and therefore checkable; (ii) carried out together with (an)other identified person(s) (…); or (iii) witnessed by (an)other identified person(s)" (Nahari, Vrij & Fisher, 2014b: 28). To decide whether someone is lying, studies in the VA tradition first ask participants to answer several open questions about certain activities. Coders subsequently score these statements on the number of details and their verifiability.

Two core assumptions underlie the VA. First, in general, truth tellers include more details in their statements than liars do (Vrij, 2008). Second, it is assumed that liars avoid stating too many details, fearing that those details, once investigated, will reveal that they are lying (Nahari, Vrij & Fisher, 2012). These assumptions place liars in a dilemma. Because liars wish to appear truthful, they are driven to mention many details; however, mentioning details can also lead to being exposed (Nahari et al., 2014b). To avoid this, liars may adopt the strategy of providing unverifiable details (e.g. 'I saw a strange man walking in Oxford Street' instead of the easily verifiable 'Last night at 2 am, my friend Jacob and I saw a strange man walking in Oxford Street'). In line with this, Nahari et al. (2014a) found that liars mentioned fewer verifiable perceptual, spatial and temporal details than truth tellers.

A strength of the VA is that, once people are told to mention as many verifiable details as possible, the ability to detect deception increases (Nahari et al., 2014b). The VA has proved successful in distinguishing liars from truth tellers, with accuracy rates of up to 80% (Harvey, Vrij, Nahari & Ludwig, 2016). Because of these high accuracy rates in the detection of lies about past events, we were interested in whether the VA would also lead to successful detection of false intentions.

The detection of false intent

Whereas a large body of deception research has focused on the detection of lies about past events (e.g. a mock crime), for many practical purposes, like the screening of prospective airplane passengers, it is more relevant to detect deception regarding planned future events. The past five years have seen a shift towards research on the detection of false intentions (Granhag & Strömwall, 2004; Granhag, 2010; Vrij & Granhag, 2014). Vrij, Granhag, Mann and Leal (2011) conducted the first study of false intentions in an airport setting. They found that deceptive statements were less plausible than truthful ones, but that the two did not differ in number of details. In another study, Sooniste, Granhag, Knieps and Vrij (2013) asked participants questions about their intentions and the corresponding planning, and found that truth tellers' answers to the questions about planning were longer, more detailed and clearer than untruthful statements. Giolla, Granhag and Liu-Jönsson (2013) focused on markers of good planning behavior (e.g. effective time allocation) to discriminate between true and false intent. In their study, truth tellers mentioned more planning markers than liars did. In addition, while truth tellers focused on telling how they would perform their stated intentions, liars mainly told why they would perform their stated intentions.

The objective of this study was to identify false intent based on written statements, whereas the studies mentioned above all concern the detection of deception about past actions. Nonetheless, there are indications that the VA and other insights from deception detection on past actions will be useful for the detection of false intent as well.

For the remainder of this paper, we define an intention as an individual's mental state before undertaking a corresponding action that will take place in a specific situation in the near future (Malle, Moses & Baldwin, 2003; Granhag, 2010). This implies that, when one has an intention, one also has an image of the future (Warmelink et al., 2013). Neuroscientific research suggests that this imagining of the future draws on processes that are also activated when remembering the past (Schacter, Addis & Buckner, 2008).

In order to discriminate between true and false intentions, we made people create a false intention by instructing them to lie about the purpose of their trip, so that there was no commitment to perform the stated intention (Granhag & Giolla, 2014). While people with true intentions are motivated to act upon those intentions, false intentions lack this corresponding motivation (Ask, Granhag, Juhlin & Vrij, 2013). A study by Watanabe (2005) suggests that people who do not expect to execute their plans have a less detailed mental image of the corresponding scenario than people who do expect to act according to their plans. This suggests that methods for deception detection on past events can also be used for deception detection concerning future events. For these reasons, insights from the cognitive approach and the VA are also used in this paper.

The current study

In the current research, we investigated whether the VA is a capable method for identifying false intent based on written statements in an online airport security setting. Participants were asked about their upcoming flight. They were divided into three groups: a truth condition, in which they had to tell the truth; a simple lie condition, in which participants had to lie; and a cover story lie condition, in which people had to lie and prepare a cover story for their upcoming trip. Derived from the VA, it was hypothesized that participants in the truth condition would provide more verifiable details than participants in the two lie conditions (Hypothesis 1). In line with this, it was expected that participants in the cover story group would provide more verifiable details than people in the simple lie condition (Hypothesis 2), since it has been shown that planned lies are harder to detect than spontaneous ones (DePaulo et al., 2003). Following the cognitive approach, cognitive load was enhanced by asking participants questions about planning, an intervention often perceived as unexpected (Warmelink, Vrij, Mann, Jundi & Granhag, 2012). It was hypothesized that participants would rate questions about planning as more unexpected than questions about intentions (Hypothesis 3). In line with findings from Warmelink et al. (2012), it was expected that the differences in verifiable details between participants in the truth condition and participants in both lie conditions would be larger when unexpected questions were asked (Hypothesis 4).

We also included exploratory analyses. First, it was investigated whether people flying in the near future would be better prepared (and therefore would mention more verifiable details) than people who would not be leaving for the next couple of weeks (Exploratory analysis 1). A second exploratory analysis was based on the cognitive approach. Because of the higher cognitive load liars are thought to experience, it was examined whether liars would need more time than truth tellers to answer the questions about their upcoming trip (Exploratory analysis 2).

METHOD

Participants

Ninety-five people from different countries participated in the online questionnaire, posted on Crowdflower (an online tool for recruiting participants for micro-tasks) for a reward of $1.00. Data were excluded from participants 1) with duplicate IP addresses, 2) who wrote in their statements that they had not yet booked their flight, and 3) who were instructed to lie about the purpose of their trip, but instead stated their truthful purpose. The final sample consisted of 53 participants. The truth condition (n = 12; Mage = 32.42, SDage = 8.16; MEnglish = 82.27, SDEnglish = 11.70; 9 males), simple lie condition (n = 25; Mage = 30.76, SDage = 10.19; MEnglish = 79.84, SDEnglish = 14.93; 19 males) and cover story lie condition (n = 16; Mage = 31.94, SDage = 7.65; MEnglish = 78.50, SDEnglish = 16.53; 9 males) did not differ significantly in age, F(2, 50) = 0.16, p = .850, gender, χ²(2) = 2.00, p = .367, or self-rated English proficiency, F(2, 49) = 0.21, p = .810.

Procedure

Participants were recruited via Crowdflower, where the questionnaire was introduced as a survey regarding people's travel behavior. All participants gave informed consent. Participants provided demographic information and rated their English proficiency on a scale ranging from 0 (not at all) to 100 (expert). Subsequently, participants were asked whether they would fly in the upcoming twelve weeks, which they could answer with "yes", "no", or "not sure yet". Depending on this answer, participants were assigned to the truth, simple lie or cover story lie condition after they had completed a short English language test that was included for the purpose of other research (for a schematic overview, see Figure 1). Participants in the truth condition had to tell the truth about their forthcoming flight. Participants in the simple lie condition had to lie about the purpose of their upcoming flight. Participants in the cover story lie condition also had to lie about the purpose of their upcoming flight, and were asked to prepare a cover story with the use of tripadvisor.com, a website with reviews of restaurants, hotels and tourist attractions.

Figure 1

Flowchart displaying the allocation of participants to the three conditions and the further experimental procedure. Task available via http://www.newlylabs.net/wp-content/research/cbdmi/exp1/desktop/html/start.html

Both lie conditions included participants who would fly and participants who would not. Participants who would fly within twelve weeks had to lie about the purpose of their upcoming flight: for example, participants who had stated that they would be flying for work or study purposes (or one of the other options, see Figure 1) were randomly assigned a different purpose. The categories 'returning home' and 'other' were never assigned. Participants who would not be flying in the upcoming twelve weeks, or who were not sure, were randomly assigned a destination and given a purpose ('work/study', 'family/friends' or 'holiday') for this imaginary flight. Participants in the truth condition read the following instructions: "We are interested in how passengers think about traveling by airplane. Your task for the rest of this experiment is to provide as many details as possible about your flight to [DESTINATION]."

Participants in the simple lie condition read: "We are interested in how passengers think about traveling by airplane. Your task for the rest of this experiment is to lie about your flight to [DESTINATION]. The purpose of this flight should be [WORK/STUDY, HOLIDAY, or FAMILY/FRIENDS]." Participants in the cover story lie condition were given the same instructions as the participants in the simple lie condition, but were additionally told: "You can develop a cover story via tripadvisor.com on the next page. For your cover story, please go to tripadvisor and select one hotel, one restaurant, and one attraction that fits the purpose of your flight." They were further required to copy and paste the URLs of their search queries into the provided text boxes.

After receiving these instructions, all participants were told that they had to answer four questions regarding their next flight (see Figure 1), and that deception detection researchers would determine the truthfulness of their statements. As motivation, participants were told that they would automatically take part in a draw to win €100 if they succeeded in convincing the experts. For ethical reasons, everybody participated in this draw. Participants were shown the definition of a verifiable detail and were subsequently instructed to mention as many verifiable details in their statements as possible.

Following these instructions, participants were directed to the four questions, each of which had to be answered with a minimum of 150 characters. After completing these questions, participants indicated on a scale from 0 (not at all) to 100 (very motivated) how motivated they had been. They also indicated how expected they had found each question, on a scale from 0 (not at all) to 100 (absolutely). After participation, participants were automatically reimbursed via their Crowdflower account. To be eligible for the draw, participants had to fill in their email address.

(12)

Data analysis

All statements were coded by four coders who were blind to the conditions, and all coders received a training session. First, all details that could be considered temporal (information about time or the sequence of events), spatial (information about spatial arrangement) or perceptual (information about what the participant witnessed) were collected. Second, the coded details were assessed for verifiability. In both phases, each statement was independently coded by two coders, who subsequently compared their codings for dissimilarities. In case of disagreement, a third coder made the final decision. Coders agreed on 88.01% of the details in phase 1, and on the verifiability of 97.67% of these details in phase 2.

For each participant, the sums of verifiable and unverifiable details were calculated, and each participant's response time was registered. For our main analysis, we tested whether the three conditions differed in the number of verifiable details, using a one-way ANOVA on the number of verifiable details with Veracity (truth vs. simple lie vs. cover story lie) as factor.²
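The between-groups F test underlying this analysis can be sketched in a few lines of standard Python. The detail counts below are invented for illustration only; they are not the study's data, and the sketch is a minimal textbook implementation rather than the actual analysis script.

```python
# Illustrative one-way ANOVA (between-groups F test) on verifiable-detail
# counts. The three groups below are hypothetical examples, not study data.

def one_way_anova_f(groups):
    """Return (F, df_between, df_within) for a list of samples."""
    all_values = [x for g in groups for x in g]
    n_total = len(all_values)
    k = len(groups)
    grand_mean = sum(all_values) / n_total

    # Between-groups sum of squares: spread of group means around grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares: spread of scores around their group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    df_between, df_within = k - 1, n_total - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

truth = [1, 2, 3]        # hypothetical verifiable-detail counts per group
simple_lie = [2, 3, 4]
cover_story = [4, 5, 6]
f_stat, df_b, df_w = one_way_anova_f([truth, simple_lie, cover_story])
print(f"F({df_b}, {df_w}) = {f_stat:.2f}")  # F(2, 6) = 7.00
```

The F statistic is then compared against the F distribution with (df_between, df_within) degrees of freedom to obtain a p value, as reported in the Results section.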

RESULTS

Manipulation check

After coding the statements, it was checked whether each account corresponded with its assigned travel purpose. Of the 35 flying participants who were instructed to lie about the purpose of their forthcoming flight, 19 still narrated their truthful purpose and were therefore excluded.

Motivation

Motivation was measured on a scale ranging from 0 (not at all) to 100 (very motivated). Motivation was quite high and did not differ between the truth condition (Mmotivation = 72.83, SDmotivation = 25.36), simple lie condition (Mmotivation = 80.80, SDmotivation = 16.69) and cover story lie condition (Mmotivation = 79.88, SDmotivation = 25.15), F(2, 50) = 0.59, p = .560.

Testing of hypotheses

For Hypothesis 1, a one-way ANOVA was used with Veracity (truth vs. simple lie vs. cover story lie) as the independent variable and the number of verifiable details as the dependent variable. The conditions did not differ in the number of verifiable details, F(2, 50) = 1.05, p = .357, partial η² = 0.04 (see Table 1), indicating that Hypothesis 1 was not supported. In addition, we tested whether the conditions differed in the number of non-verifiable details, since the VA suggests that liars provide more non-verifiable details in order to appear truthful. No support was found for this assumption, F(2, 50) = 0.68, p = .511, partial η² = 0.02. Since none of the groups differed in the number of verifiable details, Hypothesis 2 was not supported either, t(39) = -0.18, p = .431, d = -0.06.

Table 1

Means in number of stated verifiable details and standard deviations (between parentheses) for the truth, simple lie and cover story lie conditions

                     Truth        Simple lie   Cover story lie
Verifiable details   7.75 (3.19)  6.20 (2.58)  6.60 (3.83)

A paired t-test was used to test whether questions 1 and 3 (about intentions) were regarded as more expected than questions 2 and 4 (about planning). There was a significant difference in expectedness between the questions about intentions (Mexpected = 73.95, SDexpected = 23.53) and the questions about planning (Mexpected = 69.25, SDexpected = 23.51), t(52) = 1.92, p = .030, d = 0.20, supporting Hypothesis 3.

For Hypothesis 4, a 3 (Veracity: truth vs. simple lie vs. cover story lie) by 2 (Verifiable details per question type: Q1 and Q3 vs. Q2 and Q4, within subjects) mixed ANOVA was used. There was no significant main effect of Veracity, F(2, 50) = 0.84, p = .436, partial η² = 0.03. No significant main effect was found for question type, F(1, 50) = 3.88, p = .054, partial η² = 0.07, nor for the interaction between question type and condition, F(2, 50) = 0.02, p = .983, partial η² < 0.01, indicating that Hypothesis 4 was not supported.

² An a-priori power analysis indicated that a total sample size of 54 would be sufficient to detect a significant effect of Cohen's f = 0.50 (based on the effect sizes found in previous VA studies), with a power of .90 and an alpha of .05.

Exploratory analyses

For Exploratory analysis 1, we used the number of weeks until the departure of the flight as a covariate, since we expected that people leaving in the near future would be better prepared than people who would not be leaving for the next couple of weeks. Weeks until departure were no significant covariate, and no significant interaction was found between the variables question and weeks until departure, F(3, 69) = 0.67, p = .571, partial η² = 0.03, thereby granting no support for Exploratory analysis 1.

To test Exploratory analysis 2, a 3 (Veracity: truth vs. simple lie vs. cover story lie) by 2 (Response time in minutes for the questions about intentions vs. the questions about planning, within subjects) mixed design ANOVA was conducted. Neither a significant main effect of response time was found, F(1, 50) = 0.28, p = .609, partial η² < 0.01, nor a significant interaction between response time and condition, F(2, 50) = 0.53, p = .591, partial η² = 0.02, indicating that participants in the lie conditions did not take longer to answer the questions.

While none of the hypotheses gained support, we were still interested in how well truth tellers could be distinguished from liars using a cut-off point. We used Receiver Operating Characteristics (ROC) analysis to calculate the optimal cut-off point, taking into account the most suitable combination of sensitivity and specificity of a diagnostic tool (see Figure 2). The area under the ROC curve (AUC) ranges from 0 to 1, where 1 indicates a perfect diagnostic criterion and 0.5 indicates random classification (Akobeng, 2007).

Since we were mainly interested in distinguishing truth tellers from liars, we combined participants from both lying conditions into one broader lying group. Using the number of verifiable details as a predictor for deciding whether a participant told the truth or lied, the AUC was 0.67. Table 2 shows the diagnostic efficiency for specific cut-off points. We regarded as the optimal cut-off the value combining a high sensitivity with a high specificity. A cut-off point of 7 categorized 75% of the truth tellers and 72% of the liars correctly.
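The cut-off selection can be sketched in standard Python. The detail counts below are hypothetical illustrations (not the study's data): for each candidate cut-off we compute sensitivity (truth tellers classified as truthful) and specificity (liars classified as lying), pick the cut-off maximizing their sum minus one (Youden's J), and compute the AUC via its Mann-Whitney interpretation.

```python
# Illustrative cut-off selection for the number of verifiable details.
# Both samples are hypothetical, not the study's actual data.

truth_tellers = [7, 7, 8, 9, 10]   # verifiable details per truthful statement
liars = [4, 5, 5, 6, 7]            # verifiable details per deceptive statement

def diagnostics(cutoff):
    """Sensitivity/specificity when 'truthful' means >= cutoff details."""
    sens = sum(x >= cutoff for x in truth_tellers) / len(truth_tellers)
    spec = sum(x < cutoff for x in liars) / len(liars)
    return sens, spec

# Evaluate every candidate cut-off and keep the one maximizing Youden's J.
candidates = range(min(liars), max(truth_tellers) + 2)
best_cutoff = max(candidates, key=lambda c: sum(diagnostics(c)) - 1)
sens, spec = diagnostics(best_cutoff)

# AUC via the Mann-Whitney interpretation: the probability that a randomly
# drawn truth teller reports more verifiable details than a randomly drawn
# liar (ties count half).
pairs = [(t, l) for t in truth_tellers for l in liars]
auc = sum((t > l) + 0.5 * (t == l) for t, l in pairs) / len(pairs)

print(best_cutoff, sens, spec, round(auc, 2))  # 7 1.0 0.8 0.96
```

With the study's real data this procedure would reproduce the reported AUC of 0.67 and the cut-off of 7 verifiable details.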

Figure 2. ROC curve displaying the diagnostic efficiency for number of verifiable details across all possible cut-off points.

Table 2

Diagnostic efficiency (sensitivity and specificity) of verifiable details as a predictor for deciding whether someone tells the truth, for specific cut-off points (more than 6, 7 and 8 verifiable details).

DISCUSSION

In this study, we investigated whether it is possible to identify false intent with the VA, based on written statements in an online airport security context. The findings do not support the hypotheses: truthful statements did not differ from false statements in the number of stated verifiable details. Although questions about planning were regarded as less expected than questions about intentions, this did not affect the number of verifiable details. Also, the time until departure did not influence the number of verifiable details, and liars did not take longer than truth tellers to answer the questions. However, using ROC analysis to establish an optimal cut-off point, we were still able to correctly identify 75% of truth tellers and 72% of liars. These classification accuracies are in line with those reported in previous studies on the VA (Nahari et al., 2014a; Nahari et al., 2014b). Whether the AUC and classification accuracies tentatively reported here generalize to other experiments on false intent remains an empirical question.

Comparing our findings with the VA and the cognitive approach, we conclude that they are not in line with either theoretical framework. Studies on the VA found differences between truth tellers and liars in the number of verifiable details in their statements (e.g. Harvey et al., 2016; Nahari et al., 2014a; Nahari et al., 2014b; Nahari & Vrij, 2014), and studies in the tradition of the cognitive approach found that unexpected questions enlarged the difference in the number of details stated by truth tellers and liars (e.g. Vrij et al., 2015; Warmelink et al., 2012). However, this study differed in many respects from previous studies on the VA and the cognitive approach, implying that the current study's design might have caused the lack of support for both theories. A first reason why our findings do not corroborate those of the VA and the cognitive approach could be that the majority of the participants were not native English speakers: many participants had to answer the questions in their second language. While little is known about the ways in which language proficiency can affect deception, some studies have suggested that it is harder to discriminate between truth tellers and liars when they are speaking a second language (Da Silva & Leach, 2013; Leach & Da Silva, 2013). This implies that our results might have been different had we had a more homogeneous sample.

A second aspect that may have influenced the outcome concerns ground truth: knowing (afterwards) with absolute certainty whether participants were lying or telling the truth (Vrij et al., 2010). Many studies in deception research establish ground truth by instructing participants to perform a certain act or mock crime, and subsequently asking questions about whether or not participants performed this act: a highly controlled situation. The current study lacked this control: we will never be able to assess with absolute certainty whether participants who were instructed to lie or tell the truth really did so. While applied studies like the current research may have greater external validity, establishing ground truth remains a crucial challenge (Vrij et al., 2010).

Third, participation in this study was completely digital: participants never faced an interviewer who gave instructions, whereas this was always the case in the other studies mentioned in this paper. This has some serious implications. The fact that participants could take part in the study from their own homes (instead of coming to a lab) means that there was less experimental control than in other studies investigating deception detection. Circumstances may have varied considerably between participants, and participants may have been less attentive while taking part. Also, because of the lack of human interaction, it is not unthinkable that participants in this study were less convinced that their statements would be thoroughly checked for truthful details. This may have resulted in truth tellers putting less effort into reporting verifiable details about their trip, or liars making up false verifiable details.

A way to possibly overcome the lack of interaction is to create an interview that imitates a conversation. This could be done by developing a so-called chatbot, an automated computer program that simulates human responses. For instance, in the current experiment, we often encountered statements from participants who claimed that they would be "travelling with a colleague", without specifying who this colleague was. A chatbot could be programmed to respond to such vague answers by subsequently asking which colleague the interviewee will be traveling with, thereby increasing interaction and, ideally, enlarging the interviewee's feeling that lying might not go unnoticed. Truth tellers might then be able to distinguish themselves from liars, since they are truly able to give more details about "their colleague", while liars either have to remain silent or lie even more, which presumably increases the chances of being caught.

Finally, while it should be taken into account that the current study is only the first attempt to detect false intent with the VA, it is possible that the VA has no added value for the detection of false intent. After all, it may be hard to provide verifiable details about a planned activity that has not yet been carried out. As mentioned, Sooniste et al. (2013) focused on the length and clarity of stated intentions, and Giolla et al. (2013) scrutinized markers of good planning behavior in people’s statements. Since both approaches obtained promising results, it might be worthwhile to investigate whether they are also successful in detecting false intent in an online airport security context. Further research could answer this question.


REFERENCES

Akobeng, A. K. (2007). Understanding diagnostic tests 3: Receiver operating characteristics curves. Acta Paediatrica, 96(5), 644-647. doi: 10.1111/j.1651-2227.2006.00178.x

Alkhshali, H., & Brumfield, B. (February 25, 2016). Egypt’s President links Russian plane crash to terrorism. CNN. Retrieved from: http://edition.cnn.com/2016/02/24/middleeast/egypt-sissi-russian-plane-sinai/

Ask, K., Granhag, P. A., Juhlin, F., & Vrij, A. (2013). Intending or pretending? Automatic evaluations of goal cues discriminate true and false intentions. Applied Cognitive Psychology, 27(2), 173-177. doi: 10.1002/acp.2893

Baum, P. (March 24, 2016). What can airports do to prevent another terror attack? The Telegraph. Retrieved from: http://www.telegraph.co.uk/travel/comment/brussels-attacks-how-to-improve-airport-security/

Bond, C. F., & DePaulo, B. M. (2006). Accuracy of deception judgments. Personality and Social Psychology Review, 10(3), 214-234. doi: 10.1207/s15327957

Buller, D. B., & Burgoon, J. K. (1996). Interpersonal deception theory. Communication Theory, 6(3), 203-242. doi: 10.1111/j.1468-2885.1996.tb00127.x

Caso, L., Gnisci, A., Vrij, A. & Mann, S. (2005). Processes underlying deception: An empirical analysis of truths and lies when manipulating the stakes. Journal of Investigative Psychology and Offender Profiling, 2(3), 195-202. doi: 10.1002/jip.32

Da Silva, C. S., & Leach, A. M. (2013). Detecting deception in second-language speakers. Legal and Criminological Psychology, 18(1), 115-127. doi: 10.1111/j.2044-8333.2011.02030.x

Debey, E., Verschuere, B., & Crombez, G. (2012). Lying and executive control: An experimental investigation using ego depletion and goal neglect. Acta Psychologica, 140(2), 133-141. doi: 10.1016/j.actpsy.2012.03.004


DePaulo, B. M., Lindsay, J. J., Malone, B. E., Muhlenbruck, L., Charlton, K., & Cooper, H. (2003). Cues to deception. Psychological Bulletin, 129(1), 74-118. doi: 10.1037/0033-2909.129.1.74

Granhag, P. A., & Giolla, E. M. (2014). Preventing future crimes: Identifying markers of true and false intent. European Psychologist, 19(3), 195-206. doi: 10.1027/1016-9040/a000202

Granhag, P. A., & Strömwall, L. A. (2004). The detection of deception in forensic contexts. Cambridge: Cambridge University Press.

Granhag, P. A. (2010). On the psycho-legal study of true and false intentions: Dangerous waters and some stepping stones. The Open Criminology Journal, 3, 37-43.

Giolla, E. M., Granhag, P. A., & Liu-Jönsson, M. (2013). Markers of good planning behavior as a cue for separating true and false intent. PsyCh Journal, 2(3), 183-189. doi: 10.1002/pchj.36

Harvey, A.C., Vrij, A., Nahari, G., & Ludwig, K. (2016). Applying the Verifiability Approach to insurance claims settings: exploring the effect of the information protocol. Legal and Criminological Psychology, 1-13. doi: 10.1111/lcrp.12092

Hetter, K., & Liebermann, O. (March 23, 2016). Airport security: How can terrorist attacks be prevented? CNN. Retrieved from: http://edition.cnn.com/2016/03/22/travel/airport-security-post-brussels-attack-feat/

Kassin, S. M., Appleby, S. C., & Torkildson-Perillo, J. (2010). Interviewing suspects: Practice, science and future directions. Legal and Criminological Psychology, 15(1), 39-55. doi: 10.1348/135532509X449361

Kriel, R., & Cruickshank, P. (February 12, 2016). Source: ‘Sophisticated’ laptop bomb on Somali plane got through X-ray machine. CNN. Retrieved from: http://edition.cnn.com/2016/02/11/africa/somalia-plane-bomb/index.html


Malle, B. F., Moses, L. J., & Baldwin, D. A. (2003). Intentions and intentionality: Foundations of social cognition. Cambridge: The MIT Press.

Nahari, G., Vrij, A., & Fisher, R. P. (2012). Does the truth come out in the writing? SCAN as a lie detection tool. Law and Human Behavior, 36(1), 68-76. doi: 10.1037/h0093965

Nahari, G., & Vrij, A. (2014). Can I borrow your alibi? The application of the verifiability approach to the case of an alibi witness. Journal of Applied Research in Memory and Cognition, 3(2), 89-94. doi: 10.1016/j.jarmac.2014.04.005

Nahari, G., Vrij, A., & Fisher, R. P. (2014a). Exploiting liars’ verbal strategies by examining the verifiability of details. Legal and Criminological Psychology, 19(2), 227-239. doi: 10.1111/j.2044-8333.2012.02069.x

Nahari, G., Vrij, A., & Fisher, R. P. (2014b). The verifiability approach: Countermeasures facilitate its ability to discriminate between truths and lies. Applied Cognitive Psychology, 28(1), 122-128. doi: 10.1002/acp.2974

Ormerod, T. C., & Dando, C. J. (2015). Finding a needle in a haystack: Toward a psychologically informed method for aviation security screening. Journal of Experimental Psychology: General, 144(1), 76-84. doi: 10.1037/xge0000030

Schacter, D. L., Addis, D. R., & Buckner, R. L. (2008). Episodic simulation of future events: Concepts, data and application. Annals of the New York Academy of Sciences, 1124(1), 39-60. doi: 10.1196/annals.1440.001

Sooniste, T., Granhag, P. A., Knieps, M., & Vrij, A. (2013). True and false intentions: Asking about the past to detect lies about the future. Psychology, Crime and Law, 19(8), 673-685.


Spence, S. A., Farrow, T. F. D., Herford, A. E., Wilkinson, I. D., Zheng, Y., & Woodruff, P. W. R. (2001). Behavioural and functional anatomical correlates of deception in humans. Neuroreport, 12(13), 2849-2853.

Vrij, A., Fisher, R. P., & Blank, H. (2015). A cognitive approach to lie detection: A meta-analysis. Legal and Criminological Psychology, 1-21. doi: 10.1111/lcrp.12088

Vrij, A., Granhag, P. A., Mann, S., & Leal, S. (2011). Lying about flying: The first experiment to detect false intent. Psychology, Crime & Law, 17(7), 611-620. doi: 10.1080/10683160903418213

Vrij, A., Granhag, P. A., & Porter, S. (2010). Pitfalls and opportunities in nonverbal and verbal lie detection. Psychological Science in the Public Interest, 11(3), 89-121. doi: 10.1177/1529100610390861

Vrij, A., & Granhag, P. A. (2014). Eliciting information and detecting lies in intelligence interviewing: An overview of recent research. Applied Cognitive Psychology, 28(6), 936-944. doi: 10.1002/acp.3701

Vrij, A., Leal, S., Mann, S., & Fisher, R. (2012). Imposing cognitive load to elicit cues to deceit: Inducing the reverse order technique naturally. Psychology, Crime & Law, 18(6), 579-594. doi: 10.1080/1068316X.2010.515987

Vrij, A., Mann, S., & Fisher, R. (2006). Information-gathering vs. accusatory interview style: Individual differences in respondents’ experiences. Personality and Individual Differences, 41(4), 589-599. doi: 10.1016/j.paid.2006.02.014

Vrij, A. (2008). Detecting lies and deceit: Pitfalls and opportunities. Chichester, UK: John Wiley and Sons.

Vrij, A. (2014). Interviewing to detect deception. European Psychologist, 19(3), 184-194. doi: 10.1027/1016-9040/a000201


Warmelink, L., Vrij, A., Mann, S., Jundi, S., & Granhag, P. A. (2012). The effect of question expectedness and experience on lying about intentions. Acta Psychologica, 141(2), 178-183. doi: 10.1016/j.actpsy.2012.07.011

Watanabe, H. (2005). Semantic and episodic predictions of memory for plans. Japanese Psychological Research, 47(1), 40-45. doi: 10.1111/j.1468-5584.2005.00271.x

Weinberger, S. (2010). Airport security: Intent to deceive? Nature, 465(7297), 412-415. doi: 10.1038/465412a

Zuckerman, M., DePaulo, B. M., & Rosenthal, R. (1981). Verbal and nonverbal communication of deception. In L. Berkowitz (Ed.), Advances in experimental social psychology. New York: Academic Press.
