
Bachelor Informatica

Anxiety in Virtual Reality

Iulia Ionescu

June 17, 2015

Supervisors: Robert Belleman (UvA)

Computer Science, University of Amsterdam


Abstract

The purpose of this study is to fill the gap between real-life anxiety therapy and virtual anxiety therapy. This is done by investigating whether the same level of anxiety is evoked in a simulated environment as in a real environment. The test case for this investigation is glossophobia, the fear of public speaking. Therefore, the research question is: does a virtual environment evoke the same level of anxiety, while speaking for an audience, as a real environment? The Emotiv EPOC is used as an electroencephalogram (EEG) to indicate either the similarity or the discrepancy of these two opposed environments. For this investigation, the environment of a university lecture room is chosen. Participants (N = 16) were asked to give a short presentation both in a lecture room and in Virtual Reality (VR), using the Oculus Rift DK2. The baseline was provided by a one-to-one presentation. Finally, the brain activity in all situations is investigated to draw a broad conclusion, comparing the outputs of the electrodes. Based on this comparison, an estimation of the level of anxiety could be made. From the participants, an anxious and a nonanxious group of six subjects each were formed, based on their personal report of anxiety. The results indicate that for anxious individuals, the evoked level of anxiety in VR is similar to or slightly higher than in real life. Nonanxious individuals, and the majority of the individuals overall, however, experience more nervousness, anger or annoyance when presenting in VR in comparison with both the presentation in the lecture room and the intake presentation, with no particular preference regarding the last two situations. This could be explained by the assumption that nonanxious individuals are more used to interacting with the audience and reading their facial expressions. While speaking in front of a virtual audience, they can no longer interact with their audience, which could cause them to become a little more nervous, angry or annoyed. Anxious individuals, on the other hand, might not be as used to interacting with the public, so they would experience similar levels of anxiety.


Contents

1 Introduction
  1.1 Theoretical background
    1.1.1 Anxiety
    1.1.2 VRET treatment
  1.2 Research question
  1.3 Structure of this thesis

2 Methodology
  2.1 Glossophobia
  2.2 Implementation of the virtual environment
    2.2.1 Development tools
    2.2.2 Initial stage
    2.2.3 Realism
    2.2.4 Human-like behaviour
    2.2.5 Performance
    2.2.6 Overview
  2.3 Measuring anxiety
    2.3.1 Questionnaire
    2.3.2 Electroencephalogram (EEG)
  2.4 Detecting emotion from EEG signals
    2.4.1 Filtering frequencies
    2.4.2 Filtering the noise
    2.4.3 Calculating the power
    2.4.4 Calculating the arousal
    2.4.5 Calculating the valence
    2.4.6 Paired t-test

3 Experiments
  3.1 Participants
  3.2 Audience
  3.3 Procedure

4 Results and discussion
  4.1 Personal report of anxiety
  4.2 Detected emotion
    4.2.1 Arousal
    4.2.2 Valence
  4.3 Discussion

5 Conclusions and future work

6 Acknowledgements

A Images

B Personal report of anxiety questionnaire

C Experiment script
  C.1 Receive the audience
  C.2 Intake
  C.3 Lecture room
  C.4 Game room
  C.5 Final steps


CHAPTER 1

Introduction

1.1 Theoretical background

1.1.1 Anxiety

In 1996, 26.9 million individuals in the United States were estimated to be affected by anxiety disorders at some point during their lives [1]. Anxiety disorders are for many reasons debilitating for their victims. One of these reasons is the limitation anxiety places on one's working memory. Support has been provided for Eysenck's theory, which assumes that the decrease in performance commonly shown by highly anxious people is due to their restricted working memory [2]. This implies that people who suffer from an anxiety disorder, and therefore perform less well, consequently earn less. Furthermore, a study on the economic impact of anxiety on the population of the United States in 1990 reveals an expenditure of $46.6 billion for costs associated with anxiety disorders, 31.5% of the total costs for mental illnesses [2]. Over three-quarters of these costs are stated to be due to reduced productivity, predominantly expressed in morbidity. Therefore, a strong demand for low-cost, yet effective treatment has emerged.

1.1.2 VRET treatment

Virtual Reality Exposure Therapy (VRET) has been proven to be an effective exposure delivery method for treating various disorders, including panic disorder, social phobia, Post Traumatic Stress Disorder (PTSD), fear of flying, fear of spiders and fear of heights [3]. When comparing VRET with standard (in vivo) exposure therapies, studies suggest that both treatments are equipotent [3]. In this way, the medium of VR seems to have responded to the demand for a reasonably priced exposure therapy.

1.2 Research question

While numerous studies have proven the potency of virtual reality as a form of therapy, it is presently unclear whether virtual reality evokes the same level of anxiety as a real environment. Yet, this information is crucial in order to gain insight into the effectiveness of the therapy for anxiety. The perceived anxiety in VR could, for instance, be related to factors other than the anxiety that is tackled in the therapy, e.g. the excitement of wearing the VR head-mounted display. The purpose of this study is to fill the gap between real-life anxiety therapy and virtual anxiety therapy. This is done by investigating whether the same level of anxiety is evoked in a simulated environment as in a real environment. Therefore, the research question is: does a virtual environment evoke the same level of anxiety, while speaking for an audience, as a real environment?


1.3 Structure of this thesis

In chapter 2, the method for answering the research question will be described. It consists of the implementation of the virtual environment, the measurements taken and the analysis of the obtained data. In chapter 3, the experiments will be discussed, elaborating on the participants, the audience and the procedure of the experiments. Then, the results will be presented and discussed in chapter 4. Finally, conclusions will be drawn in chapter 5, followed by recommendations in the Future work section.


CHAPTER 2

Methodology

In order to answer the research question, a virtual environment has to be built, simulating a real environment. This is elaborated in section 2.2. Both environments expose subjects to a similar kind of anxiety. During the exposure, the anxiety of the subjects needs to be measured. The method for doing this is explained in section 2.3. The way in which the obtained data is processed is discussed in section 2.4. The test case for anxiety for this investigation is glossophobia. This will be described first in section 2.1.

2.1 Glossophobia

Glossophobia is the anxiety one feels when speaking in public. Symptoms include feeling pressure to perform and being over-conscious of showing signs of anxiety (e.g. turning red or stuttering) when speaking [4]. It turns out that people suffering from glossophobia often falsely perceive the expressions of others as negative, without having any exceptional ability to read facial expressions [5]. A consequence of this disorder is that people avoid situations where they are exposed to their anxiety. For instance, classes that require giving presentations are skipped or not chosen at all, or the opportunity of getting a promotion is turned down because of the mandatory presentation tasks. These issues are highly regrettable as they limit personal development.

One proven therapy for overcoming the fear of speaking in public is practicing speeches in public. Recent studies demonstrate the effectiveness of this exercise with the use of virtual reality [6–10]. One of these studies even concluded that the presenter reacts differently to a positive audience than to a negative one [10]. A positive audience boosts the self-confidence of the speaker, while a negative audience significantly lowers it. Overall, the study also concluded that the effect of the therapy was strong in spite of the relatively low representational and behavioural fidelity of the virtual characters. Virtual reality allows a gradual exposure of the speaker to anxiety, for instance by varying the size of the audience, which makes it more tolerable for the patient. In addition, the undesired overhead of collecting an audience is sidestepped. Also, the personal and diverse essence of virtual reality as a treatment procedure makes it favourable. About the permanent effect of the therapy, the following reasoning can be deduced: anxiety is partly based on the fear of negative evaluation [11]. This fear of evaluation is a function of prior rejection and the need for approval [11]. This would mean that the more approval one gets (e.g. through exposure therapy), the more permanent the effect will be. Therefore, VRET treatments could be a good alternative to standard (in vivo) exposure therapy.

2.2 Implementation of the virtual environment

The environment for measuring glossophobia is a lecture room at the University of Amsterdam (C1.110).


2.2.1 Development tools

The following development tools are used:

Hardware

• PC: Desktop PC or Mac (recommended [12])
• OS: Windows 7 64-bit or Mac OS X 10.9.2 or later (recommended [12])
• CPU: Quad-core Intel or AMD processor, 2.5 GHz or faster (recommended [12])
• GPU: NVIDIA GeForce 470 GTX or AMD Radeon 6870 HD series card or higher (recommended [12])
• RAM: 8 GB RAM (recommended [12])
• Oculus Rift DK2 [13]

Software

• Unreal Engine 4.7.6 [14] - for simulating the virtual lecture room with audience
• MakeHuman [15] - for creating the audience

2.2.2 Initial stage

The virtual simulation used in this research is built upon an earlier simulation investigating the same research question [16]. There are several reasons for continuing this investigation. Firstly, the results did not match the hypothesis, so adjustments to the simulation have been made. Mentioned issues include low rendering performance and the inanimate behaviour of the audience. Lastly, the number of participants involved in that research was relatively low (5 participants), making it insufficient to draw a quantitative conclusion. Screenshots of this simulation can be found in Appendix A, Figures A.2 and A.3.

2.2.3 Realism

When comparing the initial implementation of the lecture room (Figure A.2) with the actual lecture room (Figure A.1), a major difference is the luminous windows on the upper right side, which are missing in the simulation. These have been added, since they are a key characteristic of the lecture room. In addition, they give the lecture room more realistic lighting compared to the real one in Figure A.1, since the position of the artificial light sources has also been changed and their intensity has been reduced.

Furthermore, the proportions of the characters in the audience, along with the proportions of the player character, did not correspond with reality. Therefore, these characters have been rescaled: the player character is made less tall, in order to share the same horizon with the real lecture room and therefore have the same view of the audience, and the characters on the first row of the audience have been made bigger. The latter alteration was also made to obtain a more appropriate perceived sense of depth (when comparing the virtual with the real lecture room), which seemed to be lacking in the initial simulation. Figure A.6 is used to determine whether the scale of the people in the audience is correct (their heads should stick out above their chairs).

Lastly, the realism of the humans themselves has been increased. Where in the initial simulation they had pale faces, plastic hair, eyebrows, eyelashes and clothes, and no eyes at all, they now possess a real human skin texture, a certain transparency has been added to their hair, eyebrows and eyelashes, their clothes have been made less shiny, and they are provided with eyes in different colours.

In addition to the view, an attempt has been made to simulate the sound of the lecture room as well. However, the recorded sound of an empty lecture room was too noisy to contribute to the realism of the lecture room. Also, it would be barely noticeable when someone is talking over this noise. Therefore, the sound of the lecture room has been omitted.

Lastly, in the initial implementation, the player character is limited to a small walking area, due to the large restriction area around the audience characters. This restriction area has been eliminated, so that the player character has the freedom to walk throughout the lecture room. Unfortunately, the walking is limited by the wire of the Oculus Rift. Solutions for this could be using a controller or a treadmill to move around.

2.2.4 Human-like behaviour

For simulating the virtual audience, an obvious choice would be to stick to the behaviour suggested in [17], which indicates how to design an audience for immersive virtual environments. This study introduces two frequency tables, for facial expressions and for gestures. However, for this investigation only gestures are implemented, as the facial expressions have been left neutral because of the negligible influence they have at the relatively low resolution of the Oculus Rift. Furthermore, the extremely low frequencies in the gesture table make it unusable for a virtual audience listening to a relatively short presentation. Therefore, making our own observation of audience behaviour is preferred to using the mentioned study.

In order to stay close to the background of the participants of the experiment, the behaviour of the audience of an undergraduate Computer Science tutorial class has been observed. Five presentations were held, one following the other, each lasting four to eight minutes. The audience consisted of eleven to thirteen individuals. A few recurring gestures that are also implemented in the virtual audience are:

• When the presenter starts talking or is about to start talking, everyone in the audience makes eye contact with the presenter.

• Throughout the presentation, people in the audience start looking around, but most of them keep eye contact with the presenter.

• Throughout the presentation, people change their sitting pose a little, by moving their legs or their back.

• About half of the audience has a slumped sitting posture.

• About half of the audience has their arms crossed.

• Some of the people in the audience may nod during the presentation.

In addition to this, the "breathing" of the people in the audience is simulated by moving their chests periodically. Also, their sitting poses have been made more similar to the observed poses of the real audience and are now different for every individual.

Unreal

Unreal is a mainly blueprint-driven game development program. This makes it highly accessible for non-programmers to develop games. Game designers, for instance, are now able to create their own games without the intervention of a programmer. In Unreal, there is a distinction between Level Blueprints, Class Blueprints, and Animation Blueprints. For the lecture room simulation, the Level Blueprint and Animation Blueprints are used. Every character in the audience has its own Animation Blueprint, which can simply be copied for every character after the skeletal meshes of the characters are retargeted to the standard skeletal mesh in Unreal. The Level Blueprint has, among others, the privilege to retrieve the camera location of the game player and to create timelines, consisting of vectors with transform values for the skeletal bones of the audience. These values are passed on to the corresponding Animation Blueprints. The Animation Blueprints consist of two separate graphs: the Event Graph and the Animation Graph. The Animation Graph solely addresses the bone transforms of the corresponding character, leaving the Event Graph to handle the values for these transforms. The Event Graph is therefore in charge of calculating the value for turning the character's head in order to have it looking at the game player: the camera location obtained from the player, through the Level Blueprint, minus the location of the corresponding character. In this way, the different interfaces work together to eventually simulate the human-like behaviour of the audience.

Table 2.1: Initial implementation versus new implementation

                          Initial     Now
FPS                       80 - 105    117 - 120
Character animation       no          yes
Freedom to walk           no          yes
Realistic humans          no          yes
Realistic lighting        no          yes
Realistic sitting poses   no          yes
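As a concrete illustration of the Event Graph's look-at calculation described above, the following is a minimal Python sketch of the same idea (the thesis implements this with Blueprint nodes, not code; the function name and coordinate layout are hypothetical):

```python
import math

def head_yaw_towards(camera_location, character_location):
    # Direction from the character to the player's camera: the camera
    # location retrieved through the Level Blueprint minus the
    # character's own location, as described above.
    dx = camera_location[0] - character_location[0]
    dy = camera_location[1] - character_location[1]
    # Yaw angle (in degrees) that turns the character's head towards the player.
    return math.degrees(math.atan2(dy, dx))
```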

2.2.5 Performance

In a virtual simulation, a high frame rate is crucial, since a low frame rate produces discomforting judder. The minimum frame rate required for low latency on the Oculus Rift DK2 is 75 frames per second (FPS) [18]. The frame rate of the simulation in question increased significantly with respect to the former investigation, from 80 - 105 to 117 - 120 FPS. This is due to several reasons. Firstly, the number of light sources has been reduced. Secondly, the mobility of the lighting has been changed from movable to static. Both aspects cut back the complexity of the calculated lighting and therefore increase the frame rate. In addition, the shadows on the MakeHuman characters have been eliminated. Furthermore, grouping the static objects and merging them into a single object has sped up the rendering process, since the static objects are now only rendered once. Lastly, it is likely that the updates of the Unreal Engine have also made an impact on the rendering performance.

2.2.6 Overview

In Table 2.1, an overview is shown of the alterations made to the initial implementation, as discussed in this section.

2.3 Measuring anxiety

In order to measure anxiety, two measurements are taken: a subjective measurement, via a personal report questionnaire, and an objective measurement, using an EEG, which detects electrical activity in the brain.

2.3.1 Questionnaire

A questionnaire based on [19] is used to indicate the personally reported anxiety of the subjects; it can be found in Appendix B. The questionnaire consists of 34 yes-or-no questions. The score is calculated as follows:

1. Assign one point for each of the following questions if the answer was yes: 1, 2, 3, 5, 9, 10, 13, 14, 19, 20, 21, 22, 23, 25, 27, 28, 29, 30, 31, 32, 33, and 34

2. Assign one point for each of the following questions if the answer was yes: 4, 6, 7, 8, 11, 12, 15, 16, 17, 18, 24, and 26

3. Score = 22 - step 1 + step 2
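As a minimal sketch of this scoring rule (names are illustrative, not the thesis's code), assuming `answers` maps each question number to True for yes and False for no:

```python
# Questions scored in step 1 (anxious direction) and step 2 (relaxed direction).
ANXIOUS_YES = {1, 2, 3, 5, 9, 10, 13, 14, 19, 20, 21, 22, 23, 25,
               27, 28, 29, 30, 31, 32, 33, 34}
RELAXED_YES = {4, 6, 7, 8, 11, 12, 15, 16, 17, 18, 24, 26}

def anxiety_score(answers):
    step1 = sum(1 for q in ANXIOUS_YES if answers.get(q))
    step2 = sum(1 for q in RELAXED_YES if answers.get(q))
    # Result ranges from 0 (most anxious) to 34 (least anxious), see section 4.1.
    return 22 - step1 + step2
```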


Table 2.2: Emotiv EPOC specifications [20]

EEG channels         14 (AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, AF4)
Reference channels   2 (CMS/DRL in the P3/P4 locations)
Sample rate          128 samples per second (SPS)
Resolution           14 bits
Frequency response   0.16 - 43 Hz

2.3.2 Electroencephalogram (EEG)

The Emotiv EPOC headset [20] is an affordable EEG which has been used in applications since its release in 2009. Today, it is used for controlling wheelchairs for physically disabled people, driving cars, and even navigating through games in combination with the Oculus Rift, among other things. Table 2.2 contains the specifications of the Emotiv EPOC.

For setting up the Emotiv and connecting the electrodes to the right place on the scalp, the EPOC Control Panel is consulted. The TestBench is used for recording the raw signals from the electrodes.

Hardware

• Emotiv EPOC headset [20]

Software

• EPOC Control Panel 2.0.0.21 [20]
• TestBench v1.5.1.2 [20]

2.4 Detecting emotion from EEG signals

For the past couple of years, the study of emotions in human-computer interaction has increased significantly, along with the capability of discovering patterns that relate EEG signals to emotional states [21]. The methods explained in this section are based on a study [21] which extracts features from the Emotiv signals in order to characterise states of mind in an arousal-valence 2D emotion model (see Figure 2.1). However, there are a few steps to follow before calculating the arousal and valence. These steps are explained in the following subsections.


2.4.1 Filtering frequencies

As mentioned by [21], alpha (8 - 12 Hz) and beta (12 - 30 Hz) waves are particularly interesting for researching both valence and arousal (see Figure D.6), because beta waves are associated with an alert or excited state of mind, whereas alpha waves are more dominant in a relaxed state. A fortunate byproduct of extracting solely these frequencies is the avoidance of various kinds of artefacts, e.g. eye movement/blinking, muscle activity and power lines. For filtering these alpha and beta frequency bands, a band-pass filter is used (a Butterworth band-pass filter), from the SciPy Cookbook [22]. A low-pass and a high-pass filter are computed, after which these filters are applied to the signal, in order to pass frequencies within the specified range and reject frequencies outside that range. In Figure 2.2, an illustration of this filter is shown using block diagrams. In order to reduce the error inherent in any filter, two methods were applied. Firstly, the mean was subtracted from the signal before filtering. Secondly, a Hanning window function was used; however, no improvement or even difference in the filtered signal was noticeable, so this second method has been omitted.

Figure 2.2: Band-pass filter [23]

Figure 2.3: Alpha frequency (a) and Beta frequency (b)
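A minimal band-pass sketch in the spirit of the SciPy Cookbook recipe the thesis cites [22]; the function names and the filter order are assumptions, not the thesis's exact code:

```python
import numpy as np
from scipy.signal import butter, lfilter

def butter_bandpass(lowcut, highcut, fs, order=4):
    # Normalise the cut-off frequencies by the Nyquist frequency.
    nyq = 0.5 * fs
    return butter(order, [lowcut / nyq, highcut / nyq], btype="band")

def bandpass_filter(signal, lowcut, highcut, fs=128.0, order=4):
    # Subtract the mean before filtering, as described above.
    signal = np.asarray(signal, dtype=float) - np.mean(signal)
    b, a = butter_bandpass(lowcut, highcut, fs, order)
    return lfilter(b, a, signal)

# Alpha (8 - 12 Hz) and beta (12 - 30 Hz) bands at the EPOC's 128 SPS:
# alpha = bandpass_filter(raw, 8.0, 12.0)
# beta = bandpass_filter(raw, 12.0, 30.0)
```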

2.4.2 Filtering the noise

An essential step in evaluating the waves is to filter out obvious noise. A way to do this is by filtering out any voltages above a certain threshold; values above this threshold are not caused by human brain activity. To determine this threshold, the recorded waves are inspected to detect above which value the wave produces accidental spikes. For this investigation, a threshold of 50 µV is chosen. The accidental spikes are subsequently set to zero (i.e. the mean of the wave), as the rate of spikes is relatively low (on the order of 10⁻⁴) in comparison with the sample rate of the EEG (128 SPS).
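A hedged sketch of this spike rejection (illustrative names; it assumes the signal has already been mean-centred, so zero coincides with the mean):

```python
import numpy as np

def remove_spikes(signal, threshold_uv=50.0):
    # Samples whose magnitude exceeds 50 µV are treated as accidental
    # spikes and set to zero (the mean of the centred wave).
    cleaned = np.asarray(signal, dtype=float).copy()
    cleaned[np.abs(cleaned) > threshold_uv] = 0.0
    return cleaned
```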

2.4.3 Calculating the power

In order to compare the obtained alpha and beta waves, the amplitudes of the waves need to be evaluated. This is done by computing the power of these waves, i.e. the sum of all amplitudes in a time interval (see Figure 2.4). Since the power is calculated over a whole time interval, the desired time interval has to be divided into multiple same-size intervals in order to obtain a power-over-time function. In this investigation, the size of these intervals is equal to ten seconds, enough to properly measure the functioning of the brain. Next, the arousal and the valence are calculated.

Figure 2.4: Calculating power of signal given in volt
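An illustrative sketch of the power-over-time computation (the thesis describes the power as a sum of amplitudes per interval; summing the squared samples, as is common for signal power, is an assumption here):

```python
import numpy as np

def power_over_time(wave, fs=128, window_s=10):
    # Split the signal into consecutive ten-second windows and compute
    # one power value per window.
    win = fs * window_s
    n_windows = len(wave) // win
    return np.array([np.sum(np.asarray(wave[i * win:(i + 1) * win]) ** 2)
                     for i in range(n_windows)])
```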

2.4.4 Calculating the arousal

According to the study [21], the level of arousal, i.e. how excited a person is, is determined by computing the ratio of the beta and alpha brain waves. The alpha and beta brain waves are measured at four locations in the prefrontal cortex: AF3, AF4, F3 and F4 (see Figure 2.5). Because beta waves are associated with an alert or excited state of mind and alpha waves with a more relaxed state, the beta/alpha ratio can indicate the arousal of a person (equation 2.1).

$$\mathrm{Arousal} = \frac{\beta_{AF3} + \beta_{AF4} + \beta_{F3} + \beta_{F4}}{\alpha_{AF3} + \alpha_{AF4} + \alpha_{F3} + \alpha_{F4}} \tag{2.1}$$

Figure 2.5: Electrodes of the Emotiv EPOC used for emotion detection: AF3, AF4, F3, and F4 [24] (a) and the prefrontal cortex [25] (b)
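Equation (2.1) translates directly into code; a minimal sketch, assuming `beta` and `alpha` are dictionaries mapping channel names to band power:

```python
def arousal(beta, alpha):
    # Beta/alpha power ratio over the four prefrontal channels of eq. (2.1).
    chans = ("AF3", "AF4", "F3", "F4")
    return sum(beta[c] for c in chans) / sum(alpha[c] for c in chans)
```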

2.4.5 Calculating the valence

The valence can be obtained using equation 2.2. The idea behind calculating the valence is rather complex and therefore not explained in this study; the explanation can be found in [21].

$$\mathrm{Valence} = \frac{\alpha_{F4}}{\beta_{F4}} - \frac{\alpha_{F3}}{\beta_{F3}} \tag{2.2}$$
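And a matching sketch for equation (2.2), under the same assumed data layout:

```python
def valence(alpha, beta):
    # Hemispheric asymmetry between channels F4 and F3, per eq. (2.2).
    return alpha["F4"] / beta["F4"] - alpha["F3"] / beta["F3"]
```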


2.4.6 Paired t-test

In order to determine whether the difference between the three cases (intake, lecture room and Oculus Rift) per person is statistically significant, a paired t-test is applied to the data sets. This t-test compares two dependent data sets, e.g. the data obtained in the lecture room and with the Oculus Rift from the same participant. In order to be able to use this test, the pairwise difference needs to be calculated, resulting in the sample $X_D$, which is assumed to have a normal distribution [26]. The t-value is obtained using equation 2.3 and has a Student's t-distribution [27]. In this equation, $\bar{X}_D$ is the mean of the sample $X_D$, $\mu_0$ is the hypothesised mean (null hypothesis), $s_D$ is the standard deviation of the sample $X_D$ and $n$ is the sample size.

$$t = \frac{\bar{X}_D - \mu_0}{s_D / \sqrt{n}} \tag{2.3}$$

The null hypothesis of this test is that the mean of the difference is zero ($H_0: \mu_0 = 0$).

Once the t-value is determined, the p-value can be found in the table of the Student's t-distribution. A threshold is then applied to this p-value, which determines whether the differences are statistically significant. The threshold is set to p < 0.05. Python is used to calculate the t-value and p-value, using the scipy.stats.ttest_rel function. This is done six times for each participant, comparing the data sets for both arousal and valence to each other (lecture room compared to Oculus Rift, Oculus Rift to intake, and lecture room to intake).
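A minimal sketch of this per-participant test (variable names are illustrative); scipy.stats.ttest_rel implements the paired t-test of equation (2.3):

```python
from scipy.stats import ttest_rel

def significant_difference(series_a, series_b, threshold=0.05):
    # E.g. arousal over time in the lecture room vs. with the Oculus Rift
    # for one participant; the two series must be the same length (paired).
    t, p = ttest_rel(series_a, series_b)
    return t, p, p < threshold
```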


CHAPTER 3

Experiments

3.1 Participants

Participants (N = 16) are college students from the University of Amsterdam, consisting of two females and fourteen males. A histogram with their ages is shown in Figure 3.1. The majority of the participants are undergraduate Computer Science students at the University of Amsterdam.

Figure 3.1: Age of participants

3.2 Audience

The audience consists of five members who are spread throughout the lecture room at fixed positions, as is the virtual audience. This is based on the earlier study [16]. The idea behind this is to form a neutral audience, without groups of friends. The mentioned study bases the size of the audience on an earlier study on virtual glossophobia therapy [28]. The participants in the audience are linked to a particular character in the simulation, based on their gender and hair colour. An exception is made for one member of the audience, who is female but is represented by a male character in the simulation. The audience is instructed to wear clothes similar to the clothes of the characters in the simulation (in any case, a top of the same colour).


3.3 Procedure

The procedure of the experiments is based on the earlier study examining the same research question [16]. This study proposes three situations: intake, lecture room and virtual lecture room. Participants are instructed to talk about themselves for two minutes in each of these three situations. The subject of the speech is based on the preceding study [16]; the idea behind it is that the story is very well known to the subject and therefore does not evoke fear or tension in the participant about forgetting their lines. All participants start with the intake presentation. Then, they are randomly assigned (using a random generator website [29]) to either give their presentation first in the lecture room followed by the virtual lecture room, or vice versa. This is done in order to reduce the error caused by the order of the presentations [16], i.e. to counteract habituation. During their presentation, participants are filmed and their brain activity is recorded by an EEG headset. Brain activity is influenced by many uncontrolled factors such as food, hours of sleep and the time of day. To minimise this error, the three presentations are all completed one after the other. Also, the physical influence caused by travelling between the different locations is minimised by taking the elevator. Furthermore, the participants are asked to stand still during their presentation, with their arms behind their back, in order to minimise the noise that movement causes on the EEG.

The audience is instructed to copy the behaviour of their virtual character. In addition, they are asked not to react to what the speaker is telling them. They are also not allowed to ask questions or to respond to questions from the speaker.


CHAPTER 4

Results and discussion

4.1 Personal report of anxiety

In Figure 4.1, the results of the questionnaire described in section 2.3.1 are displayed. The score is linear and ranges from 0, for individuals suffering most from the anxiety disorder, to 34, for individuals not suffering from the anxiety disorder.

In order to distinguish between individuals who suffer from glossophobia and individuals who do not, two groups have been formed based on the personal report of anxiety questionnaire: the Anxious group and the Nonanxious group. Each group consists of six individuals, representing the participants with the most and the least anxiety, respectively.

Figure 4.1: Scores of the personal report of anxiety questionnaire

4.2 Detected emotion

As explained in the methodology chapter, the arousal and valence are computed for each participant in all situations. The paired t-test is used to determine, per participant, whether the difference between two situations, for either arousal or valence, is statistically significant. Only the situations with a p-value < 0.05 are considered in the results. This means that the less significant the difference shown by the individuals (N = 16) in either arousal or valence, the more identical their experience in the different situations. The results for both arousal and valence are shown in the following subsections. The arousal and valence over time for the individual participants are displayed in Appendix D.

4.2.1 Arousal

For 10 individuals, there is a statistically significant difference in arousal between Lecture room and Oculus:

• 6 experience more arousal in Oculus:
  – 0 anxious
  – 3 nonanxious
  – 3 neutral
• 4 experience more arousal in Lecture room:
  – 2 anxious
  – 2 nonanxious
  – 0 neutral

For 9 individuals, there is a statistically significant difference in arousal between Oculus and Intake:

• 5 experience more arousal in Oculus:
  – 0 anxious
  – 3 nonanxious
  – 2 neutral
• 4 experience more arousal in Intake:
  – 1 anxious
  – 2 nonanxious
  – 1 neutral

For 6 individuals, there is a statistically significant difference in arousal between Lecture room and Intake:

• 4 experience more arousal in Lecture room:
  – 0 anxious
  – 2 nonanxious
  – 2 neutral
• 2 experience more arousal in Intake:
  – 0 anxious
  – 1 nonanxious
  – 1 neutral

The number of times each case is preferred over one of the two other cases can be compared. Table 4.1 shows which case caused the most arousal, ordered from the highest to the lowest number of times, for all participants, the Anxious group and the Nonanxious group.

Table 4.1: Top 3 of the cases that caused the most arousal

     All               Anxious Group      Nonanxious Group
#1   Oculus (11)       Lecture room (2)   Oculus (6)
#2   Lecture room (8)  Intake (1)         Lecture room (4)
#3   Intake (6)        Oculus (0)         Intake (3)


4.2.2 Valence

A total of 7 individuals prefer Intake above Oculus:

• 3 anxious
• 3 nonanxious
• 1 neutral

A total of 5 individuals prefer Lecture room above Oculus:

• 1 anxious
• 3 nonanxious
• 1 neutral

A total of 3 individuals prefer Intake above Lecture room:

• 2 anxious
• 0 nonanxious
• 1 neutral

Again, the number of times each case is preferred over one of the two other cases can be compared. Table 4.2 shows which case caused the most positive valence, ordered from the highest to the lowest number of times, for all participants, the Anxious group and the Nonanxious group. Table 4.3 shows the same for the most negative valence.

Table 4.2: Top 3 of the cases that caused the most positive valence

     All               Anxious Group      Nonanxious Group
#1   Intake (9)        Intake (5)         Intake / Lecture room (3)
#2   Lecture room (4)  Lecture room (1)   Oculus (0)
#3   Oculus (0)        Oculus (0)         -

Table 4.3: Top 3 of the cases that caused the most negative valence

     All               Anxious Group      Nonanxious Group
#1   Oculus (12)       Oculus (4)         Oculus (6)
#2   Lecture room (5)  Lecture room (2)   Lecture room / Intake (1)
#3   Intake (1)        Intake (0)         -

4.3 Discussion

First of all, the number of individuals who experience a statistically significant difference in arousal between the Lecture room and the Oculus is remarkable: ten out of sixteen. This indicates that there is indeed a difference in experience between the two situations. Most individuals clearly experience more arousal during their presentation with the Oculus, followed by the Lecture room and the Intake. When zooming in on the anxious individuals, the differences between the distinct situations appear statistically insignificant. This implies that the Anxious group experiences little difference in arousal between the three situations. The Nonanxious group, however, follows the trend of the majority of the individuals, with the Oculus evoking the most arousal.

In addition, the statistically significant differences in valence are fewer than those in arousal, meaning that the positive or negative experiences of the participants are closely related across the three situations. It is noticeable that, of the individuals who showed a statistically significant difference in valence between the three situations, no one had a positive experience while presenting in VR. Most individuals preferred the Intake situation over the Lecture room, with the Anxious group preferring the Intake situation proportionally more than the Nonanxious group. The Nonanxious group had no particular preference between the Lecture room and the Intake. When comparing the negative valence between the different situations, the Oculus is experienced as most negative, followed by the Lecture room and the Intake. There is a slight difference between the Anxious group and the Nonanxious group: the Nonanxious group has a more negative experience with the Oculus than the Anxious group. Overall, it must be emphasised that fewer than half of the individuals experienced a significant difference in valence between the situations, with no particular difference in their disposition towards glossophobia.

All in all, to make a broad comparison of the level of anxiety evoked in the three different situations, both the differences in arousal and in valence need to be compared. The Anxious group seems to experience similar levels of arousal in all situations and slightly negative valence in VR. Based on the Emotion plane (Figure 2.1) and the assumption that the Anxious group feels anxious when presenting in the Lecture room situation, this would mean that they experience the same or even slightly more anxiety in VR than in real life. The Nonanxious group, however, experienced more arousal and more negative valence while speaking in VR than in the other situations, with no particular preference regarding those other situations. According to the Emotion plane, this would indicate that they experience more nervousness, anger or annoyance in VR than they do in real life. This also holds for the majority of all individuals. A possible explanation could be that nonanxious individuals, who normally are not anxious while speaking, are more used to interacting with the audience and reading their facial expressions. This could cause them to become a little more nervous, angry or annoyed when speaking in front of an audience they cannot interact with. Anxious individuals, on the other hand, might not be as used to interacting with the public, so they would experience similar levels of anxiety.


CHAPTER 5

Conclusions and future work

The purpose of this study was to fill the gap between real-life anxiety therapy and virtual anxiety therapy. This was done by investigating whether the same level of anxiety was evoked in a simulated environment as in a real environment. The test case for this investigation was glossophobia, the fear of public speaking. Therefore, the research question was: does a virtual environment evoke the same level of anxiety, while speaking for an audience, as a real environment? To answer this research question, participants were asked to give a short presentation both in a lecture room and in VR. The baseline was provided by a one-to-one presentation. The results indicate that for anxious individuals, the evoked level of anxiety in VR is similar to or slightly higher than in real life. Nonanxious individuals, and the majority of the individuals overall, however, experience more nervousness, anger or annoyance when presenting in VR in comparison with both the presentation in the lecture room and the intake presentation, with no particular preference regarding the last two situations. This could be explained by the assumption that nonanxious individuals are more used to interacting with the audience and reading their facial expressions. While speaking in front of a virtual audience, they can no longer interact with their audience, which could cause them to become a little more nervous, angry or annoyed. Anxious individuals, on the other hand, might not be as used to interacting with the public, so they would experience similar levels of anxiety.

Suggested future work would be to take a step back and investigate whether a recorded lecture room, passing for the virtual lecture room, would also cause the individuals who do not suffer from glossophobia to become a little more nervous, angry or annoyed. This would probably indicate why there is a difference in experience, in particular whether it is related to the inability of the speaker to read the facial expressions of the audience. In order to make this investigation possible, the resolution of the VR head-mounted display would have to be sufficient to read facial expressions. Furthermore, test cases of anxiety other than glossophobia could be investigated to get a more profound view of the difference in anxiety evoked in VR and in real life.


CHAPTER 6

Acknowledgements

First of all, I would like to thank my supervisor Robert Belleman for guiding me through the whole process of doing my graduation project and for providing assistance whenever I needed it. I am grateful for the liberty he gave me to tackle the research question in my own way and for never questioning my abilities to do so.

I would also like to show my gratitude to Sennay Ghebreab for his expertise on measuring brain activity and for the equipment he allowed me to borrow for my study.

Last but not least, I would like to thank Jonas van Nijnatten for providing his knowledge on computing results from the EEG signals.


APPENDIX A

Images


Figure A.2: Screenshot of the initial simulation


Figure A.4: Screenshot of the new simulation


APPENDIX B

Personal report of anxiety questionnaire

Questionnaire

Participant:
Age:
Study programme:

DIRECTIONS: this instrument is composed of 34 statements concerning feelings about communicating with other people. Answer each statement with yes or no. Work quickly; record your first impression.

1. While preparing for giving a speech, I feel tense and nervous.
2. I feel tense when I see the words speech and public speech on a course outline when studying.
3. My thoughts become confused and jumbled when I am giving a speech.
4. Right after giving a speech I feel that I have had a pleasant experience.
5. I get anxious when I think about a speech coming up.
6. I have no fear of giving a speech.
7. Although I am nervous just before starting a speech, I soon settle down after starting and feel calm and comfortable.
8. I look forward to giving a speech.
9. When the instructor announces a speaking assignment in class, I can feel myself getting tense.
10. My hands tremble when I am giving a speech.
11. I feel relaxed while giving a speech.
12. I enjoy preparing a speech.
13. I am in constant fear of forgetting what I prepared to say.
14. I get anxious if someone asks me something about my topic that I do not know.
15. I face the prospect of giving a speech with confidence.
17. My mind is clear when giving a speech.
18. I am not afraid of giving a speech.
19. I sweat just before starting a speech.
20. My heart beats very fast just as I start a speech.
21. I experience considerable anxiety while sitting in the room just before my speech starts.
22. Certain parts of my body feel very tense and rigid while giving a speech.
23. Realising that only a little time remains in a speech makes me very tense and anxious.
24. While giving a speech I know I can control my feelings of tension and stress.
25. I breathe faster just before starting a speech.
26. I feel comfortable and relaxed in the hour or so just before giving a speech.
27. I do poorer on speeches because I am anxious.
28. I feel anxious when the teacher announces the date of a speaking assignment.
29. When I make a mistake while giving a speech, I find it hard to concentrate on the parts that follow.
30. During an important speech I experience a feeling of helplessness building up inside me.
31. I have trouble falling asleep the night before a speech.
32. My heart beats very fast while I present a speech.
33. I feel anxious while waiting to give my speech.


APPENDIX C

Experiment script

C.1 Receive the audience

Location: C1.110

Time: ten minutes before participant enters

1. Tell the audience: "It takes two minutes per test person. During this time:
   • Cell phone completely off
   • No laptops
   • Try to sit still
   • Copy behaviour from the simulation
   • Do not ask any questions or talk at all"

2. Position the audience (seen from the left):
   • Row 1, seat 7
   • Row 1, seat 1 from the right portion of the lecture room
   • Row 2, seat 3
   • Row 3, seat 12
   • Row 4, seat 9

C.2 Intake

Location: C3.154A

1. Tell participant: "For this experiment, a headset measuring your brain activity will be used. You will also be recorded, to be able to explain possible peaks in the recording afterwards. The material will not be used for other purposes. Would you mind?"

2. Extract headset from charger

3. Position headset on head of participant, using the Control Panel
4. Turn on headset


5. Instruct participant: "I'm going to ask you to tell me about yourself in two minutes: who you are, where you come from, your family, hobbies, your study. Later on, I am going to ask you to do this again. Try to tell the same story again, but do not worry: it does not have to be exactly the same. I will tell you when your two minutes are up. Just keep on talking until I indicate that you can stop. In order to reduce the noise, you are asked to keep a standing position with your arms behind your back and to stand as still as possible. Do you have any questions before we start? No? Then we will start: tell me about yourself in two minutes."

6. Start recording camera
7. Start recording TestBench
8. Participant gives his speech
9. Stop recording TestBench
10. Stop recording camera

C.3 Lecture room

Location: C1.110

1. Check the Control Panel for all electrodes to be still connected

2. Instruct participant: "This is another situation where you have to talk about yourself for two minutes. Try to hold the same position as earlier and stand still. Do not be offended if the audience does not respond to anything you say; they are instructed not to. Again, I will indicate when your time is up."

3. Start recording camera
4. Start recording TestBench
5. Participant gives his speech
6. Stop recording TestBench
7. Stop recording camera

C.4 Game room

Location: C3.154A

1. Position Oculus Rift on top of headset

2. Check the Control Panel for all electrodes to be still connected

3. Instruct participant: "This is another situation where you have to talk about yourself for two minutes. Try to hold the same position as earlier and stand still. Do not be offended if the audience does not respond to anything you say; they are instructed not to. Again, I will indicate when your time is up."

4. Start recording camera
5. Start recording TestBench
6. Participant gives his speech
7. Stop recording TestBench
8. Stop recording camera
9. Take off Oculus Rift


C.5 Final steps

1. Turn off headset
2. Take off headset
3. Charge headset

4. Instruct participant to fill in the Personal Report of Confidence as a Public Speaker questionnaire


APPENDIX D

Graphs

Figure D.1: Arousal graphs per participant. The scores are determined by the Personal report of anxiety questionnaire in Appendix B.


Figure D.2: Arousal graphs per participant. The scores are determined by the Personal report of anxiety questionnaire in Appendix B.


Figure D.3: Arousal graphs per participant. The scores are determined by the Personal report of anxiety questionnaire in Appendix B.


Figure D.4: Valence graphs per participant. The scores are determined by the Personal report of anxiety questionnaire in Appendix B.


Figure D.5: Valence graphs per participant. The scores are determined by the Personal report of anxiety questionnaire in Appendix B.


Figure D.6: Valence graphs per participant. The scores are determined by the Personal report of anxiety questionnaire in Appendix B.


Glossary

EEG   electroencephalogram
FPS   frames per second
PTSD  Post Traumatic Stress Disorder
SPS   samples per second
VR    Virtual Reality


Bibliography

[1] Colin MacLeod and Avonia Mary Donnellan. Individual differences in anxiety and the restriction of working memory capacity. Personality and Individual Differences, 15(2):163–173, 1993.

[2] Robert L DuPont, Dorothy P Rice, Leonard S Miller, Sarah S Shiraki, Clayton R Rowland, and Henrick J Harwood. Economic costs of anxiety disorders. Anxiety, 2(4):167–172, 1996.

[3] Merel Krijn, Paul MG Emmelkamp, Ragnar P Olafsson, and Roeline Biemond. Virtual reality exposure therapy of anxiety disorders: A review. Clinical Psychology Review, 24(3):259–281, 2004.

[4] David Carbonell, Ph.D. Fear of public speaking: the fear that stalls careers. http://www.anxietycoach.com/fear-of-public-speaking.html, [Online; accessed May 3, 2015].

[5] Scott R Vrana and Daniel Gross. Reactions to facial expressions: effects of social context and speech anxiety on responses to neutral, anger, and joy expressions. Biological Psychology, 66(1):63–78, 2004.

[6] Page L Anderson, Elana Zimand, Larry F Hodges, and Barbara O Rothbaum. Cognitive behavioral therapy for public-speaking anxiety using virtual reality for exposure. Depression and Anxiety, 22(3):156–158, 2005.

[7] Dwi Hartanto, Isabel L Kampmann, Nexhmedin Morina, Paul GM Emmelkamp, Mark A Neerincx, and Willem-Paul Brinkman. Controlling social stress in virtual reality environments. PLoS ONE, 9(3):e92804, 2014.

[8] Nexhmedin Morina, Willem-Paul Brinkman, Dwi Hartanto, and Paul MG Emmelkamp. Sense of presence and anxiety during virtual social interactions between a human and virtual humans. PeerJ, 2:e337, 2014.

[9] David-Paul Pertaub, Mel Slater, and Chris Barker. An experiment on public speaking anxiety in response to three different types of virtual audience. Presence: Teleoperators and Virtual Environments, 11(1):68–78, 2002.

[10] Mel Slater, D-P Pertaub, and Anthony Steed. Public speaking in virtual reality: Facing an audience of avatars. Computer Graphics and Applications, IEEE, 19(2):6–9, 1999.

[11] David Watson and Ronald Friend. Measurement of social-evaluative anxiety. Journal of Consulting and Clinical Psychology, 33(4):448, 1969.

[12] Unreal Engine. Recommended hardware. https://wiki.unrealengine.com/Recommended_Hardware, [Online; accessed June 16, 2015].

[13] Oculus Rift. https://www.oculus.com/en-us/, [Online; accessed June 16, 2015].

[14] Unreal Engine. https://www.unrealengine.com, [Online; accessed June 16, 2015].

[15] MakeHuman. http://www.makehuman.org, [Online; accessed June 16, 2015].

[16] Ilse Heussen, Nynke Lankhorst, and Jesse Swart. Eindverslag: Wekt een Virtual Reality omgeving dezelfde angstsymptomen op als tijdens het spreken in een echte omgeving? - Anxiety in Virtual Reality project. 2014.

[17] Sandra Poeschl and Nicola Doering. Virtual training for fear of public speaking – design of an audience for immersive virtual environments. 2012.

[18] A Chalk and B Fisher. Normal mapping solutions for Oculus Rift development.

[19] James C McCroskey, JA Daly, and JC McCroskey. Self-report measurement. Avoiding communication: Shyness, reticence, and communication apprehension, pages 81–94, 1984.

[20] Emotiv. Emotiv EPOC. https://emotiv.com/epoc.php, [Online; accessed June 14, 2015].

[21] Rafael Ramirez and Zacharias Vamvakousis. Detecting emotion from EEG signals using the Emotiv EPOC device. In Brain Informatics, pages 175–184. Springer, 2012.

[22] SciPy. Cookbook / ButterworthBandpass. http://wiki.scipy.org/Cookbook/ButterworthBandpass, [Online; accessed June 2, 2015].

[23] All About Circuits. Band-pass Filters. http://www.allaboutcircuits.com/textbook/alternating-current/chpt-8/band-pass-filters/, [Online; accessed June 14, 2015].

[24] GitHub. Emotiv EPOC sensor locations, 2013. Image. https://camo.githubusercontent.com/1e4518dc36c83ce0199f139edb2eb9382df9aa8b/687474703a2f2f6e6575726f666565646261636b2e7669736164756d612e696e666f2f696d616765732f6669675f656d6f746976706f736974696f6e732e676966, [Online; accessed June 13, 2015].

[25] The Applied Neuroscience Institute. Frontal cortex. Image. http://www.appliedneuroscienceinstitute.com/images/uploads/Left%20Frontal%20Pic.jpg, [Online; accessed June 13, 2015].

[26] Robert F. Woolson and William R. Clarke. Statistical Methods for the Analysis of Biomedical Data. John Wiley & Sons, Inc., 2002.

[27] R. Lyman Ott and Micheal Longnecker. An Introduction to Statistical Methods and Data Analysis. Brooks/Cole, 2010.

[28] Mel Slater, David-Paul Pertaub, Chris Barker, and David M Clark. An experimental study on fear of public speaking using a virtual environment. CyberPsychology & Behavior, 9(5):627–633, 2006.

[29] Random number generator. https://www.random.org, [Online; accessed May 26, 2015].

[30] IIS UvA. Photo of lecture room C1.110. Image. http://iis.uva.nl/binaries/twocolumnlandscape/content/gallery/onderwijs/iis/divers/dsc02611.jpg?138114184836, [Online; accessed June 11, 2015].
