
Virtual Reality: Evaluating the Effects of Navigation

Restrictions on the Experience

SUBMITTED IN PARTIAL FULFILLMENT FOR THE DEGREE OF MASTER OF SCIENCE

RASHID MOHAMED

10819770

MASTER

INFORMATION

STUDIES

HUMAN-CENTERED MULTIMEDIA

FACULTY OF SCIENCE

UNIVERSITY OF AMSTERDAM

July 4, 2016

Supervisor / First examiner

Second examiner

Dr. Frank Nack

Dr. Davide Ceolin

University of Amsterdam

VU University


Virtual Reality: Evaluating the Effects of Navigation

Restrictions on the Experience

Rashid Mohamed

University of Amsterdam Graduate School of Informatics

Science Park 904, Amsterdam

rashid.mohamed@student.uva.nl

ABSTRACT

This paper evaluates the effect of restricting navigational freedom on the overall experience in a fully immersive virtual environment rendered by a head-mounted display. Navigational freedom is restricted to various degrees: (1) stationary, with the head tracking of the HMD disabled; (2) transported through the environment based on input; and (3) complete navigational freedom. Based on the concept of an existing search and select-based system, three virtual environments were developed and evaluated. In a within-subjects design, 27 participants took part in the experiment to measure how navigational freedom affects the overall experience. The findings indicate that in a search and select-based environment that requires only simple interaction between user and system, providing a moderate amount of freedom is sufficient for a satisfactory experience.

Categories and Subject Descriptors

I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism — Virtual Reality; I.3.6 [Computer Graphics]: Methodology and Techniques — Interaction techniques; H.5.2 [Information Interfaces]: User Interfaces — Evaluation/Methodology.

General Terms

Design, Experimentation, Human Factors

Keywords

Virtual Reality, Navigation, Presence, Immersion, Virtual Environments, Locomotion

1. INTRODUCTION

Fox, Arena, and Bailenson (2009) define virtual reality as, "...a digital space in which a user’s movements are tracked and his or her surroundings rendered, or digitally composed and displayed to the senses, in accordance with those move-ments" (p. 95). By this broad definition, virtual reality can be applied in all sorts of devices: from a 2D or 3D computer game played on a phone or console, in which the user navigates by pressing buttons, to immersive virtual realities in which the user is required to wear, for example, a head-mounted display (HMD) and haptic gloves. Low-immersion virtual environments are widely available and used worldwide, as these environments can run on, for example, (smart)phones and gaming consoles. Virtual environments that fully immerse the user in the digital world require powerful PCs (see, for example, table 2) and can include devices that allow users to control their movements in the virtual environment by walking in the physical space. Head-mounted displays such as the Oculus Rift render the environment and contain an accelerometer to track the head movement of the user. Furthermore, there are virtual environments that are much more complex and require a large space; an example is the HIVE (huge immersive virtual environment). The HIVE allows for realistic movement in a space of 570 m² and is designed for behaviour research in spatial cognition (Waller, Bachmann, Hodgson, & Beall, 2007). Some environments, such as the CAVE (computer assisted virtual environment), use multiple projections of the virtual environment on the surrounding walls and floor of the physical room, gloves for object manipulation, and head-tracking devices to control the user's viewpoint (Sutcliffe, Gault, Fernando, & Tan, 2006).

According to Fox et al. (2009), three topics can be distinguished within research on virtual environments (VE) in the social sciences: VEs as objects of social scientific study, VE applications, and VEs as a method to study social scientific phenomena. The first topic involves studying the different aspects (i.e. form and content) of virtual reality and their effect on the user. Manipulating the emotions of the user and testing physiological effects can be the goal of the virtual environment. The second topic of VE research involves creating virtual reality applications that can be used to treat phobias and disorders (exposure therapy) or, for example, training environments for pilots. The third topic of VE research involves studying real-world phenomena in a virtual environment. It can be unrealistic, time-consuming, or costly to research certain phenomena in the real world; in such cases, the controlled environment of virtual reality is an efficient solution.

To narrow down the broad topic of virtual reality environments, the focus of this paper is:

• VEs that are HMD-based

• VEs that are objects of social scientific study (e.g. evoke emotional responses)

• VEs that are fully immersive

The remainder of this paper is structured as follows: first, a review of the related work is presented, exploring (1) similar systems that fit the aforementioned three criteria and (2) the research conducted on navigation and interaction methods in HMD-based virtual environments. This is followed by the research question and the research methodology. Then the results of the study are presented and discussed. The paper ends with the conclusion and future work.

2. RELATED WORK

2.1 VEs evoking emotional responses

The research of Riva et al. (2007) focused on using virtual reality as a medium to evoke emotional responses from the user. Within that subject, the authors also focused on determining the relationship between presence and emotions. Presence can be defined as how real the user perceives the virtual environment to be; it is an important factor in creating effective virtual reality (Fox et al., 2009). They hypothesized that when the feeling of presence is strong in the virtual environment, the environment is more effective in evoking emotional responses, and vice versa: if the environment is able to evoke emotions, then the feeling of presence is more intense. They created several environments: one environment to evoke anxiety, an environment to evoke the feeling of relaxation, and a neutral environment that was used as a control. Their research concluded the following: virtual reality is a medium that is effective in evoking emotional responses in users. The relaxing and anxiety-inducing parks were successful in inducing the desired emotions. As to whether presence and emotion influence each other, they conclude that this is indeed the case: users felt a higher level of presence in the relaxing and anxiety-inducing parks compared to the neutral park.

The research of Bouchard, St-Jacques, Robillard, and Renaud (2008) corroborates the conclusions of Riva et al. (2007), as they also carried out a study to determine whether anxiety has an influence on the sense of presence. Participants with an aversion to snakes were immersed in three environments: a baseline environment, an anxiety-inducing environment, and a neutral environment. The sense of presence was measured during and after each experience, and the results show that the anxiety-inducing environment provided a more intense sense of presence.

2.2 Navigation in VEs

Howarth and Finch (1999) researched the effect of two different navigation methods on nausea while wearing a head-mounted display in a virtual environment. In one navigation method the user's head was kept still and movement was conducted with a hand-control; the other navigation method provided the user freedom of movement (i.e. moving the head). According to the study, the latter type of movement caused more nausea.

Howarth and Costello (1997) compared wearing an HMD with head-tracking disabled versus viewing a visual display unit (i.e. a monitor). They conclude that the former type of display causes more virtual simulation sickness symptoms for the user.

Cutmore, Hine, Maberly, Langford, and Hawgood (2000) investigate the effects of several factors on how effectively users can learn to navigate (i.e. route and survey knowledge) in a virtual environment. Multiple factors were tested in several experiments: the gender of the user, active and passive navigation, cognitive style, hemispheric activation, and display information. They draw the following conclusions: (1) males are more proficient in gaining route knowledge, (2) with regard to survey knowledge, users perform more adequately if they have better visual-spatial cognition, and (3) measurements of cerebral activity during the experiments indicate that the right hemisphere is stimulated to a greater extent.

Ruddle, Payne, and Jones (1999) conducted a study on the differences between navigation using an HMD and navigation on a desktop display in a virtual environment. Their results indicated that using the HMD was more beneficial due to the head-tracked interface, which provides the ability to move the head and view the surroundings.

Suma, Babu, and Hodges (2007) investigated the differences between several travel techniques in a virtual environment: real walking, moving where looking, and moving where pointing. Their findings conclude that real walking is more efficient when it comes to complex, multi-level 3D environments. They do question, however, whether the real walking technique is worth the extra cost and environment space that it requires.

Navigating an immersive virtual reality environment is possible with the aid of 3D sounds alone (Lokki & Grohn, 2005). However, Lokki and Grohn (2005) state that a combination of auditory and visual cues provides far more efficient performance.

The study of Chittaro and Burigat (2004) researched how effective 3D arrows pointing towards locations of interest are for users, in comparison with 2D arrows and radar-type navigation. They conclude that when the user is walking, the 3D arrows are just as adequate in aiding the user as the 2D arrows. However, the 3D arrows are far more effective when the user is flying through the environment.

2.3 Sense of presence

How immersion and media content impact presence is researched in the study of Baños et al. (2004). They compared several systems, ranging from fully immersive (HMD) to least immersive (PC monitor). Each system was tested with two virtual environments: one designed to induce an emotional state while the other was neutral. Their results conclude that the aforementioned two factors play a crucial role in the sense of presence. With regard to immersion, they found no differences between the systems except that the HMD produced virtual simulation sickness symptoms. With regard to content, their results reveal differences when comparing environments that contain emotional stimuli and neutral environments: the emotion-inducing environments appeared more captivating and realistic to the users. Furthermore, the authors state that in environments that have no affective stimuli, immersion is the main factor that can provide a sense of presence.

Stanney, Mollaghasemi, Reeves, Breaux, and Graeber (2003) describe a multi-criteria system to design and evaluate virtual environments. In their paper, they present an extensive set of guidelines to improve the general comfort, immersive capacities, and sense of presence of VEs. The guidelines cover a broad range of aspects, e.g. navigation, interaction, object manipulation, engagement, and comfort. Bowman, Kruijff, LaViola Jr, and Poupyrev (2001) provide an overview of 3D user interaction tasks and interfaces and present some guidelines for the design of such environments. A few notable principles on the design of 3D user interfaces are that: (1) interaction techniques should be chosen based on what is required for the application; and (2) to limit the degrees of freedom for the user, the user input should be guided by providing constraints. Furthermore, two of the unanswered questions they feel are important to resolve are the following: how various 3D interaction styles affect the user's sense of presence, and how one can determine which interaction technique is most suitable for an application.

An expansive overview of presence measures is presented by Van Baren and IJsselsteijn (2004). In their paper they differentiate between two categories of presence: physical presence (i.e. the feeling of being there) and social presence (i.e. the feeling of being together with someone) (Van Baren & IJsselsteijn, 2004). According to their paper, there are two general approaches to measuring presence: subjective measures and objective corroborative measures. The first approach is predominantly conducted with post-test questionnaires. The second approach is conducted with the utilization of equipment and sensors to measure, for example, the heart rate or galvanic skin responses.

3. RESEARCH QUESTION

A system that adheres to the three criteria presented in the introductory section is the Your Worst Nightmare (YWN) system (van der Kooij, Chen, van der Huizen, & Mohamed, 2016). The YWN-system, currently a proof of concept, is a simple search and select-based system that aims to provide the user with an environment to explore emotions by providing stimuli in the form of images. YWN makes use of the Oculus Rift DK2, a blood volume pulse sensor to measure the heart rate of the user, and an Xbox2 controller for movement. The system is created with Unity 3D3, and the environment consists of three levels (i.e. main area, corridor, and final room) through which the user can navigate. The YWN-system provides the user complete freedom of movement within the VE. However, the overall experience of the system was unsatisfactory; user testing indicated discomfort and lack of enjoyment.

One of the design guidelines of Bowman et al. (2001) suggests limiting the degrees of freedom for the user and guiding user input by providing constraints. Therefore, instead of giving complete freedom to navigate with a controller, the effect of less freedom in the VE should be explored. Furthermore, as stated earlier, nausea can be an issue in immersive virtual environments. The research of Howarth and Finch (1999) concluded that keeping the user's head still during the VE experience causes less nausea than allowing the user freedom of head movement.

2 http://www.xbox.com
3 http://unity3d.com/

Nausea, a virtual reality induced symptom, can be caused by various factors. Sharples, Cobb, Moody, and Wilson (2008) conducted a study on the causes of simulator sickness in virtual environments with multiple VR displays: HMD, desktop, projection screens, and reality theatre. Their findings conclude that of all the VR displays tested, the HMD is the main VR display that induced the symptoms. The causing factors could also lie in the design of the VE and how much control the user has within the VE (Sharples et al., 2008). Bailenson and Yee (2006) conducted a longitudinal study in which one of the researched variables was simulator sickness. They came to the following conclusion: the amount of simulator sickness users experience declines when they experience the VE for longer periods of time and are familiarized with the equipment. Lastly, Stanney, Hale, Nahmens, and Kennedy (2003) conclude that simulator symptoms can be reduced by limiting both the exposure duration to the VE and the navigational control of the user. Based on these findings, this paper focuses on the effects of different degrees of navigational freedom in a VE that is HMD-based. To conclude, the research question is the following: What is the effect of navigational freedom on the overall experience in a search and select-based virtual environment that requires simple interaction between user and system?

4. RESEARCH METHODOLOGY

4.1 Environment

In order to answer the research question, three HMD-based virtual reality environments were created. To ease this process, the concept of the YWN-system was used as a blueprint for the design and development of the environments. The YWN-system is essentially an environment through which the user can navigate via joystick and select images to progress to the next stage.

The following three environments, with varying degrees of navigational freedom, were created to test the research question:

1. Environment 1 (EV1): VE with zero freedom. The user is immersed in an environment with no navigational freedom and no freedom of head movement (i.e. the head tracking is disabled).

2. Environment 2 (EV2): VE with a moderate amount of freedom. The user is immersed in an environment that transports the user based on input. The user has freedom of head movement.

3. Environment 3 (EV3): VE with complete freedom. The user is immersed in an environment that allows complete freedom of navigation and freedom of head movement.

4.2 Sample and Measurements

Through convenience sampling (Bryman, 2015), 27 people between the ages of 20 and 35 were recruited to participate in the study. There were 18 males and 9 females (M = 25.52; SD = 4.004). The participants were divided into three groups of 9 people, and each group experienced the system in a different order to prevent bias. Group A first experienced environment 3, then environment 1, and finally environment 2. Group B first experienced environment 2, then environment 3, and finally environment 1. Group C first experienced environment 1, then environment 3, and finally environment 2. All three environments are consistent in the number of rounds (i.e. the number of images to select) and in aspects such as lighting, texture, speed of movement, and overall atmosphere. The number of rounds was set to 6. After experiencing each environment, the participants were asked to fill in a questionnaire to measure the experience. The questionnaire consists of four sections: (1) personal information, (2) the rating questions, (3) the presence questions, and (4) the environment and task-specific questions.
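The group-based ordering described above is a counterbalancing scheme. As a minimal sketch, the environment orders per group follow the text, while the rotation-based assignment function is a hypothetical illustration, not the recruitment procedure actually used:

```python
# Counterbalanced presentation orders per group, as described in the text.
ORDERS = {
    "A": ("EV3", "EV1", "EV2"),
    "B": ("EV2", "EV3", "EV1"),
    "C": ("EV1", "EV3", "EV2"),
}

def assign_group(participant_index: int) -> str:
    """Hypothetical assignment: rotate through A, B, C (9 per group for n = 27)."""
    return "ABC"[participant_index % 3]

# Each group of 9 experiences the environments in a different order,
# mitigating carry-over effects in the within-subjects design.
schedule = {i: ORDERS[assign_group(i)] for i in range(27)}
```

Rotating the environment order across groups means every environment appears in every serial position, so order effects average out across the sample.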

The personal information section allows for the registration of group, gender, and age. The variable group is included to measure whether the within-subjects design of the research has led to any disadvantages (i.e. a carry-over effect). The rating questions ask the users for their opinions on the overall experience, the visual quality, and the audio that plays in the background of the environments. This section, and more specifically the overall experience question, is included for the purpose of creating a ranking of the environments.

The presence questions are the most important part of the questionnaire, as presence is a crucial factor in creating effective virtual reality (Fox et al., 2009). The presence questions are largely adopted from the presence questionnaire of Dinh, Walker, Hodges, Song, and Kobayashi (1999).

The environment and task-specific questions are included to gain a better understanding of the user’s opinion towards each environment. These questions are a valuable addition in helping understand the reason behind the results of the rating and presence sections. The items of the questionnaire are included as appendix B.

4.3 Environment description

4.3.1 Environments

The three environments are modelled after the concept of the YWN-system (van der Kooij et al., 2016). The environments are nearly identical in shape, size, materials, textures, lighting, and visual quality. However, they differ in how the user navigates and progresses through the environment. In EV1, the user is completely stationary and the head tracking is disabled; in EV2, the user is transported based on input; and EV3 provides complete navigational freedom. The following three areas are present in all of the environments: the main area, the exit room, and the final room. Screenshots of the environments are included as appendix A.

The main area, a round-shaped room, is the starting location of the user. In this area, for EV2 and EV3, the user can see three adjacent rooms that contain the images and a fourth room that serves as an exit room. The user selects an image by entering one of the rooms. In the second environment, the user is transported to a room and through the image based on input. The movements are constrained in this environment and follow a pre-defined path. In the third environment, the user has complete freedom of navigation and can explore without restriction. An image is chosen by going through it, and once this happens, the user finds themselves in the second round. A pop-up text, displayed for five seconds at the start of a round, notifies the user of how many rounds are remaining. This process of selecting images lasts for six rounds, after which the system places the user in the final room.

Figure 1: Top view of EV3

The main area of EV1 is slightly different due to the movement restrictions on the user. Since the user is unable to move, the method of selecting an image is different. Therefore, instead of adjacent rooms that contain the images, there is one room from which an image will approach until it is nearby. The user has the option to either select that image or cycle through to view the other images. Once a selection is made, the image will pass through the user and the next round of the experience will start.
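The cycle-or-select interaction of EV1 can be sketched as a small input loop. The function name, image labels, and input tokens below are invented for illustration and do not come from the system's actual code:

```python
# Minimal sketch of EV1's stationary selection: the user cycles through
# the candidate images or confirms the one currently approaching.
def run_round(images, inputs):
    """Return the selected image label given a sequence of 'cycle'/'select' inputs."""
    current = 0
    for action in inputs:
        if action == "cycle":
            current = (current + 1) % len(images)  # present the next image
        elif action == "select":
            return images[current]  # image passes through the user; round ends
    return None  # no selection was made this round

choice = run_round(["spider", "snake", "heights"], ["cycle", "cycle", "select"])
```

The modulo wrap-around mirrors the described behaviour of cycling past the last image back to the first.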

The purpose of the exit room is twofold: (1) to provide the user the possibility of quitting the experience, and (2) to provide an area that has a calming effect. Quitting the experience could also have been accomplished via a menu option. However, a designated area was created in order to have a more interactive experience between system and user. With regard to the calming effect, the exit room is designed to look like a green landscape with vegetation, a pond, and sounds of nature. The research of Annerstedt et al. (2013) concludes that in a virtual reality setting, a green environment in combination with nature sounds can reduce stress. Upon entering the exit room, a timer of 15 seconds starts counting down. The user has the option to leave the exit room and continue the experience within this time frame. Remaining longer than 15 seconds prompts a message explaining to the user that the experience can now be terminated and that they can remove any hardware attached to their body. Because the user is completely restricted from any movement in EV1, it was necessary to place the exit room behind the user. Therefore, in the event users wish to quit the experience, they are rotated towards the exit room.

In the final room, the user is presented with the images that were selected in the previous six rounds. The images are showcased on the left and right walls. At the end of the final room, a message is displayed that informs the user that the end has been reached and to remove any hardware attached to the body. The user is requested to select the images that are the most frightening or unpleasant; therefore, the final room is designed to give closure to the experience and an indication of what the fears and phobias of the user are.

The degree of navigational freedom of each environment has consequences for the final room. In EV1, the most restrictive environment, the images on the walls move towards the user at a steady pace. In EV2, the user has the freedom and control to start and stop the transportation to the other side of the room; however, moving backwards, e.g. to examine previous images, is not possible. In EV3, the user is not restricted and can navigate freely.

4.3.2 Imagery

The images were obtained from the international affective picture system (IAPS) (Lang, Bradley, & Cuthbert, 1997). Based on their research, Lang et al. (1997) assigned valence, arousal, and avoidance-approach ratings to the images. The images encompass many categories and range from neutral to disturbing content. Each of the three environments required a set of 18 images: 3 images for each of the six rounds. Due to the within-subjects design of the research, each participant experienced all three environments. Therefore, the emotional stimuli that the images provide must be similar across the environments. However, using the same set of images is not ideal due to the psychological desensitization that could occur (Mrug, Madan, Cook III, & Wright, 2015).

The categories of the IAPS images are the following: animals, faces, landscapes, objects, and people. The images within these categories are given valence, arousal, and avoidance-approach (Av/Ap) ratings. The valence dimension indicates how pleasant or unpleasant the stimulus is, while the arousal dimension indicates how calming or exciting the stimulus is. The avoidance-approach rating is correlated with the valence and arousal. A pleasant and exciting stimulus, e.g. an image of a kitten, has a high avoidance-approach rating, whereas an unpleasant and exciting stimulus, e.g. an image of an injured person, has a low avoidance-approach rating. Table 1 displays the intensity and category of the images that are presented in the environments. The images were selected with a standard deviation that is as low as possible, preferably between 0 and 1. Furthermore, within the main categories, diverse sub-categories/descriptions were chosen4. There is a logical progression with regard to the emotional stimuli: the first round contains the most general and neutral category, followed by increasingly more specific categories. The increase in intensity occurs in the third and fifth rounds.
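The selection rule above (images whose Av/Ap rating falls within the round's range, preferring the lowest standard deviations) can be sketched as a simple filter. The image records below are hypothetical placeholders, not actual IAPS entries:

```python
# Hypothetical image records: (id, category, avap_mean, avap_sd).
IMAGES = [
    ("img01", "Landscapes", 6.20, 0.8),
    ("img02", "Landscapes", 6.40, 1.4),
    ("img03", "Landscapes", 6.10, 0.5),
    ("img04", "Objects", 6.30, 0.6),
    ("img05", "Animals", 4.20, 0.9),
    ("img06", "Animals", 4.40, 0.4),
]

def select_images(category, lo, hi, k=3):
    """Pick the k in-range images with the lowest rating SD (preferably SD <= 1)."""
    candidates = [img for img in IMAGES
                  if img[1] == category and lo <= img[2] <= hi]
    candidates.sort(key=lambda img: img[3])  # low SD first: ratings agree most
    return [img[0] for img in candidates[:k]]

# Round 1 draws from Landscapes rated 6.00-6.50 on the Av/Ap dimension.
round1 = select_images("Landscapes", 6.00, 6.50)
```

Sorting by standard deviation before truncating to k images operationalises the preference for images whose ratings vary least across raters.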

4.4 Implementation

4.4.1 Development

The environments were developed in version 5.3.4f1 of Unity 3D. Unity 3D is a cross-platform development tool for games and interactive experiences. Depending on the type of experience one intends to create, the Unity editor offers several modes, e.g. 2D or 3D. The environments were created in the 3D mode, which offers the tools necessary to create objects that are three-dimensional and rendered with textures, materials, and lighting. Unity's asset store provides a wide range of assets to accelerate the development process. The objects in the environment are called game objects; they are the fundamental building blocks in Unity (Unity Technologies, 2016). These game objects are containers for components that define the object and give it additional functionality. For example, there are components for lights, rendering, and geometry. Furthermore, for advanced functionality and to create custom gameplay behaviour, it was necessary to write scripts and include those as components. The programming languages C# and UnityScript are supported by Unity; the latter language is based on JavaScript (Unity Technologies, 2016).

4 The category faces does not contain enough images with a rating between 2.00 and 2.50. For the remaining two images, ratings were chosen that were as close as possible to that margin.

Table 1: The categories and the intensity of the emotional stimuli displayed per round in the environments

Round  Category    Av/Ap rating
1      Landscapes  6.00 - 6.50
2      Objects     6.00 - 6.50
3      Animals     4.00 - 4.50
4      People      4.00 - 4.50
5      People      2.00 - 2.50
6      Faces       2.00 - 2.50

Table 2: The recommended PC specifications for the Oculus Rift (Oculus VR, 2016b)

Component     Description
Video card    NVIDIA GTX 970 / AMD R9 290
CPU           Intel i5-4590 equivalent or greater
Memory        8GB+ RAM
Video output  Compatible HDMI 1.3 video output
USB ports     3x USB 3.0 ports plus 1x USB 2.0 port
OS            Windows 7 SP1 64 bit or newer

4.4.2 Design decisions

The design and development of the environments required compromises between visual quality and performance with respect to the following aspects: the layout of the environment, the shape of the main area, the vegetation quality, and the overall visual quality.

A top-view screenshot of the layout of the environment is included as figure 1. The environment is designed in this manner to give users the illusion that they are navigating in a large continuous space. The aim was to make the users think that when they navigate through the three rooms adjacent to the main area, they are simply navigating to the next main area, in which the next set of images can be viewed. In reality, when users reach the end of the image rooms, the system simply loads the next level. It is a fast transition; however, it can be somewhat noticeable to the user. Connecting all six rounds into one level would lead to higher render times and a decrease in the performance of the system. In addition, the main area is a round-shaped room so that the adjacent image rooms can be positioned in such a manner that the whole image cannot be viewed from a distance. This gives the user a preview of the image; however, to view it in its entirety, entering the room is necessary.

The terrain of the exit room contains grass and other vegetation. The Unity engine supports two types of grass: 2D painted grass textures and grass meshes. The latter type of grass looks more realistic and is rendered as a three-dimensional object. It is, however, more demanding and could lower the system's performance. Therefore, in order to have a rich-looking green landscape and maintain a minimum of 60 frames per second, 2D painted grass textures were used. Several image effects were applied to enhance the overall visual quality of the environments: antialiasing to smooth out jagged lines, crease shading to enhance the visibility of objects, depth of field to simulate the effect of focused foregrounds and defocused backgrounds, colour correction for contrast enhancement, and bloom to brighten up the lighting of the environment.

Furthermore, a start menu was developed to provide the user with ample time to attach the necessary hardware and test whether the digital environment is being rendered. Once the user is ready, the experience can be started.

Audio in virtual reality has the potential to improve the experience (Brinkman, Hoekstra, & van Egmond, 2015); therefore, ambient audio was added to the virtual environments in an attempt to enhance the immersion and presence for the users. As concluded by the research of Riva et al. (2007), a strong sense of presence makes the virtual reality incite emotional responses more adequately.

4.4.3 Hardware

The Oculus Rift DK2 was used as the head-mounted display. In order to ensure satisfactory performance, the PC on which the testing was conducted adhered to the recommended specifications of the Oculus Rift. The specifications are provided in table 2. With regard to movement and interaction within the virtual environment, the Xbox controller was used. According to the research of Rupp, Oppold, and McConnell (2013), the Xbox controller is an acceptable option for performing complex human-system interaction tasks. They further conclude that the Xbox controller is usable and produces fewer tracking errors compared to standard joysticks and keyboards. Finally, for the heart-rate measurements, the Blood Volume Pulse (BVP) sensor of biosignalsplux6 was utilised. The BVP sensor detects changes in blood volume in an artery and is required to be placed on the index finger of the user. Images of the hardware equipment are provided as figure 2.

4.5 User testing procedure

Prior to the user testing, a dummy test was conducted for two reasons: (1) to measure approximately how much time is needed to test the three environments with a user; and (2), more importantly, to check whether the environments and hardware were functioning correctly and to improve the testing procedure if necessary. A detailed description of the testing procedure is the following:

6 http://biosignalsplux.com/index.php/en/

Figure 2: Images of the hardware equipment: (A) the Oculus Rift DK2 (Oculus VR, 2016a), (B) the BVP sensor (biosignalsplux, 2016), (C) the Xbox 360 controller (Microsoft, 2016)

1. The user is given an explanation on the purpose of the testing, how it will be conducted, and how much time it will approximately take.

2. The user is asked to read and sign a consent form, which is included as appendix C.

3. Assuming the user belongs to group A, the user will start experiencing EV3 first.

4. After the user is done, they can remove the HMD and fill in the presence questionnaire for EV3.

5. The user will then experience EV1 and afterwards fill in the questionnaire for that environment.

6. Finally, the user will experience EV2 and afterwards fill in the questionnaire for that environment.

Figure 3 depicts one of the participants during the user testing.


5. RESULTS

Twenty-seven people (18 male, 9 female), aged between 20 and 35, took part in the experiment (M = 25,52; SD = 4,004). According to the Shapiro-Wilk test (p=0.020), the data among the groups is not normally distributed (see table 5 in appendix D.1). Therefore, Friedman tests were used to analyse whether any differences exist between the three environments. Furthermore, a series of post hoc Wilcoxon signed-ranks tests was conducted on the different combinations of related groups:

1. Environment 1 to Environment 2
2. Environment 1 to Environment 3
3. Environment 2 to Environment 3

The null hypothesis is defined as H0: there is no difference between the three environments. The alternative hypothesis is H1: there is a difference between the three environments. The alpha value is α = 0.05. The descriptive statistics, the results of the Friedman tests, and the results of the Wilcoxon signed-ranks tests are included as appendices D.2 and D.3.
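As an illustration of how the Friedman statistic behind these tests is computed, the following pure-Python sketch ranks each participant's scores across the three environments and aggregates the rank sums. The ratings used here are hypothetical, and the tie correction applied by statistical packages such as SPSS is omitted for brevity, so values may differ slightly when many ties are present.

```python
def rank_row(values):
    """Rank one participant's scores across conditions, averaging ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based positions i..j
        for t in range(i, j + 1):
            ranks[order[t]] = avg
        i = j + 1
    return ranks


def friedman_chi2(data):
    """Friedman chi-square; rows = participants, columns = environments.

    Uses the textbook formula 12/(n*k*(k+1)) * sum(R_j^2) - 3*n*(k+1),
    without the tie correction that SPSS applies.
    """
    n, k = len(data), len(data[0])
    rank_sums = [0.0] * k
    for row in data:
        for j, r in enumerate(rank_row(row)):
            rank_sums[j] += r
    return 12.0 / (n * k * (k + 1)) * sum(R * R for R in rank_sums) - 3 * n * (k + 1)


# Hypothetical ratings: 3 participants x 3 environments. If every
# participant ranks the environments identically, the statistic is large.
chi2 = friedman_chi2([[1, 2, 3], [1, 2, 3], [1, 2, 3]])
```

The resulting statistic is compared against a chi-square distribution with k − 1 = 2 degrees of freedom, which is how the reported p-values are obtained.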

5.1 The rating questions

The Friedman test shows no difference between the three environments for any of the three rating questions (Q1 p=0,135, Q2 p=0,543, Q3 p=0,574). However, a Wilcoxon signed-ranks test shows that there is a difference between EV1 and EV2 for Q1 (Z=-2,464, p=0,014). Furthermore, a closer look at the descriptive statistics reveals a noticeable difference between EV1 and the other two environments: the first environment has a mean of 3,15 and a median of 3,00, while EV2 has a mean of 3,81 and a median of 4,00, and EV3 has a mean of 3,70 and a median of 4,00. The means of both EV2 and EV3 approximate their medians. Based on these numbers, a clear difference in the overall experience between the first environment and the latter two can be observed.

With regard to the visual quality and whether the audio was found distracting, the Wilcoxon signed-ranks tests show no difference between the environments. A closer inspection of the descriptive statistics shows that the means and medians are almost identical.
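The Z values reported for these pairwise comparisons come from the normal approximation of the Wilcoxon signed-rank statistic. A minimal pure-Python sketch with hypothetical paired ratings; zero differences are dropped and the variance tie correction is omitted for brevity:

```python
import math


def wilcoxon_z(x, y):
    """Z of the Wilcoxon signed-rank test (normal approximation).

    Zero differences are dropped, tied |differences| receive averaged
    ranks, and the tie correction of the variance is omitted.
    """
    d = [a - b for a, b in zip(x, y) if a != b]
    n = len(d)
    by_abs = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(d[by_abs[j + 1]]) == abs(d[by_abs[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based positions i..j
        for t in range(i, j + 1):
            ranks[by_abs[t]] = avg
        i = j + 1
    w_plus = sum(r for r, v in zip(ranks, d) if v > 0)  # positive rank sum
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return (w_plus - mu) / sigma


# Hypothetical paired ratings for two environments, one value per participant.
z = wilcoxon_z([4, 5, 3, 4, 5], [3, 4, 2, 3, 4])
```

A two-sided p-value then follows from the standard normal distribution of Z.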

5.2 The presence questions

With regard to Q4 (sense of presence), there was a statistically significant difference between the three environments, χ²(2)=15,519, p=0,00042. A Wilcoxon signed-ranks test reveals that the difference occurs between EV1 and EV2 (Z=-3,486, p=0,00049). The median score is 3,00 for EV1 and 4,00 for EV2.

With regard to Q8 (ease of looking around), a Wilcoxon signed-ranks test shows that a significant difference occurs between EV2 and EV3 (Z=-2,422, p=0,015). The median score is 4,00 for EV2 and 5,00 for EV3. The first environment received a significantly lower score, but this was expected because its headtracking was disabled. Therefore, the focus should lie on the differences between the other two environments.

For the remainder of the questions (i.e. Q5, Q6, Q9, and Q10), there was no statistically significant difference according to the Friedman test. However, viewing the data in detail reveals the following additional information. A Wilcoxon signed-ranks test between EV1 and EV3 for Q6 (realism of the virtual world) shows a significant difference (Z=-2,484, p=0,013). The descriptive statistics of Q6 reveal that EV1 has a mean of 3,22 with a median of 3,00, while EV3 has a mean of 3,63 with a median of 4,00; the mean of the latter environment approximates its median. The overall comfort (Q9) and overall enjoyment (Q10) variables show that EV1 received lower scores on both: the median of EV1 is 3,00 while the other two environments have a median of 4,00. The means of EV2 and EV3 are also higher and closer to their medians in comparison to the first environment.

5.3 The environment and task-specific questions

For most of the questions in the environment and task-specific section, there were no statistically significant differences between the three environments. The questions where differences were encountered are presented below. For Q12 (feeling in control of the movement), a Wilcoxon signed-ranks test shows that a significant difference occurs between EV2 and EV3 (Z=-3,331, p=0,001). The median score is 4,00 for both EV2 and EV3; however, EV3 has a higher mean of 4,11.

For the first and second environment, regarding the question whether the participants would have preferred more freedom of movement (Q13), the results indicate that for EV1 (i.e. the environment with the least freedom) the participants agree with the statement that more freedom is preferred: the median is 4,00, the mean is 3,56, and the mode is 4. The values for EV2 are the following: the median is 3,00, the mean is 3,11, and the mode is 2.

Regarding whether the final room provided sufficient reflection of the choices made during the previous rounds (Q27), the Friedman test indicates a statistically significant difference between the three environments: χ²(2)=6,377, p=0,041. However, the series of Wilcoxon signed-ranks tests provided no evidence that a significant difference occurs between any of the environments. The descriptive statistics reveal that the mean of EV1 (M=3,59) is slightly lower than the other two (EV2 M=4,00, EV3 M=4,11).

5.4 Summary of results

While EV2 and EV3 seem to be close to each other in terms of the received scores, the first environment is a different case: on a number of variables, it received much lower scores. Its overall experience was rated good, while the other two environments received higher ratings. The user's sense of presence was also lacking in this environment compared to the other two. The visual quality of the environment and the realism of the virtual world received almost identical ratings for all three environments. This can be explained by the fact that the conditions and variables (e.g. lighting, material, movement speed) were kept similar across the three environments. However, as stated before, there was a statistically significant difference between EV1 and EV3 with regard to the realism of the virtual world. The overall comfort and overall enjoyment of EV1 received lower scores than the other two environments. These


Table 3: Carry-over effect

                  Q1: Overall experience       Q4: Presence                 Q13: Freedom of movement
                  Group A  Group B  Group C    Group A  Group B  Group C    Group A  Group B  Group C
N Valid           9        9        9          9        9        9          9        9        9
N Missing         0        0        0          0        0        0          0        0        0
Mean              4,22     3,67     3,56       4,11     3,56     4,11       2,56     3,67     3,11
Median            4,00     4,00     4,00       4,00     4,00     4,00       2,00     4,00     2,00
Mode              4        4        5          4        5        5          2        5        2
Std. Deviation    ,667     1,225    1,424      ,782     1,424    1,054      1,014    1,225    1,453
Minimum           3        1        1          3        1        2          1        2        2
Maximum           5        5        5          5        5        5          4        5        5
Percentile 25     4,00     3,00     2,50       3,50     2,50     3,50       2,00     2,50     2,00
Percentile 50     4,00     4,00     4,00       4,00     4,00     4,00       2,00     4,00     2,00
Percentile 75     5,00     4,50     5,00       5,00     5,00     5,00       3,50     5,00     5,00

results can be explained by looking at Q13 (more freedom of movement) and Q15 (ability to move head). The users agree with the statement that more freedom of movement is preferred in EV1, whereas EV2 received neutral scores. The users also wanted the ability to move their head and thus be able to look around in the environment. These two results explain why the sense of presence, the overall comfort, and the overall enjoyment were lower for the environment with the least amount of freedom, i.e. EV1. Furthermore, the users found being transported in EV2 less frustrating than being completely stationary.

EV2 and EV3 were much closer to each other in terms of the results. It is interesting to note that EV2, despite offering less freedom of navigational movement and interaction, provided an enjoyable experience with a strong sense of presence while, equally important, allowing the user to perform the tasks as adequately as in environment 3. A closer look at the descriptives shows that for the sense of presence, the second environment has a slightly higher mean and mode. The same holds for the overall enjoyment of the experience (Q10).

Another interesting similarity across the three environments is that the imagery causes an uncomfortable feeling, whether the images approach the user, the user is transported towards the images, or the user walks closer to the images. Furthermore, across the three environments, the users felt they had ample time before the next set of images appeared after a selection was made in the current round.

One final point worth mentioning concerns the results of Q25 and Q26: they reveal that in the restrictive environments (i.e. EV1 and EV2), the users did not feel the need to revisit images. However, in the third environment, when given the option and freedom to revisit the images, the users made use of that freedom. And regardless of the navigational freedom, the final room in each environment provided the user with sufficient reflection of the chosen imagery.

6. DISCUSSION

6.1 Discussion of results

This paper researched the effects of restricting the navigation of the user in a virtual reality environment whose purpose is to search and select objects. This is an environment that does not require advanced or sophisticated interaction between the user and the system.

All aspects of the three environments, except for the navigational method, were kept consistent to ensure that any differences can be attributed to the navigational method. Furthermore, a within-subjects design was used for the experiments: the same participants were involved in the testing of all three environments. The main advantage is that this reduces the error variance associated with individual differences; the disadvantage is that participation in one environment can affect the performance in the following environments. To prevent this from occurring, the 27 participants were divided into 3 subgroups of 9 people, and each subgroup experienced the environments in a different order.
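The counterbalancing described above can be sketched as follows. Group A's order (EV3, then EV1, then EV2) follows the procedure in section 4.5; the orders shown for groups B and C are illustrative assumptions, since the text does not specify them.

```python
# Divide participants over three groups, each experiencing the
# environments in a different order to reduce carry-over effects.
# Group A's order mirrors the testing procedure described in section 4.5;
# the orders for groups B and C are illustrative assumptions.
ORDERS = {
    "A": ("EV3", "EV1", "EV2"),
    "B": ("EV1", "EV2", "EV3"),  # assumed
    "C": ("EV2", "EV3", "EV1"),  # assumed
}


def assign_groups(participant_ids):
    """Round-robin assignment keeps the three groups equal in size."""
    groups = sorted(ORDERS)
    return {pid: groups[i % len(groups)] for i, pid in enumerate(participant_ids)}


assignment = assign_groups(range(1, 28))  # 27 participants
group_sizes = {g: sum(1 for v in assignment.values() if v == g) for g in ORDERS}
```

With the three rotated orders shown here, each environment appears once in each position, so position effects are balanced across the groups.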

The results reveal that EV2 and EV3 achieved ratings that are close to each other. Despite the fact that EV2 offers less freedom of movement in the virtual world, it provided a similarly enjoyable experience and even a slightly higher sense of presence. Furthermore, it did not hamper the user's ability to perform the tasks in the virtual world. In a search and select environment that does not require sophisticated interaction between user and system, complete freedom of navigation is therefore not necessary. This falls in line with one of the guidelines of Bowman et al. (2001), which suggests limiting the degrees of freedom for the user and guiding user input by providing constraints. Furthermore, limiting the user's navigational control could help in reducing simulator sickness symptoms (Stanney, Hale, et al., 2003).

Also, the argument could be made that the second environment can be developed in a shorter period of time, renders faster, and is therefore also less expensive to create. An environment that is fully explorable, such as EV3, requires that every piece of the map is designed and developed in detail. When the navigational freedom is restricted, however, certain areas of the map can be hidden from the user or given the illusion that they are present and fully functional. Certain parts of the map therefore do not require materials, textures, and lighting. This could reduce the development time and, ultimately, the cost of creation.

6.2 Limitations

A caveat of the research is that the first environment, besides the restriction in navigational freedom, differs from the other two environments in the following aspect: the headtracking of the HMD is disabled. This prevents the user from being able to look around, which is also part of the research question, as it restricts the user's interaction in the virtual world even further. However, it does introduce an additional variable that separates it from the other two environments, and this has contributed to the lower scores for presence and overall comfort. The results of Q15 indicate that the participants would indeed have liked the ability to move their heads.

Furthermore, the ages of the participants range from 20 to 35 years. Therefore, the research results cannot be generalized to people outside of this age group. However, this particular age group is the one most likely to use virtual reality systems. The final caveat of the research is the skewed ratio between male and female participants: only 33,33% of the participants were female.

The within-subjects design of the research carries the disadvantage that participation in one environment can affect the performance in the subsequent environments. To prevent this from occurring, the participants were divided into different groups and each group experienced the environments in a different order. Table 3 contains the data of three items from the questionnaire of EV2, in which the results of each group are displayed.

The questionnaire is divided into four sections: personal information, the rating questions, the presence questions, and the environment/task-specific questions. One question was selected from each of the latter three sections: Q1 (overall experience), Q4 (sense of presence), and Q13 (freedom of movement). The results are taken from the questionnaire of EV2 because this environment received the highest scores for the overall experience and sense of presence. Q13 was selected because it would be interesting to see whether participants want more or less freedom of movement after first experiencing one of the other environments.

The data reveals that group A gave slightly higher scores for the overall experience and sense of presence. Furthermore, they gave lower scores for the questionnaire item stating that more freedom of movement is preferred (Q13). An explanation is that a carry-over effect has indeed occurred. Group A experienced EV2 as the last environment. There are two basic types of carry-over effects: practice and fatigue. Due to practice, the participants acclimated to the virtual reality. When users experience the virtual environment for longer periods of time and are familiarized with the equipment, the amount of simulator sickness they experience declines (Bailenson & Yee, 2006). Therefore, their final environment (i.e. EV2) was perceived as the most pleasant.

Fortunately, the carry-over effect was kept to a minimum by dividing the participants into groups that experienced the environments in different orders.

7. CONCLUSION AND FUTURE WORK

This paper explored the effect of navigational freedom on the user experience in a search and select virtual reality environment that is rendered by an HMD. The environments were limited in various degrees of navigational freedom: (1) stationary with the headtracking of the HMD disabled; (2) transported through the environment based on input; and (3) complete navigational freedom. Twenty-seven participants between the ages of 20 and 35 experienced the three environments and completed questionnaires to provide their opinions. Based on the analysis of the results, the following conclusion can be drawn: for a virtual environment that is search and select-based and requires only simple interaction between the user and system, providing a moderate amount of freedom is sufficient for a satisfactory experience. Providing complete navigational freedom is possible and will not affect the user's experience in a negative manner; it could, however, affect the development time and cost.

The following directions should be explored in future work. First, to address the limitations of the current study, research should be conducted with a target group that also includes participants outside the range of 20-35 years. Furthermore, a population sample that is balanced in gender should be recruited. When participants younger than 18 years are included, the emotional stimuli should be adjusted, as some content can be inappropriate for younger audiences.

Second, evaluate whether the results gained from this study remain valid when applied to virtual environments that require more interaction between user and system than a simple search and select-based environment. Third, evaluate how restricting the navigational freedom influences the experience in other types of virtual environments, e.g. HIVE systems.

Fourth, simulator sickness symptoms could be reduced by limiting the user's navigational control (Stanney, Hale, et al., 2003). Testing for simulator sickness was not part of this research; additional testing should be conducted to evaluate how restrictions on navigational freedom influence these symptoms.

8. ACKNOWLEDGEMENTS

The author would like to thank Frank Nack for his invaluable supervision and guidance during the study. Additionally, the author expresses his gratitude to Frank Kresin for providing the resources to reach potential participants for the study. Finally, many thanks to all of the participants who sacrificed their time to take part in the experiment.

References

Annerstedt, M., Jönsson, P., Wallergård, M., Johansson, G., Karlson, B., Grahn, P., . . . Währborg, P. (2013). Inducing physiological stress recovery with sounds of nature in a virtual reality forest—results from a pilot study. Physiology & Behavior, 118, 240–250.

Bailenson, J. N., & Yee, N. (2006). A longitudinal study of task performance, head movements, subjective report, simulator sickness, and transformed social interaction in collaborative virtual environments. Presence: Teleoperators and Virtual Environments, 15(6), 699–716.

Baños, R. M., Botella, C., Alcañiz, M., Liaño, V., Guerrero, B., & Rey, B. (2004). Immersion and emotion: their impact on the sense of presence. CyberPsychology & Behavior, 7(6), 734–741.

biosignalplux. (2016). BVP sensor. Retrieved from http://biosignalsplux.com/index.php/en/bvp-blood-volume-pulse ([Online; accessed June 28, 2016])

Bouchard, S., St-Jacques, J., Robillard, G., & Renaud, P. (2008). Anxiety increases the feeling of presence in virtual reality. Presence: Teleoperators and Virtual Environments, 17(4), 376–391.

Bowman, D. A., Kruijff, E., LaViola Jr, J. J., & Poupyrev, I. (2001). An introduction to 3-D user interface design. Presence: Teleoperators and Virtual Environments, 10(1), 96–108.

Brinkman, W.-P., Hoekstra, A. R., & van Egmond, R. (2015). The effect of 3D audio and other audio techniques on virtual reality experience. IOS Press.

Bryman, A. (2015). Social research methods. Oxford University Press.

Chittaro, L., & Burigat, S. (2004). 3D location-pointing as a navigation aid in virtual environments. In Proceedings of the Working Conference on Advanced Visual Interfaces (pp. 267–274).

Cutmore, T. R., Hine, T. J., Maberly, K. J., Langford, N. M., & Hawgood, G. (2000). Cognitive and gender factors influencing navigation in a virtual environment. International Journal of Human-Computer Studies, 53(2), 223–249.

Dinh, H. Q., Walker, N., Hodges, L. F., Song, C., & Kobayashi, A. (1999). Evaluating the importance of multi-sensory input on memory and the sense of presence in virtual environments. In Virtual Reality, 1999. Proceedings., IEEE (pp. 222–228).

Fox, J., Arena, D., & Bailenson, J. N. (2009). Virtual reality: A survival guide for the social scientist. Journal of Media Psychology, 21(3), 95–113.

Howarth, P., & Costello, P. (1997). The occurrence of virtual simulation sickness symptoms when an HMD was used as a personal viewing system. Displays, 18(2), 107–116.

Howarth, P., & Finch, M. (1999). The nauseogenicity of two methods of navigating within a virtual environment. Applied Ergonomics, 30(1), 39–45.

Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (1997). International Affective Picture System (IAPS): Technical manual and affective ratings. NIMH Center for the Study of Emotion and Attention, 39–58.

Lokki, T., & Grohn, M. (2005). Navigation with auditory cues in a virtual environment. IEEE MultiMedia, 12(2), 80–86.

Microsoft. (2016). Xbox 360-controller. Retrieved from http://www.xbox.com/nl-NL/xbox-360/accessories/controllers ([Online; accessed June 28, 2016])

Mrug, S., Madan, A., Cook III, E. W., & Wright, R. A. (2015). Emotional and physiological desensitization to real-life and movie violence. Journal of Youth and Adolescence, 44(5), 1092–1108.

Oculus VR. (2016a). DK2. Retrieved from https://www.oculus.com/en-us/dk2/ ([Online; accessed June 28, 2016])

Oculus VR. (2016b). Rift. Retrieved from https://www.oculus.com/en-us/rift/ ([Online; accessed March 7, 2016])

Riva, G., Mantovani, F., Capideville, C. S., Preziosa, A., Morganti, F., Villani, D., . . . Alcañiz, M. (2007). Affective interactions using virtual reality: the link between presence and emotions. CyberPsychology & Behavior, 10(1), 45–56.

Ruddle, R. A., Payne, S. J., & Jones, D. M. (1999). Navigating large-scale virtual environments: what differences occur between helmet-mounted and desk-top displays? Presence, 8(2), 157–168.

Rupp, M. A., Oppold, P., & McConnell, D. S. (2013). Comparing the performance, workload, and usability of a gamepad and joystick in a complex task. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 57, pp. 1775–1779).

Sharples, S., Cobb, S., Moody, A., & Wilson, J. R. (2008). Virtual reality induced symptoms and effects (VRISE): Comparison of head mounted display (HMD), desktop and projection display systems. Displays, 29(2), 58–69.

Stanney, K. M., Hale, K. S., Nahmens, I., & Kennedy, R. S. (2003). What to expect from immersive virtual environment exposure: Influences of gender, body mass index, and past experience. Human Factors: The Journal of the Human Factors and Ergonomics Society, 45(3), 504–520.

Stanney, K. M., Mollaghasemi, M., Reeves, L., Breaux, R., & Graeber, D. A. (2003). Usability engineering of virtual environments (VEs): identifying multiple criteria that drive effective VE system design. International Journal of Human-Computer Studies, 58(4), 447–481.

Suma, E. A., Babu, S., & Hodges, L. F. (2007). Comparison of travel techniques in a complex, multi-level 3D environment. In 3D User Interfaces, 2007. 3DUI '07. IEEE Symposium on.

Sutcliffe, A., Gault, B., Fernando, T., & Tan, K. (2006). Investigating interaction in CAVE virtual environments. ACM Transactions on Computer-Human Interaction (TOCHI), 13(2), 235–267.

Unity Technologies. (2016). Unity manual. Retrieved from http://docs.unity3d.com/Manual/ ([Online; accessed June 23, 2016])

Van Baren, J., & IJsselsteijn, W. (2004). Measuring presence: A guide to current measurement approaches. Deliverable of the OmniPres project IST-2001-39237.

van der Kooij, D., Chen, M., van der Huizen, G., & Mohamed, R. (2016). Your worst nightmare. 1–25.

Waller, D., Bachmann, E., Hodgson, E., & Beall, A. C. (2007). The HIVE: A huge immersive virtual environment for research in spatial cognition. Behavior

APPENDIX

A. ENVIRONMENTS

A.1 Start menu

Figure 4: The start menu of the environments.

Figure 5: The start menu: an overlay menu appears asking for confirmation when opting to exit.


A.2 Environment 1

Figure 6: An overview shot of the environment within the Unity editor.

Figure 7: The main area of environment 1; this is the starting point.

Figure 8: The user can select one out of three images per round.

Figure 9: The image of the next round is visible after making a selection.

Figure 10: The exit room of the environment.

Figure 11: The final room of the environment.

Figure 12: The message that appears at the end of the final room.


A.3 Environment 2 and 3

Figure 13: An overview shot of the environment within the Unity editor.

Figure 14: The starting point of the environments.

Figure 15: The main area of the environment with the three adjacent image rooms and a fourth room which serves as an exit.

Figure 16: One of the image rooms.

Figure 17: Selecting an image is accomplished by navigating through it. The main area of the next round is then visible.


Figure 19: The messageboard that is displayed in the exit room to provide the user with instructions.

Figure 20: When the timer runs out in the exit room, a message is displayed informing the user that the experience can now be terminated.

Figure 21: The final room of the environment.

Figure 22: The message that appears at the end of the final room.


B. QUESTIONNAIRE

B.1 Personal information

Your subgroup?

☐ Group A  ☐ Group B  ☐ Group C

Your gender?
☐ Male  ☐ Female

Your age?

B.2 The rating questions

1. On a 1-5 scale (where 1=poor, 2=fair, 3=good, 4=very good, and 5=excellent), rate your overall experience in this virtual world.

2. On a 1-5 scale (where 1=poor, 2=fair, 3=good, 4=very good, and 5=excellent), what was the quality of the visual display?

3. On a 1-5 scale (where 1 = strongly disagree, 2 = disagree, 3 = no opinion, 4 = agree, and 5 = strongly agree), rate the statement: the audio playing in the background was distracting.

B.3 The presence questions

Rate each question on a 1-5 scale where 1 = poor, 2 = fair, 3 = good, 4 = very good, and 5 = excellent.

4. How strong was your sense of presence in the virtual environment?

5. How aware were you of the real world surroundings while moving through the virtual world (i.e., sounds, room temperature, other people, etc.)?

6. In general, how realistic did the virtual world appear to you?

7. How realistically were you moved through the virtual world?7

8. With what degree of ease were you able to look around the virtual environment?

9. What was your overall comfort level in this environ-ment?

10. What was your overall enjoyment level in the virtual environment?

B.4 The environment and task-specific questions

Rate the questions on a 1 – 5 scale where 1 = strongly disagree, 2 = disagree, 3 = no opinion, 4 = agree, and 5 = strongly agree

11. I did not feel present in the computer generated world.
12. I felt in control of my movement.7

7. Only in the questionnaires of EV2 and EV3.

13. I would have liked to have more freedom of movement.8

14. Being completely stationary made me feel frustrated.9
15. I would have liked the ability to move my head in the virtual environment.9

16. Being transported through the virtual environment made me feel frustrated.10

17. Despite the restrictions on my freedom of movement, I could still perform the tasks at hand sufficiently.8

18. I had ample freedom of movement, therefore, I could perform the tasks at hand sufficiently.11

19. The images coming closer towards me made me feel uncomfortable.9

20. Being transported closer to the images made me feel uncomfortable.10

21. Walking closer towards the images made me feel un-comfortable.11

22. After selecting an image, I had ample time before the next set of images appeared.8

23. After selecting an image, I had ample time before hav-ing to choose from the next set of images.11

24. In the final room, the images approached too fast.9
25. In the final room, I would have liked to revisit images.8
26. In the final room, I made use of my freedom of movement to revisit images.11

27. The final room provided sufficient reflection of my choices during the previous rounds.

8. Only in the questionnaires of EV1 and EV2.
9. Only in the questionnaire of EV1.
10. Only in the questionnaire of EV2.
11. Only in the questionnaire of EV3.

C. CONSENT FORM

C.1 Introduction

The University of Amsterdam is conducting a study into the effects on the experience when the freedom of the user in an immersive virtual reality environment is limited to various degrees with respect to navigation and interaction. You will experience three different versions of an experimental system designed to explore fears and phobias in a virtual environment.

C.2 Involvement

The research project aims to improve the experimental system by exploring different navigational methods within the environment. For this purpose, three separate environments were developed: (1) in environment 1, you will be completely stationary and the headtracking of the head-mounted display is disabled; (2) in environment 2, you will be transported through the environment based on your input; and (3) in environment 3, you will have complete navigational freedom. After playing through each environment, you will be requested to fill in a questionnaire to measure the experience. Completing all three environments, including the questionnaires, will take approximately 30 minutes.

Prior to experiencing the environments, you will need to apply or use the following hardware components:

• A sensor to monitor your heart-rate will be applied to your index finger.

• A head-mounted display to immerse yourself in the virtual environment.

• A gamepad for controlling your movements and interactions.

The concept of the system is similar across the three environments that you are about to experience. The virtual environment will consist of seven rounds (i.e. levels). In the first six rounds, you are required to select images, and the final round will display all of your selections in one room. Each of the six rounds will present you with three different images; you will select the image that makes you the most uncomfortable or is the most frightening to you. Based on your selection and your heart-rate, the intensity of the images could increase. Furthermore, within the environment, an exit room is present that will allow you to quit the experience whenever you wish.

C.3 Risks

The images will display scenes that range from neutral to content that can be considered horrific; therefore, they could cause psychological damage. Furthermore, there is also the risk that you will experience the symptoms of virtual reality sickness, e.g. nausea, disorientation, headache, and general discomfort. You can quit the experience at any time by either entering the exit room or simply removing the head-mounted display. The sensor that monitors your heart-rate will store the data temporarily; it will be deleted after you are finished with the last environment.

C.4 Benefits

The system could help you discover what kind of images frighten you or make you feel uncomfortable; however, the study will not benefit you directly. Nevertheless, your participation is invaluable to the research project. With the data that will be gathered, the system can be improved, which will be beneficial to future users.

C.5 Confidentiality

The heart-rate data that the sensor collects will be stored anonymously for the duration of the experiment and immediately deleted afterwards. The data from the questionnaires will be stored and used for analyses and to potentially improve future iterations of the system. Only the researcher involved in this study will have access to the data.

C.6 Rights as research participant

Your participation is voluntary and you have the right to quit the experiment at any time. Deciding to withdraw your participation will have no negative consequences for you.

C.7 Contact information

Primary contact person:
Rashid Mohamed
Science Park 904, Amsterdam
Postbus 94323
1090 GH Amsterdam
rashid.mohamed@student.uva.nl

Secondary contact person:
Dr. F.M. (Frank) Nack
Science Park 904, Amsterdam
Postbus 94323
1090 GH Amsterdam
F.M.Nack@uva.nl
T: 0205256377

C.8 Your signature

(With your signature you confirm that you are 18 years or older; have read and understood the information about the study; participate voluntarily; and understand that the data from the questionnaires will be used for research purposes.)


D. ANALYSIS

D.1 Normal distribution test

Table 4: Descriptives of the variable Age

                                      Statistic   Std. Error
Mean                                  25,52       ,770
95% Confidence Interval for Mean
    Lower Bound                       23,93
    Upper Bound                       27,10
5% Trimmed Mean                       25,33
Median                                24,00
Variance                              16,028
Std. Deviation                        4,004
Minimum                               20
Maximum                               35
Range                                 15
Interquartile Range                   5
Skewness                              ,865        ,448
Kurtosis                              -,014       ,872
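The descriptives in Table 4 can be recomputed with standard tooling. The sketch below uses Python's built-in `statistics` module on an illustrative list of ages; the actual participant ages are not reproduced in this appendix, so the printed values are not those of Table 4.

```python
import statistics

# Illustrative ages for 27 participants (NOT the real sample, whose
# values are only available in summarized form in Table 4).
ages = [20, 21, 21, 22, 22, 23, 23, 23, 24, 24, 24, 24, 25, 25,
        25, 26, 26, 27, 27, 28, 28, 29, 30, 31, 32, 34, 35]

n = len(ages)
mean = statistics.mean(ages)
median = statistics.median(ages)
variance = statistics.variance(ages)   # sample variance (n - 1 denominator)
std_dev = statistics.stdev(ages)
std_error = std_dev / n ** 0.5         # standard error of the mean
print(f"n={n} mean={mean:.2f} median={median} sd={std_dev:.3f} se={std_error:.3f}")
```

Note that the standard error in Table 4 is simply the standard deviation divided by the square root of the sample size: 4,004 / √27 ≈ ,770.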

Table 5: Normal distribution test

Tests of Normality
          Kolmogorov-Smirnov a              Shapiro-Wilk
          Statistic   df   Sig.             Statistic   df   Sig.
Age       ,203        27   ,006             ,907        27   ,020
a. Lilliefors Significance Correction


Questionnaire results per evaluation. Each row reports Mean / Median / Mode / Std. Dev. / Min / Max. N = 27 (27 valid, 0 missing). a: multiple modes exist; the smallest value is shown. Questions are listed only for the evaluations in which they were asked; the Friedman test is reported where a question was asked in all three evaluations.

Q1 Overall experience
  Ev. 1: 3,15 / 3,00 / 4 / 1,350 / 1 / 5
  Ev. 2: 3,81 / 4,00 / 4 / 1,145 / 1 / 5
  Ev. 3: 3,70 / 4,00 / 4 / 0,993 / 1 / 5
  Friedman test: Chi-Square 4,000, df 2, Asymp. Sig. 0,135

Q2 Visual quality
  Ev. 1: 3,52 / 4,00 / 4 / 1,252 / 1 / 5
  Ev. 2: 3,48 / 4,00 / 4 / 1,051 / 1 / 5
  Ev. 3: 3,67 / 4,00 / 4 / 0,832 / 2 / 5
  Friedman test: Chi-Square 1,220, df 2, Asymp. Sig. 0,543

Q3 Audio is distracting
  Ev. 1: 1,85 / 2,00 / 1 / 1,167 / 1 / 5
  Ev. 2: 1,96 / 2,00 / 1 / 1,255 / 1 / 5
  Ev. 3: 1,85 / 2,00 / 1 / 1,064 / 1 / 5
  Friedman test: Chi-Square 1,111, df 2, Asymp. Sig. 0,574

Q4 Sense of presence
  Ev. 1: 3,07 / 3,00 / 3 / 1,107 / 1 / 5
  Ev. 2: 3,93 / 4,00 / 5 / 1,107 / 1 / 5
  Ev. 3: 3,67 / 4,00 / 4 / 1,109 / 1 / 5
  Friedman test: Chi-Square 15,519, df 2, Asymp. Sig. 0,000

Q5 Real world surroundings
  Ev. 1: 2,04 / 2,00 / 2 / 0,940 / 1 / 4
  Ev. 2: 2,11 / 2,00 / 1 / 1,188 / 1 / 4
  Ev. 3: 2,15 / 2,00 / 1 / 1,199 / 1 / 5
  Friedman test: Chi-Square 0,407, df 2, Asymp. Sig. 0,816

Q6 Realism virtual world
  Ev. 1: 3,22 / 3,00 / 4 / 1,188 / 1 / 5
  Ev. 2: 3,44 / 3,00 / 3a / 1,050 / 2 / 5
  Ev. 3: 3,63 / 4,00 / 4 / 1,079 / 1 / 5
  Friedman test: Chi-Square 4,353, df 2, Asymp. Sig. 0,113

Q7 Realistic movement
  Ev. 2: 3,52 / 4,00 / 4 / 1,282 / 1 / 5
  Ev. 3: 3,67 / 4,00 / 4 / 1,038 / 2 / 5

Q8 Degree ease of looking
  Ev. 1: 1,89 / 2,00 / 1 / 1,086 / 1 / 4
  Ev. 2: 3,89 / 4,00 / 4 / 1,121 / 1 / 5
  Ev. 3: 4,44 / 5,00 / 5 / 0,751 / 3 / 5
  Friedman test: Chi-Square 39,881, df 2, Asymp. Sig. 0,000

Q9 Overall comfort
  Ev. 1: 2,85 / 3,00 / 3 / 1,292 / 1 / 5
  Ev. 2: 3,44 / 4,00 / 4 / 1,155 / 1 / 5
  Ev. 3: 3,33 / 4,00 / 4 / 1,074 / 1 / 5
  Friedman test: Chi-Square 2,275, df 2, Asymp. Sig. 0,321

Q10 Overall enjoyment
  Ev. 1: 3,00 / 3,00 / 2 / 1,271 / 1 / 5
  Ev. 2: 3,59 / 4,00 / 5 / 1,394 / 1 / 5
  Ev. 3: 3,41 / 4,00 / 4 / 1,217 / 1 / 5
  Friedman test: Chi-Square 5,373, df 2, Asymp. Sig. 0,068

Q11 Not feeling present
  Ev. 1: 2,30 / 2,00 / 2 / 1,031 / 1 / 5
  Ev. 2: 2,04 / 2,00 / 1 / 1,091 / 1 / 4
  Ev. 3: 1,96 / 2,00 / 2 / 0,898 / 1 / 4
  Friedman test: Chi-Square 2,754, df 2, Asymp. Sig. 0,252

Q12 Control movement
  Ev. 2: 3,19 / 4,00 / 4 / 1,272 / 1 / 5
  Ev. 3: 4,11 / 4,00 / 5 / 1,050 / 2 / 5

Q13 More freedom of movement
  Ev. 1: 3,56 / 4,00 / 4 / 1,340 / 1 / 5
  Ev. 2: 3,11 / 3,00 / 2 / 1,281 / 1 / 5

Q14 Stationary frustrated
  Ev. 1: 2,85 / 3,00 / 3 / 1,460 / 1 / 5

Q15 Ability to move head
  Ev. 1: 3,81 / 4,00 / 5 / 1,331 / 1 / 5

Q16 Transported frustrated
  Ev. 2: 2,33 / 2,00 / 1 / 1,414 / 1 / 5

Q17 Despite restrictions; perform tasks sufficiently
  Ev. 1: 4,37 / 5,00 / 5 / 0,792 / 2 / 5
  Ev. 2: 4,26 / 5,00 / 5 / 0,984 / 2 / 5

Q18 Ample freedom; perform tasks sufficiently
  Ev. 3: 4,19 / 5,00 / 5 / 1,001 / 2 / 5

Q19 Images closer uncomfortable
  Ev. 1: 3,48 / 4,00 / 4 / 1,087 / 1 / 5

Q20 Images transported uncomfortable
  Ev. 2: 3,33 / 4,00 / 4 / 1,330 / 1 / 5

Q21 Images walking uncomfortable
  Ev. 3: 3,22 / 4,00 / 4 / 1,281 / 1 / 5

Q22 Ample time; next set of images appear
  Ev. 1: 4,15 / 4,00 / 5 / 1,064 / 1 / 5
  Ev. 2: 4,30 / 5,00 / 5 / 0,912 / 2 / 5

Q23 Ample time; next set of images choose
  Ev. 3: 4,15 / 5,00 / 5 / 1,099 / 1 / 5

Q24 Final room; images approach fast
  Ev. 1: 2,26 / 2,00 / 1 / 1,163 / 1 / 5

Q25 Final room; like to revisit images
  Ev. 1: 2,30 / 2,00 / 2 / 1,203 / 1 / 5
  Ev. 2: 1,96 / 2,00 / 1 / 1,091 / 1 / 5

Q26 Final room; made use of freedom to revisit
  Ev. 3: 3,59 / 4,00 / 5 / 1,338 / 1 / 5

Q27 Final room; sufficient reflection
  Ev. 1: 3,59 / 4,00 / 4 / 1,118 / 1 / 5
  Ev. 2: 4,00 / 4,00 / 4 / 0,920 / 2 / 5
  Ev. 3: 4,11 / 4,00 / 4 / 0,847 / 2 / 5
  Friedman test: Chi-Square 6,377, df 2, Asymp. Sig. 0,041
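The Friedman tests reported above compare the same participants' ratings across the three evaluations. As a minimal sketch of how the statistic is formed (pure Python, without the tie-correction factor that statistics packages apply, and with hypothetical ratings rather than the real data), the chi-square value is computed from per-participant ranks:

```python
def friedman_chi_square(*conditions):
    """Friedman chi-square statistic for k related samples.

    Each argument is one condition, with values in the same participant
    order; ties within a participant's row receive average ranks. The
    p-value would be read from a chi-square table with df = k - 1.
    """
    k = len(conditions)
    n = len(conditions[0])
    rank_sums = [0.0] * k
    for row in zip(*conditions):
        # Rank this participant's k scores (1 = lowest), averaging ties.
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg_rank = (i + j) / 2 + 1  # 1-based average of tied positions
            for idx in order[i:j + 1]:
                ranks[idx] = avg_rank
            i = j + 1
        for c in range(k):
            rank_sums[c] += ranks[c]
    # Friedman's formula (no tie-correction factor).
    return 12 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3 * n * (k + 1)


# Hypothetical ratings for one questionnaire item (NOT the real data,
# which is only available in summarized form above): five participants
# rated the item once in each of the three evaluations.
ev1 = [3, 2, 4, 3, 2]
ev2 = [4, 4, 5, 3, 4]
ev3 = [4, 3, 5, 4, 3]
chi2 = friedman_chi_square(ev1, ev2, ev3)
print(f"Chi-Square = {chi2:.3f}, df = {3 - 1}")
```

For df = 2 the critical value at the 0,05 level is 5,991, which is consistent with the significant results above (e.g. Q4 with 15,519 and Q27 with 6,377) and the non-significant ones (e.g. Q1 with 4,000).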
