
Comparison of the Auditory and Visual Modalities of Extrinsic Feedback Components in a Virtual Self-Exploration Environment

Michael W.H. Chen

10411941

Master Thesis
Credits: 18 EC
Master Information Studies, Human-Centered Multimedia
University of Amsterdam, Faculty of Science

Supervisor: Dr. Frank Nack
Faculty of Science, University of Amsterdam
Science Park 904, 1098 XH Amsterdam

July 8th, 2016


ABSTRACT

This paper presents research that examines whether the user experience of virtual self-exploration environments can be enhanced by adding extrinsic feedback components in the auditory and visual modalities. An experiment was conducted with 40 participants, who experienced different versions of the Your Worst Nightmare installation [5], followed by a questionnaire to evaluate the presented feedback components. It can be concluded that the addition of feedback components in general enhances the user experience, as the simulation was found more immersive and pleasant. Regarding which modality is more suitable for virtual self-exploration environments, a combination of the auditory and visual modalities produced the best user experience, as 16 of the 20 participants considered the ideal version the best. In turn, the settings of this ideal version could presumably be applied to other virtual self-exploration applications similar to the YWN installation.

1. INTRODUCTION AND MOTIVATION

Due to the growth of technology, increasing numbers of people are becoming familiar with the concept of virtual reality. Virtual reality (VR) is, in essence, creating the illusion of presence in an environment that does not really exist by sending information to various human senses (e.g. sight and hearing) through technology [19]. Nowadays, VR plays a role in various fields. Besides entertainment (e.g. gaming), VR likewise plays a role in more serious fields, such as education, training, and therapy [15]. One of the main benefits of incorporating VR simulations in non-entertainment fields is that users can train, learn and enhance task performance in a safe and controlled environment with little or no risk [17]. Moreover, the effectiveness of VR simulations is likely enhanced when the VR experience is pleasant, immersive and engaging [21, 3, 13].

Apart from the VR fields mentioned above, VR simulations could likewise be used for exploration purposes. Via such VR exploration, users could, for example, get to know themselves better. This particular field shares some characteristics of browsing, which Bates defines as: ‘Browsing is the activity of engaging in a series of glimpses, each of which exposes the browser to objects of potential interest; depending on interest, the browser may or may not examine more closely one or more of the (physical or represented) objects; this examination, depending on interest, may or may not lead the browser to (physically or conceptually) acquire the object.’ [2]. Virtual self-exploration environments can then be seen as a 3D version of a browsing environment, which is a novel field in VR systems.

To make virtual self-exploration environments work in practice, they should at least be appealing to the public. In other words, VR simulations need to be enjoyable, so that people will continue to experience them. The entertainment field is a great example where enjoyment is essential. According to Sweetser et al. [18], in order to create enjoyment in games, a game should comply with the following criteria: concentration, challenge, player skills, control, clear goals, feedback, immersion, and social interaction. These criteria are likewise relevant for virtual self-exploration systems. Obviously, the whole exploration needs to be a pleasant and immersive experience: immersive in the sense that users should become less aware of their surroundings during the VR exploration, and should feel emotionally involved in the VR world. Therefore, feedback was the main focus of this research to make virtual self-exploration environments more appealing.

Feedback is ‘sensory information that stimulates one or more of the five senses (vision, audition, touch, smell, taste) or proprioceptive and vestibular information that indicates the extent to which the individual’s desired effect was accomplished’ [12]. Feedback can be divided into two main categories: intrinsic feedback is that which the individual senses directly from acting on the world (i.e. implicit), whereas extrinsic feedback is that which is sensed from an external source that indicates the consequences of a person’s action (i.e. explicit) [12]. This research focused on extrinsic feedback.

For a pleasant and engaging VR experience, users must receive appropriate feedback at appropriate times (e.g. feedback on progress towards a goal, feedback on actions; users should always know their status or score) [18]. Without it, users are left confused, distracted or frustrated, as it will not be clear what is happening or what will happen next, which in turn may affect the whole virtual experience in a negative manner.

Due to the lack of research on virtual self-exploration environments, the effects of extrinsic feedback in them are unknown. An example of a virtual self-exploration environment is the Your Worst Nightmare (YWN) installation [5], which was used for this research.

The remaining parts of this paper proceed as follows: section 2 highlights other research related to virtual self-exploration environments and feedback. Section 3 describes the research question that this paper attempts to answer. Following that, section 4 describes the methodological approach of this research, including the creation of the feedback components and the conducted experiment. After that, section 5 displays the results of the conducted experiment. The interpretation of the results is described in section 6, including the limitations of this research. Lastly, section 7 consists of concluding remarks, ending with future research directions.

2. RELATED WORK

2.1 Your Worst Nightmare (YWN) [5]

The YWN installation is a collaborative project between the Waag Society 1 and the University of Amsterdam 2. The YWN project is inspired by Orwell’s ‘1984’ [14]. The antagonists in George Orwell’s book used a torture chamber, wherein prisoners were exposed to their worst nightmares, anxieties and phobias. The YWN project simulates a similar concept by creating a virtual environment wherein users are exposed to audiovisual material to maximise their arousal levels. A user must select the most frightening or disturbing image. Based on the user’s biofeedback measured during the simulation, the system displays more frightening and disturbing images for the user. The ultimate goal of this system is to determine the imagery that has the greatest frightening or disturbing impact on the user. In other words, users experiencing the YWN simulation discover their fears by browsing within a 3D environment, implying that the YWN installation is an example of a virtual self-exploration system.

1 https://www.waag.org/nl
2

2.2 Feedback

As previously mentioned, feedback is an essential component within VR applications. Most research has focused on unimodal and multimodal extrinsic feedback.

2.2.1 Multimodal Feedback

Zhang et al. [20] conducted an experiment in which they evaluated auditory and visual feedback on task performance in a virtual assembly environment. Users needed to perform two tasks: a peg-in-a-hole assembly task (i.e. one pick-release operation) and a Sener electronic box assembly task (i.e. multiple pick-release operations). The results indicated that task performance differed across the four feedback conditions (i.e. none, auditory, visual and both), and that the introduction of auditory and/or visual feedback did improve assembly task performance. Moreover, participants preferred the integrated feedback (both auditory and visual feedback) over the other feedback conditions.

2.2.2 Unimodal Feedback

Two examples of unimodal feedback in a virtual environment are presented in [16] and [1]. In [16], continuous auditory feedback is presented in robot-assisted neurorehabilitation of post-stroke patients. The term auditory feedback in their context denotes an audio signal that is automatically generated and played back to the user in response to an action or an internal state of the system. Based on the conducted experiments, they concluded that continuous auditory feedback is likely to produce positive effects on patient engagement and effort during robot-assisted movement training.

In [1], auditory feedback was used as a collision notification to provide collision avoidance feedback for users. Within a virtual labyrinth, users were continuously exposed to spatial sound in the background during their navigation through the labyrinth. As a result, providing the auditory feedback led to fewer wall collisions and increased awareness of the walls surrounding the user. However, the feedback did not increase the realism of the experience, mainly because the sound did not fit the environment.

2.2.3 Feedback in Games

In the world of gaming, auditory and visual feedback are key components, as they can function as tools of interactivity and can support immersion. Darzentas et al. [8] explored the potential of auditory and visual output in games, with the purpose of designing and implementing a digital entertainment experience that achieves a high level of immersion and engagement, and of investigating the hypothesis that unimodal and multimodal digital experiences create different levels of immersive response. From the conducted experiments, it can be concluded that visual-only experiences are comparable to traditional multimodal interaction in terms of difficulty, but struggle to create immersion. Audio-only experiences, in contrast, are highly immersive, present a greater challenge, and are generally preferred by users. These findings emphasise the importance of auditory feedback for immersion: auditory feedback seems to have a greater influence on the game experience than visual feedback.

2.2.4 Biofeedback

Biofeedback is the process of gaining awareness of physiological functions using instruments, such as sensors. Some examples of biofeedback are brainwaves, muscle tone, skin conductance, heart rate and pain. In turn, the instruments can provide information about these physiological functions, with the purpose of manipulating them at will [9]. Kuikkaniemi et al. [10] investigated the influence of implicit and explicit biofeedback in a First-Person Shooter (FPS) game. According to the conducted experiments, implicit biofeedback does not produce significant differences in player experience in an FPS game. Explicit biofeedback, on the other hand, resulted in players being more immersed and positively affected, increasing the quality of the game experience.

3. RESEARCH QUESTION

The purpose of this research is to discover how the user experience of virtual self-exploration systems can be enhanced in terms of auditory and visual extrinsic feedback. Hence, the research question is as follows:

Should extrinsic feedback components within virtual self-exploration environments be presented in the auditory modality, the visual modality, or a combination of both, to enhance the user experience?

By answering this question, appropriate modalities for various extrinsic feedback components within virtual self-exploration applications could be found that may enhance the user experience.

According to Law et al. [11], there is no clear definition of ‘user experience’ (UX), as the concept of UX is dynamic, context-dependent and subjective, being based on the potential benefits users may gain from using a product/service. Therefore, given the context of this research, user experience was mainly based on the immersiveness, pleasantness and overall satisfaction of the VR experience.

In addition to the research question, it is interesting to examine whether the addition of feedback components in general enhances the user experience, disregarding the possible modalities they could be presented in. Therefore, the sub-question ‘Does the addition of feedback components in virtual self-exploration environments enhance the user experience?’ was likewise addressed.

4. METHODOLOGY

This section describes the methodological approach taken during this research, which can be divided into two main parts. Firstly, the various extrinsic feedback components needed to be created and integrated into the YWN environment (Section 4.2). Subsequently, user tests needed to be conducted to examine which form of each feedback component users prefer in general (Sections 4.3 and 4.4).


4.1 YWN Virtual Environments

For this research, the virtual environment of the YWN installation was upgraded, both externally (i.e. textures, lighting, colours, structure) and internally (i.e. addition of extrinsic feedback components). The virtual environment consists of two main parts: the main room and the final room. The main room is the starting position of users at the beginning of the simulation. A total of four rooms are connected to this main room: three image rooms and one exit room. The exit room is located on the right side, marked with an exit sign. If users become unwell during the simulation, for instance, they can use the exit room to leave prematurely without completing the simulation (Figure 1).

Figure 1: The main room of the YWN environment. Three image rooms are connected at the front of the main room and the exit room on the right side.

In each image room, one image is presented, which is initially blurred but becomes visible when users enter the corresponding room. After they select the most disturbing/frightening image (i.e. by walking through the image), they are teleported back to the main room for the next round. Based on the users’ choice, the time spent viewing each image, and the corresponding heart rate, the images in the consecutive rounds become more disturbing/frightening.

After the image selection phase is completed, users are teleported to the final room. This room presents users with the image selections they made, which principally represent the anxiety and fears derived by the YWN system. In this room, users can observe the images again. There is also an exit room connected to the end of the final room, which is the ending point of the simulation (Figure 2).

Figure 2: The final room of the YWN environment. In this case, five images were presented that represent the anxiety of the user as derived by the YWN system.

4.2 Extrinsic Feedback Components

Various auditory and visual extrinsic feedback components were integrated into the YWN installation. Given the framework of the YWN installation, it was decided that a total of five different extrinsic feedback components would be added to the current system: a Timer (T), a Remaining Rounds Indicator (RRI), Exit Room Messages (ERM), an Image Selection Confirmation (ISC), and a Heart Rate Indicator (HRI). The auditory and visual modalities of every feedback component were created to be as similar as possible for fairer evaluations. For this research, four general categories were derived to classify the created feedback components: ‘state’, ‘information transfer’, ‘action confirmation’, and ‘biofeedback’, in the hope of creating a framework that could likewise be applied to other VR systems.

4.2.1 State

The ‘state’ category denotes feedback that provides users with information about the state they are in during the simulation. The Timer and the Remaining Rounds Indicator fit in this category, as both feedback components provide users with information regarding the overall state of the simulation.

As users experiencing the YWN simulation need to select one image in each round, a timer has been added. This prevents users from staying in one round for a long period and ensures that users are forced to select an image, making progression in the simulation inevitable. The Timer displays the remaining time to select an image. The timer has been set to 45 seconds, which gives users enough time to view each image while progressing at a decent tempo. When the time is up, a new round is loaded with different images. The auditory form of the Timer is a male synthesised voice with an American accent 3. As the Timer is set to 45 seconds, the auditory form consists of four different audio files. At the beginning of each round, users hear ‘45 seconds left’, followed by ‘30 seconds left’ when the timer hits 30, and ‘15 seconds left’ at the 15-second mark. Lastly, when the timer reaches the 5-second mark, the countdown is spoken down to zero, ending with ‘Time’s up. Loading new images.’. The visual form is a timer represented in a GUI (Graphical User Interface), displayed in Figure 3.
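The per-second behaviour of the Timer described above can be sketched roughly as follows. This is an illustrative sketch only: the function names (`play_voice`, `gui_text`, `load_new_round`) are hypothetical placeholders, not the installation's actual code.

```python
# Rough sketch of the Timer feedback logic: voice cues at the 45/30/15
# second marks, a spoken countdown from 5 seconds, and a GUI string for
# the visual form. Event names are illustrative placeholders.

ROUND_SECONDS = 45
VOICE_MARKS = {45, 30, 15}  # "x seconds left" audio clips

def timer_events(elapsed):
    """Return the feedback events to fire at a given whole second."""
    remaining = ROUND_SECONDS - elapsed
    events = [f"gui_text({max(remaining, 0)})"]  # visual form (Figure 3)
    if remaining in VOICE_MARKS:
        events.append(f"play_voice('{remaining} seconds left')")
    elif 1 <= remaining <= 5:
        events.append(f"play_voice('{remaining}')")  # spoken countdown
    elif remaining == 0:
        events.append('play_voice("Time\'s up. Loading new images.")')
        events.append("load_new_round()")
    return events
```

In this sketch the auditory and visual forms are driven by the same clock, which reflects the stated aim of keeping the two modalities as similar as possible.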

Figure 3: The visual form of the Timer feedback component.

Furthermore, it may be quite frustrating for users if they do not know how much longer the simulation will take or what their progression in the simulation is. The Remaining Rounds Indicator notifies users of the number of rounds remaining before the simulation ends. Both the auditory and visual forms of the Remaining Rounds Indicator contain the words ‘x rounds remaining’, where x denotes a positive integer. The same voice as for the Timer was used for the auditory form. The indicator activates just before users re-enter the main room for the next round. Users hear the voice and see the words appear on the screen of the Head Mounted Display (HMD).



4.2.2 Information Transfer

The ‘information transfer’ category consists of feedback components that transfer information to the user. After viewing and/or hearing the information, users know what to do or what to expect.

The Exit Room Messages fall under this category, as the messages provide users with information regarding exiting the simulation. Without a message, users could be confused about what to do when they enter the exit room. Hence, to prevent this confusion, exit room messages were added to inform users what to do when they want to exit the simulation. Again, the auditory form uses the same voice as the previous feedback components, reading the exit messages out loud. The visual form presents the same exit messages as text on a plane. There are two different exit messages. One is for the exit room located next to the main hall; this exit room is used by people who want to exit the simulation prematurely, for example if they are not feeling well or are for some other reason unable to complete the simulation (Figure 4, left). The other exit message is for the exit room located in the final room, which is principally the final destination of the simulation (Figure 4, right).

Figure 4: The visual form of the exit room message located in the main hall (left) and final room (right).

4.2.3 Action Confirmation

There is likewise feedback as a response to actions performed by users. The Image Selection Confirmation component falls under this category. As users in the YWN simulation need to select images, feedback was presented when they selected an image. The purpose of this feedback is to confirm the selection of images, making users understand that the choice they made was saved.

The auditory form is a sound cue that confirms the image selection. Another option would have been a voice telling users that an image has been selected (e.g. ”The image has been selected.”). However, a sound cue was chosen for this case: as the Timer likewise uses a voice, there was a chance that a voice for the Image Selection Confirmation component would be mixed up with the voice of the Timer, which could interfere with what users hear. A sound cue, on the other hand, would still be clearly heard.

In turn, a similar form for the visual modality was needed for a fair evaluation of both modalities. If the visual form were something like a GUI pop-up with text, such as ”The image has been selected”, the form would not be comparable to the sound cue, as it contains words. Therefore, a checkmark (i.e. a non-textual representation) was used, which appears on the screen of the HMD when an image has been selected (Figure 5).

Figure 5: A green checkmark that confirms the image selection by users.

4.2.4 Biofeedback

The last category is ‘Biofeedback’. The Heart Rate Indicator falls under this category, as the heart rate is the biofeedback measured during the YWN simulation. This feedback component indicates the heart rate of users in real time during the playthrough of the simulation. The auditory form is a heartbeat sound whose intensity changes dynamically based on the measured heart rate of the user. This component was played during the whole simulation as a background sound, meaning that users heard their own heartbeat throughout the simulation. The visual form was displayed in a GUI as beats per minute (BPM) (Figure 6).
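One way the dynamic intensity of the heartbeat sound could be driven by the measured BPM is sketched below. The BPM clamping range and the linear volume ramp are assumptions made for illustration, not values taken from the installation.

```python
# Illustrative mapping from the measured heart rate to the heartbeat
# sound's playback parameters. The 50-150 BPM range and the linear
# volume ramp are assumptions for this sketch.

def heartbeat_params(bpm, min_bpm=50, max_bpm=150):
    """Return (seconds between beats, volume in 0.3..1.0) for a BPM."""
    bpm = max(min_bpm, min(bpm, max_bpm))  # clamp sensor outliers
    interval_s = 60.0 / bpm                # beat spacing tracks the pulse
    # louder as the heart rate rises, scaled linearly to 0.3..1.0
    volume = 0.3 + 0.7 * (bpm - min_bpm) / (max_bpm - min_bpm)
    return interval_s, volume
```

Coupling both the beat interval and the volume to the sensor reading is one plausible way to realise the "intensity changes dynamically" behaviour described above.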

Figure 6: The visual form of the Heart Rate Indicator feedback component.

4.3 Experiment Phase 1

4.3.1 Goal

The conducted experiment consisted of two phases. Phase 1 served two purposes: firstly, to examine the sub-question of whether the addition of feedback components in virtual self-exploration environments could enhance the user experience (Section 6.1); secondly, to examine what the most preferred modality for each feedback component was (i.e. none, auditory, visual or auditory-visual). In turn, an ‘ideal’ version would be created that consists of the feedback components in their most preferred modalities (Section 4.3.4).

4.3.2 Population

Twenty people participated in phase 1 of the experiment, sixteen of whom were male and four female. Ages ranged from 21 to 29, with an average of 23.35.

4.3.3 Procedure

For phase 1 of the experiment, four different versions of the YWN simulation were created:

1. A version with no extrinsic feedback (i.e. Baseline).

2. A version where each feedback component is presented in the auditory modality.

3. A version where each feedback component is presented in the visual modality.


4. A version where both modalities were provided simultaneously (auditory-visual).

Participants were divided into two user groups. One group experienced the YWN simulations in the order <1, 2, 3, 4>, while the other experienced them in the order <1, 3, 2, 4>, to counteract any bias caused by the order of experiencing the simulations.

Phase 1 proceeded as follows: first, users experienced one of the four VR environments. Then, after completing the simulation, users filled in a questionnaire to evaluate the experienced simulation. This routine was repeated until all four versions had been experienced and evaluated. The total duration of the experiment was approximately 30 minutes.

Each YWN version was evaluated on the following aspects: immersiveness; the overall virtual experience; the degree of usefulness, distraction and pleasantness of each feedback component; ratings/grades of the various feedback components and the simulations; rankings of the experienced versions; and some general/open questions, such as what their ideal version would look like. The structure of the questionnaire is described in appendix A.

Before starting the VR experience, users needed to put on all the necessary hardware:

• A heartbeat sensor from BioSignalPlux 4, which monitors the user’s heart rate during the entire VR experience.

• An HMD (i.e. Oculus Rift DK2 5), so users can view the VR environment while playing.

• An Xbox 360 controller, so users can move in the VR environment.

4.3.4 Ideal Version Phase 1

In the questionnaire, participants were asked what their ideal version of the YWN simulation would look like, given the various feedback components and their possible forms. Table 1 displays the answers of the 20 participants in phase 1.

                 T         RRI       ERM       ISC       HRI
No Feedback      10% (2)   0% (0)    5% (1)    20% (4)   20% (4)
Auditory         20% (4)   55% (11)  15% (3)   25% (5)   20% (4)
Visual           45% (9)   35% (7)   50% (10)  15% (3)   20% (4)
Auditory-Visual  25% (5)   10% (2)   30% (6)   40% (8)   40% (8)

Table 1: The frequency table of the most preferred form for each feedback component, based on 20 participants.
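Deriving the ideal version from Table 1 amounts to taking, for each feedback component, the modality with the highest vote count. The following sketch reproduces that step with the counts from Table 1:

```python
# Vote counts per feedback component, taken from Table 1.
votes = {
    "T":   {"None": 2, "Auditory": 4,  "Visual": 9,  "Auditory-Visual": 5},
    "RRI": {"None": 0, "Auditory": 11, "Visual": 7,  "Auditory-Visual": 2},
    "ERM": {"None": 1, "Auditory": 3,  "Visual": 10, "Auditory-Visual": 6},
    "ISC": {"None": 4, "Auditory": 5,  "Visual": 3,  "Auditory-Visual": 8},
    "HRI": {"None": 4, "Auditory": 4,  "Visual": 4,  "Auditory-Visual": 8},
}

# The ideal version: the most-voted modality per component.
ideal = {component: max(counts, key=counts.get)
         for component, counts in votes.items()}
```

Running this reproduces the component-modality pairing reported for the ideal version in phase 1.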

Nine participants preferred the Timer (T) in visual form, because a visual timer was found more intuitive than a timer in auditory form; the auditory form was therefore deemed more distracting, and having both modalities simultaneously was ”too much” (P3 (Participant 3)), ”redundant” (P8, P9) and ”overkill” (P14). Eleven participants preferred the Remaining Rounds Indicator (RRI) in auditory form, because having both modalities was redundant, and the way the visual form was presented was distracting: ”the appearing text is annoying” (P5); ”Seeing the text appearing in the middle of the screen is distracting. Hearing the text is enough.” (P17). Regarding the Exit Room Messages (ERM), ten participants preferred them visually, because they found the visual modality alone sufficient: ”the voice does not have an added value with the visual one.” (P9). In addition, one participant mentioned that he could read faster than the voice, making it not useful: ”I can read faster than the person is telling the exit room message.” (P11). For the Image Selection Confirmation (ISC), eight participants preferred the auditory-visual modality, as they found the combination of the two modalities pleasant to experience and less plain/boring than the two modalities separately. Lastly, the Heart Rate Indicator (HRI) was most preferred in auditory-visual form, because the majority found that the auditory part made the simulation more immersive, while the visual part provides the exact BPM, which would otherwise be missing in the auditory modality alone.

4 http://biosignalsplux.com/index.php/en/
5 https://www.oculus.com/en-us/dk2/

To summarise, the ideal version according to phase 1 is as follows:

• Timer ⇒ Visual;

• Remaining Rounds Indicator ⇒ Auditory;

• Exit Room Messages ⇒ Visual;

• Image Selection Confirmation ⇒ Auditory-Visual;

• Heart Rate Indicator ⇒ Auditory-Visual

4.4 Experiment Phase 2

4.4.1 Goal

The main goal of phase 2 of the experiment was to evaluate the ideal version obtained from phase 1 (Section 4.3.4), and to examine whether the ideal version was truly the best version by comparing its results with the results of the other versions in phase 1.

4.4.2 Population

A total of twenty people participated in phase 2 of the experiment, sixteen of whom were male and four female. Ages ranged from 20 to 43, with an average of 25.9. Participants in phase 2 were different individuals from those in phase 1, to counteract bias.

4.4.3 Procedure

In phase 2 of the experiment, participants were asked to experience the baseline and the ideal version. Phase 2 proceeded as follows: first, participants experienced the baseline version. Then, after completing the simulation, they filled in a questionnaire to evaluate the baseline. After that, participants experienced the ideal version, again followed by a questionnaire. The total duration of the experiment was approximately 15 minutes. The same questionnaire was used as in phase 1, so that all versions could be fairly compared. The results are displayed in section 5 and further elaborated in section 6.


5. RESULTS

The data can be divided into ordinal and interval types. For the ordinal data, the median and the mode were used to determine the central tendency; for the interval data, the mean was used. To determine the statistical dispersion of the data, the range and inter-quartile range (IQR) were used for ordinal data, and the standard deviation for interval data.
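The descriptive statistics chosen above can be computed with the Python standard library, as the following sketch shows. The sample scores are invented for illustration; they are not data from the experiment.

```python
# Central tendency and dispersion as described above: median/mode and
# range/IQR for ordinal Likert scores, mean and standard deviation for
# interval ratings. The sample data below are invented.
import statistics

likert = [2, 3, 3, 4, 4, 4, 4, 5]    # ordinal: 5-point Likert scores
grades = [6.0, 6.5, 7.0, 7.5, 8.0]   # interval: grades out of 10

# ordinal data
median, mode = statistics.median(likert), statistics.mode(likert)
data_range = max(likert) - min(likert)
q1, _, q3 = statistics.quantiles(likert, n=4)  # quartiles
iqr = q3 - q1

# interval data
mean, sd = statistics.mean(grades), statistics.stdev(grades)
```

Note that `statistics.quantiles` defaults to the exclusive interpolation method; other quartile conventions can yield slightly different IQR values on small samples.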

Furthermore, several normality checks were performed on the data to decide which statistical test to use. The Shapiro-Wilk test was used to check whether the questionnaire data were normally distributed. P-values lower than the significance level of 0.05 (i.e. a confidence level of 95%) mean that the data deviate significantly from a normal distribution. If that is the case, the Wilcoxon Signed-Rank test is used for both ordinal and interval data. If the data are normally distributed, the Wilcoxon Signed-Rank test is still used for the ordinal data, but the interval data are tested with the Student t-test.
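The test-selection rule described above can be sketched with SciPy as follows. The helper function and the matched-pair samples below are illustrative, not part of the thesis's actual analysis scripts.

```python
# A minimal sketch of the test-selection rule: Wilcoxon Signed-Rank for
# ordinal data or non-normal samples, paired Student t-test only for
# normally distributed interval data. `a` and `b` are matched pairs.
from scipy import stats

def compare_paired(a, b, ordinal, alpha=0.05):
    """Pick and run the paired test per the Shapiro-Wilk normality check."""
    normal = (stats.shapiro(a).pvalue >= alpha
              and stats.shapiro(b).pvalue >= alpha)
    if ordinal or not normal:
        result = stats.wilcoxon(a, b)  # paired, non-parametric
        return "wilcoxon", result.pvalue
    result = stats.ttest_rel(a, b)     # paired Student t-test
    return "t-test", result.pvalue
```

For ordinal Likert scores the Wilcoxon test is applied regardless of the normality check, matching the procedure described above.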

For the analysis, the immersiveness and overall VR experience data of each version were compared with the baseline to examine whether the results achieved by each version were significantly different. As all data were gathered from ‘matched pairs’ (i.e. the same subjects are present in both groups), and most of the data are not normally distributed (Tables 26, 27 in appendix F.2), the Wilcoxon Signed-Rank test was used. At a confidence level of 95%, the critical Z score values are -1.96 and +1.96, meaning that if a Z score lies between -1.96 and +1.96, the P-value will likewise be greater than the significance level of 0.05. This would mean that no significant difference was found between the two samples.

For the ratings of the overall virtual experience (Section 6.5), the Student t-test was used, as the ratings are interval data and most of the data are normally distributed (Table 28). Again a confidence level of 95% was used, which means that the T score should be greater than +2.093 or smaller than -2.093, and the P-value smaller than 0.05, to indicate a significant difference between the two samples (Table 6).
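The critical values quoted above follow from the test distributions: ±1.96 is the two-tailed z bound at alpha = 0.05, and ±2.093 is the corresponding t bound for 20 paired samples (df = 19). They can be recovered with SciPy:

```python
# Two-tailed critical values at a 95% confidence level (alpha = 0.05).
from scipy import stats

z_crit = stats.norm.ppf(0.975)       # standard normal, for Wilcoxon Z
t_crit = stats.t.ppf(0.975, df=19)   # Student t with n - 1 = 19 df
```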

5.1 Results of Experiment

The immersiveness part of the questionnaire was represented on a 5-point Likert scale with the scores ‘Not at all’ (1) to ‘Very much so’/‘A lot’ (5). The questions about the overall virtual experience, usefulness, distraction and pleasantness were likewise represented on a 5-point Likert scale with the scores ‘Strongly Disagree’ (1) to ‘Strongly Agree’ (5) (Appendix A).

To illustrate, the measures of central tendency and the statistical dispersion of the immersiveness data are displayed in tables 2 and 3. The other parts (i.e. overall VR experience; degree of usefulness, distraction and pleasantness; and ratings) are displayed in appendices C and D. In addition, frequency tables, histograms, normality checks, and agree-disagree tables are displayed in the appendix. The results of the statistical tests are displayed in tables 4, 5 and 6. Lastly, the rankings given by the participants, in percentages, are displayed in table 7.

The abbreviations in the tables are as follows: Timer (T), Remaining Rounds Indicator (RRI), Exit Room Messages (ERM), Image Selection Confirmation (ISC), Heart Rate Indicator (HRI), Virtual Reality Experience (VRExp), Baseline Phase 1 (B), Auditory Version (A), Visual Version (V), Auditory-Visual Version (AV), Baseline Phase 2 (B2), Ideal Version (I), Question (Q).

          Median                      Mode
Question  B    A    V  AV  B2   I     B  A  V  AV  B2  I
1         3    4    4  4   4    4     4  4  4  4   4   4
2         3    3    3  2   3    2     3  2  3  2   3   2
3         2    2    2  2   1    1.5   1  2  2  2   1   1
4         3    3.5  3  4   3    4     2  4  3  4   3   4
5         3.5  4    4  4   2.5  4     4  4  4  4   3   4

Table 2: The median and mode of the scores of the Immersiveness part.

          Range                 Inter-Quartile Range
Question  B  A  V  AV  B2  I    B     A  V    AV    B2    I
1         3  3  3  4   3   2    1.25  2  1    1     1.25  1
2         4  4  3  4   3   4    1     2  1    1     1     1
3         4  2  2  3   3   2    2     1  1    1     1     1
4         2  3  3  2   3   4    2     1  1    1     2     0.25
5         4  3  3  4   4   2    3     2  0.5  0.25  1     1

Table 3: The range and IQR of the scores of the Immersiveness part.

              A-B      V-B      AV-B     I-B2
Q.1  Z Score  -2.296   -2.029   -1.591*  -1.814*
     P-Value  0.022    0.042    0.112*   0.070*
Q.2  Z Score  -2.138   -0.712*  -2.536   -2.770
     P-Value  0.033    0.476*   0.011    0.006
Q.3  Z Score  -1.767*  -1.467*  -1.575*  -0.368*
     P-Value  0.077*   0.142*   0.115*   0.713*
Q.4  Z Score  -1.502*  -1.431*  -1.564*  -2.961
     P-Value  0.133*   0.152*   0.118*   0.003
Q.5  Z Score  -2.463   -2.858   -2.032   -3.332
     P-Value  0.014    0.004    0.042    0.001

Table 4: The Z scores and the P-values of the Immersiveness data. An asterisk (*) denotes no significant differences.

              A-B      V-B      AV-B     I-B2
Q.1  Z Score  -1.979   -1.554*  -1.824*  -2.517
     P-Value  0.048    0.120*   0.068*   0.012
Q.2  Z Score  -2.588   -1.294*  -1.750*  -1.283*
     P-Value  0.010    0.196*   0.080*   0.200*
Q.3  Z Score  -0.119*  -1.731*  -2.445   -2.449
     P-Value  0.905*   0.083*   0.014    0.014

Table 5: The Z scores and the P-values of the Overall VR Experience data. An asterisk (*) denotes no significant differences.

          A-B      V-B      AV-B    I-B2
T Score   -1.435*  -1.412*  -2.483  -2.814
P-Value   0.167*   0.174*   0.023   0.011

Table 6: The results of the Student T test of the Overall VR Experience grades compared with the baseline. An asterisk (*) denotes no significant differences.


                 Rank 1  Rank 2  Rank 3  Rank 4
Baseline         5%      5%      15%     75%
Auditory         30%     35%     15%     20%
Visual           15%     25%     55%     5%
Auditory-Visual  50%     35%     15%     0%

Table 7: The rankings given to each version in percentage by 20 participants.

6. DISCUSSION

6.1 Rankings

In phase 1 of the experiment, participants were asked to rank the simulations from rank 1 (the best version) to rank 4 (the worst version). Revisiting the sub-question, which was stated as: 'Does the addition of feedback components in virtual self-exploration environments enhance the user experience?', table 7 indicates that the baseline was the least preferred version (rank 4), implying that the addition of feedback components in general enhances the user experience of the simulations. The auditory-visual version was the most preferred, as ten people ranked that version as number 1. The auditory version was most often ranked 2 and the visual version was most often ranked 3.

The arguments given by the participants for their rankings are as follows. Eight participants mentioned that the baseline was either too plain, too boring or less interesting than the other versions (e.g. P12: "Not much happened in this version and it was not really interesting.", P15: "The no feedback version was boring and plain."). Moreover, four participants pointed out that the baseline provided no relevant information, which was thought annoying (e.g. P17: "The baseline version provided barely any information, which I thought was quite annoying, because for example, I wanted to know how long the simulation would last."). However, one participant preferred the baseline over the others, because the provided feedback components were too distracting and too dominant on screen (P7: "The visual components were very dominant on my screen, which distracted me a lot.").

For the auditory version, three participants mentioned that the auditory feedback components were distracting (e.g. P18: "The auditory version was way more distracting compared to the other versions."). Moreover, two participants claimed that some of the auditory components were useless, such as the heart rate indicator, as it does not provide the information precisely, i.e. hearing your heart rate will not tell you exactly what your BPM is (P7: "The auditory heart rate indicator causes only distractions and did not have an added value."). Then again, ten participants were positive about the auditory version, as they felt that the audio was intuitive and, more importantly, contributed to the immersiveness, making the simulation overall more pleasant to experience (e.g. P3: "The auditory components were less distracting and felt more intuitive.", P5: "Audio adds more to the immersion than the visual components.").

For the visual version, four participants found the visual components distracting and likewise too dominant on screen, resulting in less immersion compared to the auditory and auditory-visual versions (e.g. P7, P12: "The visual version was too distracting because of the many components on screen."). The plus side was that the visual components display more precise information than the auditory version (e.g. the HRI displays the BPM precisely), according to four participants (e.g. P10: "I found that the visual components contain more information.").

Lastly, eight participants mentioned that the auditory-visual version was the most complete and thus the most pleasant to experience. Two participants mentioned that more feedback resulted in better feedback overall (e.g. P6: "More feedback is the best feedback."). Furthermore, four participants mentioned that the combination of the auditory and visual modalities gave the feeling of a real simulation or a game (e.g. P19: "The last version was the best experience as it reminds me of some game."). Still, four participants mentioned that for some components, it was redundant to have both the auditory and visual form presented simultaneously (e.g. P9: "The audio was more distracting but I liked the visual one. Together was redundant.").

6.2 Immersiveness

Based on the results of the experiment, several findings can be pointed out regarding the immersiveness (Tables 2 and 3). Firstly, the addition of feedback components enhances the attention hold of the simulation, as the medians and modes were higher and the dispersion in the data was lower for all versions compared with the baseline. The ideal version seems to have the best attention hold, as more participants gave it a high score (i.e. median = 4, mode = 4, range = 2, IQR = 1). Moreover, participants were less aware of being in the real world in the auditory-visual version and in the ideal version, with measures significantly different compared to the baseline (Table 4). Additionally, the addition of the various feedback components made the sense of being in the virtual world stronger, with the ideal version achieving the highest degree of immersiveness, as only the scores of the ideal version were significantly different compared with the baseline (Table 4). Furthermore, the results indicate that the feedback components in general gave users the feeling that they were making progress towards the end of the simulation. However, it seems that the addition of feedback components did not influence the urge to stop playing and see what was happening in the real world, regardless of the modalities, as all measures were similar and not significantly different compared with the baseline.

An important aspect to notice is that the addition of auditory components had a great influence on the immersiveness in general. This suggests that sound has a strong influence on immersiveness in virtual environments, just as in movies and video games [6, 7, 8]. In terms of gender and age, no significant differences were found in the data. In general, the ideal and the auditory-visual versions scored the best on the immersiveness aspect.

6.3 Overall VR Experience

Regarding the overall VR experience (tables 10 and 15), the enjoyment of experiencing the simulation increased when the feedback components were present, as the baseline achieved lower scores than the other versions. It seems that the ideal version was enjoyed the most (median = 4, mode = 4, range = 3, IQR = 0.5), and additionally, the scores of the ideal version were significantly different compared with the baseline (Z score = -2.517, P-value = 0.012 in Table 5). Furthermore, more participants were likely to experience the simulation again when the feedback components were present; specifically, the auditory-visual version was most preferred to be played again (median = 4, mode = 4, range = 4, IQR = 1.25). Lastly, it seems that participants found the auditory-visual version and the ideal version the most pleasant and complete in terms of the screen usage by the feedback components.

6.4 Degree in Usefulness, Distraction and Pleasantness

Here, the degree in usefulness, distraction and pleasantness of the various modalities of each feedback component is described (Tables 11, 12, 13 in appendix C, and 16, 17, 18 in appendix D).

In terms of usefulness, tables 11 and 16 indicate that the Timer was considered most useful when presented in the auditory-visual modality. For the Remaining Rounds Indicator, more participants gave the auditory modality a high value than the other modalities (i.e. IQR = 0.25), implying that the auditory modality was found the most useful, which was likewise validated by the results of the ideal form (auditory) in phase 2. The Exit Room Messages were found most useful when presented in auditory-visual form (Table 23 in appendix F.1). The ideal form (visual) performed below the auditory-visual form, suggesting that the auditory-visual modality may be more useful than the visual modality. Regarding the Image Selection Confirmation and the Heart Rate Indicator, most people found the auditory-visual form the most useful. These results were again confirmed by the ideal version in phase 2 with similar scores.

Concerning the degree of distraction, tables 12 and 17 imply that the visual modality of the Timer was the least distracting (median = 2, mode = 2, range = 4, IQR = 1.25). The ideal version, which was also visual, showed similar results. For the Remaining Rounds Indicator, the auditory modality was found less distracting than the other modalities, as most measures had lower values. Regarding the Exit Room Messages, it seems a close match between the visual and auditory-visual modalities, as both have similar values for the measurements. However, the ideal form, which was the visual modality, shows even lower scores (median = 1, mode = 1), which may suggest that the exit room messages in visual form were less distracting than in the auditory-visual modality. In regard to the Image Selection Confirmation, the auditory modality was considered the least distracting. For the Heart Rate Indicator, the visual modality achieved the lowest scores (median = 2, mode = 2, range = 3, IQR = 1), implying that the visual form was found the least distracting. Furthermore, the ideal form, which was auditory-visual, was found more distracting, implying that the auditory form of the heart rate indicator is the main cause of distraction.

Regarding the degree of pleasantness (Tables 13 and 18), it seems that the auditory-visual modality was found the most pleasant for the Timer. The Remaining Rounds Indicator was found the most pleasant when presented in auditory form, with high values for the median (i.e. 4) and mode (i.e. 4) and a low range (i.e. 3). Again this outcome was confirmed by the results of the ideal form (auditory) in phase 2. For the Exit Room Messages, the auditory-visual modality was found the most pleasant. The ideal form was visual, but the results do not indicate that the ideal form was more pleasant than the auditory-visual form. This suggests that the Exit Room Messages are found more pleasant in auditory-visual form. The percentage in agreement likewise favours the auditory-visual form over the visual form (Table 25). For the Image Selection Confirmation, the auditory-visual modality seems to be the most pleasant, as the dispersion of the data was lower than for the others (range = 3, IQR = 0.25). The results of the ideal form indicate the same outcome. Lastly, it was not clear which modality was the most pleasant for the Heart Rate Indicator based on the measures of central tendency and dispersion. However, more participants (92.3%, table 25) agreed that the auditory-visual modality was the most pleasant.

6.5 Ratings

Participants were likewise asked to rate/grade all feedback components and the experienced simulations. The median, mode, mean and standard deviation are displayed in tables 14 and 19 in appendices C and D. Based on these two tables, the ideal version has the highest grade for the overall virtual experience (mean = 7.75), followed by the auditory-visual, auditory, and visual versions. The same applies to the feedback components of the ideal version, except for the Exit Room Messages component, for which the auditory-visual version had the highest average (mean = 6.5), suggesting that the Exit Room Messages should be presented in auditory-visual form for a better user experience.

Table 6 displays the T scores and the P-values obtained by comparing the grades of the overall virtual experience of each version with the baseline. The data of the auditory and visual versions compared with the baseline did not indicate a significant difference. On the other hand, the auditory-visual version and the ideal version produced significantly different results, meaning that the auditory-visual version and the ideal version were graded higher in general.

6.6 Ideal Version Phase 2

In phase 2 of the experiment, participants were asked which version they liked the most, the baseline or the ideal version, together with their reasoning. 19 of the 20 participants preferred the ideal version over the baseline. The main reasons were that the addition of the feedback components was found more pleasant, provided more useful information, caused less confusion, led to better immersiveness and a sense of progress, and that the ideal version felt more like a game, while the baseline was plain and boring. One participant preferred the baseline over the ideal version, because the baseline was less distracting and therefore "the focus on the images [was] better" (P13).

When asked whether they thought that the experienced ideal version was truly the best version, 16 of the 20 participants stated that they thought the ideal version was indeed the best. The main reasons were that the provided modalities for each component were sufficient and there was minimal interference between the feedback components. Furthermore, the participants mentioned that the corresponding modalities were intuitive given the type and functionality of the feedback components. Lastly, none of the components were found too dominant, distracting, unclear or confusing, and they affected the overall virtual experience in a positive manner (e.g. "It is more clear what to expect and what is happening" (P18), "the feedback components presented in this way was a pleasant experience." (P20)).

On the other hand, four participants stated that the ideal version was not the best. Their reasons were that too much feedback was given compared to the baseline version; in particular, the auditory part of the heart rate indicator was deemed a bit distracting and stressful: "I don't like to hear my heartbeat since that can be stressful to some extent." (P17). Table 8 displays their ideal versions. Looking at the most preferred form for each feedback component, the Timer should be visual, the Remaining Rounds Indicator should be auditory-visual, the Exit Room Messages are undecided, the Image Selection Confirmation should be auditory-visual, and the Heart Rate Indicator should be removed.

                 T        RRI      ERM      ISC      HRI
No Feedback      25% (1)  0% (0)   25% (1)  25% (1)  50% (2)
Auditory         25% (1)  0% (0)   25% (1)  0% (0)   25% (1)
Visual           50% (2)  25% (1)  25% (1)  25% (1)  25% (1)
Auditory-Visual  0% (0)   75% (3)  25% (1)  50% (2)  0% (0)

Table 8: Ideal versions of the four participants who did not agree that the ideal version was best.

6.7 Limitations

The addition of extrinsic feedback components into the YWN installation seems to have a positive influence on the user experience in general. Furthermore, the results indicate that the ideal version is most likely the best version for the YWN installation, as 80% of the participants agreed upon that. Still, it is difficult to conclude whether the obtained results are trustworthy, for several reasons.

Firstly, a total of 40 participants were involved in this study, divided over two phases, meaning that the evaluation of each version is based on twenty participants (except the baseline version, as it was evaluated in both phases). Due to the small number of participants, the results may not be representative of a greater audience. Furthermore, the results indicated no correlation between the preferences of participants and their gender and age. However, this finding could not be validated, due to the uneven distribution of age and gender among the participants.

Secondly, an important note is that the participants in phase 1 graded the baseline worse than the participants of phase 2 did. Additionally, several feedback components with their modalities were given lower scores in phase 1 compared to the ideal form counterpart in phase 2. This seems to be a positive outcome, but it may be caused by a bias. Users knew that they were going to experience and evaluate the 'ideal' version, which may have made them assume that the version is good. In turn, this could influence the questionnaire results. In addition, the last question in phase 2 (i.e. Is this ideal version truly the best version?) was formulated in a somewhat biased manner. Consequently, this could mean that the evaluation of the ideal version was biased, making it difficult to compare it fairly with the other versions. Another possibility would be that the participants in phase 2 gave higher scores in general than the participants in phase 1.

Lastly, in terms of the created feedback components, the outcome could be different if the feedback components were presented in an alternative form or if the placement of certain feedback components on screen was changed. Several participants mentioned that they found the placement of certain components not optimal, and combined with the pixelated quality of the virtual environment due to the low performance of the Oculus Rift DK2, this may have had some negative impact on the evaluation of the corresponding feedback components.

7. CONCLUSION & FUTURE WORK

This research attempted to examine how the user experience of virtual self-exploration systems could be enhanced by the addition of extrinsic feedback components. The examined modalities for this research were auditory and visual, meaning that each feedback component was presented in auditory form, visual form and auditory-visual form.

For the immersiveness, it can be concluded that the auditory modality enhances the immersiveness to a much higher degree than the visual modality, which was likewise concluded by others [6, 7, 8]. Moreover, the addition of feedback components in general made the experience more pleasant and less boring/plain.

Revisiting the research question, which was stated as: 'Should extrinsic feedback components within virtual self-exploration environments be presented in the auditory or visual modality, or a combination of both, to enhance the user experience?', it can be concluded that the addition of feedback components in general, regardless of modality, enhances the user experience in virtual self-exploration environments. In general, the ideal version performed better in all aspects of the user experience than the other versions, meaning that not one modality, but a combination of both the auditory and visual modalities, results in the best user experience. Moreover, 16 of the 20 participants considered the ideal version to be the best version for the YWN installation, which indicates a clear preference for which modality is more suitable for the various feedback components. Therefore, the setting of the ideal version might likewise be applicable (as a recommendation) to other virtual self-exploration environments similar to the YWN installation.

However, the setting of the ideal version may not be applicable to virtual self-exploration environments with an entirely different context, for instance, where more and different feedback components are relevant, since there exists too much variety within the design and representations of feedback components in virtual reality environments, making it difficult to design strict guidelines that work in general. This could likewise be seen in the derived categories for the feedback components in section 4.2 (i.e. state, information transfer, action confirmation, biofeedback), as no optimal modalities could be linked to them.


Nevertheless, the findings of this research can serve as recommendations for the design of future feedback components in VR environments. It is important that the combination of different modalities of the various feedback components functions with minimal interference, that no component is dominant, and that the modalities complement each other and do not feel redundant, so that each component with its corresponding modality has its own purpose within the virtual environment.

In the future, research needs to be done to examine whether it is possible to build a framework of categories for feedback components where various modalities are correlated to those categories, for instance, by creating and examining more different types of feedback components that could be classified under certain categories. In other words, if feedback component x fits in category y, which is correlated with modality z, then component x should be presented in modality z. Furthermore, more research needs to be done on how feedback should be presented for an optimal user experience, for instance, preferred voice types (e.g. female, male, synthesised, accents), various sound cues, and presenting the components at the right places, at the right times, with appropriate durations, similar to the work of Brewster [4], who designed a framework for integrating non-speech sound (i.e. earcons) into human-computer interfaces. Additionally, future work could likewise consist of designing a framework, similar to Brewster's framework for earcons, but for other modalities, such as the haptic modality.

8. REFERENCES

[1] C. Afonso and S. Beckhaus. How to not hit a virtual wall: aural spatial awareness for collision avoidance in virtual environments. In Proceedings of the 6th Audio Mostly Conference: A Conference on Interaction with Sound, pages 101–108. ACM, 2011.
[2] M. J. Bates. What is browsing-really? A model drawing from behavioural science research, 2007.
[3] D. A. Bowman and R. P. McMahan. Virtual reality: how much immersion is enough? Computer, 40(7):36–43, 2007.
[4] S. A. Brewster. Providing a structured method for integrating non-speech audio into human-computer interfaces. PhD thesis, University of York, England, UK, 1994.
[5] M. Chen, K. van der Kooij, and G. M. R. v. d. H. Your worst nightmare, 2016.
[6] A. J. Cohen. Music as a source of emotion in film. In Music and emotion: Theory and research, pages 249–272, 2001.
[7] K. Collins. Playing with sound: a theory of interacting with sound and music in video games. MIT Press, 2013.
[8] D. P. Darzentas, D. Horizon, M. Brown, and N. Curran. Designing games for all: Exploring output and immersion.
[9] V. Durand and D. Barlow. Essentials of abnormal psychology. Cengage Learning, 2012.
[10] K. Kuikkaniemi, T. Laitinen, M. Turpeinen, T. Saari, I. Kosunen, and N. Ravaja. The influence of implicit and explicit biofeedback in first-person shooter games. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 859–868. ACM, 2010.
[11] E. L.-C. Law, V. Roto, M. Hassenzahl, A. P. Vermeeren, and J. Kort. Understanding, scoping and defining user experience: a survey approach. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 719–728. ACM, 2009.
[12] B. J. Mohler. The effect of feedback within a virtual environment on human distance perception and adaptation. ProQuest, 2007.
[13] S. Nichols, C. Haldane, and J. R. Wilson. Measurement of presence and its consequences in virtual environments. International Journal of Human-Computer Studies, 52(3):471–491, 2000.
[14] G. Orwell. 1984. Houghton Mifflin Harcourt, 1983.
[15] J. Psotka. Immersive training systems: Virtual reality and education and training. Instructional Science, 23(5-6):405–431, 1995.
[16] G. Rosati, F. Oscari, D. J. Reinkensmeyer, R. Secoli, F. Avanzini, S. Spagnol, and S. Masiero. Improving robotics for neurorehabilitation: enhancing engagement, performance, and learning with auditory feedback. In Rehabilitation Robotics (ICORR), 2011 IEEE International Conference on, pages 1–6. IEEE, 2011.
[17] M. T. Schultheis and A. A. Rizzo. The application of virtual reality technology in rehabilitation. Rehabilitation Psychology, 46(3):296, 2001.
[18] P. Sweetser and P. Wyeth. GameFlow: a model for evaluating player enjoyment in games. Computers in Entertainment (CIE), 3(3):3–3, 2005.
[19] Virtual Reality Society (VRS). Virtual reality site, 2015.
[20] Y. Zhang, T. Fernando, H. Xiao, and A. R. L. Travis. Evaluation of auditory and visual feedback on task performance in a virtual assembly environment. Presence: Teleoperators and Virtual Environments, 15(6):613–626, 2006.
[21] M. Zyda. From visual simulation to virtual reality to games. Computer, 38(9):25–32, 2005.


APPENDIX

A. QUESTIONNAIRE

• Immersiveness (5-point Likert scale, 'Not at all' to 'Very much so'/'A lot')

1. To what extent did the simulation hold your attention?

2. To what extent did you feel consciously aware of being in the real world whilst experiencing the simulation?

3. Did you feel the urge at any point to stop playing and see what was happening around you?

4. To what extent was your sense of being in the virtual environment stronger than your sense of being in the real world?

5. To what extent did you feel like you were making progress towards the end of the simulation?

• VR Experience (5-point Likert scale, 'Strongly Disagree' to 'Strongly Agree')

1. I enjoyed the virtual simulation.

2. I would like to experience this virtual simulation again.

3. I was able to see on the screen everything I needed during the simulation.

• Feedback components (5-point Likert scale, 'Strongly Disagree' to 'Strongly Agree')

1. The 'x' feedback component was useful.
2. The 'x' feedback component was distracting.
3. The 'x' feedback component was pleasant.

• Rating (10-point scale)

1. How would you rate/grade feedback component 'x'?
2. How would you rate/grade the overall virtual experience?

• Rank the different versions (Experiment Phase 1)
• Open questions, i.e. What does your ideal version look like (Experiment Phase 1)? Is this 'ideal version' truly the best version (Experiment Phase 2)?

B. RANKINGS

                    B     A     V     AV
Mean                3.6   2.25  2.5   1.65
Standard Deviation  0.82  1.12  0.83  0.75
Median              4     2     3     1.5
Mode                4     2     3     1
Sum                 72    45    50    33

Table 9: Rankings of the four versions, based on the mean, standard deviation, median, mode and sum. The lower the sum, the higher the rank, as rank 1 is the best and rank 4 is the worst.
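A summary row like those in Table 9 can be derived from the raw rank data in a few lines; the function name and the five example ranks below are hypothetical, not the thesis data:

```python
from statistics import mean, median, stdev

def rank_summary(ranks):
    """Summarise the ranks (1 = best, 4 = worst) that one version received."""
    return {
        'mean': round(mean(ranks), 2),
        'median': median(ranks),
        'sum': sum(ranks),  # a lower sum means a better overall rank
        'stdev': round(stdev(ranks), 2),
    }

# Hypothetical ranks from five participants for one version.
summary = rank_summary([1, 2, 1, 3, 1])
```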


C. MEDIAN & MODE

          Median                       Mode
Question  B    A    V  AV  B2  I       B  A  V  AV  B2  I
1         3    3.5  4  4   3   4       2  3  4  4   3   4
2         2.5  3    3  4   2   3.5     2  3  3  4   2   4
3         4    3    4  4   3   4       1  4  4  4   4   4

Table 10: The median and mode of the scores of the Overall VR Experience part.

                     Median            Mode
Feedback Components  A  V  AV   I      A  V  AV  I
T                    4  4  4    4      4  4  4   4
RRI                  4  4  4    4      4  4  4   4
ERM                  3  4  4    3.5    4  4  4   4
ISC                  4  4  4    4      4  4  4   4
HRI                  3  4  3.5  3      4  4  3   3

Table 11: The median and mode of the degree in usefulness of each feedback component.

                     Median              Mode
Feedback Components  A    V    AV   I    A  V  AV  I
T                    2.5  2    2.5  2    2  2  2   1
RRI                  2    2.5  3    2    2  2  3   1
ERM                  2    2    2    1    2  2  2   1
ISC                  1    2    2    2    1  2  2   1
HRI                  2    2    2    2    2  2  1   2

Table 12: The median and mode of the degree in distraction of each feedback component.

                     Median             Mode
Feedback Components  A    V  AV   I     A  V  AV  I
T                    3.5  3  3.5  4     4  4  4   4
RRI                  4    3  3.5  4     4  3  3   4
ERM                  3    3  4    4     3  4  4   4
ISC                  4    4  4    4     4  4  4   4
HRI                  4    3  4    4     4  4  4   4

Table 13: The median and mode of the degree in pleasantness of each feedback component.

        Median                     Mode
        B  A  V    AV   B2  I      B  A  V  AV  B2  I
T       -  6  6    6.5  -   7.5    -  7  8  7   -   7
RRI     -  7  6    6    -   8      -  7  6  6   -   9
ERM     -  6  7    7    -   6      -  6  7  7   -   7
ISC     -  7  6.5  7    -   7.5    -  7  7  6   -   7
HRI     -  7  7    8    -   8      -  7  7  9   -   9
VRExp   6  7  7    7.5  7   8      5  7  7  8   7   8

Table 14: The median and mode of the ratings/grades of each feedback component and the VR experience.

D. RANGE & IQR

          Range                  Inter-Quartile Range
Question  B  A  V  AV  B2  I     B     A     V     AV    B2  I
1         3  3  3  3   3   3     2     1.25  1     1     1   0.5
2         4  4  4  4   3   4     1.25  1.25  2     1.25  2   1.25
3         4  3  4  4   4   3     3     2     1.25  0.25  2   1

Table 15: The range and IQR of the scores of the Overall VR Experience part.

                     Range            Inter-Quartile Range
Feedback Components  A  V  AV  I      A     V     AV    I
T                    4  4  4   3      2     1.25  0.5   1
RRI                  3  3  4   4      0.25  1     1     1.25
ERM                  3  4  4   4      1     1     1     2
ISC                  4  4  3   4      2.25  1.25  1.25  1.25
HRI                  4  4  4   4      2     2     1     1

Table 16: The range and IQR of the degree in usefulness of each feedback component.

                     Range            Inter-Quartile Range
Feedback Components  A  V  AV  I      A     V     AV    I
T                    3  4  4   3      2     1.25  1     2
RRI                  3  4  4   3      2.25  1     2     2
ERM                  3  3  3   3      1     0.5   0.25  2
ISC                  3  4  4   3      1     1     1     1.25
HRI                  4  3  4   4      2.25  1     2     1.5

Table 17: The range and IQR of the degree in distraction of each feedback component.

                     Range            Inter-Quartile Range
Feedback Components  A  V  AV  I      A     V  AV    I
T                    3  3  4   3      2     2  1.25  1
RRI                  3  2  2   3      1     1  1     1
ERM                  3  4  3   3      1     1  1     1
ISC                  3  4  3   3      1.25  1  0.25  1
HRI                  4  3  4   4      1.25  1  1     1

Table 18: The range and IQR of the degree in pleasantness of each feedback component.

        Mean                                  Standard Deviation
        B     A     V     AV    B2    I       B     A     V     AV    B2    I
T       -     5.65  5.80  6.10  -     7.60    -     2.18  2.07  2.02  -     1.27
RRI     -     6.95  6.05  6.50  -     7.70    -     1.61  1.79  1.36  -     1.87
ERM     -     6.05  6.15  6.50  -     6.15    -     1.54  1.60  2.01  -     1.66
ISC     -     6.50  6.00  6.70  -     7.55    -     1.76  2.03  1.72  -     1.47
HRI     -     5.95  6.15  6.65  -     7.65    -     2.61  2.64  2.94  -     1.63
VRExp   5.75  6.5   6.4   7.15  6.75  7.75    1.92  1.79  1.39  1.69  1.41  0.85

Table 19: The mean and standard deviation of the ratings/grades of each feedback component and the VR experience.


      Not at all (1)        Slightly (2)          Moderately (3)        Fairly (4)            Very Much So (5)
Q.    B  A  V  AV B2 I      B  A  V  AV B2 I      B  A  V  AV B2 I      B  A  V  AV B2 I      B  A  V  AV B2 I
1. #  0  0  0  1  0  0      5  2  2  0  3  0      6  5  4  3  3  2      7  7  11 10 9  12     2  6  3  6  5  6
1. %  0  0  0  5  0  0      25 10 10 0  15 0      30 25 20 15 15 10     35 35 55 50 45 60     10 30 15 30 25 30
2. #  1  2  0  4  0  4      3  7  4  9  3  8      7  5  9  4  8  5      5  5  4  2  6  2      4  1  3  1  3  1
2. %  5  10 0  20 0  20     15 35 20 45 15 40     35 25 45 20 40 25     25 25 20 10 30 10     20 5  15 5  15 5
3. #  8  9  8  8  14 10     4  10 8  10 3  8      6  1  4  1  2  2      0  0  0  1  1  0      2  0  0  0  0  0
3. %  40 45 40 40 70 50     20 50 40 50 15 40     30 5  20 5  10 10     0  0  0  5  5  0      10 0  0  0  0  0
4. #  0  0  0  0  1  1      8  4  4  4  5  0      5  6  8  2  7  4      7  8  6  14 7  12     0  2  2  0  0  3
4. %  0  0  0  0  5  5      40 20 20 20 25 0      25 30 40 10 35 20     35 40 30 70 35 60     0  10 10 0  0  15
5. #  6  0  0  1  4  0      3  1  1  2  6  0      1  6  4  1  6  2      9  7  10 11 2  12     1  6  5  5  2  6
5. %  30 0  0  5  20 0      15 5  5  10 30 0      5  30 20 5  30 10     45 35 50 55 10 60     5  30 25 25 10 30

Table 20: The frequency table of the questions about immersiveness. Baseline (B)/Auditory (A)/Visual (V)/Auditory-Visual (AV)/Baseline 2 (B2)/Ideal (I), participants: 20

Figure 7: Immersiveness Question 1.

Figure 8: Immersiveness Question 2.

Figure 9: Immersiveness Question 3.


F. OVERALL VIRTUAL EXPERIENCE

Figure 12: Overall Virtual Experience Question 1.

Figure 13: Overall Virtual Experience Question 2.


      Strongly Disagree - SD (1)   Disagree - D (2)      Neutral - N (3)       Agree - A (4)         Strongly Agree - SA (5)
Q.    B  A  V  AV B2 I             B  A  V  AV B2 I      B  A  V  AV B2 I      B  A  V  AV B2 I      B  A  V  AV B2 I
1. #  0  0  0  0  0  0             9  4  4  1  2  2      3  6  4  6  12 3      5  5  10 10 3  10     3  5  2  3  3  5
1. %  0  0  0  0  0  0             45 20 20 5  10 10     15 30 20 30 60 15     25 25 50 50 15 50     15 15 10 15 15 25
2. #  3  1  1  1  0  2             4  4  6  4  11 3      5  6  7  3  2  5      3  6  4  10 5  7      2  3  2  2  2  3
2. %  15 5  5  5  0  10            20 20 30 20 55 15     25 30 35 15 10 25     15 30 20 50 25 35     10 15 10 10 10 15
3. #  7  1  1  1  3  0             1  7  2  0  4  1      1  3  3  0  4  1      7  9  9  14 5  12     4  0  5  5  4  6
3. %  35 5  5  5  15 0             5  35 10 0  20 5      5  15 15 0  20 5      35 45 45 70 25 60     20 0  25 25 20 30

Table 21: The frequency table of the questions about the VR experience. Baseline/Auditory/Visual/Auditory-Visual/Baseline 2/ Ideal, participants: 20


F.1 Agree vs. Disagree

The results of each questionnaire were categorised as either agree or disagree. In other words, ‘Strongly Disagree’ (1) and ‘Disagree’ (2) were categorised as ‘Disagree’, and ‘Agree’ (4) and ‘Strongly Agree’ (5) were categorised as ‘Agree’. Neutral (3) responses were disregarded.

Disagree
Question   B            A           V           AV          B2           I
1          52.9% (9)    28.6% (4)   25.0% (4)   7.1% (1)    25.0% (2)    11.8% (2)
2          66.7% (10)   35.7% (5)   53.8% (7)   29.4% (5)   61.1% (11)   33.3% (5)
3          42.1% (8)    47.1% (8)   17.6% (3)   5.0% (1)    43.8% (7)    5.3% (1)

Agree
Question   B            A            V            AV           B2          I
1          47.1% (8)    71.4% (10)   75.0% (12)   92.9% (13)   75.0% (6)   88.2% (15)
2          33.3% (5)    64.3% (9)    46.2% (6)    70.6% (12)   38.9% (7)   66.7% (10)
3          57.9% (11)   52.9% (9)    82.4% (14)   95.0% (19)   56.2% (9)   94.7% (18)

Table 22: Agree vs Disagree VR Experience
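This recategorisation can be made concrete with a short sketch. The helper below (an illustration, not part of the original analysis) reproduces the percentages in Table 22, which exclude the neutral responses from the denominator:

```python
# Collapse 5-point Likert counts into the Agree/Disagree split used in
# Table 22: responses 1-2 count as Disagree, 4-5 as Agree, and the
# neutral midpoint (3) is disregarded.
def agree_vs_disagree(counts):
    """counts: list of five counts for ratings 1 (SD) through 5 (SA)."""
    disagree = counts[0] + counts[1]
    agree = counts[3] + counts[4]
    total = disagree + agree  # neutral responses excluded
    return {
        "disagree": round(100 * disagree / total, 1),
        "agree": round(100 * agree / total, 1),
    }

# Question 1 for the Auditory version in Table 21: SD=0, D=4, N=6, A=5, SA=5.
print(agree_vs_disagree([0, 4, 6, 5, 5]))
# {'disagree': 28.6, 'agree': 71.4}, matching the Auditory column of Table 22.
```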

Disagree
Component   A           V           AV          I
T           35.3% (6)   31.3% (5)   16.7% (3)   5.6% (1)
RRI         11.8% (2)   6.7% (1)    20.0% (3)   11.8% (2)
ERM         30.8% (4)   26.7% (4)   20.0% (3)   44.4% (8)
ISC         38.9% (7)   29.4% (5)   31.3% (5)   13.3% (2)
HRI         42.9% (6)   38.9% (7)   23.1% (3)   30.8% (4)

Agree
Component   A            V            AV           I
T           64.7% (11)   68.7% (11)   83.3% (15)   94.4% (17)
RRI         88.2% (15)   93.3% (14)   80.0% (12)   88.2% (15)
ERM         69.2% (9)    73.3% (11)   80.0% (12)   55.6% (10)
ISC         61.1% (11)   70.6% (12)   68.7% (11)   86.7% (13)
HRI         57.1% (8)    61.1% (11)   76.9% (10)   69.2% (9)

Table 23: Agree vs Disagree Usefulness

Disagree
Component   A            V            AV           I
T           58.8% (10)   80.0% (12)   71.4% (10)   87.5% (14)
RRI         73.7% (14)   76.9% (10)   60.0% (9)    82.4% (14)
ERM         87.5% (14)   88.2% (15)   93.8% (15)   87.5% (14)
ISC         90.0% (18)   77.8% (14)   75.0% (12)   93.8% (15)
HRI         64.7% (11)   89.5% (17)   76.5% (13)   70.6% (12)

Agree
Component   A           V           AV          I
T           41.2% (7)   20.0% (3)   28.6% (4)   12.5% (2)
RRI         26.3% (5)   23.1% (3)   40.0% (6)   17.6% (3)
ERM         12.5% (2)   11.8% (2)   6.2% (1)    12.5% (2)
ISC         10.0% (2)   22.2% (4)   25.0% (4)   6.2% (1)
HRI         35.3% (6)   10.5% (2)   23.5% (4)   29.4% (5)

Table 24: Agree vs. Disagree Distraction

Disagree
Component   A           V           AV          I
T           37.5% (6)   42.9% (6)   33.3% (5)   15.4% (2)
RRI         15.4% (2)   40.0% (4)   0.0% (0)    21.4% (3)
ERM         20.0% (2)   30.8% (4)   7.7% (1)    14.3% (2)
ISC         7.1% (1)    30.8% (4)   16.7% (3)   13.3% (2)
HRI         31.3% (5)   25.0% (3)   7.7% (1)    25.0% (4)

Agree
Component   A            V           AV            I
T           62.5% (10)   57.1% (8)   66.7% (10)    84.6% (11)
RRI         84.6% (11)   60.0% (6)   100.0% (10)   78.6% (11)
ERM         80.0% (8)    69.2% (9)   92.3% (12)    85.7% (12)
ISC         92.9% (13)   69.2% (9)   83.3% (15)    86.7% (13)
HRI         68.7% (11)   75.0% (9)   92.3% (12)    75.0% (12)


F.2 Normality Checks: Shapiro-Wilk Test

Version   Q.1     Q.2      Q.3     Q.4     Q.5
B         0.017   0.085*   0.002   0.000   0.001
A         0.012   0.078*   0.000   0.018   0.010
V         0.004   0.010    0.001   0.023   0.007
AV        0.001   0.015    0.000   0.000   0.001
B2        0.005   0.024    0.000   0.006   0.050
I         0.000   0.033    0.000   0.000   0.000

Table 26: The normality check of the Immersiveness questions for each version.

Version   Q.1     Q.2      Q.3
B         0.001   0.062*   0.001
A         0.014   0.117*   0.001
V         0.004   0.088*   0.007
AV        0.009   0.008    0.000
B2        0.001   0.000    0.050
I         0.004   0.062*   0.000

Table 27: The normality check of the Overall VR Experience questions for each version.

Version   T       RRI     ERM     ISC     HRI     VRExp
B         -       -       -       -       -       0.601
A         0.088   0.067   0.218   0.004   0.126   0.123
V         0.011   0.378   0.002   0.006   0.007   0.113
AV        0.186   0.100   0.023   0.078   0.006   0.006
B2        -       -       -       -       -       0.046
I         0.200   0.011   0.088   0.108   0.014   0.014

Table 28: The normality check of the ratings/grades given to the various feedback components of each version.
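The starred entries in Tables 26 and 27 are exactly the p-values above the conventional .05 threshold, i.e. the questions for which the Shapiro-Wilk test does not reject normality. That decision rule can be sketched as follows (assuming α = .05; the function name is illustrative):

```python
# Flag which Shapiro-Wilk p-values indicate approximately normally
# distributed ratings (p > alpha fails to reject normality).
ALPHA = 0.05

def normal_flags(p_values, alpha=ALPHA):
    """Map each question to True when its ratings pass the normality check."""
    return {q: p > alpha for q, p in p_values.items()}

# Baseline (B) row of Table 26 (Immersiveness questions).
baseline_p = {"Q1": 0.017, "Q2": 0.085, "Q3": 0.002, "Q4": 0.000, "Q5": 0.001}
print(normal_flags(baseline_p))
# Only Q2 exceeds .05, matching the single starred entry in that row.
```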
