
Psychological experimentation in Virtual Reality: a technical report

Kas van ‘t Veer

University of Amsterdam

Thesis for BSc Psychology (Brain and Cognition)

Supervisor: Jasper Winkel


Introduction

Virtual reality is a term that is usually associated with video games, but it also has great value in psychological research. In recent years, virtual reality applications have been used more and more in psychological experiments and in psychotherapy (Bohil, Alicea & Biocca, 2011). Virtual reality (VR) refers to a computer-simulated environment in which a user is immersed. Using visual, auditory and sometimes even tactile or olfactory stimuli, the virtual environment aims to be an accurate representation of the real world (Richard, Tijou, Richard & Ferrier, 2006). Being an illusion of the real world, VR can be used to mimic real-world events and elicit real-world responses in its users (Riva et al., 2007). At the same time, this realistic environment can be molded into anything a researcher needs to test their subjects, making VR a very useful tool.

Virtual reality has also proven useful in psychological fields other than research. For instance, it has been shown to be an effective aid for rehabilitation and pain reduction (Schultheis & Rizzo, 2001; Malloy & Milling, 2010). In addition, many therapists have been effectively using virtual reality to provide therapy for patients with a wide variety of mental disorders (Valmaggia, Latif, Kemptom, & Maria, 2016). The classic example is using virtual reality as a form of exposure therapy to treat anxiety-related disorders. When a patient is immersed in a virtual environment in which fear-inducing stimuli are present, it can help the patient slowly habituate to these stimuli. Even though virtual reality can evoke strong fear responses, these responses are generally weaker than those to real-world stimuli, so virtual reality can help the patient work up towards dealing with the actual real-world stimulus. In this way, virtual reality has been used to reduce the symptoms of post-traumatic stress disorder in war veterans (Rothbaum, Rizzo, & Difede, 2010; Rizzo, Hartholt, Grimani, Leeds, & Liewer, 2014), social anxiety disorder (Anderson et al., 2013) and even fear of flying (Rothbaum et al., 2006).

This introductory chapter will mostly discuss why and how VR works in the psychological field and what prerequisites need to be met in order to use a VR application as effectively as possible as a psychologist. The rest of this article concentrates on the experiment we performed using VR. This experiment aimed to study the role of realism in a VR experiment by testing a paradigm about effort-based decision making. A short elaboration on the paradigm, the experimental design and the results will be given, but the major part of this article will focus on the technical details of developing the software together with a group of psychology students.


Benefits of virtual reality

First of all, virtual reality allows researchers or therapists to immerse people into any environment they wish, without needing to create or travel to a physical environment. As long as they have sufficient technical skills, there is complete freedom in terms of what kind of environment the researcher wants to create. When developing a virtual environment, imagination is the limit. The only physical materials required are a lab space and a suitable computer system with the required peripherals. This also means that in a research setting the researchers are granted complete control over the environment. Theoretically, any entity inside the virtual environment can be programmed to interact with the test subject in any predefined way. This inherently means that virtual reality grants the researchers almost unlimited experimental control over the situation. This is a major benefit, because it opens up lots of new research opportunities that were previously impossible due to physical limitations. An interesting example would be to test the effect of changing the distant scenery of the virtual environment in real time (Bohil, Alicea, & Biocca, 2011), something which is not possible in the real world. Another possibility would be to implement simulated humans controlled by an artificial intelligence algorithm, which completely rules out any kind of bias a human confederate may have.

Secondly, modern virtual reality applications have a high degree of ecological validity, because they are generally perceived as realistic (Parsons, 2015). Ecological validity is the extent to which an experimental environment is similar to a real environment and therefore also indicates to which extent findings from the experimental environment can be projected onto reality. This is a very important property for an experiment because, scientifically, findings that do not relate to the real world are of very little use. In a proper VR experiment, the ecological validity can be very high (Parsons, 2015). This is closely related to VR’s ability to induce a sense of presence. Presence indicates the degree to which someone in a virtual environment feels as if he is “actually there”. This sense of presence is what accounts for the ecological validity. What is most exceptional about this is that ecological validity is usually considered to be inversely related to experimental control. Oftentimes a research setup only performs well on either ecological validity (such as field studies) or experimental control (laboratory experiments). However, VR provides a great middle ground by allowing maximum experimental control while still keeping relatively high ecological validity, provided the virtual environment is well developed. This is a very powerful combination that could potentially improve research data.

Another interesting finding is that virtual reality can be a very exciting experience for most subjects. As many experiments use lots of repetitive trials, it is logical that subjects become bored after a while. Bored subjects can be a big problem in psychological research, since a subject that no longer gives their best effort might reduce the reliability of the results. Because virtual reality hypothetically reduces boredom, it is plausible that it can further contribute to better data in this way, either by increasing the motivation of test subjects on average, or by allowing the researcher to conduct more trials while still maintaining an acceptable level of boredom.

Additionally, virtual reality is highly compatible with many psychophysiological measurements. Even though the subject may move around inside the virtual environment, in the vast majority of cases the subject will remain seated in the real world. This allows for many extensive measurements that would be much harder – if not impossible – if the subject were actually walking around, such as EEG, heart rate, breathing rate and blood pressure. With some modifications, VR can even be compatible with fMRI (Bohil, Alicea & Biocca, 2011). This again opens up lots of interesting new research opportunities. For instance, when subjects are inside an fMRI scanner, they have to remain stationary. Inside a virtual environment, however, they can still move around in a realistic way, making fMRI research on spatial navigation much more accessible.

Lastly, virtual reality is relatively safe, especially if proper precautions are taken. There is a small risk of nausea (which will be discussed in more detail later), but other than that the subject remains safely within the bounds of the laboratory, where the risk of physical harm is decoupled from whatever dangers exist inside the simulated environment. This is especially interesting when you want to use a virtual environment that would be dangerous in the real world, which is very useful for treating and researching specific anxiety disorders or PTSD. For instance, a person with pyrophobia (an irrational fear of fire) who would panic at real open flames can be placed in environments with simulated flames, which may not trigger panic attacks as much as real flames do. Eventually, during the final stages of therapy, the patient could even be immersed in an environment in which a room is on fire. This makes VR an invaluable tool for people with these kinds of disorders.


How modern VR works

Modern implementations of virtual reality usually work with head-mounted displays. A head-mounted display (HMD) is a display device that consists of a screen with two lenses attached to it. The device is mounted onto a person’s head such that there is a lens right in front of each eye (see figure 1). When simulating a virtual environment, the screen shows a stereoscopic 3D perspective that represents the virtual world from a given point of view. A stereoscopic 3D perspective is an image perspective that aims to create the illusion of viewing a three-dimensional space when it enters the eyes through the lenses (see figure 2). It consists of two separate images that are placed next to each other such that each image enters one eye. These two images are nearly identical, but they differ slightly in terms of perspective in order to create the illusion of human binocular vision, which in turn produces the illusion of viewing a three-dimensional space when processed by the visual cortex.

Figure 1. A person wearing a head-mounted display

Figure 2. A typical stereoscopic 3D perspective that a head-mounted display might show

Measuring equipment such as a gyroscope and an accelerometer is also usually included in an HMD, making it possible to track the orientation of the head of the person wearing it. The point of view of the stereoscopic 3D image is then manipulated in real time in accordance with the orientation of the user's head. This creates a very realistic illusion of being able to look around inside the virtual environment. When the user moves his head X degrees in direction Y, the gyroscope records this movement and the user's point of view inside the virtual environment also shifts X degrees in direction Y. With a powerful computer system, this process happens so fast that when you turn your head inside a virtual environment, the brain perceives the change in point of view just as if you had turned your head in reality.
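A minimal sketch of this one-to-one mapping, assuming a simple yaw/pitch convention (the function name and coordinate layout are illustrative, not taken from the actual implementation): every frame, the tracked head angles are applied directly to the virtual camera's view direction.

```python
import math

def head_to_view_direction(yaw_deg: float, pitch_deg: float) -> tuple:
    """Map a tracked head orientation directly onto the virtual camera's view direction.

    The HMD's gyroscope reports yaw and pitch; the virtual point of view is rotated
    by exactly the same angles, which creates the illusion of looking around inside
    the environment.
    """
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.cos(pitch) * math.cos(yaw),   # forward (x)
            math.cos(pitch) * math.sin(yaw),   # sideways (y)
            math.sin(pitch))                   # up/down (z)

# Example: the user turns their head 30 degrees to the left and 10 degrees up;
# the virtual camera turns by exactly the same amounts.
print(head_to_view_direction(30.0, 10.0))
```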

In this experiment the Oculus Rift DK2 was used as an HMD, which, in addition to a gyroscope, also carries specifically placed infrared lights that can be read by an infrared camera that ships with the DK2. This allows the position of the head to be tracked as well. When the camera is properly set up, the system can keep track of when and where you move your head. With this functionality, the point of view inside the virtual environment can also be corrected for movements of the head, in addition to its rotation. This allows users to peek around objects and further approximates the way our vision changes with physical movement in the real world, increasing presence and subsequently increasing ecological validity.

In addition to visual stimuli, auditory stimuli are also simulated in most VR applications. Auditory stimuli are generally presented through headphones that are worn over an HMD, but some HMDs come with an integrated pair of headphones. Because modern 3D simulations are mostly marketed around being visually impressive, audio does not get nearly as much attention from software developers, but some simplified theory will be briefly discussed. Sound in a virtual environment is meant to simulate sound in the real world as accurately as possible. When an entity in the real world emits sound, sound waves are emitted in many directions. Some of these sound waves travel directly towards the ear, but many sound waves that are oriented in completely different directions may also reach the ear by passing through or reflecting off objects. This allows sound from a single source to reach both ears, either by passing through the head or other objects, by bending (diffracting) around the head, by reflecting off solid objects (such as the outer ear) or by any combination of these (see figure 3).


Figure 3. Sound waves reaching human ears (courtesy of freesoftwaremagazine.com)

The interesting thing about these indirect sound waves is that they distort depending on the way in which the sound was reflected or passed through an object. Additionally, because sound travels relatively slowly, sound waves from the same source reach the ears at different points in time, depending on how much time these waves spend travelling. When you hear a sound, the auditory cortex estimates the position of the sound source by comparing the sound perceived at both ears in terms of time of arrival and degree of distortion, giving you the feeling that you “know” where the sound came from. When sound from a virtual environment has these same kinds of characteristics, it makes sense to the auditory cortex just like real sound does, and it is interpreted as a sound in three-dimensional space with a location, making it seem realistic.

Computer game engines have been simulating this for a long time. Because tracing sound waves that bounce around in a big three-dimensional space is too computationally intensive, this has been cleverly simplified by combining delay with head-related transfer functions (HRTFs). An HRTF can be applied to a sound wave to simulate the effect of a human head with ears. It is used to translate a mono sound into a stereo sound for the left and right ear, as if it were modulated by a real human head. With a single given source sound, its position and an HRTF, a left and a right sound channel can be created, each with a different degree of distortion depending on the location of the sound. Typically combined with other functions that simulate things like binaural delay and environmental echo, a very realistic-sounding virtual environment can be created in which sounds can be intuitively pinpointed to a location. Many modern video game engines use these kinds of techniques. In our research, Unreal Engine 4.10 was used, which provides very good three-dimensional sound simulation for headphones, contributing further to the sense of presence and ecological validity.
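A minimal sketch of the two cheapest cues described above, assuming a listener at the origin (all constants and names are illustrative; a real engine would additionally filter each channel with an HRTF, which this sketch omits): each ear receives the same source with a slightly different delay and gain, which is enough to assign the sound a rough location.

```python
import math

SPEED_OF_SOUND = 343.0   # metres per second
HEAD_RADIUS = 0.0875     # metres, roughly half the inter-aural distance

def ear_delay_and_gain(source_xy, ear_xy):
    """Return (delay in seconds, gain) for one ear, based only on distance."""
    distance = math.dist(source_xy, ear_xy)
    delay = distance / SPEED_OF_SOUND
    gain = 1.0 / max(distance, 0.1) ** 2     # simple inverse-square falloff
    return delay, gain

def spatialize(source_xy):
    """Per-ear delay and gain for a listener centred at the origin, facing +y."""
    left_ear, right_ear = (-HEAD_RADIUS, 0.0), (HEAD_RADIUS, 0.0)
    return {"left": ear_delay_and_gain(source_xy, left_ear),
            "right": ear_delay_and_gain(source_xy, right_ear)}

# A source two metres to the listener's right arrives at the right ear slightly
# earlier and slightly louder than at the left ear.
print(spatialize((2.0, 0.0)))
```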


How to make VR work effectively and avoid problems

VR can be a very useful tool for psychological research and therapy, but much care should be taken when developing a VR application, because there are several potential pitfalls that can reduce the effectiveness of VR.

The most discussed problem with VR is cybersickness. Cybersickness is a form of nausea and/or dizziness that is induced by visual stimuli from a virtual environment (Wiederhold & Bouchard, 2014). In very extreme cases it can lead to fainting or trigger a seizure in epileptic patients. Luckily, most of these issues can be avoided through careful development and by screening for risk factors. Some of these risk factors include heart disease, neurological disorders (such as epilepsy) and being overly sensitive to motion sickness. When people with these risk factors are excluded and subjects are advised to discontinue the experiment when nausea occurs, the risk of serious cybersickness can be kept to a minimum, which makes VR ethically acceptable (Davis, Nesbitt & Nalivaiko, 2015). However, this low risk only applies if the application is carefully developed to run as smoothly as possible.

VR applications will be more prone to induce cybersickness when the player moves rapidly through the environment with sudden changes in velocity or direction (Davis, Nesbitt & Nalivaiko, 2015). The mismatch between the sense of motion one gets from vestibular stimuli and visual stimuli is responsible for this. In practice, this mismatch tends to happen when a person moves in a virtual environment, but his actual body does not feel as if it is moving. A larger mismatch will cause a higher chance of cybersickness. This is why virtual roller coasters are most notorious for inducing cybersickness. The virtual rollercoaster moves very rapidly and erratically, while in the real world the person does not move at all. This creates a huge mismatch, resulting in a high chance of cybersickness. Furthermore, a high degree of mismatch also reduces the sense of presence.

Minimizing this mismatch means that there are some constraints on the kinds of research applications that can be developed using VR. In many cases it is most practical to make a virtual environment in which the user remains stationary. In a stationary position the player will not move much at all, so sudden changes in velocity or mismatches between virtual and real-world locations are out of the question, which keeps the chance of cybersickness to a minimum. While it is not impossible to develop a virtual environment in which subjects can freely walk around at will, it will potentially increase the number of subjects that need to quit because of cybersickness.

A great middle ground for this problem can be achieved by creating an environment in which the user remains seated in a vehicle that moves slowly and predictably. Even though your body remains stationary inside a vehicle, you can still move throughout the environment. If the vehicle moves at a constant, predictable pace, the vestibular system will report roughly the same information as in a stationary position, because there are no changes in velocity or direction. When a subject is placed in a virtual vehicle that does not rapidly accelerate, this greatly reduces the perceived mismatch between the real-world and virtual environments, while still allowing for some movement. This concept was also used in this experiment.

Two other common causes of cybersickness are latency and jitter. In the field of VR, latency means there is a delay between head movements and the subsequent changes in the field of view in the virtual environment. Jitter indicates that the user's vision does not change smoothly when looking or moving around, but changes in a jerky fashion. Latency and jitter also reduce the sense of presence and ecological validity. In modern VR, both of these phenomena usually result from poor performance of the computer system.

In comparison with a regular computer screen, an HMD shows two images that have to be rendered independently because they present different points of view. This fact alone nearly doubles the number of calculations the computer system needs to do. Additionally, the screen of an HMD typically has a higher refresh rate than a regular computer screen, which means that it shows more frames per second. This increases the smoothness of the experience, but also further raises the computational intensity, because rendering more frames means more calculations. Lastly, an HMD does not just show two flat images next to each other; both images are post-processed to compensate for the effects of the lenses in front of the screen, which is why a VR image looks like two concave images with black borders and warped colors on a regular screen. All of these factors together make virtual reality extremely computationally intensive.

When a computer system is not powerful enough to render a real-time visual image, it will fail to generate a frame every time the display needs one. This causes problems because the illusion of a fluently moving picture on a screen is created by rapidly displaying frames at regular intervals. In other words, the computer system needs to keep up with the refresh rate of the display. If a frame is not calculated in time, the display will not be able to update the moving image. This causes missing frames, which result in both delay and jitter. There are two general ways to solve this problem: increasing the graphical performance of the computer system or reducing the performance cost of the application.
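A minimal sketch of this frame-budget idea (the frame times are illustrative; the Oculus Rift DK2 used here refreshes at 75 Hz): every frame that takes longer to render than one refresh interval means the previous image is shown again, which the user experiences as latency and jitter.

```python
def count_missed_frames(frame_times_ms, refresh_rate_hz=75.0):
    """Count frames whose render time exceeded the display's refresh interval."""
    budget_ms = 1000.0 / refresh_rate_hz          # about 13.3 ms at 75 Hz
    return sum(1 for t in frame_times_ms if t > budget_ms)

# Example: four frames rendered in 12, 14, 13 and 20 ms -> two missed deadlines,
# so the HMD re-displays the old image twice.
print(count_missed_frames([12.0, 14.0, 13.0, 20.0]))
```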

Increasing the graphical performance of the computer system is only possible to some extent. As modern VR is very computationally intensive, even computer systems available at universities may call for a graphical compromise. Either a very powerful computer system has to be available for the project, or the project needs to be amply funded so that a proper computer system can be purchased. When this cost is added to the cost of a proper HMD, a VR experiment can get quite expensive in terms of materials.

The other option is to reduce the performance cost of the application, specifically by lowering the computational intensity in terms of graphics. Some VR frameworks are more demanding than others, but more realistic-looking environments will generally be more demanding in terms of performance. A more realistic environment can contribute to a stronger sense of presence, but only if the application can be executed smoothly. The extent to which high-end graphics can be used will always depend on the computer system in question. In some cases, performance cost can be reduced while sacrificing only a negligible amount of realism through extensive tweaking and testing.

Another potential problem is that a VR experiment requires extensive technical skills to develop, skills that not many psychologists possess. Usually they will need to hire software developers and graphic designers, which again makes it more expensive to run a VR experiment. Furthermore, communication between software developers and researchers is crucial. Developers tend not to have as much methodological knowledge as psychological researchers, so in order for the final application to be scientifically sound they should be well informed about the different experimental conditions, validity and potential sources of noise and bias. In the same way, the researchers should be well informed about how to use the VR application properly and be taught what to do when technical issues arise. When a technical problem arises while running the experiment, it can cause the researcher to lose subjects if it cannot be fixed in time.

Depending on its complexity, a VR experiment will typically take much more time to develop than most other psychological experiments. Environments need to be designed; sounds, textures, meshes and animations need to be made; a lot of programming needs to be done; and everything needs to be thoroughly tested.

In summary, successfully performing a VR experiment tends to require high-end computer systems, a carefully thought-out design, hiring external parties, good communication and a lot of time (and money) for development and testing. On the flipside, it grants researchers both a great deal of experimental control and good ecological validity.


The effort-based decision making paradigm

In this VR experiment, the effect of realism on effort-based decision making was evaluated. Effort-based decision making addresses the way in which humans and animals evaluate cost-benefit ratios and apply these evaluations to their decision-making behavior (Figure 4) (Kurniawan, Guitart-Masip & Dolan, 2011). Studying effort-based decision making can help us understand motivational processes in humans and may therefore contribute to more efficient working and studying, to understanding depression and to understanding other mechanisms for which motivation is important (Treadway, Bossaller, Shelton & Zald, 2012; Bonnelle et al., 2015). The most important neurochemical known to influence effort-based decision making is dopamine (Wade, de Wit, & Richards, 2000).

Figure 4. Visualization of Effort-based decision making (Bonnelle et al., 2015)

Humans and other mammals tend to prefer tasks that require minimal effort and yield maximum reward. When the reward:effort ratios of two tasks are perceived as equal, there is no clear preference for either task. This point of no preference is known as the point of indifference (POI). The POI for a task is different for each individual, depending on how the effort and reward are subjectively perceived. By quantifying the point of indifference of a certain individual for a certain task, statistical conclusions can be drawn about how that individual subjectively perceives the relation between the effort and reward of a task. This specific experiment evaluates the effects of realism in a virtual environment on the point of indifference. Both the realism of the effort and the realism of the reward were manipulated in separate conditions. Several hypotheses can be inferred in this case. When effort is perceived as realistic, it could cause the subjects to experience more strain, thereby increasing the subjectively perceived effort. This would mean that realistic effort would cause subjects to be willing to perform less effort for the same reward, effectively raising the POI. On the other hand, realistic effort contributes to a realistic environment. A realistic virtual environment can excite the test subject. Excitement could increase motivation, causing subjects to be more willing to perform tasks of higher effort for the same reward, which would lower the POI. A third hypothesis can now also be constructed: if both ideas were true to an equal extent, the effects would cancel each other out and realistic effort would have no effect at all.

The hypothesis for realistic reward was more straightforward: subjects were expected to be willing to expend more effort for the same reward if the reward was experienced as more realistic, effectively lowering the POI.


Methodology

Participants

In total, 51 people participated in the experiment. One participant had to quit the experiment prematurely due to nausea and was thus excluded from the study. Two other participants were excluded from the analysis, one because of an error made by an experimenter and one because of a technical issue. Participants consisted mostly of acquaintances of the students working on the project and psychology freshmen (for whom a certain amount of research participation is compulsory). Participants were not paid for their attendance, but could be randomly selected for prize money, which amounted to 10 eurocents for each virtual coin acquired. All participants were screened for risk factors for cybersickness. One person who enlisted for the experiment was not tested because of these risk factors.

Boredom was found not to be a problem for the freshmen who participated out of obligation. VR was generally found to be exciting for all participants, ruling out the main disadvantage of using freshman students for research.

Experimental setup and conditions

The experiment consisted of three conditions: realistic reward, realistic effort and a baseline condition. Participants completed the conditions in counterbalanced order. Every condition was performed in VR and consisted of the same set of 13 trials. Participants were instructed to power a virtual mine cart that drove over a train track by making pumping motions with a bicycle pump to fill a virtual power bar. At the beginning of each trial, participants were given a choice between a high effort and a low effort track. Next to each track, the corresponding reward was displayed in virtual coins. The high effort track generally yielded a higher reward (more coins) than the low effort track. The color-coding on the displayed tracks informed the participants about the amount of effort each section of the track would require: green sections required no pumping effort, orange sections required medium effort and red sections required high effort. Participants were also informed that both tracks would require the same amount of time to complete if they powered the cart sufficiently. The cart moved more slowly inside the red and orange sections, but these sections were smaller in size in order to equalize the time each section required.
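As a rough sketch of how such a power-bar mechanic can be implemented (the drain rates, pump increment and function names below are purely illustrative assumptions, not the values used in the experiment): every pump stroke adds power, every section color drains power at its own rate, and the bar is clamped to its displayed range.

```python
# Illustrative drain rates per track-section color (power units per second);
# the actual values used in the experiment are not documented here.
DRAIN_RATE = {"green": 0.0, "orange": 1.0, "red": 2.5}
PUMP_POWER = 0.4   # power added per registered pump stroke (hypothetical)

def update_power(power, section_color, pump_strokes, dt):
    """Advance the power bar by one simulation step of `dt` seconds."""
    power += pump_strokes * PUMP_POWER
    power -= DRAIN_RATE[section_color] * dt
    return max(0.0, min(power, 1.0))   # keep the bar within [0, 1]

# In a red section the bar drains quickly, so more pumping is needed to keep the cart moving.
print(update_power(0.5, "red", pump_strokes=2, dt=0.5))
```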

In the baseline condition, participants were seated in front of a virtual desk with a computer screen. On the left and right walls, two additional computer screens were placed. The screens on the walls abstractly displayed the two available tracks and their corresponding reward in coins as stacked yellow bars. After choosing a track by clicking the left or right button, the screen on the desk showed both the power bar and a progress bar for the chosen track, indicating the current position of the cart. Participants were only able to track their progress through this information on the screen: no visible cart was moving. Inside medium effort sections, the power bar depleted slowly, and inside high effort sections it depleted more rapidly, which meant that more pumping was required in these sections to keep the cart moving.

Figure 5. Baseline condition

In the realistic reward condition, participants were in the same virtual environment, but instead of the reward being abstractly displayed on the screens it was displayed as two stacks of shiny virtual coins on the left and right side of the desk. The middle of the desk now also showed a closed treasure chest. After making a choice, the chest temporarily opened and the corresponding stack of coins flew towards the center of the participant's view, right above the chest, and subsequently fell down into the chest. When the coins hit the bottom of the chest, a sound effect and a particle effect were played. The sound effect was either a loud or a quiet sound of bouncing coins, depending on whether the high or low reward track was chosen. The particle effect consisted of golden sparks bursting out of the chest and into the room.


In the realistic effort condition, the desk was not displayed. Instead, the participants found themselves seated in an actual visible mine cart inside the room. The power bar was now visible on a small screen inside the cart. A virtual version of the pump was also mounted onto the floor of the cart. The virtual pump was implemented in such a way that its handle was kept at roughly the same position as the handle of the actual pump. After selecting a track, a large door opened and the cart started moving forward into an outside forest environment. Participants now tracked their progress by estimating their position on a real-size virtual track. Sections of the track that required low effort were overgrown with grass and those of high effort with shrubs; sections that did not require any effort were not overgrown. When entering the sections that required more effort, the participants would visibly slow down. When the cart collided with grass or shrubs, a sound effect of crumbling grass or shrubs was played.


In order to compensate for the effects of the forest environment in the realistic effort condition, two windows through which the same forest environment was visible were placed inside the room in the other conditions.

Before the start of the experiment, participants completed at least one test trial in every environment, to make sure they understood the amount of effort the different colors represented, knew how to operate the cart and knew where to look for information about effort, reward and progress.

Heart rate was also measured by means of electrocardiography (ECG) on a separate machine. The system clocks of both machines were synchronized every day before the experiments started, so that measurements from both machines could be aligned in time.

A video which clearly shows all of the different conditions being executed is publicly available at:


Dynamic rewards

Rather than using a static reward value for each track, an algorithm was developed that dynamically determined the corresponding reward for each track depending on the previous choices of the subject. Each time a high effort track was chosen, the difference between the rewards of both tracks was reduced, and each time a low effort track was chosen it was increased. This functionality was added in order to approximate the POI as accurately as possible for each condition. To do this, a total effort value for every track was calculated by summing the effort values of its sections (1 for green, 2 for orange and 4 for red). A reward modifier value was also used. The idea behind this value was that it slowly approximates the POI throughout a condition. The reward modifier ranged between 0, which meant no difference between rewards, and 1.25, which meant the maximum difference in rewards. Each condition started with a reward modifier of 0.625, the exact center of this range. For each high effort choice the reward modifier was decreased and for each low effort choice it was increased. The amount of increase or decrease depended on how many consecutive times the same choice (high effort or low effort) had been made: 0.02 for the first choice, 0.05 for the second, 0.1 for the third and 0.2 for any consecutive choice after the third. This method was used because, theoretically, the current reward modifier will be further away from the POI when the subject keeps making the same choice over and over.

For each track of each trial the corresponding reward in coins was calculated by the following formula:

reward_track = 10 + total effort_track × reward modifier

The POI value was calculated by averaging the reward modifier over the last 4 trials of each condition for each subject. Assigning POI values has been shown to be a reliable method for measuring individual differences in subjective effort (Westbrook, Kester, & Braver, 2013).
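A minimal sketch of the staircase procedure described above (function names are illustrative; the section values, step sizes, range, starting value and POI averaging follow the description in the text):

```python
SECTION_EFFORT = {"green": 1, "orange": 2, "red": 4}
STEP_SIZES = [0.02, 0.05, 0.1, 0.2]   # 1st, 2nd, 3rd and 4th-or-later consecutive same choice

def track_effort(sections):
    """Total effort of a track: the sum of its section values (1/2/4)."""
    return sum(SECTION_EFFORT[s] for s in sections)

def track_reward(sections, reward_modifier):
    """reward_track = 10 + total effort_track * reward modifier (in coins)."""
    return 10 + track_effort(sections) * reward_modifier

def update_modifier(reward_modifier, chose_high_effort, consecutive_same_choices):
    """High effort choices shrink the reward gap, low effort choices widen it."""
    step = STEP_SIZES[min(consecutive_same_choices, len(STEP_SIZES)) - 1]
    reward_modifier += -step if chose_high_effort else step
    return max(0.0, min(reward_modifier, 1.25))   # clamp to the allowed range

def point_of_indifference(modifier_history, last_n=4):
    """POI estimate: the average reward modifier over the last `last_n` trials."""
    return sum(modifier_history[-last_n:]) / last_n

# Example: a subject who picks the high effort track on all 13 trials of a condition
# drives the modifier from its starting value of 0.625 down to the floor of the range.
history, modifier, streak = [], 0.625, 0
for _ in range(13):
    streak += 1
    modifier = update_modifier(modifier, chose_high_effort=True,
                               consecutive_same_choices=streak)
    history.append(modifier)
print(point_of_indifference(history))   # 0.0: the floor effect discussed later
```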


Materials

The system specifications of the computer system that was used in our experiment were:

• Intel Core i7-3770 CPU, 3.4 GHz, 4 cores
• Zotac GeForce GTX 970 Extreme Core Edition
• 16 GB of DDR3 memory
• 256 GB solid state drive
• Logitech Gaming Mouse G300
• Oculus Rift DK2

The application was developed using Unreal Engine 4.10 on a Windows 7 Enterprise (SP1) operating system.

The controller that was used for this experiment consisted of a bicycle pump with a mouse attached to the base of the pump. The mouse was placed on top of an aluminum strip, which was covered in felt. This made it possible for the mouse to measure movements of the pump handle. Because the felt-covered aluminum strip was attached to the handle of the pump and the mouse was attached to the base, the mouse read how much the felt shifted up and down, and from this the position of the pump could be measured. More information about experiences and difficulties with computer hardware can be found in the appendix.
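A minimal sketch of the position reconstruction this setup relies on (the class name, parameter names and travel range are illustrative assumptions): the mouse only reports relative movement, so the handle position has to be accumulated from those deltas and clamped to the physical travel of the pump. As the discussion section explains, movement that the sensor under-reports during fast strokes is simply lost, which this approach cannot correct.

```python
class PumpTracker:
    """Reconstruct the pump handle position from relative mouse movement."""

    def __init__(self, travel_counts: int = 2000):
        self.travel_counts = travel_counts   # full handle travel in mouse counts (hypothetical)
        self.position = 0                    # 0 = handle fully down, travel_counts = fully up

    def on_mouse_delta(self, dy: int) -> float:
        """Feed one relative Y movement; return the handle position as a 0-1 fraction."""
        self.position = max(0, min(self.position + dy, self.travel_counts))
        return self.position / self.travel_counts

tracker = PumpTracker()
print(tracker.on_mouse_delta(500))   # handle pushed roughly a quarter of the way up
```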


Results

Point of indifference

For each subject, three POI values were calculated by averaging the reward modifier of the last 4 trials of each condition. This value was used as the dependent variable.

Below are several descriptive statistics and diagrams about the gathered data.

Table 1. Descriptives for each condition in the data

Figure 8. POI distribution for each condition in the data

Figure 9. Mean POI for each condition in the data

A repeated measures ANOVA indicated that within each subject, there was no significant difference between the different conditions in terms of POI (F(46) = 0.982, p = 0.382). Contrasts also revealed no difference between the baseline and realistic reward conditions (F(1, 47) = 0.628, p = 0.432) and no difference between the baseline and realistic effort conditions (F(1, 47) = 2.004, p = 0.162). Sphericity was not violated.

Manipulation check

In addition, a manipulation check was performed for each condition by asking the participants to what extent they felt they had to work hard and to what extent they found the reward attractive. Responses were given on a Likert scale ranging from 1 (not at all) to 7 (very much). Below are two sets of diagrams about the gathered data.

Figure 10. Distribution of perceived effort (Likert 1-7) for each condition in the data

Figure 11. Distribution of perceived reward (Likert 1-7) for each condition in the data

No significant effect was found for perceived effort, but a significant effect was revealed for perceived reward (F(2, 41) = 7.444, p = 0.001). Contrasts revealed a significant difference in perceived reward both between the baseline and realistic reward conditions (F(1, 42) = 20.081, p < 0.001) and between the baseline and realistic effort conditions (F(1, 42) = 6.055, p = 0.018). Sphericity was not violated.


Conclusions and discussion

No significant effects were found in this experiment, which was unexpected by the author. Of course, it is possible that there simply is no effect, in which case realism really does not affect the POI at all, which would mean that it does not affect the subjectively perceived effort or reward either. The manipulation check, however, revealed that this is highly unlikely. When merely asked to what extent they felt they had to expend effort and to what extent they found the reward attractive in each condition, subjects did show a significant effect for the attractiveness of the reward. This indicates that a flaw in the experiment has probably introduced noise into the POI values, masking a possible latent effect. By far the most important flaw was the very large floor effect found in the data. Most subjects were surprisingly motivated to gather as many coins as possible and chose the high effort track most of the time. This is clearly visible in figure 8: a lot of data entries have a very low POI. It is entirely possible that this introduced a lot of noise into the data, which may have masked a possible effect. The results of the manipulation check support this idea. Subjects still reported being more attracted to the reward in both experimental conditions, even though no difference in POI was found, which conflicts with the idea that people are willing to expend more effort for a subjectively better reward (Kurniawan, Guitart-Masip & Dolan, 2011).

It is virtually certain that the cause of this floor effect was the poor implementation of the input device that the subjects used to power the virtual mine cart. The bicycle pump with the computer mouse attached proved to be a very inaccurate measurement device. The main issue was that a computer mouse measures only relative movement, and this measurement was quite inconsistent in the setup that was used. There was no way to accurately determine the position of the input device, because a computer mouse cannot report absolute position. The vertical position of the pump handle had to be reconstructed by keeping track of the reported relative movement each time the mouse reported movement, and this was the true cause of the problem.

The mouse proved unable to report movement accurately at different speeds. When the bicycle pump was pushed down very fast, the amount of movement reported by the mouse for the whole motion was many times lower than when it was pushed down more slowly. This meant that as soon as subjects started to pump faster, less power was added to the virtual mine cart. Slow, steady strokes using the whole range of motion of the pump handle reported much more movement than pumping very fast with a slightly narrower range of motion. This caused subjects with a tendency to pump very fast to actually generate less power than those who were pumping slowly. More importantly, bush-covered parts of a track drained power much more quickly, which naturally led subjects to start pumping more vigorously. This made it very easy to get stuck in bushy areas of the track, because the faster pumping to counteract the larger power drain actually resulted in less power being added. Unfortunately, there was no more time available to construct a better input system from scratch by the time this error came to light. The only possible solution to prevent the subjects from getting stuck was to reduce the overall physical difficulty of the task. In retrospect, this turned out to be the crucial flaw that severely degraded the validity of the entire experiment.

The most important change this experiment needs in a possible future continuation is a different input device that can register its position accurately. This will allow the physical difficulty of the task to be increased, which will reduce the floor effect and may therefore reveal a previously masked effect. Possible options include building a custom input device from scratch using a microcontroller and a sensor, or using a USB steering wheel controller with a lever mounted on it. In contrast to the computer mouse, this type of device can determine its own exact position, which would eliminate all of the problems the current input device has.

A follow-up experiment is necessary to determine whether a more properly developed task will allow realism to have an effect on effort-based decision making. The manipulation check questionnaire suggests that it probably will.


References

Anderson, P. L., Price, M., Edwards, S. M., Obasaju, M. A., Schmertz, S. K., Zimand, E., & Calamaras, M. R. (2013). Virtual reality exposure therapy for social anxiety disorder: A randomized controlled trial. Journal of consulting and clinical psychology, 81(5), 751.

Bohil, C. J., Alicea, B., & Biocca, F. A. (2011). Virtual reality in neuroscience research and therapy. Nature reviews neuroscience, 12(12), 752-762.

Bonnelle, V., Veromann, K. R., Heyes, S. B., Sterzo, E. L., Manohar, S., & Husain, M. (2015). Characterization of reward and effort mechanisms in apathy. Journal of Physiology-Paris, 109(1), 16-26.

Davis, S., Nesbitt, K., & Nalivaiko, E. (2015, January). Comparing the onset of cybersickness using the Oculus Rift and two virtual roller coasters. Proceedings of the 11th Australasian Conference on Interactive Entertainment (IE 2015) (Vol. 27, p. 30).

Kurniawan, I., Guitart-Masip, M., & Dolan, R. (2011). Dopamine and effort-based decision making. Frontiers in neuroscience, 5.

Malloy, K. M., & Milling, L. S. (2010). The effectiveness of virtual reality distraction for pain reduction: a systematic review. Clinical psychology review, 30(8), 1011-1018.

Parsons, T. D. (2015). Ecological validity in virtual reality-based neuropsychological assessment. Encyclopedia of Information Science and Technology, Third Edition, 1006-1015.

Richard, E., Tijou, A., Richard, P., & Ferrier, J. L. (2006). Multi-modal virtual environments for education with haptic and olfactory feedback. Virtual Reality, 10(3-4), 207-225.

Riva, G., Mantovani, F., Capideville, C. S., Preziosa, A., Morganti, F., Villani, D., ... & Alcañiz, M. (2007). Affective interactions using virtual reality: the link between presence and emotions. CyberPsychology & Behavior, 10(1), 45-56.

Rizzo, A., Hartholt, A., Grimani, M., Leeds, A., & Liewer, M. (2014). Virtual reality exposure therapy for combat-related posttraumatic stress disorder. Computer, 47(7), 31-37.

Rothbaum, B. O., Anderson, P., Zimand, E., Hodges, L., Lang, D., & Wilson, J. (2006). Virtual reality exposure therapy and standard (in vivo) exposure therapy in the treatment of fear of flying. Behavior Therapy, 37(1), 80-90.

Rothbaum, B. O., Rizzo, A., & Difede, J. (2010). Virtual reality exposure therapy for combat-related posttraumatic stress disorder. Annals of the New York Academy of Sciences, 1208(1), 126-132.


Schultheis, M. T., & Rizzo, A. A. (2001). The application of virtual reality technology in rehabilitation. Rehabilitation psychology, 46(3), 296.

Treadway, M. T., Bossaller, N. A., Shelton, R. C., & Zald, D. H. (2012). Effort-based decision-making in major depressive disorder: A translational model of motivational anhedonia. Journal of abnormal psychology, 121(3), 553.

Valmaggia, L. R., Latif, L., Kemptom, M. J., & Maria, R. C. (2016). Virtual reality in the psychological treatment for mental health problems: A systematic review of recent evidence. Psychiatry Research.

Wade, T. R., de Wit, H., & Richards, J. B. (2000). Effects of dopaminergic drugs on delayed reward as a measure of impulsive behavior in rats. Psychopharmacology, 150(1), 90-101.

Westbrook, A., Kester, D., & Braver, T. S. (2013). What is the subjective cost of cognitive effort? Load, trait, and aging effects revealed by economic preference. PLoS One, 8(7), e68210.

Wiederhold, B. K., & Bouchard, S. (2014). Sickness in Virtual Reality. Advances in Virtual


Appendix

This supplementary chapter will briefly discuss some of the technical developments throughout the experiment. It is mostly narrated from a first-person point of view and is not meant to be scientific in any way.

Appendix A: Issues with hardware and performance

During the experiment it became clear that getting a proper VR setup for a BSc graduation project with next to no funding is a very hard task. Our team was completely dependent on the technical support office of the university, which was unable to provide us with the proper hardware that is required for VR. The support office was notified about the fact that we wanted to run a modern VR application. Nevertheless, the system we were first provided with came with a GTX 750 graphics card installed. Right away, the support office had to be contacted again, as this inexpensive card did not even come close to what we needed, further delaying the development of our task. The system was then upgraded with an AMD Radeon 280X graphics card, which proved to be just barely able to run the task back when it was a lot simpler in graphical terms. For creating a more realistic environment, this system still was not nearly as powerful as we needed it to be. As this was already the most powerful card the technical support office owned and the total budget for the whole experiment was no more than €100, it seemed that we would have to deal with very large graphical limitations. Because I was not satisfied with these poor conditions, I ended up temporarily installing my personal graphics card from my home computer, a GTX 970, into the lab computer, which again resulted in a minor setback because the casing was initially too small to fit a large graphics card and had to be stripped down. Eventually the card could be installed, which gave us much more headroom in terms of performance, allowing us to create a more visually realistic task. Still, even though this was one of the most powerful graphics cards available at the time, the performance hit caused by VR still required us to make very deliberate choices about which kinds of graphics to invest our performance budget in.

Appendix B: Working as a software developer together with a team of social scientists

As I was the only person with extensive programming experience, the entire team of 11 people relied entirely upon me for the development of the software. By the time everything was properly installed there were only about 10 days left to develop the experiment. I was lucky to have had several extensive Skype sessions with the developer of the previous experiment, who was able to give me a quick head start on where to find everything in the Unreal Engine 4 editor and how to tackle common problems. Despite having a lot of experience with programming and software design, video game engines were completely new to me and there was very little time to learn all the basic principles. It is easy for an outsider to underestimate how much work a software developer needs to do in order to fix an issue that seems very simple at first sight. Even though it was fortunate to be of so much value to the team, it proved to be a hard task to stay focused and keep stress levels at bay when there are 10 days to create a complete VR experiment, using software you have never worked with before, while working together with a team of 11 people who have no idea what you are actually doing, even though they are still involved in the theoretical design.

I estimate that I spent approximately 160 hours on learning Unreal Engine 4 and developing the experiment in this time span of 10 days. Most of the process was a combination of learning about the engine and developing the experiment at the same time. Each time I encountered something that could not be solved with my knowledge at the time, I set aside some time to further familiarize myself with the related topics in order to gain insight into the problem, which allowed for an efficient use of the limited amount of time by only learning what was actually needed.

The construction of the input device was the only technical area in which I was not involved, as I was instructed to fully focus on programming. Given the limited time, this was the only sensible thing to do, but nonetheless handing a software-related task to people who are not involved with the software itself proved to be a risky practice. The discussion section of this article should make clear that the input device turned out to be the limiting factor in our experiment. If the constructors of the input device had communicated with me about their design choices from the start, these kinds of errors could easily have been prevented, which would have drastically increased the validity of this experiment. I would like to emphasize the importance of communication between software and non-software people in team projects. Communication is also important the other way around: from software developer towards experimenters. If you are the only programmer in a team, you will have to think ahead in terms of where a person without technical knowledge can go wrong. For example, simply checking the mouse sensitivity from an indicator light or plugging the headphones into the right sound adapter every day before starting up the experiment were tasks that repeatedly failed to be completed successfully by the experimenters. Psychology students really need help with many things regarding computers and software, especially while operating an experiment that barely had time to be tested, as there are bound to be some issues you will need to resolve.

Appendix C: Featured technical developments

Trial transitions

One of the things that was a technical challenge was transitioning from one trial to the next. After each trial, the cart had to be teleported from the end tunnel back to the starting tunnel. This had to be done in a seamless manner, in order to preserve the feeling of continually progressing through the environment. Because of the way Unreal Engine 4 renders the lighting, the inside of the starting tunnel looked a lot different from the inside of the end tunnel, which resulted in the scene showing entirely different colors after teleporting, breaking the immersion. After some experimentation, this problem seemed to be solvable by “baking” the lighting into the textures before runtime. This allows for a performance increase, as baked lights no longer have to be calculated during runtime, and it keeps the lighting totally consistent throughout the session.

In order to get the baked lighting of the end tunnel to look the same as that of the starting tunnel, both tunnels were placed at the exact same location. This resulted in three tunnels (start, left end and right end) clipping into each other in the Unreal Editor. The lighting was then baked, and during runtime the tunnels had to be moved to their actual locations. This introduced another problem, though. In Unreal Engine you have to indicate whether each object is movable or not. A movable object can be moved at runtime, as the name suggests, while a static (immovable) object has the advantage of pre-baked lighting (it will not move anyway, so the shadows and reflections it casts generally do not need to be recalculated constantly). I needed baked lighting for the tunnels, but I also needed to move the end tunnels to their proper locations during each trial. In the end I found a deeply hidden option in the mesh editor called “Light as if static”. This allowed me to override the default lighting rules and bake the lights of a movable object. This made the tunnels look exactly the same on the inside, and therefore the teleportation was barely noticeable anymore. There was just a short hitch, which I was unable to avoid due to new tracks being loaded.

This little trick did introduce some other visual anomalies, though. Because of the baked lighting, the inside of the start tunnel no longer lit up after the doors opened, even though sunlight should in theory enter the tunnel. Enabling some lighting options that simulate pupil constriction and dilation (miosis and mydriasis) allowed the sudden increase of light into the camera to overbrighten the field of vision for a very short time, which made the unlit inside of the tunnel go unnoticed. Unfortunately, this option had to be disabled later due to performance issues. Another byproduct of static lighting on movable objects was that the end tunnels no longer cast a shadow onto the environment. A possible way to resolve this issue was to add a predefined shadow texture showing a literal shadow to each tunnel, but unfortunately there was not enough time to add this. Nevertheless, these problems were far less severe than the teleportation process changing all of the colors.

Problems with lighting and shading

Lighting in general proved to be a tricky subject in Unreal Engine 4. The engine is capable of rendering some extremely realistic scenes, but many things have to be configured just right in order to make it work well. This requires extensive experience and study of all the different options and mechanics, for which, again, there was not much time. For instance, there was a problem with washed-out colors in the environment and all of the trees casting pitch-black shadows onto the landscape, which after many hours of looking for solutions turned out to result simply from having only a white light source projecting directly out of the sun, without any light reflecting from the sky having been configured. As a logical consequence of no light falling under the trees, the tree shadows turned pitch black. When a sky light was added this issue was easily resolved, and it also increased the realism of the scene by configuring photon bounces, which allowed all of the objects to be indirectly illuminated via neighboring objects as well. Afterwards the hue of the sky light was also changed to a very light blue instead of the default white, as a normal sky also looks blue. This corrected the colors of the environment and turned the whole scene into something that looked much more realistic. The best part of this is that it came at zero performance cost: none of these objects ever needed to be moved, so all of this lighting information could be “baked in”.

All of these little caveats were learned through reading and watching online tutorials and through a combination of trial and error and healthy curiosity about all the options that were offered by the Unreal Engine.

Another lighting problem in the early stages of development was that the shadows cast by the trees onto the landscape were very poorly defined; they all appeared as elliptical blobs at some distance from the tree, without the trunk being visible at all. After a lot of experimentation it turned out not to be the trees that needed tweaking but the landscape itself. The sections of the landscape were rather large in comparison to their very small shadow map resolution. The blob-like shadows were the result of the lighting being baked into an extremely low-resolution image, which reduced the tree shadows to blobs. When the shadow map resolution was increased, the trunk and even some transparency from gaps between the leaves became visible in the shadow, which greatly increased the quality of the scene. In the same way, the default shadow map resolution for the tree leaf surfaces themselves was set too low, resulting in each tree's leaves having a completely even dark color. When this was raised, the leaves had actual definition and were shaded decently. Because this lighting was also precalculated and the graphics card had more than enough video memory, the performance hit from this change was very minor.

Mesh combining

Some other performance optimizations mainly revolved around mesh combining. A mesh is a wireframe-like shape that defines the outline of an object; a texture is placed on top of it to give the object its actual colors and detail. In many cases, when a lot of instances of the same mesh are used, the collection can be combined into one larger mesh that is rendered at once, drastically reducing the number of draw calls required for the scene and therefore increasing performance. Using the performance analysis tools, I found out that many separate semi-transparent meshes can cost a lot of performance in Unreal Engine 4, and the large number of bushes and grass patches were the culprit. These were all separate meshes because each of these grasses and bushes has some behavior built in: when the cart runs over a patch of grass or a bush, a sound is played and the bush falls into the ground or the grass becomes trampled. To implement this behavior, separate so-called actors, each containing its own mesh, had to be defined for every single patch of grass and every bush.


Each actor also has its own collision box, roughly the same size as its mesh, which triggers the specific behavior when the cart comes in contact with it. Because each bush and each patch of grass has an independently running set of behaviors linked to it, they also have to be drawn separately, and since these are semi-transparent objects, this costs a lot of performance. This issue was circumvented by making each actor invisible and numbering them. One large combined mesh was then drawn, with its instances numbered in the same order and placed at exactly the same spots as the now invisible actors. When an actor collided with the cart, its index was passed to the combined mesh, which was then redrawn with the corresponding numbered instance manipulated accordingly. This neat little trick gave us the performance advantage of combined meshes while still allowing specific, separate behavior for each mesh.
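To make the mechanism concrete, the sketch below illustrates the index-mapping idea in plain, self-contained C++ rather than the actual Unreal Engine classes; all names and the "trampled" flag are hypothetical simplifications of the invisible-actor-plus-combined-mesh setup described above.

#include <vector>

// Simplified sketch of the index-mapping trick (not the actual project code):
// one combined mesh holds all foliage instances, while each invisible
// collision actor only remembers the index of "its" instance.

struct FoliageInstance {
    float x, y, z;    // world position of this bush or grass patch
    bool trampled;    // whether the cart has already run over it
};

// Stands in for the single combined (instanced) mesh drawn in one call.
class CombinedFoliageMesh {
public:
    // Returns the index that is handed to the matching invisible actor.
    int AddInstance(float x, float y, float z) {
        instances.push_back({x, y, z, false});
        return static_cast<int>(instances.size()) - 1;
    }

    // Called by an invisible actor when the cart overlaps its collision box.
    void OnInstanceHit(int index) {
        if (index < 0 || index >= static_cast<int>(instances.size())) return;
        instances[index].trampled = true;  // e.g. sink the bush or flatten the grass,
                                           // then mark the instance to be redrawn
    }

private:
    std::vector<FoliageInstance> instances;
};

// Invisible actor: only a collision volume plus the index of its instance.
struct FoliageCollisionActor {
    int instanceIndex;
    void OnCartOverlap(CombinedFoliageMesh& mesh) const {
        mesh.OnInstanceHit(instanceIndex);
    }
};

int main() {
    CombinedFoliageMesh mesh;
    FoliageCollisionActor bush{mesh.AddInstance(10.0f, 4.0f, 0.0f)};
    bush.OnCartOverlap(mesh);  // cart drives through the bush: instance 0 is trampled
    return 0;
}

The invisible actors keep their cheap, individual collision logic, while the expensive semi-transparent rendering happens only once for the combined mesh.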

Animated physics for coins

Another issue was found with the floating coins in the Realistic Reward condition. First of all, it took a very long time to get the coins to flow smoothly and in perfect choreography with the opening and closing of the chest lid, which also had to be synchronized with its sound effect. Each coin had real-time simulated physics, which required an extensive amount of tweaking and some tricks involving linear algebra and acceleration formulas. Each coin has a specific top speed, and as long as that top speed has not been reached, the coin is accelerated towards a central spot right above the chest. The actual top speed for each coin depends on its distance from this target "magnet spot": the closer a coin gets to the central spot, the lower its top speed. Together with the acceleration, this allowed the coins to smoothly accelerate towards the target area above the chest while slowing down as they approached it.
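A minimal sketch of this distance-dependent speed cap, written as plain C++ rather than the project's actual Unreal Engine code; the acceleration and speed-per-unit constants are assumptions for illustration only.

#include <cmath>

// Simplified per-frame velocity update for one floating coin.
struct Vec3 { float x, y, z; };

Vec3 UpdateCoinVelocity(Vec3 coinPos, Vec3 magnetPos, Vec3 velocity, float dt) {
    const float acceleration = 300.0f;  // assumed acceleration towards the magnet spot
    const float speedPerUnit = 2.0f;    // top speed scales with distance to the magnet

    Vec3 toMagnet = {magnetPos.x - coinPos.x,
                     magnetPos.y - coinPos.y,
                     magnetPos.z - coinPos.z};
    float distance = std::sqrt(toMagnet.x * toMagnet.x +
                               toMagnet.y * toMagnet.y +
                               toMagnet.z * toMagnet.z);
    float topSpeed = distance * speedPerUnit;  // closer coin -> lower speed cap

    float speed = std::sqrt(velocity.x * velocity.x +
                            velocity.y * velocity.y +
                            velocity.z * velocity.z);

    if (distance > 0.0f && speed < topSpeed) {
        // Below the cap: accelerate towards the magnet spot.
        Vec3 dir = {toMagnet.x / distance, toMagnet.y / distance, toMagnet.z / distance};
        velocity.x += dir.x * acceleration * dt;
        velocity.y += dir.y * acceleration * dt;
        velocity.z += dir.z * acceleration * dt;
    } else if (speed > topSpeed && speed > 0.0f) {
        // Above the cap (the coin got closer): scale the velocity down so it slows.
        float s = topSpeed / speed;
        velocity.x *= s; velocity.y *= s; velocity.z *= s;
    }
    return velocity;
}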

With just this algorithm, however, the coins hit the side of the chest on their way to the target spot above it. By multiplying the Z-component of each coin's direction vector by a constant of 4, the coins moved much faster along the vertical axis than along the other axes, making them travel in an upward curve towards the center spot.

The last issue at this point was that the coins kept the shape of a stack while they floated towards the target spot. This was resolved by also adding a random component to each coin's direction vector, weighted by how long ago the floating process had started. A randomness factor was introduced, ranging from 0 to 1, which started at 1 and quickly decreased to 0 once the coin began to move. This factor determined how much random direction was added to the coin, while the direction towards the target spot was multiplied by 1 minus this factor. As a result, at the beginning of the floating phase each coin moves in a different direction, but during the first half second the coins gradually steer towards the target spot above the chest and the effect of the random vector is eliminated.
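The direction handling described in the last two paragraphs could look roughly like the sketch below. The Z multiplier of 4 and the half-second decay follow the text; the random number helper and the exact blending form are assumptions for illustration.

#include <algorithm>
#include <cstdlib>

struct Vec3 { float x, y, z; };

// Random value in [-1, 1] (placeholder for the engine's own random utilities).
static float RandUnit() {
    return (static_cast<float>(std::rand()) / RAND_MAX) * 2.0f - 1.0f;
}

// Steering direction for one coin: a blend of a random vector (dominant at the
// start of the floating phase) and the direction towards the magnet spot.
Vec3 CoinSteerDirection(Vec3 coinPos, Vec3 magnetPos, float secondsSinceStart) {
    // Randomness starts at 1 and decays to 0 during the first half second.
    float randomness = std::max(0.0f, 1.0f - secondsSinceStart * 2.0f);

    // Direction to the magnet spot, with Z boosted so the path curves upward.
    Vec3 toMagnet = {magnetPos.x - coinPos.x,
                     magnetPos.y - coinPos.y,
                     (magnetPos.z - coinPos.z) * 4.0f};

    Vec3 random = {RandUnit(), RandUnit(), RandUnit()};

    // Fully random at the start, fully goal-directed once randomness reaches 0.
    return {random.x * randomness + toMagnet.x * (1.0f - randomness),
            random.y * randomness + toMagnet.y * (1.0f - randomness),
            random.z * randomness + toMagnet.z * (1.0f - randomness)};
}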


In practice, this meant that the coins first moved outward, expanding and randomizing the shape of the stack, and afterwards moved towards the target area above the chest as a loose clump of coins instead of a straight stack. When all of the coins were within a certain distance of the center spot, the floating phase stopped and gravity was enabled again, so the coins dropped down into the chest, which closed right after. Because the physics simulation has a random element, the coins sometimes fell outside of the chest during test runs, which broke immersion. This was fixed by building an invisible funnel on top of the borders of the chest, angled towards the center of its bottom, so that coins which would have hit the sides always bounced in the right direction and still landed inside the chest.

After the lid closed, the actual coins were removed from the world, so as not to clutter the virtual environment with needless physics objects that strain the system. Instead, a mesh of a pile of golden coins was added to the bottom of the chest, so that subjects would still see some coins inside the chest when it opened again and immersion would hopefully not be broken by the disappearance of the exact coins that had landed there previously.

Making a faulty input device (partially) work

The acceleration problems of the mouse mounted on top of the bicycle pump handle (the input device) were partially solved by incorporating a few algorithms. First of all, in the Realistic Effort condition the virtual pump handle moved in accordance with the physical pump. Because the actual location of the pump handle could not be measured accurately, the speed at which the virtual handle moved towards either the topmost or bottommost position was made dependent on its distance from those positions. Simply put, the closer the handle got to the edges of its range of motion, the slower it moved per unit of mouse movement. This effectively compensated for the measurement error by letting the handle approach the edges asymptotically, instead of moving straight at them and being clamped to a fixed range. The result was a smoother movement of the handle towards the edges of its range instead of a sudden stop, which looks more realistic and breaks immersion less. The problem of subjects becoming stuck in bushes was solved in a similar way: the amount of power added per stroke was made dependent on the amount of power the cart already had. A nearly empty power reserve would fill up more quickly than one that had already been partially filled. This prevented subjects from getting completely stuck in a high-effort area, because they could put more power into the cart per unit of movement, and so a complete halt of the experiment was avoided. Because getting stuck no longer needed to be feared as much, the difficulty of the task could be raised a little in order to reduce a floor effect on the POI. Unfortunately, as described in the discussion chapter, this reduction was not nearly enough.
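A rough sketch of these two compensations is given below, with the handle position and cart power normalized to the range 0 to 1. The specific gain formulas are assumptions chosen to illustrate the asymptotic approach and the power-dependent fill rate, not the project's actual values.

#include <algorithm>

// Moves the virtual pump handle: the closer the handle gets to the edge it is
// moving towards, the less one unit of mouse movement displaces it, so it
// approaches the ends of its range asymptotically instead of stopping abruptly.
float UpdateHandlePosition(float handlePos, float mouseDelta) {
    float distToTargetEdge = (mouseDelta > 0.0f) ? (1.0f - handlePos) : handlePos;
    handlePos += mouseDelta * distToTargetEdge;        // gain shrinks near the edge
    return std::min(1.0f, std::max(0.0f, handlePos));  // safety clamp only
}

// Adds power to the cart for one pump stroke: a nearly empty power reserve fills
// faster than a partially filled one, so subjects cannot grind to a complete halt
// in a high-effort area.
float AddPumpPower(float currentPower, float strokeAmount) {
    float gain = 1.0f + (1.0f - currentPower);         // up to twice as effective when empty
    return std::min(1.0f, currentPower + strokeAmount * gain);
}

In both cases the key idea is that the gain shrinks as the measured value approaches a boundary, which masks the inaccuracy of the input device without ever letting the simulation come to a hard stop.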
