
Bachelor Informatica

Hands-on

Scientific Visualization

in Virtual Reality

Adeela Ashraf

June 8, 2018

Supervisor: dr. R. G. Belleman

Informatica
Universiteit van Amsterdam


Abstract

The field of scientific visualization is concerned with the visualization of data derived from various sources. One way of visualizing data is by means of virtual reality (VR), where a user can observe visualized data using VR equipment. Currently such visualization is possible, but not optimal in terms of intuition and user-friendliness. The Visualization Toolkit (VTK) is a large library consisting of more than 2000 C++ classes. In recent years, support for the library has been expanded with the OpenVR API, making it compatible with VR-equipment. The Leap Motion controller is a device that tracks the position and movement of hands, and can be used with VR-equipment through the LeapSDK. In this thesis, an application is described that allows for easy and immersive hands-on interaction with visualized data. This is done by visualizing the fingertips of the hands and letting them interact with the data. The interaction is implemented using the SMRX dataset, which describes a static mixer used to mix viscous fluid streams. The index finger of each hand acts as a source of streamlines through the mixer, allowing the visualization of mixed streamlines in real-time. With these results, a first step is made towards a system where data can be explored in an intuitive way, enhancing the experience of immersion in the VR-environment.


Contents

1 Introduction
  1.1 Context
  1.2 Related Work
  1.3 Research Question
  1.4 Project Deliverables
  1.5 Thesis Structure
2 Application Design
  2.1 Hardware
    2.1.1 Oculus Rift
    2.1.2 Leap Motion
  2.2 Software
    2.2.1 The Visualization Toolkit (VTK)
    2.2.2 OpenVR
    2.2.3 Leap Motion SDK
  2.3 Design Requirements
    2.3.1 Hand Visualization
    2.3.2 Interaction
3 Method
  3.1 Multithreading
  3.2 Hand Visualization
  3.3 Data Insertion
  3.4 Interaction
4 Results
5 Discussion
6 Conclusion
  6.1 Future Work


CHAPTER 1

Introduction

1.1 Context

The field of scientific visualization is concerned with the visualization of data derived from various sources. The data is represented in a three-dimensional format, where the aim is to render it as insightfully as possible by utilizing volumes, surfaces and light sources. Often the visualized data contains a time component as well. Many research projects generate such large amounts of data that it becomes difficult to analyze the data properly in a numerical manner. It therefore becomes desirable to create visualizations in order to find patterns in the data that could otherwise not have been detected [5].

In the field of scientific visualization, reference models can be used to give a better overview of the process of visualizing data. One such model is the Haber and McNabb model, which is widely used in various visualization systems. The process it describes, creating a visual representation of data, is referred to as the visualization pipeline. In brief, raw data is first analyzed and prepared. Next, filters can be applied to the prepared data, altering the way the data will be visualized; for instance, one could think of applying a filter that visualizes isosurfaces or isolines of a dataset. Once the desired filters are applied, the data, now called focused data, is mapped to geometric coordinates. Lastly, the data is rendered and transformed into image data, which is the final visualization seen on screen. See figure 1.1 for an overview of the pipeline [7].

Figure 1.1: A high-level overview of the visualization pipeline of the Haber and McNabb reference model.

One way of viewing visualized data is by means of virtual reality (VR), where a user can observe visualized data through a simulation using VR equipment. Observing data in a virtual environment has been shown to add value to the level of immersion and understanding of the environment [6]. Before the start of this project, visualization was possible but not optimal in terms of intuition, user-friendliness, or immersion. It was also not possible to modify the visualization pipeline from within the VR-environment. Modifying the data requires leaving the VR-environment and changing the data through some other medium; to see the changes, the VR-environment has to be entered again. This process of leaving, modifying, and re-entering can be seen as inefficient and time-consuming, especially for datasets where a lot of modification is needed.

A commonly used framework for visualizing data is the Visualization Toolkit (VTK), a large library containing more than 2000 C++ classes that can be used to create and alter visualization pipelines. In recent years, support for the framework has been expanded with the OpenVR module, an API that supports SteamVR, allowing VTK to be compatible with VR-equipment [19, 21].

Using VTK in the VR-environment it is possible to build applications, but these are not interactive in terms of intuition, user-friendliness or modification. To enhance the immersion in such an environment one could look at a hands-on approach to interacting with data. This has not been explored in great detail, which leaves room for an application where the use of hands takes center stage. One way of doing this is by utilizing hand-tracking hardware such as the Leap Motion controller [9].

Once users are able to use their hands in the VR-environment, it becomes possible to apply this tool to interact with data. The advantage this provides over interaction in a non-VR environment is that the effects of the interactions are shown immediately, providing a real-time simulation. Such interaction would make working with a dataset easier and more insightful, and would be a first step towards a system where data can be explored in an intuitive way.

In conclusion, it is desirable to create an application using the VTK framework in which hands-on interaction with visualized data is possible, allowing for an easier and more insightful experience. To achieve this goal, a VR-application needs to be built that allows for hands-on interaction with data.

1.2 Related Work

In recent years, some work has been done related to interaction with and modification of data in the VR-environment. One example is ParaView, an open-source application used to visualize data. Originating from the same group that constructed the Visualization Toolkit (VTK), ParaView allows large datasets to be visualized and visualization pipelines to be built. Its GUI allows for a user-friendly experience where interaction with data is done in an intuitive manner, see figure 1.2 for a visual representation. Altering the pipeline is done in a similar way. ParaView also supports VR-equipment, but suffers from the same constraint that no interaction with or modification of the pipeline can be done from within the VR-environment [12, 17].

Research has also been done on enhancements to the Visualization Toolkit that enable scientific visualization in immersive environments. In said paper, two new approaches are proposed that simplify combining a user-friendly, immersive interface with VTK. Several enhancements are given that provide real-time updates and efficient interaction [16].


Figure 1.2: Visualization in ParaView displaying streamlines using a vector field defined on a dataset.

Work similar to this thesis has been done using VTK and ParaView by Daan Kruis, who created an application for building a contour visualization pipeline in the VR-environment. The user is allowed to change the contour value of visualized data within the environment. In another paper, by Henk Dreuning, an application has been made that provides a virtual environment allowing for the creation of a basic visualization pipeline. It is possible to modify selected parameters and view the result in the same environment [4, 13].

A challenge is combining VR-equipment with hand-tracking hardware such as the Leap Motion controller. In line with the desire for hands-on interaction in the application, some calculations must be done to process the input from the Leap Motion controller and use the data in the VR-environment. In this regard some research has been done in which native support was implemented for the Oculus Rift and Leap Motion controller. While that paper does not use the software intended here, such as VTK and OpenVR, its method of processing the input from the Leap Motion controller is still applicable to the implementation desired in this thesis [15].

Earlier work has been done within the University of Amsterdam by dr. R. G. Belleman, implementing an application that combines the VTK toolkit with the Leap Motion controller in order to achieve hands-on interaction with data. This work will be used as a reference for how the application can be built [2].


1.3 Research Question

It is desired to build an application that allows for hands-on, intuitive interaction with data. This should make working with a dataset easier and more insightful. From this, the research question can be stated as follows:

How can an application be built within the virtual reality environment that allows for hands-on intuitive interaction, such that an easier and more insightful experience of the data can be given?

1.4 Project Deliverables

The deliverable of the project takes the form of an application, which can be split into two main parts: hand visualization and interaction. By combining both parts, interaction with data is possible in real-time. If it is possible to interact with data, a first step is made towards a system where data can be explored in an intuitive way, enhancing the experience of immersion.

The first part of the application focuses on rendering some form of hand representation. As it is a challenge to process the input from the Leap Motion controller and render it in the VTK-environment, this is taken as a separate goal. The second part of the deliverable focuses on letting the visualized hands interact with data. By interacting with data, the visualization pipeline is altered, and the effect of the alteration is seen in real-time within the VR-environment. The important aspect of this application is that the method of execution should aid in easier interaction, and that the nature of the interaction should provide more insight into the dataset.

1.5 Thesis Structure

The structure of the following chapters in this thesis is as follows. In chapter 2, the design aspects of the application are discussed, which concern the hardware, software and design choices. Chapter 3 discusses how the design aspects from the previous chapter are implemented, along with the reasoning behind certain choices that were or were not made during the implementation. In chapter 4 the results of the implementation are shown, while chapter 5 discusses these results. Lastly, chapter 6 draws conclusions about the project and discusses future work.


CHAPTER 2

Application Design

This chapter focuses on the design aspects of the application that is the deliverable of this project. The design aspects concern the hardware and software that are used, and the requirements the application itself should adhere to in order to meet the defined project goals. Addressing these design aspects gives better insight into the terms and constraints under which the application is built, and provides reasoning as to why certain choices were made.

2.1 Hardware

In this subsection the hardware that is used to build the application is discussed, along with the reasoning behind the choices that were made.

2.1.1 Oculus Rift

In order to visualize data in the VR-environment, the Oculus Rift headset developed by Oculus VR is used as VR-equipment. To be more precise, the Oculus Rift Consumer Version 1 (CV1) is used, see figure 2.1 for the head-mounted display (HMD) of this version. Along with the HMD, the headset also provides two sensors that track the position and orientation of the HMD, and two controllers for user input. For this thesis, the controllers of the headset will not be used, as the deliverables aim for a hands-on application.

The Oculus Rift CV1 is used for the application because earlier work found it compatible with the VTK toolkit, OpenVR and the Leap Motion controller. It should be noted that the HTC Vive could be a compatible option as well; however, it was not chosen due to practical availability. Nonetheless, as VTK and the Leap Motion controller are compatible with more than just the Oculus Rift, it is expected that the application can be used with different HMDs as well [9, 10, 19, 21].

Indirectly related to the hardware needed to build the application, some hardware requirements must be met in order to use the Oculus Rift comfortably. The most notable requirement is a sufficient graphics card, such as the NVIDIA GTX 1060. The compatibility of the computer hardware can be tested using the Oculus Rift Compatibility Check Tool (https://support.oculus.com/1357437467617798/), and it is preferred to follow the recommended system specifications as well (https://support.oculus.com/1773584749575567/). When building the application, the graphics card used is the GTX Titan.


Figure 2.1: An exploded view of the Oculus Rift HMD Consumer Version 1 to which a Leap Motion controller is attached.

2.1.2 Leap Motion

The Leap Motion controller (also called The Leap) is a sensor device developed by Leap Motion Inc. that tracks the motions of the hands and fingers. Hands are tracked at a high frame rate and accuracy, which allows for user input without the need for a mouse or touch. For desktop use the device is placed flat facing upward, but it can also be attached to an HMD, see figure 2.1 for an illustration.

The controller utilizes optical sensors and infrared light to allow tracking, with a field of view of roughly 150 degrees. The range in which hands are detected is 25 to 600 mm above the controller. The controller tracks optimally when it has a clear, high-contrast view of the hand. In recent years, the Leap Motion controller has become compatible with VR-equipment such as the Oculus Rift and the HTC Vive [1].

As an alternative to working with VTK for Leap Motion-related applications, the developers of Leap Motion provide the Leap Motion Interaction Engine. Meant to be used with the Unity game engine and the Leap Motion controller, it provides a customizable layer that allows developers to create interaction with objects and interfaces. For the Interaction Engine, the Unity Core Assets can be downloaded and used in development. As an extension to these Core Assets the Hands module can be downloaded as well, containing a small collection of prefabs that can be used in development [8, 10].

The Interaction Engine allows the user to create interaction with objects, interfaces and many other sorts of applications. These illustrate quite well the potential the Leap Motion controller has, in- and outside the domain of virtual reality. As an example, one such application is Blocks, an application where the user can create and interact with geometric shapes and manipulate gravity in the VR-environment. See figure 2.2 for an illustration of the application. Other examples of such applications include a mirror-simulation, arranging and assembling virtual objects, and a weightless environment [9].


Figure 2.2: An application built using the Leap Motion Interaction Engine, where the user can create and interact with geometric shapes and manipulate gravity in the VR-environment.

As it was desired to build a hands-on application, the Leap Motion controller was a suitable option for hand-tracking hardware. It was also chosen for its practical availability and its good compatibility with VR-equipment. Furthermore, as will be mentioned in the following section, the Leap Motion controller is deemed compatible with the VTK toolkit and OpenVR. Lastly, the applications mentioned above make clear that the use of the Leap Motion controller in the VR-environment can provide an intuitive experience, allowing it to replace the Oculus Rift controllers in applications.

2.2 Software

The following subsection discusses the software that is used to build the application, along with the reasoning behind the choices that were made. It should be noted that the software discussed regards compatibility with the Microsoft Windows operating system, and that the application is built in C++, as that is the language the VTK toolkit is written in.

2.2.1 The Visualization Toolkit (VTK)

The Visualization Toolkit (VTK) is an open-source library developed by Kitware Inc. containing more than 2000 C++ classes, mainly used for 3D computer graphics, image processing and the visualization of data. Next to C++ it is also possible to work with Tcl, Java and Python using wrappers. VTK runs on many platforms such as Windows, Linux, macOS, and other Unix distributions. It is possible to install VTK from source or to download the binary executables. For the desired application, VTK needs to be built from source using CMake in order to include the OpenVR dependencies, see subsection 2.2.2 for more details [19].

As mentioned in section 1.1, the Haber and McNabb reference model is used in various visualization systems to visualize data. This model is also referred to as the visualization pipeline and applies to VTK as well. The general process of visualizing data along this pipeline in VTK consists of a number of steps. This process is discussed here to give better insight into how the hands and data are visualized in the VR-environment in the built application [7].


In this process, data is read from a source, which can either be existing or generated data. Existing data is often found with the .vtk or .vti extension, and is read with the vtkStructuredPointsReader and vtkXMLImageDataReader classes respectively. To generate data, classes derived from vtkObject can be used. After the data is read, filters can be applied to it, such as a contour filter. A contour filter creates isosurfaces or isolines, depending on whether the data is stored in a 3D or 2D format.

The next step in the pipeline is mapping the data to geometric objects using the vtkPolyDataMapper class. Next an actor is created using vtkActor, which represents the geometry and properties of an object in a rendered scene. Lastly, the actor is rendered using vtkRenderer or vtkOpenVRRenderer. These classes provide an abstract specification for renderers: they convert the geometry, light specification and camera view into an image on screen. Strictly speaking independent of this pipeline, but often relevant, are the classes vtkRenderWindow, vtkRenderWindowInteractor, vtkOpenVRRenderWindow and vtkOpenVRRenderWindowInteractor. The render window classes create a window for the renderers to draw into, whose behaviour can be specified. The render window interactor allows for platform-independent render window interaction such as frame rate control [13, 19].
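To make this pipeline concrete, the sketch below wires a reader, contour filter, mapper and actor to the OpenVR renderer classes. It is a hedged illustration rather than the application's actual code: the file name data.vtk and the contour value are assumptions, and error handling is omitted.

```cpp
#include <vtkSmartPointer.h>
#include <vtkStructuredPointsReader.h>
#include <vtkContourFilter.h>
#include <vtkPolyDataMapper.h>
#include <vtkActor.h>
#include <vtkOpenVRRenderer.h>
#include <vtkOpenVRRenderWindow.h>
#include <vtkOpenVRRenderWindowInteractor.h>

int main()
{
    // Source: read an existing .vtk structured-points dataset (hypothetical file name).
    auto reader = vtkSmartPointer<vtkStructuredPointsReader>::New();
    reader->SetFileName("data.vtk");

    // Filter: extract isosurfaces at a chosen (illustrative) contour value.
    auto contour = vtkSmartPointer<vtkContourFilter>::New();
    contour->SetInputConnection(reader->GetOutputPort());
    contour->SetValue(0, 128.0);

    // Mapper: turn the filtered geometry into renderable primitives.
    auto mapper = vtkSmartPointer<vtkPolyDataMapper>::New();
    mapper->SetInputConnection(contour->GetOutputPort());

    // Actor: geometry plus its rendering properties.
    auto actor = vtkSmartPointer<vtkActor>::New();
    actor->SetMapper(mapper);

    // Renderer, render window and interactor from the OpenVR module.
    auto renderer = vtkSmartPointer<vtkOpenVRRenderer>::New();
    auto window = vtkSmartPointer<vtkOpenVRRenderWindow>::New();
    auto interactor = vtkSmartPointer<vtkOpenVRRenderWindowInteractor>::New();
    window->AddRenderer(renderer);
    interactor->SetRenderWindow(window);

    renderer->AddActor(actor);
    window->Render();
    interactor->Start();
    return 0;
}
```

The same reader-filter-mapper-actor structure recurs later for the SMRX mixer and the streamlines.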

2.2.2 OpenVR

In recent years the support for the VTK framework has been expanded with the OpenVR module, an API that supports SteamVR, developed by Valve, allowing VTK to be compatible with VR-equipment such as the Oculus Rift and the HTC Vive. Classes from this module, such as vtkOpenVRRenderer, vtkOpenVRRenderWindow and vtkOpenVRRenderWindowInteractor, allow the visualization pipeline to be rendered in the VR-environment. As the OpenVR API makes VTK compatible with the Oculus Rift, it is used in building the application [16, 19, 21].

2.2.3 Leap Motion SDK

The Leap Motion Software Development Kit (SDK) is an API for the Leap Motion controller, developed in recent years to allow users to extract data from the device when using VR-equipment. Dubbed Orion Beta, it is currently only available for the Windows operating system; previous versions also provided support for Mac OS and Linux. The current version can be used with six languages: JavaScript, C# (only with Unity), C++, Java, Python, and Objective-C [9].

At the core of the API, tracking data is obtained through Frame objects, which contain other objects such as Hand, Finger and Arm objects. The API contains an internal model of the human hand and compares it to the input from the sensor to create a best fit for the produced objects. The data from the sensor is analyzed per frame, where each frame containing data is sent to the desired (compatible) application. Next to the positions of hands, fingers, palms, and arms, a frame also contains velocities and gestures. Velocities denote the tracked speed at which an object is traveling [1].
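As an illustration of this frame-based object model, the minimal polling sketch below reads a single frame and prints the fingertip positions. It is an assumption-laden example: the real application reads frames continuously on a dedicated thread, as described in chapter 3.

```cpp
#include <iostream>
#include "Leap.h"

int main()
{
    Leap::Controller controller;

    // A real application would wait for controller.isConnected() and poll
    // repeatedly; a single frame is read here purely for illustration.
    const Leap::Frame frame = controller.frame();

    for (int h = 0; h < frame.hands().count(); ++h) {
        const Leap::Hand hand = frame.hands()[h];
        std::cout << (hand.isLeft() ? "left" : "right") << " hand:\n";

        for (int f = 0; f < hand.fingers().count(); ++f) {
            // Tip positions are in millimetres, in Leap Motion coordinates.
            const Leap::Vector tip = hand.fingers()[f].tipPosition();
            std::cout << "  tip at (" << tip.x << ", " << tip.y << ", "
                      << tip.z << ") mm\n";
        }
    }
    return 0;
}
```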

Gestures are certain patterns of movement that can be used to invoke events. When a gesture is tracked, a Gesture object is added to the Frame object. The Leap Motion SDK supports the following four types of gestures, see also figure 2.3 [1]:

1. Circle: A single finger tracing a circle.

2. Swipe: A long, linear movement of a finger.

3. Key Tap: A tapping movement by a finger, as if tapping a keyboard key.

4. Screen Tap: A forward tapping movement by a finger, as if tapping a vertical touch screen.


Figure 2.3: Set of gestures that can be recognized by the Leap Motion SDK.

As it was desired to work with hand-tracking hardware and VR-equipment, this library seemed a suitable choice to utilize in the development of the application.

2.3 Design Requirements

This subsection provides a clear overview of the design requirements that must be met in line with the research question and project deliverables described in sections 1.3 and 1.4 respectively. First the requirements for visualizing hands in the application are set, followed by the requirements for interacting with data.

2.3.1 Hand Visualization

As mentioned in section 1.4, the first part of the application has to focus on some form of hand representation. In order to achieve hands-on, immersive and intuitive interaction, the hands should be rendered in the VR-environment when they are detected by the Leap Motion controller. While adequate rendering of the hand representation aids in a more immersive experience, it is not the goal to render all parts the Leap Motion controller is able to detect. Instead, the focus lies on rendering the parts of the hand most important for interaction, such as the fingertips. Rendering fingertips can be done using ten spheres, similar to the fingertips shown in figure 2.2.

The challenging part of hand visualization is the computation that needs to be done in order to render the hands at the right position. The data from the Leap Motion coordinate system must be translated into VTK world coordinates, where the position and orientation of the camera (HMD) must be considered as well. Hand visualization therefore requires working with three separate coordinate systems.

2.3.2 Interaction

The next part concerns interaction with data. Once it is possible to visualize hands, interaction with data should provide an easier and more insightful experience. As a test case, a dataset is chosen from a computational flow simulation problem: the SMRX dataset. Originating from a paper from 1999, the dataset describes a static mixer, a device used in the chemical industry to mix viscous fluid streams. The mixing mechanism relies on splitting, stretching, reordering and recombining the fluid streams. Many types of mixers are available for a variety of chemical processes, such as gas/liquid reactors, polymerization reactors, blending units, and heat exchangers. See figure 2.4 for a visual representation of a standard SMX static mixer [11, 14].


Figure 2.4: The geometry of a standard SMX static mixer with four of its elements shown [14].

One type of SMR mixer is the SMRX, which is manufactured by Sulzer Chemtech Ltd. Mainly used in polymerization reactors, it consists of a series of crossing tubes which are placed inside a rectangular reactor. Fluid streams are passed from one side of the mixer and come out on the other side, as illustrated in figure 2.5[11].

Figure 2.5: The SMRX geometry showing the direction fluid streams take when passing through [11].

As the SMRX dataset is available in .vtk format, it can be viewed using ParaView. In line with the features of the SMRX dataset, ParaView can be used to simulate fluid streams by means of streamlines passing through the mixer. Users can use the dataset to show that the mixer mixes two fluid streams passing through it. However, placing the stream sources in the right place has proven to be a cumbersome trial-and-error process, as the placement of the sources is not always clear and the effect of placement is not shown immediately.

As the SMRX dataset is available for use with VTK, and ParaView is built using the VTK framework, the possibility arises to visualize the SMRX dataset along with streamlines in the application. Intuitive interaction with streamlines on the SMRX mixer can be achieved by using the index finger of each hand as the point source of a streamline. When the index fingers are placed inside the rectangular reactor, the streams pass through the SMRX mixer, mixing the fluid streams. This effect is shown in real-time, in contrast to when it is done in ParaView. Therefore, the application can provide more intuitive placement of stream sources and immediate output, and thus an easier and more insightful experience with the dataset. For an example of streamlines used in ParaView, see figure 1.2.


CHAPTER 3

Method

This chapter is dedicated to explaining the implementation of the project deliverable, of which the design aspects were explained in the previous chapter. Next to explaining the implementation, the reasoning behind certain implementation choices is justified, along with which other methods were considered but ultimately not implemented. First, the implementation of multithreading in the application is discussed, followed by the visualization of hands. Next it is discussed how the SMRX dataset is loaded using VTK. Lastly, the implementation of the interaction with the SMRX dataset is discussed.

3.1 Multithreading

As the application utilizes VTK and the Leap Motion controller, it consists of two main parts: input processing and rendering. As mentioned in section 2.1.2, the Leap Motion controller provides tracking data in the form of Frame objects, which hold the data of a specific frame. The VTK toolkit renders objects into the render window, which is the view seen when using the Oculus Rift. In abstract terms, a continuous stream of Frame objects must be retrieved from the Leap Motion controller, processed in the correct manner, and rendered into the render window using VTK classes.

Achieving a continuous stream of Frame objects from the Leap Motion controller can in theory be done using various methods. One method is utilizing a vtkTimerCallback, a callback that observes whether a certain event has been invoked: if a certain condition has been met, the callback observes this and triggers the suitable event according to the implementation. With a timer callback, instead of invoking events when a condition is met, the event is invoked at every time interval, where the length of the interval can be specified. In theory it is possible in this manner to retrieve the Frame object at every time interval, apply the desired computations, and render accordingly. It has however been found empirically that the timer callback is not compatible with the OpenVR libraries, which rendered the callback option unusable [19].

Another method of obtaining a continuous stream of data is utilizing the Leap Motion Listener class from the LeapSDK. This is a set of callback functions that can be set to listen for input from the Leap Motion controller. The Controller object is bound to the Listener class and calls the Listener functions from a thread that is created by the LeapSDK. Such functions include onConnect, onDeviceChange and onExit, to name a few. One of the most relevant functions is onFrame, which is called every time new data is available from the Frame object [1].
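A minimal sketch of this Listener approach, as it was considered (not the implementation ultimately used), could look as follows; the class name FrameListener and the console output are illustrative assumptions.

```cpp
#include <iostream>
#include "Leap.h"

// onFrame is invoked by a thread owned by the LeapSDK whenever
// new tracking data becomes available.
class FrameListener : public Leap::Listener {
public:
    void onConnect(const Leap::Controller&) override {
        std::cout << "Leap Motion controller connected\n";
    }
    void onFrame(const Leap::Controller& controller) override {
        const Leap::Frame frame = controller.frame();
        std::cout << "frame " << frame.id() << ": "
                  << frame.hands().count() << " hand(s)\n";
    }
};

int main()
{
    Leap::Controller controller;
    FrameListener listener;
    controller.addListener(listener);   // callbacks now run on a LeapSDK thread
    std::cin.get();                     // keep the process alive until Enter
    controller.removeListener(listener);
    return 0;
}
```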

Although the onFrame function from the Listener class seems suitable for obtaining a continuous stream of data, it gives rise to a new set of problems. The first problem is that the stream of data is, strictly speaking, not continuous, as the function is only invoked when new data is available from the Frame object. All operations in the application that are not directly related to the input from the Leap Motion controller would thereby become dependent on new input arriving continuously. Parts of the application that need to be executed regardless of input are in this manner not possible.

Another problem is that there is no grip on the thread that is created by the LeapSDK. The thread cannot be controlled, unless changes are made to the library itself, which is not desired. The last problem takes the form of a data race. Data received from the onFrame function is stored in variables, which are then used to render the desired output at a high frame rate. This could cause the situation that while the values of these variables are being read for rendering, a newly invoked onFrame function attempts to write to the same variables. Even though the data race could be solved using mutex locks and condition variables to protect critical sections, the overall list of problems makes the usage of the Listener class undesirable.

As achieving continuous input by utilizing functions from VTK or the LeapSDK is not deemed suitable, it is chosen to apply multithreading by creating separate threads: one for input processing and one for rendering. For convenience, these threads will be referred to as the leapThread and vtkThread respectively from this point onward. Using multiple threads allows for full control within the application. As these threads ought to have access to shared resources in a sequential manner, a solution is desired that provides synchronization and security for the critical sections of each thread. This solution takes the form of semaphores.

A semaphore is a variable that is used to gain access to critical sections that are used by multiple processes, which is a concept that was coined by computer scientist Edsger Dijkstra around 1962 or 1963. When a critical section is in use, the semaphore is adjusted accordingly, such that no other process can make use of the critical section at the same time [3].

As an example, consider a counter for the number of occupied seats in a room, represented as a semaphore. If a person, the equivalent of a thread in this example, wishes to take a seat, the counter is increased by one. If a person leaves a seat, the counter is decremented. If a person wishes to take a seat when the counter has reached the maximum number of seats, it will not be possible for said person to take a seat in the room. It is guaranteed that multiple persons will not take the same seat, which is equivalent to guaranteeing safety in critical sections. This example illustrates the concept of a counting semaphore. When the counter is only allowed to reach a maximum of one (a single seat in the room), the concept is denoted as a binary semaphore [3].

Returning to the usage of threads in the application, two threads are used in total, each having its own critical section using shared variables. To simulate a continuous stream of input and rendering, each thread contains a while-loop that runs as long as the application is running. Inside each while-loop all of the desired actions are done. The critical section of each thread may only be accessed when the other thread is not using it. Using the Windows API, two threads and one semaphore object were created, such that the two threads operate using the concept of a binary semaphore [20].

At every iteration in the while-loop of a thread, the semaphore object is requested. If the semaphore is not available, a time-out will occur. In the implementation it is necessary that the thread keeps looping in order to ensure a more stable framerate. Therefore, during a time-out no event is invoked. If the semaphore becomes available, it will hold the semaphore object, carry out the tasks that are desired, and release the object again. This ensures security for the critical sections and synchronization between the threads [20].
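The skeleton below sketches this structure with the Windows API calls mentioned above (CreateSemaphore, WaitForSingleObject, ReleaseSemaphore). The shared array, the 5 ms time-out and the shutdown logic are illustrative assumptions, not the application's exact values.

```cpp
#include <windows.h>
#include <atomic>

static HANDLE gSemaphore = NULL;
static std::atomic<bool> gRunning{true};
static double gSharedCoords[15] = {0.0};   // stands in for the fingertip arrays

DWORD WINAPI leapThread(LPVOID)
{
    while (gRunning) {
        // Wait up to 5 ms; on a time-out simply loop again so the thread
        // keeps a steady cadence without blocking indefinitely.
        if (WaitForSingleObject(gSemaphore, 5) == WAIT_OBJECT_0) {
            gSharedCoords[0] += 1.0;            // critical section: write input
            ReleaseSemaphore(gSemaphore, 1, NULL);
        }
    }
    return 0;
}

DWORD WINAPI vtkThread(LPVOID)
{
    while (gRunning) {
        if (WaitForSingleObject(gSemaphore, 5) == WAIT_OBJECT_0) {
            double x = gSharedCoords[0];        // critical section: read input
            (void)x;
            ReleaseSemaphore(gSemaphore, 1, NULL);
        }
        // Rendering that does not touch shared state happens outside the lock.
    }
    return 0;
}

int main()
{
    gSemaphore = CreateSemaphore(NULL, 1, 1, NULL);   // binary semaphore
    HANDLE threads[2] = {
        CreateThread(NULL, 0, leapThread, NULL, 0, NULL),
        CreateThread(NULL, 0, vtkThread,  NULL, 0, NULL)
    };
    Sleep(1000);                     // run briefly, then shut down
    gRunning = false;
    WaitForMultipleObjects(2, threads, TRUE, INFINITE);
    CloseHandle(threads[0]);
    CloseHandle(threads[1]);
    CloseHandle(gSemaphore);
    return 0;
}
```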


3.2 Hand Visualization

Once multithreading is possible, with input received on the leapThread and rendering done on the vtkThread, the next step is to visualize the hands of the user. In abstract terms, the input from the Leap Motion controller needs to be processed such that the hands are rendered in the correct place in the VR-environment. As mentioned in section 2.3.1, the focus during implementation lies on rendering the parts most important for interaction. Therefore, it is chosen to limit the scope of visualization to just the fingertips of both hands. This is done using ten spheres, as seen in figure 2.2. To retrieve data for the fingertips from the Leap Motion controller, the Frame object received in onFrame is used. When hands are tracked, the coordinates of the fingertips are stored in Pointable objects. All pointables in a frame can be accessed through the PointableList object [1].

In the process of visualizing the fingertips, three coordinate systems are used: that of the Leap Motion controller, the VTK world, and the camera (which is the HMD). The Leap Motion coordinate system uses millimeters as units on the same scale as the real world. The Leap Motion controller is the frame of reference, with the origin located at its top center. A right-handed coordinate system is used, where it is assumed that the device is placed facing up between the user and the monitor screen. See figure 3.1 for an illustration. If the controller is placed in a different position, such as being attached to the HMD, the LeapSDK is not able to detect this [1, 19].

Figure 3.1: The Leap Motion controller uses a right-handed coordinate system.

The VR-environment renders actors using the VTK world coordinate system. Also right-handed, its units are meters. The vtkOpenVRCamera class is used to render output to the HMD, making use of a camera object. For the camera, VTK tracks many parameters that can be of use, such as the view up direction, the view front direction, and the focal point. From these directions, along with a third axis that can be computed manually, a right-handed coordinate system for the camera can be constructed relative to the VTK world coordinate system. As the camera is used to render output to the HMD, the user's orientation and position in the VTK world determine the positioning of the camera's coordinate system. See figure 3.2 for an illustration [19].


Figure 3.2: The VTK world coordinate system in which the user’s position and orientation determine the positioning of the camera’s coordinate system.

The Leap Motion coordinates are relative to the camera's, which in turn are relative to the VTK world coordinates. The fingertips must be rendered in VTK world coordinates. A brief description of the steps that must be taken to render the fingertips for each frame is as follows; each step is explained in more detail afterwards:

1. Retrieve the coordinates from the Leap Motion controller.

2. Transform the coordinates relative to the camera’s coordinate system.

3. Transform the coordinates relative to the VTK world coordinate system.

4. Render spheres using coordinates.

Before any threads are invoked, the actors for the fingertips are initialized. For each hand a global list of actors of vtkSphereSource objects is created using the vtkActorCollection class. The actors of the left hand are colored red and those of the right hand blue, to better distinguish the hands when the user is in the VR-environment. To store the coordinates of each fingertip, a global double array for each hand is initialized and set to zero. For the fingertips of each hand the x, y, and z coordinates must be known, so each array has a size of 15. When a hand is in frame, the Frame object is retrieved and the Pointable objects from the PointableList are evaluated. The coordinates of a hand not in frame are set to zero. As the LeapSDK is capable of determining whether the right or left hand is in frame, this is used at every iteration of the leapThread [1, 19].
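The initialization step could look roughly as follows; the sphere radius and the helper function name are assumptions made for this sketch.

```cpp
#include <vtkSmartPointer.h>
#include <vtkSphereSource.h>
#include <vtkPolyDataMapper.h>
#include <vtkActor.h>
#include <vtkActorCollection.h>
#include <vtkProperty.h>
#include <vtkOpenVRRenderer.h>

// Create five sphere actors for one hand, collect them in a vtkActorCollection
// and add them to the renderer; red for the left hand, blue for the right.
vtkSmartPointer<vtkActorCollection> makeFingertipActors(
    vtkOpenVRRenderer* renderer, bool leftHand)
{
    auto collection = vtkSmartPointer<vtkActorCollection>::New();
    for (int i = 0; i < 5; ++i) {
        auto sphere = vtkSmartPointer<vtkSphereSource>::New();
        sphere->SetRadius(0.01);                      // 1 cm, illustrative value

        auto mapper = vtkSmartPointer<vtkPolyDataMapper>::New();
        mapper->SetInputConnection(sphere->GetOutputPort());

        auto actor = vtkSmartPointer<vtkActor>::New();
        actor->SetMapper(mapper);
        if (leftHand)
            actor->GetProperty()->SetColor(1.0, 0.0, 0.0);   // red
        else
            actor->GetProperty()->SetColor(0.0, 0.0, 1.0);   // blue

        collection->AddItem(actor);
        renderer->AddActor(actor);
    }
    return collection;
}
```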


By evaluating the PointableList of a hand through its Hand object, each Pointable is accessed to obtain the coordinates of a fingertip. To convert these to camera coordinates, some adjustments must be made: align the axes with the camera's, convert from millimeters to meters, and correct for the centering of the Leap Motion controller with respect to the HMD. When the Leap Motion controller is placed on the HMD, its coordinate system does not adjust to the camera's; this is corrected by swapping the Y- and Z-axes and negating the direction of the Z-axis [1, 15].

Converting from millimeters to meters is done by dividing all values by 1000. As the center of the Leap Motion controller is placed in front of the HMD, an offset is created with respect to both origins. This offset can be expressed as tx, ty, and tz. In the application, this offset has been measured as 8 centimeters (80 millimeters) along the Z-axis of the camera's coordinate system, meaning tz = -0.08. These adjustments can be expressed as transformation matrix 3.1 [1, 15]:

$$
M_{Leap/Camera} =
\begin{pmatrix}
-0.001 & 0 & 0 \\
0 & 0 & -0.001 \\
0 & -0.001 & 0 \\
t_x & t_y & t_z
\end{pmatrix}
=
\begin{pmatrix}
-0.001 & 0 & 0 \\
0 & 0 & -0.001 \\
0 & -0.001 & 0 \\
0 & 0 & -0.08
\end{pmatrix}
\tag{3.1}
$$
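Applied to a single fingertip position, this transformation amounts to the small helper below. This is a sketch of equation 3.1 as reconstructed above: the function name is an assumption and the offset is hard-coded to the measured value.

```cpp
#include <array>
#include "Leap.h"

// Convert a Leap Motion tip position (millimetres, Leap coordinates) into the
// camera's coordinate system (metres), following equation 3.1.
std::array<double, 3> leapToCamera(const Leap::Vector& tip)
{
    const double s  = 0.001;     // mm -> m
    const double tz = -0.08;     // measured offset of the controller on the HMD
    return {
        -s * tip.x,              // x_cam = -0.001 * x_leap
        -s * tip.z,              // y_cam = -0.001 * z_leap
        -s * tip.y + tz          // z_cam = -0.001 * y_leap + t_z
    };
}
```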

The next step is to transform the camera coordinates relative to the VTK world's coordinate system. At this point the coordinates are expressed in the camera's frame, but not yet corrected for the position and orientation of the camera with respect to the VTK world. In order to do this, the position and orientation of the camera must be retrieved. The position of the camera in the VTK world is retrieved by invoking the GetPosition function on the camera object; the coordinates are stored in a double array. The orientation of the camera is found by retrieving the direction of the up and front view vectors of the camera, by invoking the GetViewUp and GetDirectionOfProjection functions respectively. These vectors are equivalent in direction (not length) to the Cy and Cz axes in figure 3.2. To find the vector equivalent to Cx, the cross product is applied to Cy and Cz [19].

To find the correct orientation of the camera with respect to the VTK world, it was first attempted to find the angles of the camera's vectors with respect to the VTK world. These three angles were used to calculate, for each coordinate of a fingertip, how much it must be rotated with respect to each axis. The fingertips' coordinates, already transformed to the camera's coordinate system, were placed at the origin of the VTK world and, using the calculated angles, rotated around the X, Y, and Z axes. This method did not work, as the series of rotations around the axes is not commutative, producing incorrect rotations of the fingertips in the VTK world [19].

Another approach has been applied instead. Once the three vectors of the camera are found, their directions can be seen as the camera's coordinate system. The VTK world's coordinate system can be expressed as an identity matrix, where each axis has length one. To compare the camera's coordinate system to that of the VTK world, the three camera vectors are normalized to unit length; they can then be expressed as a matrix as well. The next step is to find the transformation matrix that transforms the camera's matrix into that of the VTK world. The found transformation matrix is then multiplied with the coordinates of each fingertip [19].

Once this has been done, the position of the camera is added to each coordinate. The next step is to render the fingertips. For a hand that is in frame, its actorCollection object is retrieved and iterated through to access each actor. Using the SetPosition function, each fingertip is placed in the VTK world [19].
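Put together, the camera-to-world step and the actor update could look like the sketch below. It assumes that the unit camera basis vectors define the rotation from camera to world coordinates; the function name and sign conventions are assumptions where the text leaves them implicit.

```cpp
#include <vtkCamera.h>
#include <vtkMath.h>
#include <vtkActor.h>

// Express a point given in the camera's coordinate system (e.g. a fingertip
// converted by leapToCamera above) in VTK world coordinates and place a
// sphere actor there.
void placeFingertip(vtkCamera* camera, vtkActor* sphere, const double pCam[3])
{
    double cy[3], cz[3], cx[3], camPos[3];
    camera->GetViewUp(cy);                   // camera "up" axis (C_y)
    camera->GetDirectionOfProjection(cz);    // camera "front" axis (C_z)
    camera->GetPosition(camPos);

    vtkMath::Cross(cy, cz, cx);              // third axis C_x
    vtkMath::Normalize(cx);
    vtkMath::Normalize(cy);
    vtkMath::Normalize(cz);

    // World position = camera position + camera-space components expressed
    // along the (unit) camera basis vectors.
    double pWorld[3];
    for (int i = 0; i < 3; ++i) {
        pWorld[i] = camPos[i]
                  + pCam[0] * cx[i]
                  + pCam[1] * cy[i]
                  + pCam[2] * cz[i];
    }
    sphere->SetPosition(pWorld);
}
```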


3.3 Data Insertion

The insertion of the SMRX dataset is done by reading the .vtk file using the vtkStructuredPointsReader class. To visualize the mixer as seen in figure 2.5, a contour filter must be applied so that its isosurfaces are generated. This is done using the vtkContourFilter class, which is connected to a mapper using the vtkPolyDataMapper class. An actor is made using vtkActor, to which the mapper is connected. Lastly, the actor is rendered by invoking AddActor on the renderer [19].

3.4 Interaction

As mentioned in section 2.3.2, the index finger of each hand must be used as the point source of a streamline. To make this possible, the streamlines and point sources themselves must be rendered in some form. In a high-level overview this is done as follows: an actor is created separate from the mixer rendered in the previous section, to which the desired properties are set. A point source is created and connected to a stream tracer. To the stream tracer a tube filter is connected. Next, a mapper is created to which the tube filter is connected; a lookup table is connected as well in order to map scalar values. An actor for the streamlines is created and connected to the mapper. Next, a separate actor is created for the point source in order to display where the streamlines originate; this is visualized using spheres. Creating point sources and streamlines is done separately for each hand [19].

This overview is now discussed in greater detail. Once an actor is created, properties such as the ambient color, diffuse color and opacity are set using their respective Set functions. Next, a point source object is created using vtkPointSource and made global, as it is used in the vtkThread. A point source is a source object for which the user can specify the number of points generated within a specified radius around a center point. At initialization, the center is set to the origin, with the number of points and the radius both equal to ten [19].

In the next step the point source is connected to the stream tracer, a filter that integrates a vector field in order to generate streamlines. A tube filter is connected to the stream tracer; this filter generates tubes around lines, which is needed to make the streamlines visible. Next a mapper is created, to which the tube filter is connected. In addition, a lookup table is connected to the mapper, which it uses to map scalar values to RGBA colors. Finally an actor is created for the streamlines and rendered [19].
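The streamline branch of the pipeline for one hand could be wired up roughly as follows; the tube radius and the helper function are assumptions for this sketch, and the dataset is assumed to carry the vector field needed by the stream tracer.

```cpp
#include <vtkSmartPointer.h>
#include <vtkStructuredPointsReader.h>
#include <vtkPointSource.h>
#include <vtkStreamTracer.h>
#include <vtkTubeFilter.h>
#include <vtkLookupTable.h>
#include <vtkPolyDataMapper.h>
#include <vtkActor.h>

// Build the point source -> stream tracer -> tube filter -> mapper -> actor
// chain for one hand's streamlines.
vtkSmartPointer<vtkActor> makeStreamlineActor(
    vtkStructuredPointsReader* reader, vtkPointSource* source)
{
    // Ten seed points within a radius of ten around the origin, as in the text.
    source->SetNumberOfPoints(10);
    source->SetRadius(10.0);
    source->SetCenter(0.0, 0.0, 0.0);

    // Integrate the vector field of the dataset, seeded at the point source.
    auto tracer = vtkSmartPointer<vtkStreamTracer>::New();
    tracer->SetInputConnection(reader->GetOutputPort());
    tracer->SetSourceConnection(source->GetOutputPort());
    tracer->SetIntegrationDirectionToBoth();

    // Wrap the traced lines in tubes so they are visible as geometry.
    auto tubes = vtkSmartPointer<vtkTubeFilter>::New();
    tubes->SetInputConnection(tracer->GetOutputPort());
    tubes->SetRadius(0.5);                       // illustrative value

    auto lut = vtkSmartPointer<vtkLookupTable>::New();

    auto mapper = vtkSmartPointer<vtkPolyDataMapper>::New();
    mapper->SetInputConnection(tubes->GetOutputPort());
    mapper->SetLookupTable(lut);                 // map scalar values to colors

    auto actor = vtkSmartPointer<vtkActor>::New();
    actor->SetMapper(mapper);
    return actor;
}
```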

To visualize the point source itself, spheres are used. The vtkGlyph3D class is a filter that copies geometric representations, such as spheres, to specified points, making it possible to copy spheres to all points within a point source object. A mapper and actor are created, and lastly the actor is rendered. To aid in visualizing the proper scaling and orientation of the VTK world coordinate system, axes are visualized as well. This is done by creating a vtkAxes object, for which a mapper and actor are created and rendered as a final step [19].

In order to visualize streamlines whose points of origin are the index fingers of both hands, the center of each point source is updated in the vtkThread. When a hand is in frame, the point source center belonging to that hand is updated with the coordinate of its index finger [19].
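In the render loop, that update is essentially a single call per hand, sketched below (the function name is hypothetical):

```cpp
#include <vtkPointSource.h>

// Move the point source of one hand to the current index-finger position
// (world coordinates) so the streamlines re-seed from the fingertip on the
// next render.
void updateStreamSource(vtkPointSource* source, const double indexTipWorld[3])
{
    source->SetCenter(indexTipWorld[0], indexTipWorld[1], indexTipWorld[2]);
    source->Modified();   // mark the source dirty so downstream filters re-execute
}
```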


CHAPTER 4

Results

This chapter is dedicated to displaying the results of the implementation discussed in chapter 3. The visualization of the left hand is shown in figure 4.1: if the left hand is in frame, its fingertips are shown as red spheres. For the right hand the fingertips are visualized in blue.

Figure 4.1: The left hand visualized in the VR-environment, where the red spheres represent the fingertips.

The visualization of the mixer is shown in figure 4.2. It is possible to walk around and through the actor, so it can be viewed from various angles.


Figure 4.2: The SMRX mixer visualized.

The visualization of streamlines through the mixer is shown in figure 4.3, where the origins of the streamlines are marked with spheres. Here both hands were used to visualize streamlines; the left hand's streamlines are colored red and the right hand's green. Once the streamlines are visualized, it is possible to walk around and view them from various angles. It is also possible to use only one hand, so that only streamlines from one source are visualized. It must be noted that once the hands come into frame again, the existing streamlines disappear, as the point source's position is updated to that of the new frame.

Figure 4.3: Streamlines flow through the SMRX mixer, where the point sources are marked using small spheres. The red streamlines originate from the left hand and the green from the right.


CHAPTER 5

Discussion

In chapter 4 the results of the implementation of the application are shown. This chapter is dedicated to discussing these results, along with their implications and interpretations of what they mean in a wider context. Results that were or were not expected will be discussed as well.

The first part of the application regards hand visualization. As seen in figure 4.1, hands are visualized by rendering spheres for the fingertips of each hand, red for the left hand and blue for the right. It is possible to walk around in the VR-environment and see the hands in the correct position and orientation when they are in frame. This result is as expected, as the manner in which coordinates are transformed to the VTK world follows standard linear algebra practice [18].

The possibility to visualize hands in the application leaves open possibilities for further interaction. With the visualization of the mixer as seen in figure 4.2, the user can walk around to view the mixer from various angles, and can also walk through it. Only when standing near the sides of the mixer is it possible to visualize streamlines using one or both hands. The placement of the point source origins is to some extent more intuitive and immersive, as the user is not concerned with placing the origins correctly with respect to the streamlines, as is done in ParaView.

The result in figure 4.3 can be compared to the visualization of streamlines in ParaView as seen in figure 5.1. When applying streamlines to the mixer in ParaView, many streamlines are typically visualized, as a single streamline is often not insightful. Comparing the number of streamlines in ParaView with the number in figure 4.3, the application visualizes significantly fewer streamlines.

The number of streamlines can be changed by increasing the number of points per point source. The reason for the smaller number of streamlines is that more streamlines take more time to render, which is a significant constraint in an application that works in real-time. The increase in rendering time was not expected, as the amount of rendering that needed to be done in real-time was not considered to be computation-heavy. The number of streamlines does not significantly affect the immersion and understanding of the dataset, as it can still be shown in what way streamlines mix and flow through the mixer. To some extent there is a trade-off between the number of streamlines visualized and the time it takes to render them.


CHAPTER 6

Conclusion

This chapter is dedicated to emphasizing to what extent the goals of the project deliverables have been met. The most significant results are highlighted, along with the limitations the application currently has. Lastly, some future work is discussed that could improve and expand the application made in this project.

As mentioned in section 1.4, the application can be split up into two parts: hand visualization and data interaction. By combining both parts, interaction with data is possible in real-time. From the results and discussion, it is concluded that it is possible to interact with data, and that the interaction is easy and immersive. With this achievement, a first step is made towards a system where data can be explored in an intuitive way, enhancing the experience of immersion.

A limitation of the current state of the application is that the implementation only works with the SMRX dataset. There is no flexibility for use with other datasets or extensions other than .vtk, nor is it possible to do anything other than visualize streamlines.

6.1 Future Work

The visualization of hands can be improved and enhanced in the VR-environment. As it is now possible to visualize the fingertips, a next step could be to visualize the other parts of each hand, such as the bones and joints. A full visualization of hands could enhance the experienced immersion. Next to visualizing the hands, one could also look at interacting with objects, such that translating and rotating objects in the VR-environment is possible. This would introduce more possibilities of use for the application, in cases where holding objects is essential to understanding a dataset.

Another way the application can be enhanced is by implementing a pipeline similar to the work of Daan Kruis [13], where the stages of the pipeline can be adjusted within the environment. As mentioned earlier, a current limitation of the application is its lack of modularity: it only works with the SMRX dataset, and extensions other than .vtk are not accepted. One could make the implementation modular, such that more data types are supported and visualization and the application of the desired filters can be done within the application itself.


Bibliography

[1] API Overview. url: https://developer.leapmotion.com/documentation/cpp/devguide/Leap_Overview.html.

[2] R. G. Belleman. Source code for prototype implementation.

[3] Edsger W. Dijkstra. “Over de sequentialiteit van procesbeschrijvingen”. Circulated privately. n.d. url: http://www.cs.utexas.edu/users/EWD/ewd00xx/EWD35.PDF.

[4] Henk Dreuning. A visual programming environment for the Visualization Toolkit in Virtual Reality. 2016.

[5] Michael Friendly. “Milestones in the history of thematic cartography, statistical graphics, and data visualization”. In: (2009). url: http://www.math.yorku.ca/SCS/Gallery/milestone/milestone.pdf.

[6] K. Gruchalla. “Immersive well-path editing: investigating the added value of immersion”. In: IEEE Virtual Reality 2004. 2004, pp. 157–164. doi: 10.1109/VR.2004.1310069.

[7] R. B. Haber and D. A. McNabb. “Visualization Idioms: A Conceptual Model for Scientific Visualization Systems”. In: Visualization in Scientific Computing. 1990.

[8] Leap Motion Inc. Hands Module, Leap Motion Gallery. url: https://gallery.leapmotion.com/hands-module/.

[9] Leap Motion Inc. Leap Motion. url: https://www.leapmotion.com/.

[10] Leap Motion Inc. Unity. url: https://developer.leapmotion.com/unity/.

[11] D. Kandhai et al. “Lattice-Boltzmann and finite element simulations of fluid flow in a SMRX Static Mixer Reactor”. In: International Journal for Numerical Methods in Fluids 31.6 (1999), pp. 1019–1033. doi: 10.1002/(sici)1097-0363(19991130)31:6<1019::aid-fld915>3.3.co;2-9.

[12] Kitware. Taking ParaView into Virtual Reality. 2016. url: https://blog.kitware.com/taking-paraview-into-virtual-reality/.

[13] Daan Kruis. “Creating interactive visualization pipelines in Virtual Reality”. In: Thesis for the Bachelor Computer Science, University of Amsterdam (2017).

[14] Shiping Liu, Andrew N. Hrymak, and Philip E. Wood. “Laminar mixing of shear thinning fluids in a SMX static mixer”. In: Chemical Engineering Science 61.6 (2006), pp. 1753–1759. doi: 10.1016/j.ces.2005.10.026.

[15] Anastacia Macallister, Tsung-Pin Yeh, and Eliot Winer. “Implementing Native Support for Oculus and Leap Motion in a Commercial Engineering Visualization and Analysis Platform”. In: Electronic Imaging 2016.4 (2016), pp. 1–11. doi: 10.2352/issn.2470-1173.2016.4.ervr-417.

[16] Patrick O’Leary et al. “Enhancements to VTK enabling scientific visualization in immersive environments”. In: 2017 IEEE Virtual Reality (VR) (2017). doi: 10.1109/vr.2017.7892246.

[17] ParaView. url: https://www.paraview.org/.


[19] The Visualization Toolkit. url: https://www.vtk.org/.

[20] Using Semaphore Objects. url: https://msdn.microsoft.com/en-us/library/windows/desktop/ms686946(v=vs.85).aspx.
