
Bachelor Informatica

Visualizing multivariate data in VR

Tijmen Zwaan

May 7, 2021

Supervisor(s): Robert Belleman

Informatica
Universiteit van Amsterdam


Abstract

Virtual reality (VR) has been shown to be a useful tool for visualizing data in an intuitive way. This thesis describes the implementation of a VR visualization application, based on the principles described by Jacques Bertin, that allows users to import and interact with their own CSV datasets without the need for a customized application dedicated to a specific dataset. To comply with the frame rate requirements of VR, the performance of the application is tested against datasets of varying size. The application satisfies all the listed requirements, and some directions for future research into the effectiveness of the application and possible feature extensions are given.


Contents

1 Introduction
2 Theoretical background
  2.1 Sémiologie Graphique
  2.2 Virtual Reality
  2.3 Related work
    2.3.1 ImAxes
    2.3.2 DatavizVR
    2.3.3 Enterprise applications
    2.3.4 Summing up
3 Design
  3.1 Requirements
    3.1.1 Virtual Reality
    3.1.2 Data import
    3.1.3 Visualization methods
    3.1.4 Configuration
    3.1.5 Interaction
    3.1.6 Performance
4 Implementation
  4.1 VR hardware
  4.2 Libraries and packages
    4.2.1 VRTK
    4.2.2 Simple file browser
    4.2.3 IATK
    4.2.4 Performance Testing Extension for Unity Test Runner
  4.3 User interface
  4.4 Shape shader
  4.5 Performance tests
5 Results
  5.1 Application
  5.2 Performance
    5.2.1 Test results
6 Conclusion
  6.1 Future research
    6.1.1 Effectiveness
    6.1.2 Additional functionality
A generate_csv.py


CHAPTER 1

Introduction

The visualization of scientific data gets increasingly challenging as datasets grow in size and complexity [21]. This thesis explores the use of interactive Virtual Reality (VR) in an effort to alleviate some of these challenges.

Visualizing data can be a significant aid towards forming insights, without having to look at each individual data point, since the human mind is much more adept at interpreting visual patterns [2]. With the use of VR, these visualizations can be shown as 3-dimensional objects that the user can walk around and interact with, which has been shown to have a positive impact on gaining these insights [11].

This thesis focuses on the development of a VR application in which data can be visualized and interactively explored. It includes an analysis of multiple visualization techniques, and measures the impact of these visualizations on the performance of the application. For the development process of this application, the HTC Vive VR headset is used; however, the final product is capable of running on other hardware.

The research question is as follows: What are the requirements for the development of a VR application with which multivariate data can be interactively displayed?

In chapter 2, a theoretical background of general data visualization techniques is discussed, followed by the design description of a practical application in chapter 3 and an implementation of this application in chapter 4. Finally, the performance of the final product is analysed in chapter 5.


CHAPTER 2

Theoretical background

This chapter first describes the principles of data visualization as described by Jacques Bertin, followed by a justification for the use of VR and an overview of related work.

2.1 Sémiologie Graphique

In his work “Sémiologie Graphique”, Jacques Bertin provides a framework that translates information to an appropriate visualization [6]. The concepts described in this section are used in the following chapters to create a mapping from datapoints in a dataset to visual attributes.

Bertin’s framework describes “marks” or “signs” as basic units used to represent some form of information other than themselves. The marks he describes are as follows:

• Points are dimensionless locations on the plane represented by signs that need some form of size, shape and color.

• Lines represent information with a certain length.

• Areas have both a length and a width.

• Surfaces are areas in a three-dimensional space but do not have a volume themselves.

• Volumes have a length, width and depth.

Each mark can be modified using visual variables that have certain characteristics. The variables Bertin describes are position, size, shape, color value, color hue, orientation and texture. These variables were expanded upon by Mackinlay and MacEachren with the addition of color saturation, arrangement, crispness, resolution and transparency [16]. Examples of these visual variables can be seen in figure 2.1.

The characteristics of each variable determine whether it is appropriate for the type of information the visualization is conveying. Bertin describes the characteristics as follows:

• Selective: A variable is selective when changing this variable is easily distinguishable in a group of objects.

• Associative: A variable is associative when several marks can be grouped together across changes in other visual variables.

• Quantitative: A variable is quantitative when the difference between two marks can be interpreted numerically.

• Ordinal: If a variable lends itself to be read in a specific order, the variable is an ordinal variable.

• Nominal: The opposite of Ordinal. A nominal variable cannot be ordered in an intuitive way.


Figure 2.1: A visualization of visual variables. Figure from Robert E. Roth, 2017 [19].

Visual variables can have multiple of these characteristics which determine the effectiveness of the variable in representing certain types of data. Mackinlay states that visual variables can be ordered by effectiveness based on the type of data being represented, and that this order should be used when assigning visual variables to the data [16]. An example would be to use a selective variable like ‘shape’ to differentiate between cats and dogs, and a quantitative variable like ‘size’ to show what they weigh. The ordered variables as described by Mackinlay are shown in figure 2.2.

For Bertin, the process of creating a data visualization is a manual one, where the researcher goes through multiple iterations of visualizations to reach insights. In contrast, Mackinlay introduced a program that automated the process of mapping visual attributes to the data. Automatically determining the visual attribute mapping can lead to faster results; however, certain combinations of visual attributes might be omitted by such an algorithm that would otherwise have led to new insights.

2.2 Virtual Reality

The use of VR as a data visualization tool provides multiple benefits over a traditional approach. The addition of a third dimension allows for a natural extension of the positional visual variable. Three-dimensional visualizations are not a new concept and have been widely used in computer science. However, the use of a flat screen removes the natural ability of depth perception, often making a three-dimensional visualization ambiguous: it can be unclear what the spatial relation between marks is. The introduction of VR solves this problem by reintroducing depth perception. Additionally, VR allows a more natural interaction with the application compared to a keyboard and mouse. VR has been shown to lead to a better perception of datascape geometry, more intuitive data understanding and better retention of the perceived relationships in the data [11].

Figure 2.2: Accuracy ranking of perceptual tasks. Figures from J. Mackinlay, 1986 [16].

2.3 Related work

The following section is a short overview of existing data-visualization virtual reality applications.

2.3.1 ImAxes

ImAxes is an interactive system based on IATK for exploring multivariate data where the user can manipulate the axes of a visualization as physical objects as seen in figure 2.3 [7]. The type of visualization that appears depends on the proximity and relative orientation of the axes with respect to one another.

While the interactive system provides an intuitive way of interacting with the data, the mapping to visual variables is predetermined. The user only determines the combination of these variables. For example, in the wine dataset used to demonstrate ImAxes, the color variable indicates the difference between red and white wine, while most of the other aspects of the data are mapped to positional variables. Additionally, ImAxes does not allow data to be imported from within the application [7].

2.3.2 DatavizVR

DatavizVR is a data visualization application that closely resembles the application described in this thesis. It is currently under development [10]. It allows the user to visualize data by assigning axes in a dataset to specific visual variables. Additionally, it allows for some data manipulation that falls outside the scope of this thesis, such as clustering, sorting and scaling.

2.3.3 Enterprise applications

Applications like 3data, BadVR and Immersion Analytics are VR visualization platforms designed for businesses [1][5][14]. They combine the theories of Bertin with automated data analysis using both AI and more traditional analysis algorithms. Because these applications are designed for businesses, they require setup and integration with the platforms they are being used for.


Figure 2.3: ImAxes allows the user to grab and rearrange the axes of a scatterplot [7].

2.3.4 Summing up

Although VR data-visualization tools are available, many focus on a larger integration with enterprise systems, require the user to implement the visualizations on the data themselves, or are otherwise limited in the type of visual variables that can be applied to specific data points. The application in this thesis differs by being plug-and-play for the end user, and by staying closer to Bertin’s vision of manually rearranging visual variables.


CHAPTER 3

Design

This chapter describes the requirements of the application as well as a way to practically apply Bertin’s principles.

3.1 Requirements

The main functional requirements are the following:

R.1 The application must be a virtual reality application.

R.2 The user must be able to import their own data into the application.

R.3 The application must be able to render a visualization for the imported data.

R.4 The user must be able to assign specific columns in the data file to visual attributes in the visualization.

R.5 The user must be able to interact with the visualization.

R.6 The application must be able to handle datasets of sufficient size with no noticeable performance impact.

Each is described in more detail below.

3.1.1 Virtual Reality

VR hardware

To use a VR application, some form of VR hardware is required. Many variations of VR hardware exist, from personal headsets to full-scale rooms using projectors. However, to keep the application usable for a consumer market, it is best to focus on an architecture using a personal headset, as those are now widely available to consumers.

Game engine

Designing a VR-interface from scratch is an unfeasible undertaking for this project, so the first decision is the choice of a framework or game engine. Feasible options are the Unity game engine, the Unreal game engine, and the WebXR Device API [13][12][15]. Both Unity and Unreal have native VR support built into the engine. However, Unity has available functionality to export an application to WebXR, making it a more viable option for wide multi-platform support [18]. Additionally, Unity is considered to be more beginner-friendly, and has a more extensive online community, making it easier to find documentation and solutions to future problems.

While Unreal has more advanced tools to achieve a high visual fidelity, this is not the focus of this project.


Figure 3.1: Sketch of shape attribute.

3.1.2 Data import

To allow the user to import their own data into the application, a file-picker is required. With a file-picker, the user can browse through the files on their system and select a specific file to import. The application must be able to parse the input file, so this file must be limited to a specific data format. The comma-separated value (CSV) format is fitting for this application. CSVs are widely used in consumer, business and scientific applications. A CSV file allows a dataset to have an arbitrary number of data records, as well as an arbitrary number of fields per record. Each row in a CSV file corresponds to a data record (or data point), while each column represents the same data attribute for each of the records. Due to this structure, the principles of Bertin can be applied by representing each record with a mark and assigning visual variables to the columns of the CSV.
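To make the row/column structure concrete, the following minimal sketch (illustrative only; the column names and values are made up, and the application itself is built in Unity rather than Python) parses a tiny CSV and extracts one column so that it could be bound to a visual variable:

import csv
import io

# Illustrative only: each CSV row is one data record (mark), each column is one
# attribute that can later be mapped to a visual variable.
example = io.StringIO(
    "alcohol,acidity,quality\n"
    "9.4,0.70,5\n"
    "9.8,0.88,5\n"
    "11.2,0.28,6\n"
)

reader = csv.reader(example)
header = next(reader)                                   # column names
records = [[float(value) for value in row] for row in reader]

# One column of values, ready to be assigned to e.g. the x position:
x_column = header.index("alcohol")
x_values = [record[x_column] for record in records]
print(header, x_values)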

3.1.3 Visualization methods

To apply Bertin’s principles to the application, the user must be able to freely apply and combine visual attributes to the data. While the choice for CSV as a data format provides a simple structure, it cannot express relationships between records, where one record is related to one or more others. This means that in the visualization, each record will be represented by its own separate mark, with no relational visualizations connecting them to each other.

With these considerations, the most applicable visualization technique is that of a three-dimensional scatter plot. On top of the three positional attributes, each point can have additional color, size and shape attributes. For user convenience, the input data should be normalized to fit within the plot. That means the data should be scaled according to the minimum and maximum values across all records for each of the data attributes.
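As an illustration of this normalization step (a minimal sketch, not the application's actual Unity code), each attribute can be rescaled to the range [0, 1] using its own minimum and maximum:

def normalize_column(values):
    """Min-max normalization of one data attribute (illustrative sketch)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # A constant column carries no information; return zeros to avoid a
        # division by zero.
        return [0.0 for _ in values]
    return [(value - lo) / (hi - lo) for value in values]

print(normalize_column([2.0, 5.0, 11.0]))   # [0.0, 0.333..., 1.0]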

Because the data is provided by the user, it can either be discretized into categories or lie on a continuous spectrum. Fortunately, a continuous visualization method can handle both of these types of data: plotting a discretized dataset will automatically look discrete, even on a continuous spectrum.

All attributes are therefore implemented as continuous: for example, as the input value for the color attribute increases, the color shifts from blue to green to red. For the shape attribute, the ends of the spectrum correspond to a cube and a sphere; the values in between use a linear interpolation between these two shapes, as illustrated in figure 3.1.

3.1.4 Configuration

To configure the visualizations as mentioned in 3.1.3, the user must be able to map columns in the input CSV file to visual attributes in the visualization through some interface.


3.1.5 Interaction

VR allows the user to more intuitively interact with their environment. By adding interactions to the visualizations, the insights gained from the data can be improved. The user must at least be able to move the visualization around.

3.1.6 Performance

A low frame rate or slow tracking responsiveness in VR applications can lead to discomfort, dizziness and headaches for the user. For this reason the performance of any VR application is crucial for the user experience. The size of the dataset is expected to have an impact on performance, so for a smooth user experience, the application must be able to visualize sufficiently large datasets without a noticeable impact on performance.

The impact can be determined by running experiments on differently sized datasets, as well as rendering several different visual attributes and comparing the performance.


CHAPTER 4

Implementation

This chapter focuses on the implementation details of the application. First, a selection of packages and frameworks used in the application is described, followed by a description of the user's work-flow within the application.

4.1 VR hardware

The HTC Vive VR headset was used during the development of the application. It features a head-mounted display capable of tracking movements of the user’s head with six degrees of freedom. Additionally it has two controllers with the same tracking ability. This tracking capability is similar to that of other VR headsets that are commercially available. This can allow the final application to be compatible with alternative headsets as well.

Figure 4.1: Overview of how the game engine, the VR hardware and the packages (Simple File Browser, IATK, and the Unity Test Runner with the Performance Testing Extension) are combined in the My Data Visualizer application.


4.2 Libraries and packages

An overview of the packages and the way they are combined in the application can be seen in figure 4.1. The following sections provide a description of these packages.

4.2.1 VRTK

VRTK is a toolkit based on the Unity engine which provides a common API used to design VR applications for multiple VR platforms [20]. It includes tools to track both the headset and the controllers in a 3-dimensional space. Additionally, it allows for common interactions like picking up objects, pointing and clicking on buttons. This means the SteamVR API package from Valve, which is used to control the VR hardware, does not need to be interacted with directly. Instead, all interactions in the application are implemented through VRTK. VRTK is used to satisfy both R.1 and R.5 from the design.

4.2.2 Simple file browser

Simple file browser is a Unity plugin that allows the user to browse through and select local files from their machine [22]. At first, the file browser is invisible, and by using VRTK, the file browser is made to follow and float a small distance in front of the headset of the user. The user can then activate the file browser with a button-press on their controller. At this point the file browser will become visible and stop following the headset, allowing the user to interact with it as a static object. After selecting a data file, the file browser is hidden and resumes following the headset. This satisfies R.2 from the design.

4.2.3 IATK

The Immersive Analytics Toolkit (IATK) is based on VRTK and provides an API to generate data visualizations [9][8]. These visualizations can be modified, and specific visual attributes can be assigned to specific fields in the data. This is used to satisfy R.4 from the design. Some features within the toolkit require external proprietary dependencies like MapBox [17]. Since none of these features are needed for the application, the toolkit is modified to remove the unused dependencies, and it is extended with an implementation of the shape attribute (see section 4.4).

4.2.4 Performance Testing Extension for Unity Test Runner

The Unity Performance Testing Extension is a Unity Editor package that provides an API and test case decorators to make it easier to take measurements/samples of Unity profiler markers, and other custom metrics outside of the profiler, within the Unity Editor and built players. It also collects configuration metadata, such as build and player settings, which is useful when comparing data against different hardware and configurations [3].

To make use of this extension, the Unity project must be compiled using assembly definitions that specify the required DLLs for each of the packages that it uses. Additionally, this allows the extension to be excluded from the compilation of the final product [4].

4.3 User interface

This section describes the work-flow of the application and the way the user interacts with it. The main starting point of the application is the file browser, which can be activated using a button press on the controller. After the user has selected a data file to import, the application creates both a visualization and a related selection matrix. The selection matrix is a two-dimensional array of buttons which contains a column for each visual attribute in the visualization and a row for each column in the dataset. By using the selection matrix, the user can create a mapping from columns in the dataset to specific visual attributes. The visualization is automatically updated whenever the user changes this mapping. A simple mapping of the first three columns in the data to the x, y and z positional attributes is applied by default when loading a new dataset. Both the visualization and the selection matrix can be moved around and repositioned by the user by simply grabbing it with the controller and dragging it to a new location.

The application allows the user to import more than one dataset. For each dataset that is imported, a separate visualization and selection matrix are created. This satisfies R.3 from the design.
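Conceptually, the state that the selection matrix produces can be thought of as a mapping from visual attributes to dataset columns. The sketch below is hypothetical (the attribute names and data structure are not taken from the application's source code) but mirrors the default x, y, z assignment described above:

# Hypothetical sketch: visual attribute -> index of the dataset column assigned
# to it (None means "not assigned").
default_mapping = {
    "x": 0,        # by default, the first three columns are mapped
    "y": 1,        # to the three positional attributes
    "z": 2,
    "color": None,
    "size": None,
    "shape": None,
}

def select(mapping, attribute, column_index):
    """Simulate pressing a button in the selection matrix."""
    mapping[attribute] = column_index

select(default_mapping, "color", 3)    # bind the fourth column to the color attribute
print(default_mapping)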

4.4 Shape shader

For most of the visual attributes, the application uses the shaders provided by IATK to perform all the rendering. These shaders only allow either a static texture or a fixed model to serve as marks in the visualization. The default visualization of the application uses the simple “points” shader, which renders for each mark a flat texture that faces the user.

However, to make use of the shape attribute, a separate shader is needed. This shader receives the x, y and z coordinates of each mark that needs to be rendered, as well as a parameter that determines the shape that needs to be rendered. Inside the shader are static definitions of a low-polygon cube and sphere that both have the same number of vertices and triangles. Using a simple linear interpolation function based on the input parameter, the shader combines the vertices of both these shapes into a new shape, again using the same number of vertices.
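The vertex blending itself is a plain linear interpolation. The sketch below reproduces the idea in Python for readability (the actual implementation is a Unity shader, and the vertex data here is made up); the key requirement is that the cube and the sphere share the same number of vertices so they can be blended pairwise:

def blend_shapes(cube_vertices, sphere_vertices, t):
    """Linearly interpolate two vertex lists for a shape parameter t in [0, 1]."""
    assert len(cube_vertices) == len(sphere_vertices)
    return [
        tuple(c + t * (s - c) for c, s in zip(cube_vertex, sphere_vertex))
        for cube_vertex, sphere_vertex in zip(cube_vertices, sphere_vertices)
    ]

# Two corresponding vertices of a (very coarse) unit cube and unit sphere:
cube = [(1.0, 1.0, 1.0), (-1.0, 1.0, 1.0)]
sphere = [(0.577, 0.577, 0.577), (-0.577, 0.577, 0.577)]
print(blend_shapes(cube, sphere, 0.0))   # pure cube
print(blend_shapes(cube, sphere, 0.5))   # halfway between cube and sphere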

4.5 Performance tests

To test the application with the testing extension, multiple random datasets with varying numbers of data points are generated using the Python script in appendix A. Each dataset is loaded into the application and measurements are performed for 200 frames while the viewport of the user is fixed in place and pointed towards the visualization.

Each test is performed multiple times and the results are combined to calculate the average frame rate and the standard deviation.

The shape shader is expected to have a larger impact on the performance than the default shader which renders simple points. For this reason all tests are performed both with and without the use of the shape attribute.


CHAPTER 5

Results

This chapter will first provide some examples of the user interface and visualizations, after which the results of the performance tests are analyzed to determine an upper bound on the dataset size that the application can handle, in accordance with R.6.

5.1 Application

Figures 5.1 and 5.2 show the file browser and the selection matrix as seen in the application. Figures 5.3 and 5.4 show examples of visualizations generated by the application. The dataset used in these examples is provided by the IATK package and contains measurements on a selection of red and white wines [9].

Figure 5.1: Simple file browser in VR.

Figure 5.2: The selection matrix.

5.2 Performance

The Vive VR headset demands a frame rate of 90 frames per second. This means that each frame has to be rendered within 11.11 milliseconds. When Unity finishes rendering a frame early, it will wait until those 11.11 ms have passed before starting on the next frame. If it exceeds the available time, it will simply wait for the next frame. As a result, the measured amount of time for each frame will always be a multiple of 11.11 ms, with a slight margin of error. This creates difficulty in determining the performance impact of smaller datasets, because as long as the frame finishes rendering within the allotted time frame, the measurement will be identical.

Figure 5.3: Visualization example using the x, y, z and color attributes.

Figure 5.4: Visualization example using the x, y, z, color and shape attributes.

5.2.1 Test results

In all of the performance tests, each datapoint consists of a measurement of 200 sample frames, of which the average and standard deviation are calculated.


The results in figure 5.5 show that the frame rate of the application with the points shader remains stable up to approximately 400,000 datapoints, after which the frame time increases. While the average frame time at 400,000 datapoints sits around 16 ms, the individual measurements alternate between 11.11 ms and 22.22 ms, indicating that only some of the frames are not rendered within the allotted time. This can also be seen by the bars in the figure that indicate the standard deviation.
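For intuition, a small back-of-the-envelope sketch of how such an average arises (the even split between the two frame times is assumed purely for illustration, not measured):

# Illustrative only: mixing frames that stay within the 11.11 ms budget with
# frames that spill into the next refresh interval (22.22 ms) produces an
# average that no individual frame ever shows.
samples = [11.11] * 100 + [22.22] * 100          # 200 sampled frame times in ms
mean = sum(samples) / len(samples)
std = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
print(f"average {mean:.1f} ms, standard deviation {std:.1f} ms")
# prints an average of roughly 16.7 ms and a standard deviation of about 5.6 ms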

Figure 5.6: Performance tests with shape shader.

Figure 5.6 shows that the shape shader has a larger impact on the frame time. As a result the limit of 11.11ms frame time is exceeded with a lower number of datapoints, effectively making the graph a more granular representation of the performance impact. The figure indicates that the performance impact grows linearly with the number of data points that are displayed.

Besides frame rate, the performance testing extension allows for the measurement of other profiler markers, such as the mesh and camera rendering times. The mesh rendering time seems to have very little correlation with the number of datapoints. It initially increases as the number of datapoints increases, but around 300,000 datapoints it starts to decrease before settling around a roughly stable value.

This result could be explained by optimization techniques within Unity that perform culling on a mesh before rendering to remove all vertices that are occluded. Since the datasets contain randomly generated datapoints, whenever the dataset is large enough, the only datapoints that are visible are those forming a plane at the front of the visualization. All other points are occluded behind that plane. Whether Unity performs such techniques needs further research.

Overall, the mesh rendering time is consistently below 3 µs, indicating that it has practically no impact on the application performance.

Unfortunately I was unable to find a profiler-marker using the performance testing extension that would provide more insight into where most of the performance budget is being used, although it is likely that most of the load is in the shader that generates the rendered mesh.

However, from the results it is clear that the application performs well using both shaders for datasets with up to 10,000 datapoints, and using just the points shader, the application performs well up to 300,000 datapoints.


CHAPTER 6

Conclusion

The application satisfies all the requirements as described in the design chapter. The Unity engine in combination with a shader-based implementation of data visualizations like IATK allows the user to visualize datasets of sufficient size without a detrimental impact on performance. Importing datasets from within the application provides a viable option for users that are otherwise unfamiliar with the implementation details that a project like ImAxes requires.

The source code is available at https://github.com/tzwaan/MyDataVisualizer.

6.1 Future research

6.1.1 Effectiveness

The ultimate goal of visualizing data is acquiring insights into the data. An experiment could be set up to determine the effectiveness of this application towards that goal. Such an experiment could entail a dataset which contains certain correlations that are known to the researchers, but unknown to participants in the experiment. The participants could be divided into groups that will use either the VR application, or a regular computer application to try and find the known correlations. A comparison can be made between the two to quantify the effectiveness of the VR application compared to traditional methods.

6.1.2 Additional functionality

Multiple additional features could be added to the application to allow for more user interaction as well as more visualizations. The visual attributes could be expanded by implementing textures as well as orientation. The orientation attribute could allow for a visualization of a vector field, but this would likely require additional restrictions on the data format to implement properly.

User interactions could be expanded by adding manual scaling of the visualization, as well as selecting specific marks in the visualization itself to read out the actual values of the data record. Other useful features would be to filter the dataset from within the application, or to add some way of displaying the average, median and standard deviation within the visualization.


Bibliography

[1] 3Data Analytics. 3data.io. https://3data.io/.

[2] S. M. Ali, N. Gupta, G. K. Nayak, and R. K. Lenka. Big data visualization: Tools and challenges. 2016 2nd International Conference on Contemporary Computing and Informatics (IC3I), pages 656–660, 2016.

[3] U. T. ApS. [documentation] Performance Testing Extension for Unity Test Runner. https://docs.unity3d.com/Packages/com.unity.test-framework.performance@2.5/manual/index.html.

[4] asmdef. How to remodel your project for asmdef and UPM, 2019. https://gametorrahod.com/how-to-asmdef-upm/.

[5] BadVR. Badvr. https://badvr.com/.

[6] J. Bertin. Sémiologie Graphique: Les Diagrammes, Les Réseaux, Les Cartes. 1967.

[7] M. Cordeil, A. Cunningham, T. Dwyer, B. Thomas, and K. Marriott. ImAxes: Immersive axes as embodied affordances for interactive multivariate data visualisation. 10 2017. doi:10.1145/3126594.3126613.

[8] M. Cordeil, A. Cunningham, B. Bach, C. Hurter, B. H. Thomas, K. Marriott, and T. Dwyer. [tutorial] Introduction to IATK: An immersive visual analytics toolkit, 2018. https://vimeo.com/320646218.

[9] M. Cordeil, A. Cunningham, B. Bach, C. Hurter, B. H. Thomas, K. Marriott, and T. Dwyer. IATK: An immersive analytics toolkit. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pages 200–209, Mar 2019. doi:10.1109/VR.2019.8797978.

[10] DatavizVR Inc. DatavizVR. https://store.steampowered.com/app/551960/DatavizVR_Demo/.

[11] C. Donalek, S. G. Djorgovski, A. Cioc, A. Wang, J. Zhang, E. Lawler, S. Yeh, A. Mahabal, M. Graham, A. Drake, S. Davidoff, J. S. Norris, and G. Longo. Immersive and collaborative data visualization using virtual reality platforms. pages 609–614, Oct 2014. doi:10.1109/BigData.2014.7004282.

[12] Epic Games. Unreal Engine. URL https://www.unrealengine.com.

[13] J. K. Haas. A history of the unity game engine. Technical report, 100 Institute Road, Worcester MA 01609-2280 USA, March 2014.

[14] Immersion Analytics. Immersion Analytics. https://www.immersionanalytics.com/.

[15] B. Jones, M. Goregaokar, and N. Waliczek. WebXR Device API. https://www.w3.org/TR/webxr/.

[16] J. Mackinlay. Automating the design of graphical presentations of relational information. ACM Trans. Graph., 5:110–141, 04 1986. doi:10.1145/22949.22950.


[17] mapbox. Mapbox. https://mapbox.com.

[18] Mozilla. WebXR exporter. https://assetstore.unity.com/packages/tools/integration/webxr-exporter-109152.

[19] R. Roth. Visual Variables, pages 1–11. 01 2017. doi:10.1002/9781118786352.wbieg0761.

[20] VRTK. VRTK - Virtual Reality Toolkit, 3.3.0. https://vrtoolkit.readme.io/v3.3.0.

[21] I. Yaqoob, I. Hashem, A. Gani, S. Mokhtar, E. Ahmed, N. Anuar, and A. Vasilakos. Big data: From beginning to future. International Journal of Information Management, 36, 12 2016. doi:10.1016/j.ijinfomgt.2016.07.009.

[22] S. Yasir. Unity Simple File Browser. https://github.com/yasirkula/UnitySimpleFileBrowser.


APPENDIX A

generate_csv.py


import csv
import argparse
import string
import random
import pathlib

# Pools used for the generated column headers and string values.
column_names = list(string.ascii_uppercase)
value_names = list(string.ascii_lowercase)


def make_file_from_data(directory, filename, data):
    """Write an iterable of rows to a CSV file in the given directory."""
    if directory:
        path = pathlib.PurePath(directory, filename)
    else:
        path = pathlib.Path(filename)
    with open(path, 'w', newline='') as f:
        writer = csv.writer(f)
        for row in data:
            writer.writerow(row)


class DataGenerator:
    """Iterator that first yields a header row, then nr_rows random records."""

    def __init__(self, columns, nr_rows):
        self.columns = columns
        self.nr_rows = nr_rows
        self.titles = None
        self.current_row = 0

    def __iter__(self):
        return self

    def __next__(self):
        if not self.titles:
            self.titles = random.sample(column_names, len(self.columns))
            return self.titles
        if self.current_row < self.nr_rows:
            self.current_row += 1
            return generate_row(self.columns)
        raise StopIteration()


def generate_row(columns):
    """Generate one random record, with a value matching each column type."""
    def value_for_type(type):
        if type == int:
            return random.randint(0, 100)
        elif type == str:
            return random.choice(value_names)
        elif type == float:
            return random.uniform(0, 100)
        else:
            raise TypeError(type)
    row = [value_for_type(column) for column in columns]
    return row


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description='Make a new csv file with random data')
    parser.add_argument(
        '--rows', metavar='ROWS', type=int, nargs='+', default=[10],
        help='The number of rows in the csv result')
    parser.add_argument(
        '--file', metavar='FILE', type=str, default='export',
        help='The name of the target file')
    parser.add_argument(
        '--dir', metavar='DIR', type=str, default='exports',
        help='The name of the target directory')
    args = parser.parse_args()

    # Generate one CSV file for every requested row count.
    for nr_rows in args.rows:
        filename = f'{args.file}_{nr_rows}.csv'
        data = DataGenerator([str, int, float, int, float], nr_rows)
        make_file_from_data(args.dir, filename, data)
