Bridging the Gap Between Object-Based and Narrative-Based Storytelling

Axel Bremer
11023325
Bachelor thesis
Credits: 18 EC

Bachelor Opleiding Kunstmatige Intelligentie
University of Amsterdam
Faculty of Science
Science Park 904
1098 XH Amsterdam

Supervisor: dr. ir. J. Kamps
Capaciteitsgroep Media & Cultuur
Faculty of Humanities
University of Amsterdam
Turfdraagsterpad 9
1012 XT Amsterdam


ABSTRACT

Object-based and narrative-based storytelling both have strengths and weaknesses. Narrative-based storytelling misses the interaction with the objects, and object-based storytelling makes it hard to portray the story behind the objects due to visitors' museum fatigue. This thesis proposes a solution for bridging the gap between these methods by creating an application that integrates the object into the narrative-based method, in this case a museum catalog. This solution allows creators of exhibitions to enhance the catalogs for their exhibitions in an easy way. It even provides a method for retrospectively enhancing already existing catalogs.


Acknowledgement

I would like to thank Jaap Kamps for helping me come up with the ideas for the applications made for this thesis. I would also like to thank Wim Hupperetz and the Allard Pierson Museum for letting me use the Crossroads exhibition and for taking the time to look at the application demos.


Contents

1 Introduction
2 Related work
3 Approach
  3.1 Software
  3.2 Data
4 First Two Demos
  4.1 Virtual Museum Application
  4.2 Augmented Reality Application Version 1
5 Augmented Reality Application Further Development
  5.1 Development of Version 2
  5.2 Development of Version 3
6 Discussion and Conclusions
  6.1 Future Work
Appendices
  A Code
    A.1 Narrative XML file
    A.2 AnnotationScript.cs
    A.3 PinchZoom.cs
    A.4 TouchRotate.cs
    A.5 Process.py
    A.6 VuforiaHandler.cs
    A.7 AromaTrackableEventHandler.cs
  B Figures
    B.1 Contrast image versions


1 Introduction

The primary role of a museum is telling a story about events from the past. Remnants of past cultures, objects of cultural importance and many other things are found around the globe, and museums are often the institutions that take it upon themselves to research these objects, preserve their stories and present those stories to the people who want to know more about the history of the world. The way this is done can be divided into two categories: object-based storytelling and narrative-based storytelling, which will be exemplified in the next paragraphs. Both of these categories have their strengths and weaknesses, which will be discussed below.

Firstly, object-based storytelling will be discussed. An example of object-based storytelling is an exhibition in a museum gallery: a group of objects that share a specific connection is displayed in a room, and a small bit of information is given about each object.

Figure 1: Example of object-based storytelling

One significant advantage of this method is that visitors can examine the object in great detail because the actual object is right in front of them. However, it does have a significant disadvantage concerning storytelling. Museum visitors do not have time to read a long story about every object in the room, because they can only spend a certain amount of time in the museum. So the only information given is a little about the object itself and sometimes a small explanation of how the objects are connected to each other. Due to the museum fatigue [6] of museum visitors, the exhibition becomes about the objects rather than the stories behind them.

An example of narrative-based storytelling is the Crossroads book, which pertains to the Crossroads exhibition [1], two pages of which can be seen in figure 2.

Figure 2: Crossroads, an example of narrative-based storytelling

This method of storytelling, as the name suggests, focuses much more on the narrative behind the objects and uses the objects to enhance the narrative, rather than using the information to enhance the objects. The advantage of this method is that the reader can take their time to read the narrative, because a book like this can be read at home at the reader's own leisure. The disadvantage is the lack of interaction with the object: readers can only examine the object as a 2D photograph instead of seeing the actual object and being able to view it from all angles.

The problem concerning these two methods is that they are seen as mutually exclusive, but creating a method which mixes the two should be possible. That is why, in this thesis, we will try to answer the following question:

How can we bridge the gap between object-based and narrative-based storytelling?

This main question is divided into three sub-questions:

• What are current approaches to object-based and narrative-based storytelling?


• How can we bring the narrative to the object, or bring the object to the narrative?

• Is augmented reality with 3D models an effective way to augment the narratives in a traditional museum catalog?

We will start in chapter 2 by looking at the work that is already being done regarding object-based and narrative-based storytelling. After that, chapter 3 will cover the software and data used to create the applications whose demos are covered in chapter 4. Chapter 5 examines the further development of the better of the two demos.


2 Related work

The question that will be looked at in this chapter is:

What are current approaches to object-based and narrative-based storytelling?

Firstly, we will look at what is being done by the Allard Pierson Museum regarding object-based and narrative-based storytelling by looking at the Crossroads exhibition. This exhibition shows the different ways a museum can tell a story.

Object-based Crossroads is a traveling exhibition focusing on connectivity and cultural exchange during the Early Middle Ages in Europe. [5] A part of the exhibition consists of the standard 'objects in a room', which is the object-based method mentioned earlier.

Narrative-based Crossroads can also be enjoyed in book form. As stated in the preface of the book:

The international Crossroads project connects European cultural heritage as it emerged between AD 300 and 1000. In this project, different narrative contexts are explored in terms of continuity, change and entanglement, taking into consideration the effects of the converging pagan and Christian influences as the transition was made into a predominantly Christian society which transformed the Early Middle Ages. The exhibition is presented through specially selected museum objects, displayed thematically, and different media such as an interactive interpretative mapping tool, the cross-culture timeline.

This book is more than just a catalog because it shifts its focus from the objects to the stories behind the objects.

Mixing of the methods The Crossroads exhibition already has some ways with which it tries to mix the narrative-based and object-based methods. A digital application, called the Cross Culture Timeline, was developed specifically for this exhibition. Using large projections of maps on the wall, it shows the connections between different objects. It is made to show the diversity of the objects, as well as the links that tie them all together. [9] Another way they tried to enhance the exhibition was by creating holographic animations which bring four key objects from the exhibition to life. Animations were also made to show how some of these objects were used in the past. [2]

Concerning Augmented Reality Augmented Reality is an up-and-coming technology, and many museums are already using Augmented Reality in their exhibitions. Fossils are being brought back to life, visitors can look for and catch plants and animals in a museum, and holograms of astronauts can be seen doing a spacewalk, all using Augmented Reality. [4] These are all examples of exhibitions being enhanced using technology. These enhancements often require quite a bit of work and are made especially for a specific museum, collection, or exhibition. In this project, a solution is offered that requires little configuration when changing it for use in a different museum, collection, or exhibition.

In this chapter, we studied the question: What are current approaches to object-based and narrative-based storytelling? Our main findings are the following. First, the Crossroads exhibition at the Allard Pierson Museum shows a few ways of telling a story using both methods, as well as a mix of these methods. Second, there is a fair amount of Augmented Reality being used in museums at the moment. However, most of these applications are made specifically for certain exhibitions. In the next chapters, we will introduce an application which is easily configured for other collections.


3 Approach

In the next chapter, we will investigate our second research question by creating two applications that take different approaches to mixing object-based and narrative-based storytelling. At the halfway point of the project, the two applications were evaluated, and one application was picked to be developed further. In this chapter, we will look at which software and data are used to create these applications. The first application is a type of virtual museum showing 3D models and their corresponding stories. The second application is an Augmented Reality application that uses already existing catalogs of exhibitions.

3.1 Software

To create these systems the Unity 3D engine is used. Unity is a cross-platform engine used primarily to develop three-dimensional and two-dimensional games for a multitude of devices. [14] Unity was picked for its ease of scripting (using the C# language) and for its cross-platform capabilities. This cross-platform property is useful because it helps with releasing the systems on both major mobile operating systems (iOS and Android).

For the Augmented Reality part of the system, the Vuforia Augmented Reality SDK is used. This is a Software Development Kit for Augmented Reality on mobile devices. Vuforia supports multiple types of targets which bind the 3D models to the real world: 2D image targets, 3D targets composed of multiple images, basic 3D object targets, and targets based on 3D models. [10] In this system, only the 2D image target is used, but the application could easily be extended to use other types of targets.

3.2 Data

The data used in this project consists of a collection of 3D models of the Crossroads exhibition set up by the CEMEC (Connecting Early Medieval European Collections) project. [5] The 3D models are high-quality scans of the objects that were in the Crossroads exhibition at the Allard Pierson Museum and that are now exhibited at the Byzantine and Christian Museum in Athens (April-September 2018). Some of these models are annotated with data about certain aspects of the object. The models are uploaded to Sketchfab, an online 3D model sharing website that has an online viewer and editor, as well as an API to integrate those features in other applications. [11]


Figure 3: Incense burner viewed in Sketchfab viewer with annotation.

The Sketchfab API would allow us to dynamically download the models and their annotations from Sketchfab during the application runtime. The downside is that each end user would then have to log in using a Sketchfab account. To keep the applications as easy to use as possible, it has been decided not to make use of the Sketchfab API at this moment and to download the models and add them directly to the applications. Contact is being made with Sketchfab about the ability to use the API without requiring end users to log in, so there is a possibility that at the time of the actual launch of the application the Sketchfab API will be used.

The narratives that belong with the Crossroads exhibition are recorded in the book of the same name. This book is more than just a catalog of objects. It is most of all a book filled with stories, which are enhanced with the use of 2D photographs of objects. That is what makes it a great example of narrative-based storytelling.

Evaluation of the applications was done by letting a number of potential users (the number ranging from 5 to 10 depending on the version) play around with the system. The potential users were observed while they tested the software, and afterwards they could give feedback concerning the usage of the application.


4 First Two Demos

This approach will be used to answer the following question in this chapter:

How can we bring the narrative to the object, or bring the object to the narrative?

Two basic ideas for applications were proposed during an early brainstorm session for this project: one to better incorporate the narrative in the object-based method, a Virtual Museum application, and one to integrate the object in the narrative-based method, an Augmented Reality application. Two very basic demos were to be made of these ideas to decide which one would be developed further. When meeting with the director of the Allard Pierson Museum, Wim Hupperetz, he showed far greater interest in the Augmented Reality application than in the Virtual Museum application. After this meeting, it was decided that the focus should lie more on the Augmented Reality application, which was therefore developed further.

The interest shown in the Augmented Reality application was significant enough that it was decided to enter this application in an App Challenge. This Digging for Data App Challenge is organized by the province of Zuid-Holland to pique people's interest in cultural heritage and archaeological findings. On the 22nd of June, a 2.5-minute pitch and demo were given in a bid to win a €20.000,- development budget. Unfortunately, this application was not the winner that day.

In this chapter, both applications will be shown. First will be the Virtual Museum application and its corresponding toolkit. After that, the multiple versions of the Augmented Reality application will be shown in chronological order.

4.1 Virtual Museum Application

One of the two starting ideas was the virtual museum. The thought behind this was that people who do not have access to a certain collection could enjoy its narrative from the comfort of their own home. Most of all, however, it was to be a proof of concept for the toolkit that lets storytellers easily create a narrative and have it stored in an easy-to-use digital form. At the moment, the way objects are saved in digital form contains much information about the object itself but not a lot about the story in which the object is interwoven. [3]

The Allard Pierson Museum does have an online collection of stories in which objects are interwoven in a digital form. Take for example the Near East story [8]. It tells a story about the Near East and contains links to objects wherever necessary. However, these links take the reader to a 2D photograph of the object and some information about the object itself, and nothing more.


The objective of the virtual museum application was to give the user the same experience of reading the story but give them an even more interactive method to discover the objects. By using the high-detail 3D models of the objects in question, the user can see a lot more of the object than from a 2D photograph.

Figure 4: Proof-of-concept demo Virtual Museum Application

The left and right arrow buttons load the next object in the narrative, the zoom buttons make the object bigger and smaller, and the up and down arrows rotate the object along the horizontal axis. Rotation along the vertical axis is done automatically. This choice was made because more buttons would clutter up the user interface and take the focus away from the object. At this point, the change to touch input had not yet been made.

The main point of this proof-of-concept demo was to test a new way of digitizing stories. This is realized by creating certain Narratives in XML files. Such a Narrative consists of a collection of objects, each with its own story as to how it fits in the Narrative. Each of these objects can also be linked to another object with a specific type of link. For example, the objects could have been used for the same purpose, used by the same subculture or found in approximately the same location. When observing an object, the user can choose to see the next object in the narrative or to see one of the objects that are linked to the current object by pressing the buttons for those objects, as can be seen in figure 4.


Figure 5: Flowchart of the Virtual Museum Application

These objects, their stories and their links are saved in an XML file. This is an example of how such a file would be formatted:


<Narrative>
  <IntroText>This is the intro text to the narrative</IntroText>
  <Period>Period in which the Narrative happens</Period>
  <!-- Objects used to tell the narrative -->
  <Objects>
    <Object id="0"> <!-- The id of the object -->
      <Name>Name of the object</Name>
      <FileName>Name of 3D model and textures</FileName>
      <Summary>Short summary about object</Summary>
      <Text>Longer text about how the object connects to the narrative.</Text>
      <!-- How the object is linked to other objects in the narrative. -->
      <Links>
        <Link>
          <ObjectId>Id of linked object</ObjectId>
          <ObjectName>Name of linked object</ObjectName>
          <LinkType>how it is linked to that object</LinkType>
        </Link>
      </Links>
    </Object>
    <Object id="1"> ... </Object>
    <Object id="2"> ... </Object>
  </Objects>
</Narrative>

Figure 6: Example of Narrative XML formatting

For a more concrete example see appendix A.1; that XML file is based on a part of the Near East story from [8].
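Such a narrative file maps naturally onto a small set of classes. The following sketch shows one way the Narrative XML could be read into Unity with the standard XmlSerializer; the class and field names follow the format of figure 6, but the loader itself is only an illustration, not the script used in this project.

using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

[XmlRoot("Narrative")]
public class Narrative
{
    public string IntroText;
    public string Period;

    [XmlArray("Objects"), XmlArrayItem("Object")]
    public List<NarrativeObject> Objects;
}

public class NarrativeObject
{
    [XmlAttribute("id")]
    public int Id;

    public string Name;
    public string FileName; // name of the 3D model and texture files
    public string Summary;
    public string Text;

    [XmlArray("Links"), XmlArrayItem("Link")]
    public List<Link> Links;
}

public class Link
{
    public int ObjectId;
    public string ObjectName;
    public string LinkType;
}

public static class NarrativeLoader
{
    // Deserialize a Narrative XML file into the classes above.
    public static Narrative Load(string path)
    {
        var serializer = new XmlSerializer(typeof(Narrative));
        using (var stream = File.OpenRead(path))
        {
            return (Narrative)serializer.Deserialize(stream);
        }
    }
}

With such a loader, following a link comes down to looking up the linked ObjectId in the Objects list, which is what the link buttons visible in figure 4 would trigger.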

4.2 Augmented Reality Application Version 1

The second demo to be made was that for an Augmented Reality application. The goal of this application is to mix the two storytelling methods by using Augmented Reality to incorporate the 3D models into narrative sources that are already written down.

Version 1 was made using the two downloadable 3D models from the CEMEC Sketchfab account, an incense burner and a St. Menas flask. It was designed to use the Google Cardboard Augmented Reality viewer [7] to inspect the 3D models in Augmented Reality. It was made to test whether viewing 3D models of museum objects in Augmented Reality is on par with viewing them in a museum. Vuforia offers a collection of sample image targets which are made for proper tracking.

Figure 7: Drone sample image target and its features

Although the feature point detection algorithm Vuforia uses is not specified in its documentation, the documentation does recommend using image targets that are rich in detail, have good contrast and do not contain repetitive patterns. Shown in figure 7 is one of the sample image targets offered by Vuforia. As can be seen in the picture, most of the features are found along the sharp edges of the image.


Figure 8: Unity Scene view with image targets and models


The blue and black spheres that can be seen on the incense burner model are the annotations, which tell more about that particular part of the object. Using the script found in appendix A.2, the closest annotation is found and turned black to show the user that it is the closest one. If the user taps the screen, the text belonging to that annotation is displayed on the screen (figure 10), and with another tap the screen is cleared.

Figure 10: Google Cardboard Version 1 Incense burner with annotation
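The logic of that annotation script can be illustrated with a short sketch: each frame, find the annotation closest to the centre of the screen and recolour it. The "Annotation" tag and the direct material colour switch are assumptions made for this illustration; the actual implementation is AnnotationScript.cs in appendix A.2.

using UnityEngine;

public class ClosestAnnotationHighlighter : MonoBehaviour
{
    void Update()
    {
        GameObject[] annotations = GameObject.FindGameObjectsWithTag("Annotation");
        GameObject closest = null;
        float best = float.MaxValue;
        Vector2 centre = new Vector2(Screen.width / 2f, Screen.height / 2f);

        // Find the annotation whose screen position is nearest to the centre
        foreach (GameObject a in annotations)
        {
            Vector3 screenPos = Camera.main.WorldToScreenPoint(a.transform.position);
            if (screenPos.z < 0) continue; // skip annotations behind the camera
            float d = Vector2.Distance(centre, new Vector2(screenPos.x, screenPos.y));
            if (d < best) { best = d; closest = a; }
        }

        // Colour the closest annotation black and all others blue
        foreach (GameObject a in annotations)
        {
            a.GetComponent<Renderer>().material.color =
                (a == closest) ? Color.black : Color.blue;
        }
    }
}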

Evaluation After letting some users play around with the system using the Google Cardboard viewer, the consensus was that viewing museum objects through Augmented Reality may not be on par with viewing them in a museum, but it is a good alternative for when one is at home. The other opinion shared by a large part of the users was that the Google Cardboard element had no added value. It was not that easy to use and therefore distracted the user from actually viewing the object. The resolution of most smartphones is also not high enough for the right amount of immersion when viewing the object through Google Cardboard.

After the meeting with the director of the Allard Pierson Museum where the demos of both applications were shown, it was decided that the Virtual Museum application would not be developed any further and that all further attention should be focused on the Augmented Reality application. This decision was made because the Augmented Reality application showed more promise and generally got a more enthusiastic reaction from users.

In this chapter we tried to answer the question: How can we bring the narrative to the object, or bring the object to the narrative? Two application demos were made for this purpose. The demo of the Augmented Reality application, which brings the object to the narrative, was picked to be developed further. This development is covered in the next chapter.


5 Augmented Reality Application Further Development

In this chapter, we will examine the further development of the Augmented Reality application, which was picked from the first two demos. It is an application that enhances a regular catalog of museum objects. In this particular case, it enhances the already existing book of the Crossroads exhibition, also named Crossroads. [1]

When users reading the book wish to examine one of the objects more closely, they just have to take out their phone, launch the app and aim it at the book to see the 3D model. With this in mind, the question we are answering in this chapter is:

Is augmented reality with 3D models an effective way to augment the narratives in a traditional museum catalog?

This is done by improving the demo application with new features and having each version evaluated by potential users.

5.1 Development of Version 2

One significant advantage of this application is that, because the Vuforia platform is used, any image can serve as an image target. This means that the application can be created retrospectively for any catalog one would want, as long as there are 3D models of the objects in the catalog.

The next step was to pick which part of the book would be the image target. Figure 11 shows that an entire page has a lot of usable features. However, this does not guarantee a good image target. Apart from showing the features, Vuforia's developer portal also gives a rating of how augmentable the image is. The full page got a rating of 0 out of 5. This rating, and the fact that using the whole page would make the application language-dependent, was the reason for not using the entire page as the target.


Figure 11: A page as seen in the book and its usable augmentation features

The image targets used in this version are the photos of the objects as they appear in the book. They were extracted from a PDF version of the book using the Microsoft snipping tool and uploaded to Vuforia. Figure 12 displays the number of features found in the Menas flask picture and shows that there are not a lot of usable features. However, when comparing the performance of the single-picture target versus the entire-page target, the individual picture is more easily recognized and has a more stable performance overall: it does not disappear as often as the full-page target, and it does not cause as much jitter.


Evaluation Despite providing a more stable performance than their full-page counterparts, some images, for example the Menas flask, still did not perform stably enough to enjoy viewing the object. Testers reported that the jittering of the object was too distracting to examine it in a normal way. In figure 13 the incense burner image target can be seen. This is a target that gave a stable performance and allowed users to examine the object at their leisure.

Figure 13: The photo as seen in the book and its usable augmentation features

In version 2 there was not yet a way of controlling the object while viewing it. To see all sides of the object, one would have to walk around the book or physically move the book itself. When potential users were testing the application, it was found that most of them immediately tried to rotate the object and zoom in on it using touch gestures.


5.2 Development of Version 3

In version 3 of the application, three updates were made: touch controls were added, image targets were improved, and target and model loading was improved.

Touch controls The object being viewed can now be rotated and zoomed into using touch input. Using the Touch scripting API [13], the user can zoom in and out by pinching the screen and can rotate the object by dragging a finger across the screen. See appendix A.3 and A.4 for the scripts used to control this; a sketch of the combined idea follows below.
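As an illustration of how these gestures typically look in Unity, here is a minimal combined sketch; the actual behaviour is split over PinchZoom.cs and TouchRotate.cs, and the speed constants here are arbitrary choices, not values from those scripts.

using UnityEngine;

public class TouchControlsSketch : MonoBehaviour
{
    public float zoomSpeed = 0.005f;
    public float rotateSpeed = 0.2f;

    void Update()
    {
        if (Input.touchCount == 2)
        {
            // Pinch: compare the current finger distance with last frame's
            Touch t0 = Input.GetTouch(0);
            Touch t1 = Input.GetTouch(1);
            Vector2 prev0 = t0.position - t0.deltaPosition;
            Vector2 prev1 = t1.position - t1.deltaPosition;
            float delta = Vector2.Distance(t0.position, t1.position)
                        - Vector2.Distance(prev0, prev1);

            // Grow or shrink the model proportionally to the pinch
            transform.localScale *= 1f + delta * zoomSpeed;
        }
        else if (Input.touchCount == 1 && Input.GetTouch(0).phase == TouchPhase.Moved)
        {
            // Drag: rotate the model around the vertical axis
            Vector2 d = Input.GetTouch(0).deltaPosition;
            transform.Rotate(Vector3.up, -d.x * rotateSpeed, Space.World);
        }
    }
}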

Improved image targets The low number of features found in some images (figure 12) made the application nearly unusable for those models. The Vuforia documentation claims that images with higher contrast make for better image targets. Using the GIMP image manipulation program [12], the contrast of the image targets was raised from 0 to 55 for a high contrast version of the images and from 0 to 100 for a full contrast version. The differences between the versions can be seen in figures 12, 15 and 16.


Figure 16: The photo with full contrast and its usable augmentation features

The high contrast version shows a lot more feature points than the normal contrast version, and it still shows a good resemblance to the actual picture in the book. The full contrast version shows an abundance of feature points and gets a full 5-star rating on augmentability. However, the image is visually very different from the picture in the book, which may impede recognition during runtime.
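In GIMP this adjustment is a slider, but the underlying operation is a simple per-pixel remapping. As a sketch of the idea, assuming a plain linear contrast stretch around mid-grey (GIMP's actual contrast curve may differ), the same effect could be computed in C# on a Unity texture:

using UnityEngine;

public static class ContrastUtil
{
    // Returns a copy of the texture with each channel pushed away from
    // mid-grey by the given factor (a factor of 1 leaves the image unchanged).
    public static Texture2D WithContrast(Texture2D src, float factor)
    {
        Texture2D dst = new Texture2D(src.width, src.height);
        Color[] pixels = src.GetPixels();
        for (int i = 0; i < pixels.Length; i++)
        {
            Color c = pixels[i];
            pixels[i] = new Color(
                Mathf.Clamp01((c.r - 0.5f) * factor + 0.5f),
                Mathf.Clamp01((c.g - 0.5f) * factor + 0.5f),
                Mathf.Clamp01((c.b - 0.5f) * factor + 0.5f),
                c.a);
        }
        dst.SetPixels(pixels);
        dst.Apply();
        return dst;
    }
}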

To test which level of contrast performed best, all three versions were added to the image target database simultaneously. The application was started with the camera pointed away from the target image. The camera was then pointed towards the target image, and the contrast versions were written down in the order in which they triggered. Normal, high and full contrast are written down as N, H and F respectively. This was done for the incense burner, oil lamp, and Menas flask. Every version of these image targets can be seen in appendix B.1. The result is shown in table 1.


     Incense   Lamp    Flask
 1   H         H N     H
 2   H N       H N     H
 3   H N       H F N   H F
 4   H F N     H F N   H
 5   H N       H F N   H F
 6   H N       H       H
 7   H N       H F N   H
 8   H N       H       H
 9   H         H       H
10   H N       H N     H
11   H         H N     H
12   H N       H       H
13   H F N     N       H
14   H N       H F     H
15   H         H       H

Table 1: Order of detection of normal (N), high (H) and full contrast (F)

All of these detections occurred within 1 second of moving the camera to face the image target. In every case but one, the high contrast version was detected, and in each case where it was detected it was the first one to be detected. For the St. Menas flask, it was the only version detected in 13 out of 15 runs. These results show that the high contrast version of the image targets is the best choice to implement in the application.

Target and model loading The goal is to make this application as easy to use and configure as possible. One way this is achieved is through the way one can configure it for a new collection. To do that, only a few things need to be done:

• Put the models, textures, and images from the books in their respective folders.


Inside the Unity editor, no changes have to be made apart from choosing the right image target database. All the image targets are automatically loaded at runtime using the VuforiaHandler script found in appendix A.6 (a sketch of this loading pattern is shown after figure 17). When an image target is detected, the corresponding model is loaded into the scene and the right texture and scripts are added. This is done by adding the following code to the default trackable script provided by Vuforia. The entire script can be examined in appendix A.7.

// Get the object name from the trackable this script is attached to
string modelName = this.name;

// Instantiate the model using the name of the trackable
GameObject g = GameObject.Instantiate(
    Resources.Load("MuseumModels/" + modelName)) as GameObject;

// Set as child of the trackable
g.transform.parent = this.transform;

// Load in the textures and add them to the object
Texture tex = (Texture)Resources.Load("MuseumTextures/" + modelName + "-diffuse");
Texture norm = (Texture)Resources.Load("MuseumTextures/" + modelName + "-normal");
g.GetComponentInChildren<Renderer>().material.shaderKeywords =
    new string[1] { "_NORMALMAP" };
g.GetComponentInChildren<Renderer>().material.SetTexture("_MainTex", tex);
g.GetComponentInChildren<Renderer>().material.SetTexture("_BumpMap", norm);

// Set the right scale and position
g.transform.localScale = new Vector3(.05f, .05f, .05f);
g.transform.localPosition = new Vector3(0, .05f, 0);

// Add the zoom and rotation scripts
g.AddComponent<PinchZoom>();
g.AddComponent<TouchRotate>();

Figure 17: Code added to the DefaultTrackableEventHandler.cs script
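The loading of the image targets themselves, handled by the VuforiaHandler script (appendix A.6), follows Vuforia's standard pattern for activating a target database at runtime. A minimal sketch of that pattern, assuming the 2018-era Vuforia Unity API and using "Crossroads" as a placeholder database name:

using UnityEngine;
using Vuforia;

public class RuntimeTargetLoader : MonoBehaviour
{
    void Start()
    {
        // Wait until Vuforia has started before touching the tracker
        VuforiaARController.Instance.RegisterVuforiaStartedCallback(LoadDataSet);
    }

    void LoadDataSet()
    {
        ObjectTracker tracker = TrackerManager.Instance.GetTracker<ObjectTracker>();
        DataSet dataSet = tracker.CreateDataSet();

        if (dataSet.Load("Crossroads")) // placeholder database name
        {
            tracker.Stop();
            tracker.ActivateDataSet(dataSet);
            tracker.Start();

            // Attach the custom handler to every target in the database
            foreach (TrackableBehaviour tb in
                     TrackerManager.Instance.GetStateManager().GetTrackableBehaviours())
            {
                tb.gameObject.AddComponent<AromaTrackableEventHandler>();
            }
        }
        else
        {
            Debug.LogError("Failed to load image target database");
        }
    }
}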

The code in figure 17 only works when the image target, model, and textures have the same filename (the textures have an added "-diffuse" or "-normal" depending on the type of texture). To make sure all these files have the same name, the Process.py script (appendix A.5) was created. This script warns the user if there are images, models or textures with missing counterparts. An example of such a warning is shown in figure 18. The script also changes the contrast of each book image to the higher contrast that has better tracking performance.

E:\Dropbox\Thesis> python .\Process.py

All objects should have a model, a book image,
a normal texture and a diffuse texture.
The following objects are not complete and need to be fixed:

apm-reused-capital is missing: a diffuse texture and a book image and a model

lvr-helmet is missing: a book image and a model

apm-horse-and-rider is missing: a normal texture and a book image

E:\Dropbox\Thesis>

Figure 18: Warning given by the Process.py script
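The counterpart check performed by Process.py is essentially a comparison of filename sets. The sketch below expresses the same idea, written here in C# rather than Python; the folder names are assumptions for illustration, not the layout used by the thesis scripts.

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

class CompletenessCheck
{
    // Collect the base object names in a folder, stripping the texture
    // suffixes so all files belonging to one object share a single name.
    static HashSet<string> Names(string folder, string pattern)
    {
        return new HashSet<string>(
            Directory.GetFiles(folder, pattern)
                     .Select(Path.GetFileNameWithoutExtension)
                     .Select(n => n.Replace("-diffuse", "").Replace("-normal", "")));
    }

    static void Main()
    {
        var parts = new Dictionary<string, HashSet<string>>
        {
            ["a model"]           = Names("MuseumModels", "*"),
            ["a book image"]      = Names("BookImages", "*"),
            ["a diffuse texture"] = Names("MuseumTextures", "*-diffuse*"),
            ["a normal texture"]  = Names("MuseumTextures", "*-normal*"),
        };

        // Every object should appear in all four sets; report the gaps.
        foreach (string name in parts.Values.SelectMany(s => s).Distinct())
        {
            var missing = parts.Where(p => !p.Value.Contains(name))
                               .Select(p => p.Key)
                               .ToList();
            if (missing.Count > 0)
                Console.WriteLine(name + " is missing: " + string.Join(" and ", missing));
        }
    }
}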


In this chapter, the question to be answered was: Is augmented reality with 3D models an effective way to augment the narratives in a traditional museum catalog? By addressing the concerns brought up by the people who tested version 1, we tried to create an application that can enhance the traditional museum catalog. It was found that the already existing images in the book can be used as targets for augmentation while giving performance stable enough for viewing the objects. We also made the process of configuring this application for other collections easy to perform. The users testing this application responded enthusiastically and liked using it. With this information, it can be said that using augmented reality to enhance the traditional museum catalog is effective.


6 Discussion and Conclusions

In this thesis the main question was:

How can we bridge the gap between object-based and narrative-based storytelling?

This question was divided into three sub-questions which we’ve tried to answer in the previous chapters.

What are current approaches to object-based and narrative-based storytelling? Our main findings are the following. First, the Crossroads exhibition at the Allard Pierson Museum shows a few ways of telling a story using both methods, as well as a mix of these methods. Second, there is a fair amount of Augmented Reality being used in museums at the moment. However, most of these applications are made specifically for certain exhibitions. We therefore introduced an application which is easily configured for other collections.

How can we bring the narrative to the object, or bring the object to the narrative? Two application demos were made for this purpose: the demo of the Augmented Reality application, which brings the object to the narrative, and the demo of the Virtual Museum application, which brings the narrative to the object. After testing and evaluating both demos, the Augmented Reality application was chosen to be developed further.

Is augmented reality with 3D models an effective way to augment the narratives in a traditional museum catalog? By addressing the concerns brought up by the people who tested version 1 of the application, we tried to create an application that can enhance the traditional museum catalog. It was found that the already existing images in the book can be used as targets for augmentation while giving performance stable enough for viewing the objects. We also made the process of configuring this application for other collections easy to perform. The users testing this application responded enthusiastically and liked using it. With this information, it can be said that using augmented reality to enhance the traditional museum catalog is effective.

The application created in this project bridges the gap between object-based and narrative-based storytelling by integrating the objects into already existing narrative sources in an easily configurable way. The application was tested by a small group of users who reacted enthusiastically to the demo. This way, we have proposed one solution to mixing these two storytelling methods. There are undoubtedly a myriad of ways this can be done, but this method is a step in the right direction.

6.1 Future Work

During this project, a lot of people were enthusiastic about the idea of the Augmented Reality application, and at the end of the project, I was asked to further develop the application in association with the 4D Research Lab at the University of Amsterdam.

The following are some things that can be done to get this application to a ready-to-release state:

Testing Testing during this project was done on a small scale, with about 5-10 users for each version. Before this application can be released, it needs to be tested on a bigger scale, with more users and more 3D models.

Sketchfab API Should a more substantial number of 3D models cause problems during testing, the Sketchfab API could be used to download models dynamically and decrease the size of the application considerably. This integration would also allow for the downloading of annotations, which at this moment have to be positioned on the object manually. The integration with Sketchfab can only work if it is possible to circumvent the log-in required by the API, which is going to be discussed with Sketchfab.

Other targets There is a multitude of targets that can be used with the Vuforia toolkit. The application could be extended to work not only on a catalog but, for example, on an actual museum object, to show what it would have looked like in its original state. These options are to be considered in further development.


References

[1] David Abulafia et al. Crossroads: Travelling through the Middle Ages. Allard Pierson Museum Amsterdam and partners in CEMEC project, 2017.

[2] CEMEC admin. Audiovisual contents in the Crossroads exhibition. 2017. URL: https://cemec-eu.net/cms/?p=324.

[3] Library of the University of Amsterdam. Inventory Database. 2018. URL: https://www.uvaerfgoed.nl/beeldbank/en/allardpiersonmuseum.

[4] Jennifer Billock. Five Augmented Reality Experiences That Bring Museum Exhibits to Life. 2017. URL: https://www.smithsonianmag.com/travel/expanding-exhibits-augmented-reality-180963810/.

[5] CEMEC-EU. Connecting Early Medieval European Collections. 2016. URL: https://www.cemec-eu.net/.

[6] Benjamin Ives Gilman. "Museum fatigue". In: The Scientific Monthly 2.1 (1916), pp. 62–74.

[7] Google. Google Cardboard - Google VR. 2018. URL: https://vr.google.com/cardboard/.

[8] Allard Pierson Museum. Beeldbank Stories: Near East. 2018. URL: https://www.uvaerfgoed.nl/beeldbank/en/story/allard-pierson-museum/near-east.

[9] Inge-Kalle den Oudsten. The Cross Culture Timeline. 2016. URL: https://cemec-eu.net/cms/?p=227.

[10] PTC. Vuforia Engine. 2018. URL: https://www.vuforia.com/engine.html.

[11] Sketchfab. CEMEC Collection. 2018. URL: https://sketchfab.com/moobels/collections/cemec.

[12] The GIMP team. GIMP - GNU Image Manipulation Program. 2018. URL: https://www.gimp.org/.

[13] Unity Technologies. Unity - Scripting API: Touch. 2018. URL: https://docs.unity3d.com/ScriptReference/Touch.html.

Appendices

A Code

Every script mentioned in this thesis can be found on this GitHub page: https://github.com/AxelBremer/BachelorProject

To actually create an application one would need to install the Unity Editor and the Vuforia toolkit. Because of the big number of small files Unity creates, uploading the entire project folder to GitHub is very inconvenient. The Readme file on GitHub gives a more detailed description of setting up the application.

A.1 Narrative XML file

https://github.com/AxelBremer/BachelorProject/blob/master/XML/NearEast.xml

An example of a Narrative XML file based on the Near East story from [8]

A.2 AnnotationScript.cs

https://github.com/AxelBremer/BachelorProject/blob/master/scripts/AnnotationScript.cs

This script checks which of the annotations currently on the screen is closest and turns on its halo. It turns off the other halos.

A.3 PinchZoom.cs

https://github.com/AxelBremer/BachelorProject/blob/master/scripts/PinchZoom.cs

This script enables the user to use pinch gestures to zoom in and out of the model.

A.4 TouchRotate.cs

https://github.com/AxelBremer/BachelorProject/blob/master/scripts/TouchRotate.cs

This script enables the user to rotate the model by dragging a finger across the screen.

A.5 Process.py

https://github.com/AxelBremer/BachelorProject/blob/master/scripts/Process.py

This script processes the models, textures, and images for use in the application. It gives the user a warning if any objects are incomplete.

A.6 VuforiaHandler.cs

https://github.com/AxelBremer/BachelorProject/blob/master/scripts/VuforiaHandler.cs

This script loads in the image targets for every object in the Vuforia database.

A.7 AromaTrackableEventHandler.cs

https://github.com/AxelBremer/BachelorProject/blob/master/scripts/AromaTrackableEventHandler.cs

This is an edited version of the default trackable event handler provided by Vuforia. When a trackable is detected, the matching model and textures are loaded in and the right scripts are added to the object.


B Figures

B.1 Contrast image versions

(Figures: the normal, high contrast and full contrast versions of the Menas flask, incense burner and oil lamp image targets.)
