
Bachelor Informatica

The offline presentation composer

Dennis Butter

June 26, 2019

Supervisor: drs. A. (Toto) van Inge

Informatica
Universiteit van Amsterdam

Abstract

Modern presentation software has limited dynamics. Additionally, the freedom to navigate through presentations made with these applications is narrow, as most presentation software generates static slide-based presentations. On top of that, the embedding of third-party applications in this software is not possible. The Live Presentation Composer (LPC) has been developed to solve these problems. Presentations created with this application are fully dynamic: they are composed live, with the potential use of dynamic content. However, the application does not provide an editor for these dynamic presentations. This thesis proposes and implements a prototype of a timeline-based editor for the LPC. Furthermore, the suitability of the LPC's current save format is examined in connection with the implementation of this editor. Additionally, this thesis examines how the editor can be expanded to allow the generation of dynamic presentations from scratch.


Contents

1 Introduction
1.1 Authentic presentations
1.2 Developments in the manner of presenting
1.2.1 The development of slides
1.2.2 The development of videos
1.3 The Offline Presentation Composer

2 Theoretical background
2.1 The principles of presentation applications
2.2 The edit principles of presentation applications
2.3 The edit principles of Music and Choreography Editing and Composing Software
2.4 The temporal and spatial aspects
2.5 The Live Presentation Composer functionality and user interface
2.6 Conserving presentations for future use
2.7 Proposed improvements

3 Design considerations
3.1 Workflow of the OPC and the LPC
3.2 Application specification
3.3 Static object visualization
3.4 Correcting mistakes made to presentations
3.5 Static object manipulation edit options
3.6 The impact of edit options
3.7 Sound editing
3.8 Editing dynamic objects
3.9 Development setup choices

4 Implementation
4.1 Application design
4.2 Interactive timeline
4.2.1 Converting a presentation repository into a timeline repository and visualizing objects and their object manipulations
4.2.2 Editing
4.3 Animation Preview Window
4.4 Presentation Preview Window
4.5 (Saving) presentation output

5 Conclusion
5.1 Conclusion
5.2 Future research
5.2.1 Sound addition and editing
5.2.2 Additional object transitions
5.2.3 Spatial grouping of objects
5.2.4 Presentation preview window
5.2.5 Editing of dynamic objects
5.2.6 Addition of dynamic objects


CHAPTER 1

Introduction

1.1 Authentic presentations

The presentation setting is a popular way to communicate with a group of people. It is used in many areas, including academia and business. The practice of presenting goes back as far as we can remember. In the early days, when there were no means of recording and no presentation software such as Microsoft PowerPoint or Prezi, presentations consisted of physical objects and/or events. These physical objects could be literally any kind of object that could be presented: a candle, some kind of liquid, a tree or even a church. They could also be representations of other physical objects, like a drawing or a painting. Events consisted of real-time happenings. For these events and physical objects to be presented, the audience would sometimes have to be relocated to a certain location. For example, if the presenter wanted to present a theatrical play, the audience had to be relocated to the theatre where the play could be given. Typical examples of presentations from the past are the Christmas Lectures by Michael Faraday. One of these lectures, dating from the year 1855, is illustrated in figure 1.1 [22]. During the lectures Faraday introduced people to chemistry, heat, giant soap bubbles, astronomy and more. Faraday would stand behind a desk with all kinds of physical objects and would point at these objects whenever the subject turned in that direction. In figure 1.1 Faraday is pointing at a histogram depicted with blocks. This histogram and the objects on the table are examples of physical objects that could be presented. Not only did Faraday present physical objects during the lectures, various experiments were demonstrated as well. These experiments are examples of events that could be presented.

Presentations before the use of presentation software not only allowed the presentation of all kinds of physical objects and events, but made all kinds of dynamics possible as well. This can be explained using the example of Faraday's lectures. If Faraday wanted to present some physical object that was on the desk in front of him, there were more options than just pointing the object out. The object could be picked up and held in front of the audience, it could be passed around the audience, and so on. There were many options that could be used to present the physical object. Furthermore, these kinds of presentations had no fixed order. At any point the presentation could go in a different direction than originally planned. For example, if someone asked a question, the presenter could give an in-depth answer to this question and perhaps show physical objects to make the answer clearer. Afterwards the originally intended presentation could be continued, or even more questions could be asked. From this point onwards these early presentations are called "authentic presentations".


Figure 1.1: Michael Faraday during one of his Christmas Lectures in the year 1855, source: [22]

1.2 Developments in the manner of presenting

1.2.1 The development of slides

Before taking photos became common and cheap, presenting an event or physical object that was not nearby was mostly rather bothersome and sometimes nearly impossible. It meant that the audience had to be relocated to the correct location or that a drawing, a painting or an expensive photo had to be made. Relocating the audience was time-consuming and sometimes could not be done at all if the distance was too large. Creating a painting or shooting a photo was also expensive. Thus, the best option was usually to make a quick drawing. However, these drawings had another disadvantage: they could be used to provide a picture of a physical object, but they missed the details needed to give a complete picture. So the availability of cheap photos was a godsend. Photos made it more feasible and affordable to capture images of events or physical objects that were not immediately available. Projectors made it possible to display magnified photos on a board. This resulted in presentations consisting of a collection of clear, thorough static projected images (or "slides").

These slides enriched presentations by making concepts more vivid and simpler to clarify, which made them powerful. However, these slides came with a cost. Where in the past a presentation had no fixed structure, the presenter now had to work through a static set of slides. The freedom to change a presentation at any moment had vanished and navigation became more constrained. The set of objects, which are individual elements of a presentation, changed as well. In the past these objects only consisted of physical objects (defined under section 1.1) and sound (for example, the voice of the presenter, the ringing of a bell, the singing of a choir and so on). With slides a new type of object became available, namely projected images. These projected images became one of the main objects used in presentations, which led to the diminishing of a presentation's dynamics. As explained earlier, there were many ways to present an object in the past, but these projected images could only be presented in a fixed position. This "spatial versus temporal" aspect introduces another topic of this thesis.

All objects have a temporal and a spatial aspect. The temporal aspect of an object refers to the changes of an object in time. For example, an object can change over time by being visible during the display of one slide and then vanish afterwards. The spatial aspect of an object refers to the changes of an object in space. An example of this is the repositioning of an object.


Currently almost all presentations are created with the use of presentation software. This software has changed the way of presenting even more. The major part of presentations still consists of a static set of slides, meaning that the problems of forced navigation and a shortage of dynamics still exist. For some presentation applications the problem of forced navigation has become even worse. In these applications the presenter always has to navigate through a set of slides in one fixed order. According to Andreas Dieberger et al., awkward pauses and interruptions happen all too often, and once the narrative side-trip is concluded, the audience has to follow the presenter through the same steps in reverse, trying to resume the main narrative from where it was interrupted [4]. For example, if a question is asked about content from a previous slide, the presenter first has to navigate back to the correct slide. After the question has been answered, the presenter has to go through the same slides again to get back to the current slide. This process is troublesome. However, presentation software did improve another aspect of presenting. The set of usable objects on slides was enlarged to contain videos and (recorded) sound as well, but this increase was not enough. Modern-day computers run a lot of applications. Presenting interactive applications that can be intervened with (such as simulations or a running editor) can sometimes be a handy way to clarify a subject. At the moment it is not possible to embed these applications in modern presentation software and display them on slides. Currently, presenting such applications means that the presentation software has to be minimized first. After that the specified application can be started, and later on the presentation window has to be enlarged again. Meanwhile the audience is able to see everything that is happening: the desktop is visible and the audience can see irrelevant processes that are running outside of the presentation software.

To be able to distinguish the objects specified before and compare these objects to one another, two terms are introduced: "static objects" and "dynamic objects". Dynamic objects are active applications where live interaction is possible, e.g. a running editor, a simulator, etc. These are the kind of applications that can be intervened with during a presentation. Static objects are defined as all objects that do not fall under dynamic objects, such as sound, images and videos, where the only interaction is moving and resizing their window.

1.2.2 The development of videos

Another invention that had an impact on the manner of presenting was video. The first videos ever made were silent motion-picture films that, as the name already implies, did not contain sound. Despite this absence, these films were perfect for demonstrating events. The earliest example of a motion-picture film is the Roundhay Garden Scene by Louis Le Prince (see figure 1.2) [7]. This film features Adolphe Le Prince, Sarah Whitley, Joseph Whitley and Harriet Hartley walking around in the Oakwood Grange garden of Joseph and Sarah Whitley. Although the film is short and only contains a few frames, it gives a more complete picture of the walk than any single picture could have given. Films were better at presenting events than any other object at that time, and they quickly made their entrance into the presentation setting.


Figure 1.2: Roundhay Garden Scene 1888 by Louis Le Prince, source: [7]

Thus, videos enriched presentations by providing a more fitting way to demonstrate events, but that is not their only impact on presentations. Videos could also be used to fully replace the authentic way of presenting, especially when sound could finally be added. Current examples of this are documentaries. Documentaries have all the characteristics needed to present a topic: commentary can be used to explain the topic and graphics can be used to clarify the commentary. However, replacing the authentic way of presenting with videos comes at a cost. Videos are linear, meaning they can only be played one way. This means that the freedom to navigate is very limited. One can fast-forward a video, rewind it or pause it, but that is it. The content is fixed. Just like slides, videos also reduce the dynamics of a presentation. The presenter decides upfront how the content will be recorded and this cannot be changed while presenting. Another troublesome aspect of videos is editing them. Editing videos means that content on individual frames has to be adjusted, which can be done with video editing applications. However, the editing options these applications provide are limited.

1.3 The Offline Presentation Composer

Modern presentation software has little dynamics, as objects are displayed in fixed positions. Additionally, there is no option to embed dynamic objects (active applications where live interaction is possible) and the software also takes away the user's freedom to navigate. An application called the "Live Presentation Composer" (the LPC) has been created to solve these problems [8]. This application makes it possible to create so-called "Dynamic Presentations" at the time of presenting. These are presentations that have none of the issues that presentations created by modern-day presentation software have. In this respect these presentations are comparable to authentic presentations. The main difference is that dynamic presentations are run by computer software and allow the usage of both static and dynamic objects.

Presentations created with the LPC can be saved and replayed at any time. However, the option to edit these presentations does not exist. One way to edit these presentations would be to save them as videos and edit them with a video editing application. This is quite restrictive, as explained under section 1.2.2. A more appropriate approach would be to create a new editing application. A presentation consists of objects and their manipulations. These manipulations in turn consist of a temporal and a spatial aspect. To display dynamic presentations and provide an option to edit them, these objects and their manipulations could be rendered in an interactive timeline. This makes individual object manipulations editable, which makes the editing of dynamic presentations more feasible than with the use of video editing applications. The reasoning behind the choice of such a timeline is explained in further detail under section 2.4.


This thesis proposes and implements a prototype of a timeline-based editor for the LPC, called the "Offline Presentation Composer" or, in short, the OPC. The questions that arise here are:

• In what manner is it possible to create an editor for presentations created with the LPC?

• How suitable is the current dynamic presentation save format for creating an editor for presentations created with the LPC?

• What programming platform is suitable for an editor for presentations created with the LPC?

• In what manner is it possible to expand the OPC to allow the generating of dynamic presentations from scratch?


CHAPTER 2

Theoretical background

2.1 The principles of presentation applications

Nowadays there is a lot of different software available to create presentations. These applications differ from one another and all have their own characteristics. Some applications recognise the problems of modern presentations that have been mentioned under section 1.3. In an attempt to solve these problems, these applications have come up with different solutions. These solutions are shown in table 2.1. The column "Extra Dynamics" shows the aspects per presentation application that allow for more dynamics in a presentation compared to the other presentation applications listed. All of the presentation applications listed allow the use of images, text and videos. The column "Extra Usable Objects" shows all extra objects that can be used per presentation application compared to the other presentation applications. Finally, the column "Extra Freedom to Navigate" shows the aspects per presentation application that allow for more freedom to navigate.

Table 2.1: Presentation aspects to improve the issues of modern presentations

Application Name     | Extra Dynamics | Extra Usable Objects | Extra Freedom to Navigate                      | Slide-based
Microsoft PowerPoint | Animations     | Audio                | Hyperlinks                                     | Yes
Powtoon              | Animations     | Audio                | -                                              | Yes and no
Slides               | -              | -                    | Hyperlinks, horizontal and vertical navigation | Yes
Visme                | Animations     | Audio, iframes       | Hyperlinks                                     | Yes
Multipresenter       | -              | -                    | Three modes to navigate                        | Yes
Prezi                | Animations     | Audio                | Presentation canvas                            | No

Microsoft PowerPoint recognised that dynamics were scarce in modern presentations. To somewhat solve this problem, "animations" were introduced. Animations are effects that can be added to make an object appear, disappear or move. These effects can adjust the position, the size and the color of objects. Powtoon is a presentation application that sees this problem as well. It is a standalone application that can also be used to enhance Microsoft PowerPoint [25]. The application provides a more advanced editor for animations than Microsoft PowerPoint does, and animations created with Powtoon can be imported into PowerPoint, as can be seen in figure 2.1. Microsoft PowerPoint presentations can also be exported to Powtoon. Visme and Prezi allow the use of animations too, but, just like Microsoft PowerPoint, in a less extensive way than Powtoon.


Figure 2.1: Powtoon animation creation example, source: [20]

Visme is a presentation application that attempts to tackle the problem of too few usable objects in modern presentation software. It has an extensive set of objects that can be used in a presentation. It also gives the user the possibility to add iframes to a presentation [9]. Iframes are HTML documents that are embedded inside another HTML document. With the use of iframes a user can add objects to a presentation that are not natively available in Visme. This is simply done by placing the code of the external online content inside an iframe, as can be seen in figure 2.2. However, some web-based content providers do not allow their code to be embedded in an iframe and therefore not all online content can be added to a presentation in Visme. Besides, Visme does not provide the possibility to fully embed dynamic objects in a presentation.

Figure 2.2: Inserting code in Visme iframe, source: [9]

Most presentation software makes use of a static set of slides. Powtoon, Microsoft PowerPoint, Slides and Visme all have this property. This limits the user's freedom to navigate, as explained under section 1.2.1. Powtoon is somewhat different in this aspect. It does provide an option to display created presentations in the form of slides, but it also gives an option to display these presentations as videos [24]. However, this does not improve the user's freedom to navigate, as videos are strictly linear (there is only one path from the start of the video to the end). Some slide-based presentation applications provide options to improve the user's freedom to navigate. For example, Microsoft PowerPoint, Slides and Visme allow the user to add hyperlinks to slides. Hyperlinks are links that can link any slide to another. This gives the user more freedom, but it does not solve the entire problem. Slides comes with an additional solution. Instead of navigating in just one direction, Slides allows the user to navigate through slides both in horizontal and vertical directions. Multipresenter gives the user even more freedom. It provides three modes [10]. The first mode shows the user two slides on two separate screens. The user can choose what slides are shown on each screen. The second mode gives the user more context about a slide by showing one to four of the previous slides. Finally, the third mode allows the user to choose which slide is fixated on one screen while the user can use the other screen to navigate through the set of slides. The user can then adjust the fixated slide by dynamically dragging content to the screen. This last mode gives the user the most freedom to navigate and is displayed in figure 2.3.

Figure 2.3: Multipresenter’s third mode, source: [10]

Not all presentation applications are slide-based; some applications allow the presenter to create a canvas that can be explored during a presentation. Prezi is one of these applications [27]. Its user interface is shown in figure 2.4. The idea behind such a canvas is that the presenter puts his thoughts into a picture or so-called "Mind Map". Then the presenter defines a path on this canvas, and that path will be displayed during the presentation. This path can be made by moving across the canvas and zooming in and out. Every time the presenter has found a partition of the canvas that is desirable to display in the presentation, this partition can be saved as a frame, which is basically the same as a slide. Thus, the final path still consists of a static set of slides. This means that the presenter's freedom to navigate during a presentation remains limited. However, such a canvas does bring the advantage of giving the audience an overview of the context of the presentation.

Figure 2.4: Prezi’s user interface, source: [27]

2.2 The edit principles of presentation applications

All presentation applications discussed have an editing environment. Each of these environments contains an overview of the complete set of slides (in Prezi's case, frames). Here new slides can be added or removed. Most of these overviews are just one slide wide, but MultiPresenter uses an overview that is two slides wide, as can be seen in the left part of figure 2.5. It has been designed this way because MultiPresenter uses two monitors [10]. A proportion of each of the environment interfaces consists of a window to display a single slide. In this window the user can edit the aspects of the displayed slide, like adding content and adding animations to this content (this last aspect is not available in all applications, as can be seen in table 2.1).


Figure 2.5: Multipresenter composing and editing environment, source: [10]

Prezi's frame overview is somewhat different from the slide overviews of the other presentation applications. The frame overview consists of different layers and the path between the different frames is displayed here [23]. The first frame of the first layer contains the canvas on which all other frames are located; here frames can be added and linked to one another. To each of the frames new frames can be added, which results in a new, deeper layer of frames.

Powtoon has lookup tabs where objects (shapes, characters, text, sound and so on) can be found, which can be used for animations [17]. These objects can simply be dragged into a fitting slide. Then every object can be edited with a toolbox and animations can be applied to it. Powtoon also provides a timeline for each of the objects, as can be seen in figure 2.6. The user can use this timeline to determine the timing and the duration of the effects of these objects.

Figure 2.6: Timeline for objects in Powtoon, source: [17]

2.3 The edit principles of Music and Choreography Editing and Composing Software

Musical pieces can be divided into various sound objects. All objects have a temporal and a spatial aspect, as stated in section 1.2.1. Sound objects are no exception. The temporal aspect of sound consists of the changes of sound in time. These changes include the emergence and vanishing of sound. The spatial aspect of sound refers to the changes of sound in space. These are the perceptual changes of sound, like changes in pitch.

In music editing and composing software, timelines are often used to give the user an overview of the music piece being worked on. Magix Music Maker 17 (shown in figure 2.7) is an example of such a program. The timeline rendered in this program displays the temporal change of music, showing when a music object is active and when it is not.


Figure 2.7: Magix Music Maker 17, source: [11]

In addition to music editors there are also choreography editors like Dance Designer (see figure 2.8). Here the editor facilitates the editing of the spatial aspect of an object.

Figure 2.8: Dance Designer, source: [3]

2.4 The temporal and spatial aspects

As mentioned under section 2.3 timelines (like the instrumental timeline shown in figure 2.7) are a good way to display temporal manipulations of presentation objects [6].

A timeline is a two-dimensional rendering with (most often) the objects at hand along the vertical axis and the lifespan of the objects along the horizontal axis. If a timeline should also describe a 2D or 3D spatial location, the vertical object axis would be rendered obsolete and the timeline representation would be lost. However, in the case of a 3D storyboard, both the lifespan and the spatial location of the objects can be rendered while maintaining the timeline representation. Diorama Engine is an example of such a storyboard [16].

This means that timelines can be used to display the spatial aspect of objects. However, editing objects in such a representation is quite challenging, because the same object is visible during its lifetime in several storyboard frames. Therefore, for editing purposes reduction of one


2.5 The Live Presentation Composer functionality and user interface

The interface of the LPC can be divided into two parts [8]. The first part, the inventory zone, is invisible to the audience. Only the composer of the presentation is able to see this zone. Instances of images, videos and any kind of third-party application can be added here. These objects can be manipulated at will, meaning they can be changed to a desired state (for example, they can be resized) before entering the "Virtual Presentation Screen Section" (VPSS). Since the composer fully supports multi-touch, every manipulation can be done by touch. Whenever an object is dragged into the VPSS it is mirrored to the presentation screen. This makes the object visible to the audience. The object can still be manipulated in the same way as in the inventory zone.

Figure 2.9: The setup of the LPC, source: [8]

2.6 Conserving presentations for future use

Often it is desired to use the same presentation for multiple occasions. This means that the user has to be able to save a recently created presentation, which can then be loaded, possibly from a different device, at a different time. Dynamic presentations consist of a set of manipulated objects and their manipulations over time. Thus, in order to save these presentations, not only the objects have to be saved, but the object manipulations as well. Other things have to be considered as well when finding a way to store presentations. Dynamic presentations are created live and they give the user a lot of freedom. This is a positive aspect, but it also makes these types of presentations error-prone. To be able to reuse dynamic presentations it is therefore also important that they are saved in such a way that they can be easily edited after creation.

The markup language XML can be used to describe and identify presentation object information accurately and unambiguously [21]. Besides, it has the benefit that XML information can be manipulated programmatically. In other words, multiple XML documents can be pieced together, taken apart and converted into any other format without loss of information, which makes extensive editing of XML possible. Lastly, XML allows sets of documents, which are all of the same type, to be created and handled consistently and without structural errors. This is a must to be able to save and load presentations correctly. All in all, XML makes for a suitable format for saving dynamic presentations. The LPC saves presentations in two separate XML files:

• "Resources" file. This file contains information about all objects that are used in a presentation. Their paths, initial sizes and initial positions are listed. An example of such a file can be seen in figure 2.11.

• "Manipulations" file. This file is built up of manipulations over time. When objects enter the VPSS, their current sizes and the positions from where they enter the VPSS are stored in this file. All manipulations that are performed on these objects while they remain in the VPSS are stored in this file as well. An example of such a file can be seen in figure 2.10.

Figure 2.10: An example of a manipulations file

Figure 2.11: An example of a resources file

The LPC distinguishes three different object manipulations:

• Size change (the change in size of an object)

• Position change (the change in position of an object)

• Visibility change (the change in visibility of an object)

The visibility change manipulation always happens at a single point in time. Size change and position change manipulations can happen either at a single point or over a timespan. The first case refers to a size change manipulation or a position change manipulation of an object entering the presentation screen. When the object enters the presentation screen, the single-point manipulations are used to provide information about the current size and position of the object. Manipulations that happen over a timespan are manipulations performed while the object is already visible on the presentation screen. This type of manipulation consists of multiple subsequent manipulations (see, for example, the consecutive position change manipulations in figure 2.10).
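To make the save format more concrete, a minimal sketch of what the two files could look like is given below. The element and attribute names (Recording, VisibilityChange, SizeChange, PositionChange, Visibility, Time, index) are taken from the description in this section and section 4.5; the exact layout and any remaining attributes of the LPC's format are assumptions.

```xml
<!-- Hypothetical resources file: one entry per static object. -->
<Resources>
  <Resource index="0" Path="images/graph.png" Width="320" Height="240" X="100" Y="80"/>
</Resources>

<!-- Hypothetical manipulations file: manipulations ordered in time. -->
<Recording>
  <VisibilityChange index="0" Visibility="true" Time="1000"/>
  <PositionChange index="0" X="100" Y="80" Time="1000"/>   <!-- single point: entry position -->
  <PositionChange index="0" X="150" Y="120" Time="1500"/>  <!-- start of a timespan sequence -->
  <PositionChange index="0" X="400" Y="260" Time="2500"/>
  <VisibilityChange index="0" Visibility="false" Time="6000"/>
</Recording>
```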

To replay a presentation, the manipulations file can be traversed in sequential order, and in the same way it can be loaded into an editor. Sound that has been recorded during a presentation is saved to a single sound file and is played whenever a presentation is being replayed using the LPC.

2.7 Proposed improvements

As Van der Ham described, the user interface of the LPC uses standard MFC (Microsoft Foundation Classes) basic user controls [8]. Besides, MFC is not really object-oriented, which means that writing applications in MFC requires a lot of code [2]. However, at that time MFC was chosen since it provided important APIs for both the capturing of screenshots and the sending of input data to applications. MFC also exceeded WPF (Windows Presentation Foundation), the only other Windows platform that provided these APIs, in terms of performance. UWP (Universal Windows Platform) is optimised for graphics-intensive scenarios and it is also more suitable for designing a sophisticated UI than MFC, among other things because it separates


solve the problems of MFC [8]. Project Centennial allows combining UWP and MFC [12]. This means that MFC can be used for the capturing of screenshots and the sending of input data to applications while UWP can be used for the rest of the code. However, Microsoft recently released the missing APIs (”InputInjector” and ”Screen capture”) for UWP [14][13]. This means that it is now also possible to fully replace MFC by UWP.

Currently the LPC does not provide options to edit presentations after saving [8]. Presentations can be replayed, but they cannot be edited after presenting. Thus, a visual editing tool is proposed. This editing tool will give an overview of a presentation in time in addition to its editing facilities. Potentially this editing tool can be implemented in such a way that the LPC is no longer needed as an offline editor. As explained under section 2.4, timelines can be used to display spatial and temporal aspects of presentation objects. The Vis.js Timeline JavaScript module provides an interactive visualization chart to visualize data in time [26]. It uses two datasets, one for "groups" and one for "items". Items are displayed in time and can be used to render object manipulations. Groups are used to group items together. The timeline has built-in options to add, move, remove, group and update items. Besides, it also has a built-in option to add new groups. For each of these events there are callback functions, which make it possible to extend these events with self-written code. Thus, the module is widely customizable. Since the module uses two datasets, the manipulations and resources files have to be converted in some way into these datasets so that the timeline can be rendered. This can be done with the DOMParser interface [5]. This interface provides the ability to parse XML or HTML source code from a string into a DOM Document. When editing is done, the datasets should be converted back to the manipulations and resources files, so that they can be loaded by the LPC. This can be done with the XMLSerializer interface [28]. This interface provides a method to construct an XML string representing a DOM tree, which can be saved to an XML file.
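As a rough illustration of this pipeline, the sketch below parses a manipulations XML string with DOMParser, builds the two Vis.js datasets, renders an editable timeline, and finally serializes the DOM document back to a string with XMLSerializer. It assumes the Vis.js library is loaded as the global vis; the XML element names and the dataset fields are assumptions for illustration, not the actual OPC code.

```javascript
// Parse a (hypothetical) manipulations file into a DOM document.
const xml = `
<Recording>
  <PositionChange index="0" X="100" Y="80"  Time="1000"/>
  <PositionChange index="0" X="400" Y="260" Time="2500"/>
</Recording>`;
const manipulationsDoc = new DOMParser().parseFromString(xml, "text/xml");

// Build the two Vis.js datasets: one group per presentation object,
// one item per manipulation.
const groups = new vis.DataSet([{ id: 0, content: "graph.png" }]);
const items = new vis.DataSet(
  Array.from(manipulationsDoc.getElementsByTagName("PositionChange")).map((el, i) => ({
    id: "pos-" + i,
    group: Number(el.getAttribute("index")),
    start: Number(el.getAttribute("Time")),
    content: "position change",
  }))
);

// Render an interactive, editable timeline in a container element.
const timeline = new vis.Timeline(
  document.getElementById("timeline"),
  items,
  groups,
  { editable: true }
);

// After editing, the DOM document can be serialized back to an XML string
// and written out as the manipulations file.
const updatedXml = new XMLSerializer().serializeToString(manipulationsDoc);
```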

Due to the fact that objects in presentations are sometimes linked to one another, spatial grouping of objects is proposed, see section 5.2.3. This is necessary to make presentations organised and it would also make the editing of presentations afterwards more intuitive.


CHAPTER 3

Design considerations

3.1 Workflow of the OPC and the LPC

By considering the extended workflow of the LPC with the OPC integrated, it is possible to demarcate the editor function of the OPC. The resulting workflow is shown in figure 3.1. The top part of the rectangle in this figure depicts the OPC, the bottom part the LPC and the middle part in between the dotted lines depicts both. With the addition of the OPC there are now two ways to create a presentation. A presentation can be made while presenting using the LPC, but it can also be made offline using the OPC. After a presentation has been created it can be saved to a so-called "presentation repository". This repository contains both a manipulations and a resources file (explained under section 2.6), combined with multiple static objects. This repository can then be loaded by either the OPC or the LPC. When the OPC loads a presentation repository it converts the manipulations file into a so-called timeline repository. This repository consists of both a "groups" and an "items" dataset, which are used by the Vis.js module to render an interactive timeline (mentioned in section 2.7). After the presentation has been loaded by the OPC, the user has various options to edit the presentation. When the user is done editing, the presentation can be saved back to the presentation repository. Edited object information can be saved to the resources file and edited object manipulations can be saved to the manipulations file. If the user decides to load a presentation repository with the LPC, two different subsequent actions can be executed. The first action is to replay a presentation and the second action is to extend the presentation, after which the presentation has to be saved back to a presentation repository. New static objects have to be added to the presentation repository, object information has to be added to the resources file and object manipulation information to the manipulations file.


Figure 3.1: Workflow of the OPC and the LPC

3.2 Application specification

To create a presentation editor for the LPC the following requirements have been set for the OPC:

• The characteristics of the authentic way of presenting should be preserved, so the OPC will support:

– All kinds of dynamics in a presentation. Objects should be freely manipulable, meaning both the spatial and temporal aspects of objects should be manipulable.

– All kinds of objects in a presentation. Both static and dynamic objects should be displayable.

– Free navigation through a presentation. Presentations should be able to go in any direction at any time.

• Correcting mistakes or making adjustments to a presentation in an intuitive manner.

• Manipulations of all objects resulting from the LPC.

• Editing of a presentation without loss of important information about objects and their manipulations.

• Edited presentations should be saved in such a format that they can be loaded by the LPC.

3.3 Static object visualization


As explained under section 2.6, visibility change manipulations happen at a single point in time. However, since objects are visible during a timespan, visualizing these manipulations over a timespan instead of at a single point results in a clearer and more intuitive overview.

To display the spatial aspect of object manipulations a part of the user interface is used for the rendering of single manipulations. To give a full overview of presentations another part of the user interface can be assigned to provide a preview of a presentation, see section 5.2.4.

3.4 Correcting mistakes made to presentations

The LPC provides a lot of freedom to manipulate objects. This makes it error-prone. To correct mistakes in size change and position change manipulations of objects, all intermediate manipulations should be corrected. The solution is to adjust size change and position change manipulations so that these always result in a linear transition of an object over time. An example of a position change manipulation correction is illustrated in figure 3.2.

Figure 3.2: Example of the correction of a position change manipulation

3.5 Static object manipulation edit options

As mentioned under section 2.6 the LPC distinguishes three different object manipulations: size change manipulations, position change manipulations and visibility change manipulations. To make static objects fully editable, the following options should be possible:

• Starting and ending values of position change and size change manipulations should be editable. This means that the starting and ending position of position change manipulations and the starting and ending size of size change manipulations should be editable.

• Instances of all kinds of object manipulations should be addable and removable.

• Timespans of object manipulations should be extendable and reducible.

• Object manipulations should be movable.


3.6 The impact of edit options

Editing an object manipulation can have an impact on other object manipulations of the same object. This is due to the fact that some object manipulations are connected to one another. This means that such modifications can result in the removal of certain object manipulations or in changes to the characteristics of these manipulations (the size or position of the object). To demonstrate that the OPC can handle these interactions, it is enough to show that the OPC is able to cope with object interactions for visibility change manipulations. Interactions of size change and position change manipulations are similar. As mistakes can be made when editing object manipulations, a confirmation message should be shown whenever an object is about to be deleted.

3.7 Sound editing

With the LPC it is possible to record sound during the creation of a presentation. This sound is then saved to a single avi file and played when a presentation is replayed. To be able to create presentations with the OPC similar to those constructed with the LPC, and to be able to edit presentations created by the LPC, the OPC must be able to record and edit sound, see section 5.2.1.

3.8 Editing dynamic objects

One key feature of the LPC is the embedding of dynamic objects. This was necessary to achieve one of the three characteristics that define the authentic way of presenting, namely the possibility to show almost every kind of object during a presentation. To save these dynamic objects for future use, the LPC records them as videos. When replaying a presentation these videos are played at the same timestamp as in the original presentation.

Adding extra dynamic objects to a presentation in the OPC can be done in the same way. Editing such objects means that the input of the application has to be adjusted. Of course this cannot be done if the saved dynamic object is in video format, see section 5.2.5.

3.9 Development setup choices

Since the LPC runs on Windows, it is a pragmatic choice to develop the OPC prototype on Windows as well. As discussed under section 2.7, UWP is best suited for graphics-intensive scenarios. Currently the OPC does not contain a lot of intensive graphics operations. However, the current version of the OPC is not a final version and a lot of features may be added in the future. These features could include intensive graphics operations, like the addition of a 3D storyboard or the addition of complex object animations. UWP is also well suited for designing a sophisticated UI. The intention of this thesis is not to design the perfect UI; however, it is a perk for later adaptation of the application. Finally, UWP has API support for all features used by the LPC. That makes it possible to recreate every action that is executed when creating a presentation with the LPC. This is useful, since editing features may require certain actions to be copied for either the functionality or the intuitive aspect of these features. Due to these characteristics UWP has been chosen as the platform to build this application on.


CHAPTER 4

Implementation

4.1 Application design

The application design is illustrated in figure 4.1 and can be divided into three parts:

• Interactive Timeline

• Animation Preview Window

• Presentation Preview Window


4.2 Interactive timeline

4.2.1 Converting a presentation repository into a timeline repository and visualizing objects and their object manipulations

A number of phases have been left out of the workflow in figure 3.1 to maintain a clear overview. When zooming in between the Presentation Repository and the "Edit Presentation" phase in this figure, the following sub-workflow becomes visible:

Figure 4.2: Repository conversion and timeline visualization sub-workflow

During the phase of loading a presentation with the OPC, the manipulations and resources files are parsed into two separate DOM documents (a resources DOM document and a manipulations DOM document), as described under section 3.1. An interactive timeline has been implemented in the OPC using the Vis.js JavaScript module mentioned in section 2.7. Due to time restrictions only images have been added to the timeline, but videos could be added in a similar way, as video windows have the same object manipulations as images. As mentioned under section 3.1, the Vis.js module uses two datasets to render a timeline: a groups dataset and an items dataset. First the resources DOM document is looped through and the groups dataset is filled with JavaScript objects containing an "id" and a path to an image. Afterwards, the manipulations DOM document is looped through, during which the items dataset is filled with JavaScript objects containing the following information about object manipulations:

• Starting values of the object manipulation (start sizes, start position and start time).

• Ending values of the object manipulation (end sizes, end position and end time).

• The id of the corresponding group (or object) the JavaScript object is linked to.

• The type of the object manipulation.

• An id to distinguish the JavaScript object.

During this loop two variables are kept, which are used to track the first and the last object manipulation of sequences of the same kind of object manipulation of the same object (such as the position change manipulation sequence in figure 2.10). The two objects are combined into one single JavaScript object, which is added to the items dataset.
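A minimal sketch of this collapsing step is shown below; the function names and item fields are illustrative assumptions, not the actual OPC code.

```javascript
// Collapse a run of consecutive manipulations of the same kind on the same
// object into a single timeline item that spans from the first to the last
// manipulation of the run.
const items = new vis.DataSet();
let runStart = null;   // first manipulation of the current run
let runEnd = null;     // last manipulation seen so far

function onManipulation(m) {          // m = { group, type, time, ... }
  if (runStart && runStart.type === m.type && runStart.group === m.group) {
    runEnd = m;                       // still in the same run: extend it
  } else {
    flushRun();                       // a new run starts: emit the previous one
    runStart = runEnd = m;
  }
}

function flushRun() {
  if (!runStart) return;
  items.add({
    id: "run-" + runStart.group + "-" + runStart.time,
    group: runStart.group,
    type: "range",                    // item with a start and an end time
    start: runStart.time,
    end: runEnd.time,
    content: runStart.type,
    manipulationType: runStart.type,
  });
  runStart = runEnd = null;
}
```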

Additionally, during the loop it can occur that an object manipulation is immediately followed by a different kind of object manipulation. The type of this single-point object manipulation is checked and, in case of a visibility change manipulation with the attribute Visibility set to true (indicating that an object becomes visible on the presentation screen), it is added to a dictionary with as key the index of the corresponding group. When another single-point visibility change manipulation is encountered, which is linked to the same group and has the attribute Visibility set to false (indicating that an object becomes invisible), the other visibility change manipulation is removed from the dictionary. Afterwards, a JavaScript object is added to the items dataset, with as start time the time of the visibility change manipulation with the attribute Visibility set to true and as end time the time of the other visibility change manipulation. During the timespan of this JavaScript object the corresponding presentation object is visible. Other kinds of single-point object manipulations that can be encountered are position change and size change manipulations. These manipulations are also added as JavaScript objects to the items dataset. However, these JavaScript objects only contain starting values and not ending values, as they only indicate the initial values of an object when it becomes visible.
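The pairing of visibility change manipulations described above could look roughly like the following sketch. The element and attribute names follow section 2.6, manipulationsDoc is the parsed manipulations DOM document from the earlier sketch, and the helper names and item fields are assumptions.

```javascript
// Pair single-point visibility change manipulations into timeline items with
// a start and an end time. openVisibility plays the role of the dictionary
// keyed by group index described above.
const openVisibility = {};          // group index -> time the object became visible
const visibilityItems = new vis.DataSet();
const elements = Array.from(manipulationsDoc.getElementsByTagName("VisibilityChange"));

for (const el of elements) {
  const group = Number(el.getAttribute("index"));
  const time = Number(el.getAttribute("Time"));

  if (el.getAttribute("Visibility") === "true") {
    openVisibility[group] = time;   // object becomes visible: remember when
  } else if (group in openVisibility) {
    // Object becomes invisible: emit one item spanning the visible period.
    visibilityItems.add({
      id: "vis-" + group + "-" + time,
      group: group,
      type: "range",
      start: openVisibility[group],
      end: time,
      content: "visible",
      manipulationType: "VisibilityChange",
    });
    delete openVisibility[group];
  }
}
```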

Groups are rendered along the vertical axis of the timeline and the corresponding object manipulations are rendered along the horizontal axis, as can be seen in figure 4.3.

Figure 4.3: Timeline visualization

4.2.2 Editing

As mentioned under section 2.7 the Vis.js module has built-in options to add, move, remove, group and update items. It also has a built-in option to add new groups. For each of these events there are callback functions, which make it possible to add self-written code. When zooming in on the ”Edit Presentation” phase more phases become visible as there are multiple options to edit items on the timeline (see figure 4.4). Due to time restrictions, code has only been added to the callback functions of the moving and the update events. However, other callback functions can be extended in a similar way.


Figure 4.4: Timeline editing sub-workflow

The moving event indicates that an item is either moved, reduced or extended. After this event the Vis.js module updates the start and end time of the item and renders the resulting item on the timeline. A function has been added to the callback function of the moving event, which checks whether an item is a position change, a size change or a visibility change manipulation. In case of a size change or a position change manipulation the function terminates. However, to show that the OPC can cope with object interactions, code has been added for visibility change manipulation items. For such items the first following size change and position change manipulation items are used to determine the starting values of the object becoming visible on the presentation screen (see figure 4.3). These items have to be pinned to the start of the visibility change manipulation item, or otherwise there is no way of knowing the starting values of the corresponding object when it becomes visible. Thus, these items are looked up in the items dataset whenever a visibility change manipulation item is moved, extended or reduced on the timeline. The Vis.js module provides a filter function that allows the filtering of items with a custom filtering function. This function is used to filter all object manipulation items of the corresponding group of the items dataset in the range of the visibility change manipulation item. The wanted items are the first size change and position change manipulation items of this filtered dataset. The start times of these items are adjusted to the start time of the corresponding visibility change manipulation item, resulting in the pinning of these items to the start of the visibility change manipulation item.
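A sketch of how such a moving callback could be wired up with Vis.js is given below. The manipulationType field and the overall structure are assumptions about the OPC code; only the Vis.js calls used (the onMoving option, DataSet.get with a filter, DataSet.update) are part of the module's API, and items is the items dataset from the earlier sketches.

```javascript
// Timeline options with a "moving" callback: when a visibility change item is
// moved, extended or reduced, pin the first following size change and position
// change items of the same object to its new start time.
const options = {
  editable: true,
  onMoving: function (item, callback) {
    if (item.manipulationType === "VisibilityChange") {
      // All items of the same group that fall inside the new visible range.
      const inRange = items.get({
        filter: (other) =>
          other.group === item.group &&
          other.id !== item.id &&
          other.start >= item.start &&
          other.start <= item.end,
      });
      // Pin the earliest size change and position change item to the new start.
      for (const type of ["SizeChange", "PositionChange"]) {
        const first = inRange
          .filter((o) => o.manipulationType === type)
          .sort((a, b) => a.start - b.start)[0];
        if (first) {
          items.update({ id: first.id, start: item.start });
        }
      }
    }
    callback(item); // accept the (possibly adjusted) move
  },
};
```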

Additionally, it can happen that by moving or reducing a visibility change manipulation item, other items fall partly or completely outside of the range of the visibility change manipulation item. Consequently these items are removed from the items dataset. However, this could result in the unwanted removal of items. Therefore, a confirmation message is shown whenever this happens. When extending a visibility change manipulation item it can happen that visibility change manipulation items become adjacent or overlap. In this case there is no time in between these items and thus the visibility change manipulation items are combined into one item. The start and end times of the combined item are adjusted to cover the timespan of the original items. The superfluous visibility change manipulation items are found with the use of the filter function of the Vis.js module and are removed from the items dataset.

To show that editing the starting and ending values (other than the start and end times) of items is possible, code has been added to the "update" callback function. The update event is called whenever an item is double-clicked. The extended update callback function checks the type of the item. A visibility change manipulation item contains no starting and ending values other than a start and an end time, hence the function terminates in this case. However, it does not terminate in case of a size change or a position change manipulation item. In these cases the function checks whether the item is located at a single point in time or has a timespan. If an item is located at a single point in time, windows are shown prompting for new starting values for this item. In case of a position change manipulation a new starting position can be provided and in case of a size change manipulation a new starting size can be provided. In case the item has a timespan, not only these windows are shown, but extra windows are also shown prompting for new ending values.

By providing a new ending position or a new ending size, the next size change or position change manipulation item is impacted. This item's starting values are updated based on the ending values provided. This has been done to avoid teleport-like behavior (objects instantly moving from one position on the presentation screen to another) when altering position change manipulation items and to avoid the occurrence of instant size changes when altering size change manipulation items.
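As an illustration, the update callback could propagate new ending values roughly as follows. The sketch only handles position change items (size change items would be handled analogously with width and height), uses simple prompt windows for input, and the item fields are assumptions rather than the actual OPC code.

```javascript
// Passed as options.onUpdate to the timeline: triggered by a double-click.
function onUpdate(item, callback) {
  if (item.manipulationType !== "PositionChange") {
    callback(item);                         // size changes handled analogously
    return;
  }
  item.startX = Number(prompt("New start x", item.startX));
  item.startY = Number(prompt("New start y", item.startY));
  if (item.end) {                           // the item has a timespan
    item.endX = Number(prompt("New end x", item.endX));
    item.endY = Number(prompt("New end y", item.endY));
    // Let the next position change of the same object start where this one
    // now ends, to avoid teleport-like jumps on the presentation screen.
    const next = items
      .get({
        filter: (o) =>
          o.group === item.group &&
          o.manipulationType === "PositionChange" &&
          o.start >= item.end,
      })
      .sort((a, b) => a.start - b.start)[0];
    if (next) {
      items.update({ id: next.id, startX: item.endX, startY: item.endY });
    }
  }
  callback(item);                           // commit the edit
}
```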

4.3 Animation Preview Window

The animation preview window (see figure 4.1) is updated every time a size change or position change manipulation item with a timespan is selected in the interactive timeline. The corresponding object is then rendered in this window. For a size change manipulation item the object is positioned at the top left of the window, and for a position change manipulation item the object is positioned at the same relative position as it was on the presentation screen at the starting point of the corresponding manipulation in time. The starting position of a position change manipulation is calculated by dividing the x- and y-coordinates by the x and y presentation screen sizes respectively and multiplying the result by the x and y animation preview window sizes. For both a position change and a size change manipulation the sizes of the objects are scaled to the size of the animation preview window.

For a size change manipulation the size of the object in the animation preview window is either increased or decreased over a time period equal to the duration of the manipulation. This is done by incrementing the size of the object until it reaches the end size of the size change manipulation. If a position change manipulation item is selected, the position of the object in the animation preview window is changed over a time period equal to the duration of the manipulation. This is done by incrementing the x- and y-position of the object.
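The scaling and stepping described above could look roughly like this sketch; the function and parameter names are illustrative and the screen and window sizes are assumed to be known.

```javascript
// Map a position on the presentation screen to a position in the preview
// window by scaling with the ratio of the two sizes.
function toPreviewPosition(x, y, screen, preview) {
  return {
    x: (x / screen.width) * preview.width,
    y: (y / screen.height) * preview.height,
  };
}

// Step a DOM element from the start to the end position of a position change
// manipulation over the duration of that manipulation.
function animatePositionChange(element, manipulation, screen, preview) {
  const from = toPreviewPosition(manipulation.startX, manipulation.startY, screen, preview);
  const to = toPreviewPosition(manipulation.endX, manipulation.endY, screen, preview);
  const duration = manipulation.endTime - manipulation.startTime;   // milliseconds
  const steps = Math.max(1, Math.round(duration / 16));             // roughly 60 fps
  let step = 0;

  const timer = setInterval(() => {
    step += 1;
    const t = step / steps;                                         // progress 0..1
    element.style.left = from.x + (to.x - from.x) * t + "px";
    element.style.top = from.y + (to.y - from.y) * t + "px";
    if (step >= steps) clearInterval(timer);
  }, 16);
}
```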

4.4 Presentation Preview Window

An attempt has been made to implement the presentation preview window with JavaScript. JavaScript provides options to create simple animations by manipulating objects over an interval. However, JavaScript does not allow multithreading and timestamps of different intervals must differ [19]. This makes it impossible to animate multiple objects at once. As the limits of JavaScript have been encountered, other options to implement the presentation preview window should be explored, see section 5.2.4.

4.5 (Saving) presentation output

For the LPC to be able to load and replay a presentation after editing with the OPC, the timeline repository has to be transformed back into a presentation repository. Therefore, the groups and items datasets have to be translated into corresponding manipulations and resources files. Since the OPC currently only allows the editing of object manipulations, only the manipulations file has to be rewritten. This has been implemented with the XMLSerializer interface mentioned in section 2.7. First a recording element is added to this file, indicating the start of the XML file. Then all visibility change manipulations are retrieved from the items dataset. For every such manipulation two VisibilityChange elements are added to the manipulations file at the correct positions. For the first element the Visibility attribute is set to true, the Time attribute is set to the start time of the object manipulation and the index attribute to the index of the object. This element shows whenever a certain object becomes visible on the presentation screen. For the second element the Visibility attribute is set to false. The Time attribute of this element is set to the end time of the object manipulation and the index attribute to the index of the object. This element shows whenever a certain object becomes invisible.

After that all size change and position change manipulations are retrieved from the items dataset. For every size change and position change manipulation respectively, multiple SizeChange and PositionChange elements are added to the manipulations file based on the size of the timespan. The first SizeChange element that is added for a size change manipulation receives the starting size of the manipulation. For every millisecond in the timespan another such element is added behind it until the end of the timespan has been reached. The size change attributes are linearly incremented over these elements until the last element receives the end size of the corresponding size change manipulation. This has been done to create a smooth size change. The PositionChange manipulations are translated into XML elements in a similar manner. However, instead of continuously incrementing the size attributes, for these manipulations the position attributes are incremented.
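A sketch of this per-millisecond interpolation for a size change manipulation is given below, using the element and attribute names described above; the width and height attribute names and the item fields are assumptions.

```javascript
// Append one SizeChange element per millisecond of the item's timespan, with
// the size linearly interpolated between the start and end values.
function appendSizeChange(doc, recording, item) {
  const duration = item.endTime - item.startTime;          // in milliseconds
  for (let t = 0; t <= duration; t++) {
    const f = duration === 0 ? 1 : t / duration;           // interpolation factor 0..1
    const el = doc.createElement("SizeChange");
    el.setAttribute("index", item.group);
    el.setAttribute("Time", item.startTime + t);
    el.setAttribute("Width", Math.round(item.startWidth + (item.endWidth - item.startWidth) * f));
    el.setAttribute("Height", Math.round(item.startHeight + (item.endHeight - item.startHeight) * f));
    recording.appendChild(el);
  }
}

// Build a fresh manipulations document and serialize it to an XML string.
const outDoc = document.implementation.createDocument(null, "Recording");
// ... call appendSizeChange(outDoc, outDoc.documentElement, item) per item ...
const manipulationsXml = new XMLSerializer().serializeToString(outDoc);
```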

The AddWebAllowedObject method makes it possible to inject an instance of a native class from a Windows Runtime component into the JavaScript context of the WebView [15]. This method has been used to implement a function that saves the manipulations file on the user's device.


CHAPTER 5

Conclusion

5.1 Conclusion

The original OPC design consisted of three components: the presentation preview window, the animation preview window and the interactive timeline. The interactive timeline and the animation preview window have been implemented, but within the time and scope of the project it was not possible to implement the presentation preview window. Therefore, the OPC does not provide a good overview of presentations as a whole. To make a complete editor, this component should still be added. However, the implementation of the other two components shows that it is possible to create an editor for the static objects of presentations created with the LPC. Sound and dynamic objects can currently not be edited with the OPC, but for the OPC to be complete these functions should be available. All the previously mentioned modifications would make the OPC a fully operational editor for presentations created by the LPC.

The current dynamic presentation save format consists of static objects and two XML files. The markup language XML can describe and identify presentation object information accurately and unambiguously. Furthermore, XML can be edited extensively, which is necessary to be able to correct and adjust presentations. Also, the handling of XML files can be done consistently and without structural errors. All in all, the markup language XML is well suited for the saving of presentation object and presentation object manipulation information. That means that the current dynamic presentation save format is suitable for creating an editor for presentations created with the LPC.

Additionally, UWP is a suitable platform for the OPC. It is optimised for graphics-intensive scenarios, which allows the future addition of features that include intensive graphics operations. Besides, it is suitable for designing a sophisticated UI, which is a perk for later adaptation of this application. On top of that, UWP supports all features used by the LPC. This is useful, since editing features may require certain actions to be copied for either the functionality or the intuitive aspect of these features.

Finally, generating dynamic presentations from scratch with the OPC can be made possible by making a number of changes to the OPC. The adjustments mentioned earlier in this section should be implemented. Additionally, callback functions of the interactive timeline have to be extended to allow the addition of objects to the timeline and to a presentation. Other callback functions for the addition and removal of object manipulations have to be extended as well. After the correct implementation of all of these changes, the functionality to add dynamic objects from inside the OPC can be made available.


5.2 Future research

5.2.1 Sound addition and editing

The OPC is currently not able to record and edit sound. To complete the editor these functionalities have to be added. In practice most presenters do not talk all the time. It would therefore be more fitting to make use of multiple files, each containing a single sound fragment. In this way each sound fragment can be added to the interactive timeline, showing the user the duration of the fragment and its start and end points in time. The interactive timeline can then also be used to edit individual sound fragments instead of one big sound file.
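As a sketch of what this could look like, each sound fragment could be added to the vis.js items DataSet as a timeline item of its own. The dedicated "sound" group and the fragment fields used below are assumptions and not part of the current OPC.

```javascript
// Sketch only: represent one recorded sound fragment as a vis.js timeline item.
function addSoundFragment(items, fragment) {
  items.add({
    id: "sound-" + fragment.id,
    group: "sound",                                  // assumed separate group for audio
    content: fragment.fileName,                      // label shown on the timeline
    start: fragment.startTime,                       // milliseconds into the presentation
    end: fragment.startTime + fragment.durationMs
  });
}
```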

Additionally, it can be useful to add sound fragments other than recorded speech to a presentation. Such sound can, for example, be used to clarify some information during a presentation, but it can also be used for other purposes like entertainment.

5.2.2 Additional object transitions

Currently the OPC only allows linear spatial manipulations. This means that object manipulations always lead to a linear transition of an object over time. In order to give presentations more dynamics, different kinds of transitions should be added for these object manipulations. An example would be an object that accelerates during its position change manipulation: it starts moving slowly and then moves faster and faster over time. This is one of many additional types of transitions that can be added.
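As a sketch, such an accelerating transition could be obtained by replacing the linear interpolation with an easing function, for example a quadratic ease-in; how and where this would be plugged into the OPC is left open.

```javascript
// Sketch only: quadratic ease-in between two positions. A fraction of 0 gives
// the start position, a fraction of 1 the end position, and intermediate values
// progress slowly at first and faster towards the end.
function easeInPosition(startPos, endPos, fraction) {
  const eased = fraction * fraction;
  return {
    x: startPos.x + eased * (endPos.x - startPos.x),
    y: startPos.y + eased * (endPos.y - startPos.y)
  };
}
```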

5.2.3 Spatially grouping of objects

To make presentations more organised and to make editing of presentations with the OPC more intuitive, objects could be grouped spatially. One potential way to achieve this is to add grouping information of objects to the resources file of presentation repositories. This information can then be used by the LPC and the OPC to group objects.

5.2.4 Presentation preview window

As explained in section 4.4, JavaScript is not suited for the implementation of the presentation preview window as it does not allow multithreading. However, in order to provide a complete overview of presentations, the presentation preview window should be implemented in the OPC. One way to achieve this would be to implement the presentation preview window in a programming language that allows multithreading [1].

5.2.5 Editing of dynamic objects

In order to complete the OPC it should be able to edit dynamic objects. Basically, an application can be run by starting the application and providing it with a stream of input over time. This means that another way to save dynamic objects would be to save this input in a file. That file can then be fed to the application at runtime. However, a problem occurs when trying to determine when to provide an application with input. Applications have varying waiting times based on the machine used, the application itself, the input and the currently running processes. This means that for almost every replay of a dynamic object the timing of input supply differs. This is not really a problem for editing the input, as it is still time-dependent and can therefore be rendered on a timeline. However, it is a problem for replaying the dynamic objects. Also, to save a dynamic object it is not enough to just save the application's input. To run an application its path has to be known as well. Application paths differ from one machine to another, so saving the path itself is not sufficient.

In most operating systems a process is placed in a waiting state when it has to wait for some event to happen [18]. This event can be user input. One way to solve the waiting time problem would be to adjust these operating systems and add an extra process state in which a process is guaranteed to be waiting for user input. In that case, instead of giving the program input at fixed timestamps, it could be given whenever its process enters this new state. Another solution would be to create different extensions for the OPC for every application. These extensions can then provide the user with the necessary information about waiting times. These waiting times can also be used to determine in what way manipulations of instances of interactive embedded applications can be edited, removed and added.

To solve the application path problem, the application name could simply be saved. Functions could then be implemented in the LPC and the OPC to figure out the paths to these applications when loading a presentation. These paths could be saved in a settings file so that they only have to be evaluated once. The only time a path needs to be updated is when it is no longer correct, which occurs when an application is uninstalled or moved to a different location. In that case the path of the application has to be found again.
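A sketch of this lookup is given below. The structure of the settings object and the helpers pathExists and searchForApplication are hypothetical and only illustrate the idea of caching and re-evaluating application paths.

```javascript
// Sketch only: resolve an application path from its name, using a cached path
// from the settings file when it is still valid and searching again otherwise.
async function resolveApplicationPath(appName, settings) {
  let path = settings.applicationPaths[appName];
  if (!path || !(await pathExists(path))) {          // hypothetical existence check
    path = await searchForApplication(appName);      // hypothetical search helper
    settings.applicationPaths[appName] = path;       // cache for subsequent loads
  }
  return path;
}
```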

5.2.6 Addition of dynamic objects

For the OPC to be able to generate dynamic presentations from scratch, new instances of dynamic objects should be addable from inside the OPC. One possible way to achieve this would be to assign a part of the OPC's user interface to the adding of dynamic objects. Dynamic objects could then be added in a similar way as currently happens in the LPC. However, instances of these objects should be saved in the way mentioned under section 5.2.5.


CHAPTER 6

Discussion

The implementation of the OPC shows that it is possible to create an editor for the static objects of dynamic presentations created with the LPC. However, not all possible edit functionalities have been implemented by extending callback functions of the interactive timeline. Also, only images are used in this timeline. Therefore, to actually prove that it is possible to create an editor for the static objects of dynamic presentations created with the LPC, all edit functionalities should be implemented and more kinds of static objects should be added.

Additionally, the current dynamic presentation save format is suitable for creating an editor for presentations created with the LPC. However, there might be other formats that are even more suitable than the current save format, but these have not been compared. Concerning the choice of a suitable programming platform for an editor for presentations created with the LPC, UWP was picked. However, no test has been done to compare UWP to other programming platforms on performance, so either Project Centennial or MFC might turn out to be a better choice in that respect.


Bibliography

[1] Joseph Albahari. Threading in C#. url: http://www.albahari.com/threading/ (visited on 06/09/2019).

[2] Ognian Chernokozhev. Why is Win32 considered bloated and outdated by today's standards compared to UWP? url: https://www.quora.com/Why-is-Win32-considered-bloated-and-outdated-by-todays-standards-compared-to-UWP (visited on 06/09/2019).

[3] Dance Designer. url: https://i.ytimg.com/vi/tBL6JiKN9Eg/maxresdefault.jpg (visited on 06/09/2019).

[4] Andreas Dieberger, Cameron Miner, and Dulce Ponceleon. "Supporting narrative flow in presentation software". In: Conference on Human Factors in Computing Systems: CHI'01 extended abstracts on Human factors in computing systems. Vol. 31. Citeseer. 2001, pp. 137–138.

[5] DOMParser. url: https://developer.mozilla.org/en-US/docs/Web/API/DOMParser (visited on 04/11/2019).

[6] Giannis Drossis et al. "3D visualization and multimodal interaction with temporal information using timelines". In: IFIP Conference on Human-Computer Interaction. Springer. 2013, pp. 214–231.

[7] First Movie. url: https://www.firstmovie1.com/ (visited on 06/07/2019).

[8] Thomas van der Ham. "Live presentation composer". Bachelor's Thesis. University of Amsterdam, June 2016.

[9] How to add and embed external online content. url: https://support.visme.co/how-to-add-external-online-content/ (visited on 05/27/2019).

[10] Joel Lanir, Kellogg S Booth, and Anthony Tang. "MultiPresenter: a presentation system for (very) large display surfaces". In: Proceedings of the 16th ACM international conference on Multimedia. ACM. 2008, pp. 519–528.

[11] Magic Musix Maker 17. url: https://images.app.goo.gl/sEhUKgVpdVECXGa76 (visited on 06/09/2019).

[12] Microsoft. Bring your desktop app to the universal windows platform. 2016. (Visited on 04/10/2019).

[13] Microsoft. InputInjector Class. url: https://docs.microsoft.com/en-us/uwp/api/Windows.UI.Input.Preview.Injection.InputInjector (visited on 04/14/2019).

[14] Microsoft. Screen capture. 2018. url: https://docs.microsoft.com/en-us/windows/uwp/audio-video-camera/screen-capture (visited on 04/14/2019).

[15] Microsoft. WebView.AddWebAllowedObject(String, Object) Method. url: https://docs.microsoft.com/en-us/uwp/api/windows.ui.xaml.controls.webview.addweballowedobject#Windows_UI_Xaml_Controls_WebView_AddWebAllowedObject_System_String_System_

[17] Navigating the Powtoon interface. url: https://support.powtoon.com/en/article/navigating-the-powtoon-interface-4138017 (visited on 05/27/2019).

[18] Operating System - Processes. url: https://www.tutorialspoint.com/operating_system/os_processes.htm (visited on 05/27/2019).

[19] Max Peng. Multithreading Javascript. url: https://medium.com/techtrument/multithreading-javascript-46156179cf9a (visited on 05/20/2019).

[20] Powtoon. Animation settings for characters and props. url: https://support.powtoon.com/en/article/animation-settings-for-characters-and-props (visited on 05/27/2019).

[21] Q1.14: Why should I use XML? url: http://xml.silmaril.ie/whyxml.html (visited on 06/09/2019).

[22] Laurence Scales. Flashes, Bangs And Burning Diamonds: The Early Royal Institution Christmas Lectures. 2015. url: https://londonist.com/2014/12/flashes-bangs-and-burning-diamonds-the-early-royal-institution-christmas-lectures (visited on 05/27/2019).

[23] Structuring in Prezi Next. url: https://support.prezi.com/hc/en-us/articles/360003498513-Structuring-in-Prezi-Next (visited on 05/27/2019).

[24] Understanding the difference between movie and slideshow modes. url: https://support.powtoon.com/en/article/understanding-the-difference-between-movie-and-slideshow-modes-7600801 (visited on 05/27/2019).

[25] Using the PowerPoint add-ins. url: https://support.powtoon.com/en/article/using-the-powerpoint-add-ins (visited on 05/27/2019).

[26] vis.js. Timeline. url: https://visjs.org/docs/timeline/ (visited on 04/11/2019).

[27] What is prezi. url: http://drprezi.com/what-is-prezi/ (visited on 05/27/2019).

[28] XMLSerializer. url: https://developer.mozilla.org/en-US/docs/Web/API/XMLSerializer (visited on 04/11/2019).
