
INFORMATION STUDIES: BIS, FACULTY OF SCIENCE

Collaborative visual analytics in multi-disciplinary health care teams

Author:

Daniel TOR

daniel.tor@student.uva.nl

10781927

Supervisor:

dhr. prof. dr. Marcel WORRING

m.worring@uva.nl


Daniel Tor

Faculty of Science, University of Amsterdam

Abstract. Neither multi-disciplinary health care teams nor visual analytics are new, but effective collaboration within such teams supported by visual analytics is largely unexplored terrain. In multi-disciplinary health care teams, complex and urgent decisions have to be made with limited time to analyse the data relevant to each decision. Adding to this complexity, decisions have to be made in synergy with the other members of the team and with the patient in question. If such teams could use visual analytics in a collaborative manner, they would benefit from the many advantages that an effective knowledge-sharing environment has to offer. This research focuses on enhancing the collaboration within a multi-disciplinary team using visual analytics in various settings. Building on earlier research on multi-disciplinary health care teams and on collaborative visual analytics, mock-ups of a collaborative visual analytics system have been made, and the use of this system has been simulated in workshops with members of multi-disciplinary health care teams.

Keywords. Long-term health care, health care, analytics, IT, collaborative visual analytics, information visualisation, collaboration, multi-disciplinary teams, strategic fit, system design, health care team model

Introduction

When analysing data, the goal is to gain insight from the data. Depending on the data available, the methods of gaining insight can vary. One of the biggest factors is the size of the data set. Data volumes are growing in general, and health care is no exception: the amount of data being collected and stored is growing rapidly. This poses a challenge to organisations in terms of managing, analysing and gaining insight from this data. At some point, a data set that has grown enough falls under the term 'big data' (Murdoch & Detsky, 2013). With these developments, especially with data sets that can be considered big data, complexity and size increase. This potentially leads to higher costs, but also to better insights.

Historically, health care has always generated large quantities of data (Raghupathi & Raghupathi, 2014). Through trends like digitalising old hard-copy records, data is even being stored retroactively. This data can bring various benefits to health care organisations, from clinical analysis to, for example, support of the organisation itself (Raghupathi & Raghupathi, 2014). It also leads to large and complex data sets that need to be analysed.

Caban & Gotz (2012) described the analytics problem in health care in the following way: "Today, (a) physicians and clinical practitioners are faced with the challenging task of analyzing large amounts of unstructured, multi-modal, and longitudinal data to effectively diagnose and monitor the progression of a particular disease; (b) patients are confronted with the difficult task of understanding the correlations between many clinical values relevant to their health; and (c) healthcare organisations are faced with the problem of improving the overall operational efficiency and performance of the institution while maintaining the quality of patient care and safety." This shows three distinctive categories of problems: professionals understanding diagnosis and monitoring of patients or conditions, patients understanding their own data, and organisations understanding their own performance. A very good example of the first category is provided by Manssour & Freitas (2000).

Visual analytics is the field that allows complex and large data sets to be analysed effectively. Thomas & Cook (2005) define visual analytics as: "Visual analytics is the science of analytical reasoning facilitated by interactive visual interfaces". Visual analytics has great potential in health care to analyse, filter, and illustrate data (Caban & Gotz, 2012). In the paper by Manssour & Freitas (2000), an analysis of visual analytic systems is done by means of reviewing the landscape of these systems. The paper provides a good starting point for mapping medical visualisation needs in the first category and shows that visual analytics tools in medicine are becoming more of a necessity than a wish. This research focuses on the last category of problems: understanding team performance.

The problem of understanding and gaining insight from large sets of data also affects multi-disciplinary teams in health care. Studies have been done into the medical information needs of such teams (Blazeby et al., 2006), but there is still work to be done on collaboration at the information level. To support multi-disciplinary teams better, a collaborative means of using information is needed.

Collaboration in visual analytics is a relatively new field. The goal of collaborative visual analytics is to gain insight into the data through visual means and to share those insights with others. Meniuc (2014) and Nobarany et al. (2012) provide a lot of work in this field: in earlier research, Meniuc (2014) used a design perspective with the aim of improving design efforts, and Nobarany et al. (2012) used distributed cognition theory to improve existing visual analytics systems in dynamic contexts. Nobarany et al. (2012) researched their theories in real-life settings, but Meniuc (2014) has not. Meniuc (2014) laid out a theoretical framework (using some of Nobarany et al. (2012)'s work) considering reusability in visual analytics design spaces, adding that future work is needed to validate this theoretical framework. In this thesis we aim to validate the framework laid out by Meniuc (2014) in a medical setting, using multi-disciplinary health care teams as our playground. The settings that multi-disciplinary health care teams are used in are (among others) home, chronic and acute settings (Lemieux-Charles & McGuire, 2006). Lemieux-Charles & McGuire (2006) recognised that teams in each of these settings call for a different way of viewing them. The settings researched in this paper are mainly chronic and home settings.

The relevance of this paper lies in examining whether multi-disciplinary health care team traits can be used (within Meniuc (2014)'s design space) to design a collaborative visual analytics system. Meniuc (2014)'s model is extended with a team characteristics application framework.

The research question(s) posed

The research question that is posed is the following: How can collaborative visual analytics be designed effectively for multi-disciplinary health care teams?

To guide the research, the research question is broken down into the following sub-questions:

• How can a medical health care team be defined?

• What are the available collaborative visual analytics design practices?

• How can the needed collaborative visual analytics design practices be related to medical team characteristics?

1 Method

1.1 Method

As mentioned earlier, in this work we will connect the design space of Meniuc (2014) to a multi-disciplinary health care team environment, essentially assessing the fit of the framework by Meniuc (2014) to such an environment. Meniuc (2014) has defined design choices that have to be made with the environment in mind. The connection can be made by examining the work of Lemieux-Charles & McGuire (2006) on multi-disciplinary health care team performance for elements that can be used for the design of information systems. The environment, then, is a multi-disciplinary health care team setting. Lemieux-Charles & McGuire (2006) provide characteristics of multi-disciplinary health care teams themselves. In this paper we will therefore connect the design space by Meniuc (2014) to the characteristics defined by Lemieux-Charles & McGuire (2006) and validate them through field research.

1.2 Thesis structure

The structure of the thesis is as follows. In the work by Meniuc (2014), several clusters of design elements are provided. In this thesis, these clusters are reviewed and choices are argued for as to which elements are researched. Alongside this, the multi-disciplinary health care teams are defined, which can be used to find analysis topics that would speak to multi-disciplinary team members. The theories about the multi-disciplinary health care team and design spaces are used to create mock-ups that are relevant to multi-disciplinary health care teams. In the next step the multi-disciplinary health care team is mapped onto the design space, after which the mock-ups are designed and built. This means that the environment characteristics are used as a base to make certain design decisions. After building the mock-ups, health care teams are modelled so that certain characteristics can be taken into account when doing the interviews. In the field research we attempt to validate the design space through expert reviews, by experts in the field of multi-disciplinary team work and by multi-disciplinary team members. Both the multi-disciplinary health care team model and the mock-ups produce data in the interviews. This data is cross-referenced and analysed in the results and discussion sections. This structure has been visualised in figure 1.

Figure 1. Visualisation of thesis structure

1.3 Visual analytics system evaluation methods

Assessing visual analytics systems is a challenge because such systems consist of several disparate components that may or may not be combined for the visual analytics system's purpose. These components include analytical reasoning, visual representations, computer-human interaction techniques, data representations and transformations, collaboration tools, and especially tools for communicating the results of their use. The use of these systems can vary greatly in terms of the length of user activities and the kinds of work-flows users follow. Additionally, users can work alone or in groups. To understand these behaviours, an evaluation can target the component level, the system level, or the work environment level, and requires realistic data and tasks. Many facets of systems can be evaluated, for example insight characterisation and measurement, design guidelines, or synthetic data set generation and use (Plaisant et al., 2009; Scholtz, 2010, 2011; Jeong et al., 2009; Perer & Shneiderman, 2009). For this paper, design guidelines for collaboration tools will need to be evaluated.

The design guidelines that are evaluated in this research concern collaboration tools that facilitate non-co-located asynchronous collaboration. As such, this environment is simulated during the testing. The testing is done through mock-ups of how the system would look in reality. In these simulations, the usefulness of the collaboration tools is evaluated by the participants by means of comparison with other available collaboration tools at the element level. Realistic data and tasks for each of the participants are used for the evaluations. Additionally, these results are cross-referenced with multi-disciplinary health care team characteristics so that they can be related to certain defining characteristics of a participant's team environment.

2 Theoretical outline

In this section we give a theoretical outline that explains the process of analytics so that the reader can understand the general functionality and implications of visual analytics systems. This also forms the base terminology needed when reviewing the collaborative visual analytics design space. After the visual analytics process and the collaborative visual analytics design space are reviewed, the multi-disciplinary teams are defined. The insight into how multi-disciplinary health care teams work provides a basis for making design decisions in the collaborative visual analytics design space.

2.1 Visual analytics

Visual analytics is a multi-disciplinary approach aimed at processing high-volume data, discovering patterns, deriving insights from large and complex data sets and communicating the findings. It takes advantage of various related research areas such as visualisation, data mining, data management, data fusion, statistics, and cognitive science (Meniuc, 2014; Keim & Zhang, 2011). Visual analytics solutions are growing in complexity and size along with the data sets they have to keep up with. This has a direct effect on the design decisions that have to be made when creating a visual analytics solution (Meniuc, 2014).

2.1.1 The visual analytics process. Before analysing visually, heterogeneous data sources need to be processed and integrated. From this original data, models can be generated and visualised for evaluation and refinement. Models can be generated automatically (with data mining, for example) but also abstracted from the data using visualisation techniques suited to the specific characteristics of the data. In the analytics process, insight can be gained from the tasks of visualisation, generating models, or any of the interactions. The feedback loop stores the gained insight and assists the analyst in future analyses (Keim & Zhang, 2011). This particular way of modelling visual analytics is displayed in figure 2.
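To make the loop concrete, the following minimal Python sketch mimics its stages with toy stand-ins; all function names (preprocess, build_model, visualise, record_insight) and the example data are illustrative assumptions, not part of Keim & Zhang's formulation.

```python
# Minimal sketch of a Keim-style visual analytics loop.
# All names and data are illustrative only.
from statistics import mean

def preprocess(raw_records):
    """Integrate and clean heterogeneous records (here: drop missing values)."""
    return [r for r in raw_records if r.get("value") is not None]

def build_model(records):
    """Generate a simple model from the data (here: a per-patient average)."""
    grouped = {}
    for r in records:
        grouped.setdefault(r["patient"], []).append(r["value"])
    return {patient: mean(values) for patient, values in grouped.items()}

def visualise(model):
    """Stand-in for an interactive visualisation: render the model as text."""
    for patient, avg in sorted(model.items()):
        print(f"{patient}: {'#' * int(avg)} ({avg:.1f})")

insight_store = []  # the feedback loop: insights kept for future analyses

def record_insight(text):
    insight_store.append(text)

raw = [
    {"patient": "A", "value": 4}, {"patient": "A", "value": 6},
    {"patient": "B", "value": 9}, {"patient": "B", "value": None},
]
model = build_model(preprocess(raw))
visualise(model)
record_insight("Patient B averages noticeably higher than patient A.")
```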

The model by Pirolli & Card (2005) in figure 3 provides an alternative view on visual analytics. This model contains two main loops, namely the foraging loop and the sense-making loop. The process can be approached in a top-down or bottom-up manner. The foraging loop is concerned with data foraging and structuring (possibly to the extent of a schema) and the sense-making loop is concerned with analysing (or mentally modelling) the data once structured. The main loops contain objects and steps between the objects. In a bottom-up approach, one starts with the foraging loop. Within the foraging loop three objects exist:

• External Data Sources

• Shoebox

• Evidence File

Figure 2. Visual analytics process by Keim et al. (2008)

In search and filter (from data sources to shoebox) the analyst examines data sources and filters them based on their own requirements. In read and extract (from shoebox to evidence file), collections of shoebox evidence are read to extract nuggets of evidence that may be used. In schematise (from foraging loop to sense-making loop and from evidence file to schema) the information from the evidence file may be represented in a schematic way, both formally and informally. Within the sense-making loop three objects exist:

• Schema

• Hypotheses

• Presentation

In build case (from schema to hypotheses) a theory or case is built by additional marshalling of evidence to support or falsify hypotheses. In tell story (from hypotheses to presentation) a presentation or publication of a case is made to some audience.

The further into the process the analyst gets, the more effort and structure the result will have required.

Kang & Stasko (2011) have proposed another model, based on the models by Pirolli & Card (2005) and Keim & Zhang (2011). This model is more open and less descriptive in nature than the other models, but can be close to reality in some cases. One of the first things to notice is that there are no starting points. Certain models don't have a forced end (to indicate that analytics is a continuous cycle that may be repeated forever) and this model is no exception. However, this model has no designated end at all, where others often do (such as the presentation node in the model by Pirolli & Card (2005)). As Meniuc (2014) notes: "the observational experiments conducted by Kang & Stasko (2011) suggest that the various steps of the process are intertwined and cyclical, rather than following a linear sequence". These models indicate that users need to have access to the necessary components at any point in the analytical process, as they may move through it in a non-linear way, and that the opportunities for the reuse of components vary for every step. This model is depicted in figure 4.

All three models indicate the varying realities of analysing data visually. Some but not all of the elements in the models are candidates for reuse in collaborative visual analytics environments. Additionally, the models show how and when these reusable elements can be reused on a conceptual level.

Meniuc (2014) adds that: "Not all analytic activities have an equally explored potential for reusability". Meniuc (2014) states that the main activities that warrant reusability are the activities in the foraging loop (most clearly found in the model by Pirolli & Card (2005)). The sense-making loop hasn’t been researched a lot in terms of reusability.


Figure 3. Visual analytics process by Pirolli & Card (2005)

Figure 4. Visual analytics process by Meniuc (2014) based on Kang & Stasko (2011)

2.2 Collaborative visual analytics

To introduce the reader to the concept of collaborative visual analytics, we explore visual analytics models and their terminology. This gives the reader a proper understanding of the collaborative visual analytics system elements, even if the reader is unfamiliar with collaborative visual analytics.

Collaborative visual analytics is a form of analytics where different analysts work together jointly and cooperatively to perform analytical tasks. It includes sharing data and insights, collective analysis, and coordinated decisions and actions. There are four main reasons to collaborate in visual analytics: (1) experts' knowledge can be available at any time and at any place, (2) this expertise can be transferred to others, improving the local level of knowledge, (3) based on the supported accessibility, visualisation products can be reviewed and modified as they are produced, reducing turn-around time, and (4) remote accessibility can reduce the need to relocate the expertise physically (Coleman et al., 1996). This form of analytics can be seen as an addition to regular visual analytics because it pursues the same goals, but in an extended way. Collaborative visual analytics increases visibility and improves the alignment of decisions and actions among multiple actors (Isenberg et al., 2010; Meniuc, 2014; Arias-Hernandez et al., 2011). In the technical sense: "Collaborative BI (collaborative business intelligence) is the merging of business intelligence software with collaboration tools, including social and Web 2.0 technologies, to support improved data-driven decision making." (Rouse, 2012). Another way of explaining collaborative visual analytics would be to say that parts of a visual analytics process or tool can be reused (Nobarany et al., 2012). This means that reusability in visual analytics often refers to collaborative visual analytics.

2.2.1 The process of collaborative visual analytics.

When designing systems, it is good practice to do so with the processes the system is meant to support in mind. The process of visual analytics can be described using various models, such as those by Keim et al. (2008) or Pirolli & Card (2005). The process of collaborative visual analytics is similar, but involves sharing analytical aspects. This sharing is done with the following purposes: communicating data, validating conclusions and coordinating actions (Wells, 2009).

Meniuc (2014) proposes a synthesis of the existing models discussed in subsection 2.1.1 to facilitate the discussion on reusability in visual analytics. These models show similarities but state different elements explicitly. Meniuc (2014) excludes two elements that can be found in the original models, analytic conceptualisation and linearity of the analytic process, as Meniuc (2014) considers them to be pre-steps. The new model is cycle oriented and does not force a sequence of steps. The model also does not include a start or end element, which means that users may start or finish the process during any step. The model is visualised in figure 5 (Jeong et al., 2015).

2.2.2 Collaboration. A lot depends on whether the collaboration is performed in the same location and at the same time (proper terms: (non-)co-located, (a)synchronous) (Silva et al., 2011). This thesis focuses on asynchronous, non-co-located collaboration.

Collaboration      Synchronous    Asynchronous
Co-located
Non-co-located                    X

Table 1
Choices that can be made when collaborating

Research by Lu et al. (2011) suggests that analysts prefer to use analytical elements for themselves rather than in a collaborative manner. The paper also suggests that this might be due to user interface issues. Meniuc (2014) thinks that this might also be due to the analysts not being used to collaborating in this manner. Analysts state that the learning curve for collaborative visual analytics tools is too steep. This argues that the user interface for these tools should be designed as carefully as possible.

2.3 Designing collaborative visual analytics systems

When designing collaborative visual analytics systems, various factors have to be taken into account. This section aims to lay out these factors, much of which has been researched by Meniuc (2014). Meniuc (2014) laid out a theoretical framework considering reusability in visual analytics design spaces, adding that future work is needed to validate this theoretical framework.

To validate the framework by Meniuc (2014), the design space for collaborative visual analytics is described and analysed. Having a design space for building visual analytics systems can be justified because there are two major challenges: the large number of possible design decisions and the complexity of visual analytics systems (Meniuc, 2014). A design space can then be an efficient way to simplify the process of system design, because a design space realises a descriptive generalisation that makes it possible to specify a concrete instance of a design artifact (Schulz et al., 2013). A design space guides system designers in the identification of requirements and the interactions between them. Meniuc (2014) defines a visual analytics design space in the following way: "A set of key visual analytics system design considerations and alternatives, enabling the identification of system requirements that support the reusability of analytic steps."

2.3.1 Design space clusters. It is useful to divide a design space into design space clusters to provide a systematic way of designing collaborative visual analytics tools. These design space clusters are chronologically ordered in terms of a design cycle.

Creation. The Creation cluster is defined as capturing, representing and sharing of the visual analytic process. The creation cluster contains the following four elements:

Analytic Activity Taxonomy. When designing a collaborative visual analytics system with reusability support, a designer must choose a proper taxonomy. The taxonomy represents the backbone of the analytic history depiction mechanism. Such a formal classification built into the system allows for standardisation and comparison of analytic activity, which is a definite prerequisite for reusability according to Meniuc (2014). The taxonomy might be changed to fit the analysts' needs.

Provenance. Analytic provenance or analytic history depiction represents the historically recorded analytic activity and the related insights. When considering design decisions, this consists of two main parts: history capturing and history depiction. There are various ways historical paths can be walked through and captured. For the history capturing, a designer has two choices when capturing visual interactions: visualisation states and analytic actions. Visualisation states can be used by capturing intermediate visualisations that have led to insight. Analytic interaction is similar to saving, like a log, all the activities an analyst executes while analysing, but these activities may not immediately appear to be linked to a visualisation for end-users. It is possible to use a combination of both solutions. Besides this, a decision has to be made about the linearity of the captured elements. The history can be captured as a linear sequence, which would be easy to navigate for the analyst. However, the real analysis process is non-linear and it might be better to capture it as such, as a linear capture potentially misses important elements. The depiction of the provenance involves choices of interactivity (what actions can be performed from a provenance depiction element in the software?). Meniuc (2014) provides a glossary of possible actions.

Figure 5. Visual analytics process by Meniuc (2014)
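As a concrete illustration of the two capture strategies, the sketch below records both analytic actions and visualisation end-states in a single provenance log; the class and field names (ProvenanceEvent, VisualisationState, ProvenanceLog) are hypothetical and not taken from Meniuc (2014), and a real system might persist only one of the two.

```python
# Sketch of provenance capture: analytic actions vs. visualisation end-states.
# Class and field names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any

@dataclass
class ProvenanceEvent:
    analyst: str
    action: str                      # e.g. "filter", "zoom", "select"
    parameters: dict[str, Any]
    timestamp: datetime = field(default_factory=datetime.utcnow)

@dataclass
class VisualisationState:
    analyst: str
    description: str                 # human-readable label of the end-state
    view_config: dict[str, Any]      # enough to re-render the visualisation
    timestamp: datetime = field(default_factory=datetime.utcnow)

class ProvenanceLog:
    """Captures low-level actions, end-states, or both."""
    def __init__(self):
        self.events: list[ProvenanceEvent] = []
        self.states: list[VisualisationState] = []

    def record_action(self, analyst, action, **parameters):
        self.events.append(ProvenanceEvent(analyst, action, parameters))

    def snapshot_state(self, analyst, description, view_config):
        self.states.append(VisualisationState(analyst, description, view_config))

log = ProvenanceLog()
log.record_action("nurse_a", "filter", field="ward", value="geriatrics")
log.record_action("nurse_a", "aggregate", measure="non_attendance", by="month")
log.snapshot_state("nurse_a", "Monthly non-attendance, geriatrics ward",
                   {"chart": "bar", "x": "month", "y": "non_attendance"})
```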

Semantics. Events and actions can be captured, but they do not reveal the semantics behind an analytical cycle. Semantics can be captured through manual annotation, automated annotation or a mix of the two, on top of the data layer. The manual method is not reliable, mainly because it is not systematic and may therefore be incomplete. Analysts often only use high-level descriptions, and only for the final state of a visual analytics process. Because the cycle is incomplete when using manual annotation, it is difficult to track a full analytic process and benefit from the concept of provenance. Another way to capture semantics is in an automated manner. To do this, a visual analytics system has to record, analyse and display the activity performed by an analyst. This is often done on an event-based level, focusing on low-level interactions. For a captured provenance to be reliable, usable and reproducible, a more semantic type of recording is needed; doing this in an automated way is a challenge. It is also possible to use a mix of these methods, where the visual analytics system captures interaction steps and the user can manually annotate these steps. Whichever method is chosen, the annotations can be embedded into the visual analytics system or stored independently. They could be displayed graphically, textually, in audio or in a non-predefined form. All of these decisions have a strong influence on the user friendliness of the visual analytics system.
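A mixed capture strategy could look like the following sketch, where automatically derived keywords are attached to each recorded step and the analyst may add manual annotations on top. The names (AnnotatedStep, auto_annotate) and the way keywords are derived are illustrative assumptions; a real system would derive them from the underlying data layer.

```python
# Sketch of mixed semantic annotation on top of captured provenance steps.
# Names and the keyword heuristic are illustrative only.
from dataclasses import dataclass, field

@dataclass
class AnnotatedStep:
    step_id: int
    auto_keywords: list[str] = field(default_factory=list)   # automated annotation
    manual_notes: list[str] = field(default_factory=list)    # analyst-supplied semantics

def auto_annotate(step_id: int, view_config: dict) -> AnnotatedStep:
    """Automated annotation: derive keywords from the fields used in the view."""
    keywords = [str(v) for v in view_config.values() if isinstance(v, str)]
    return AnnotatedStep(step_id=step_id, auto_keywords=keywords)

step = auto_annotate(1, {"chart": "bar", "x": "month", "y": "non_attendance"})
# Manual annotation: the analyst adds the high-level meaning the system cannot infer.
step.manual_notes.append("Deviation in staff non-attendance peaks every December.")
print(step)
```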

Social Aspects. When collaboration between people occurs, social aspects play a role. It is important that the system designer considers this to ensure that collaboration processes perform well. Meniuc (2014) distinguishes two major types of socially driven considerations: sharing-related considerations and usage-related considerations. The sharing-related considerations need to take into account that the reusable components in the visual analytics system have to be created by users; for users to do this, there needs to be a proper motivation, and there are various manners to motivate users. The usage-related considerations need to take into account that users may find it inappropriate to reuse a component created by another user, due to mistakes or other possible reasons. This means that a trust relationship has to be in place between users as well as their components. There are various ways to do this, but it is important that privacy issues are not disregarded.

Identification. The Identification cluster is defined as how the analyst discovers and understands a component from the component repository. The measures would be the convenience of component extraction and the comprehensibility of the components. This is important because without easy and timely identification of reusable components of the system, analysts are less likely to adopt the functionality. Meniuc (2014) distinguishes three categories within this design cluster:


Retrieval Mechanisms. Retrieval mechanisms are needed when the user is consciously looking for reusable components that are applicable to a specific task. The visual analytics system's reusability repository can contain a large number of these components, and thus retrieval needs to be in place to facilitate searching for them. A straightforward way to make these components searchable is through tagging or annotation. Another is using content similarity techniques. This can be done in an automated way after pre-defined analytic actions, or only when a user makes a specific request. Such retrieval mechanisms need to take into account the current analytical path and state of the current user and the potential reuser.
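A minimal sketch of such a retrieval mechanism (hypothetical component repository and function names) could index reusable components by their tags and rank them by overlap with the analyst's query; content similarity techniques would replace the simple tag overlap used here.

```python
# Sketch of tag-based retrieval over a repository of reusable components.
# Component contents and names are hypothetical.
def tag_overlap(query_tags: set[str], component_tags: set[str]) -> float:
    """Jaccard similarity between the query and a component's tags."""
    if not query_tags or not component_tags:
        return 0.0
    return len(query_tags & component_tags) / len(query_tags | component_tags)

repository = [
    {"id": "c1", "tags": {"non-attendance", "personnel", "monthly"}},
    {"id": "c2", "tags": {"medication", "errors", "ward"}},
    {"id": "c3", "tags": {"personnel", "turnover", "yearly"}},
]

def retrieve(query: str, top_n: int = 2):
    query_tags = set(query.lower().split())
    ranked = sorted(repository,
                    key=lambda c: tag_overlap(query_tags, c["tags"]),
                    reverse=True)
    return ranked[:top_n]

print(retrieve("personnel non-attendance"))  # c1 ranks above c3; c2 has no overlap
```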

Component Awareness Mechanisms. This mechanism is essentially the opposite of the retrieval mechanism: in this case the user is not consciously looking for reusable components that are applicable to a specific task. There are various manners to achieve the goal of making the user aware of the existence of the reusable components. Meniuc (2014) classified awareness mechanisms into two kinds: content awareness mechanisms and social awareness mechanisms. Whereas content awareness mechanisms use content-based algorithms to predict interesting components (such as related entities, related tags, latest activity, etc.), social awareness mechanisms use social algorithms to predict interesting components (such as popular analysts, similar analysts, work groups, etc.). A notification-based system can notify the user of certain reusable components based on a measure of importance to the user. A recommendation system might also be usable for this purpose and can recommend certain components based on (for example) the user's profile and interests. These types of personalisation are important because a user is likely to have a personal style of analysing.
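To illustrate how the two kinds of awareness signal could be combined, the sketch below scores components by a content signal (shared tags with the analyst's profile) and a social signal (how often team members reused the component). The names, weights and normalisation are illustrative assumptions only.

```python
# Sketch of an awareness mechanism: recommend components the analyst did not
# explicitly ask for, mixing a content signal and a social signal.
# Names, weights and data are illustrative.
def recommend(profile_tags: set[str], components: list[dict],
              content_weight: float = 0.7, social_weight: float = 0.3):
    def score(component):
        content = len(profile_tags & component["tags"]) / max(len(profile_tags), 1)
        social = min(component["team_reuses"], 10) / 10   # crude normalisation
        return content_weight * content + social_weight * social
    return sorted(components, key=score, reverse=True)

analyst_profile = {"personnel", "non-attendance", "planning"}
components = [
    {"id": "c1", "tags": {"non-attendance", "personnel"}, "team_reuses": 2},
    {"id": "c2", "tags": {"medication"}, "team_reuses": 9},
]
for c in recommend(analyst_profile, components):
    print(c["id"])   # c1 first: profile overlap outweighs c2's popularity
```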

Cognitive Aspects. The cognitive aspect is the degree to which a reusable component can be understood by a human. A reusable component should contain as many details as possible, but cannot create a cognitive load for the user that is too heavy. A large factor in this is how the reusable component is presented to the user. There are various options for the display of a component to a user. The system can display only the relevant elements or views from the suggested component, the relevant elements or views from the context, or a quick overview of other analysis paths that already used that component. The cognitive load may be reduced by introducing more help in the form of displaying rankings or comments.

Modification. The Modification cluster is defined as adapting and editing a reusable component to better fit the reuse situation. This step is entirely optional for a user, since the user may want to use the reusable component unmodified. The design considerations can be divided into three main categories: modification of a single component through adaptation of visualisation parameters and data ranges, modification of a set of components through conditional undo/redo/insert functionality, or reordering of components through provenance navigation.

Application. The Application cluster is defined as the integration of the component into the current analysis path of the analyst, with or without automated guiding by the system. The design considerations in this step largely have to do with making the user understand what the effect is of applying the candidate component in their own analysis path. An important factor in this is a preview functionality that shows the effect before the component is actually used. This helps the user make the right decision on whether or not to integrate the component into their own analysis path.

The clusters are meant to be followed sequentially, in the order presented above, when designing such a system. This order can also be found in the work by Lee et al. (2003), on which much of the clustering is based. Another possibility is to follow the steps in a reversed sequence, which may serve the target users of the visual analytics system better because the reversed sequence is more user oriented (starting with how to apply any envisioned component rather than setting up a taxonomy first).

2.3.2 Designing with a purpose. Schulz et al. (2013) also provide valuable insights on visualisation design spaces. Whereas Meniuc (2014) provides a clustering and description by design space functionality, Schulz et al. (2013) provide motivations and mechanisms for executing certain visualisation or data tasks. Therefore Schulz et al. (2013)'s insights can be used to hypothesise about linking analytics goals and collaborative tasks. For example, Schulz et al. (2013) identify three analysis goals: exploratory analysis, confirmatory analysis and presentation. A choice between these analysis goals greatly affects the possible collaborative options. For exploratory analysis, awareness mechanisms are of more help to guide a user through their exploration. A retrieval mechanism might be a sounder choice when doing confirmatory analysis. These possibilities all depend on the target users.

When designing any system, it is important to keep the target users in mind (Cross, 2011). In the case of this research, the target users are multi-disciplinary health care teams. It is therefore useful to conceptualise a multi-disciplinary health care team to formulate a reasoning that is applicable for and adapts to multiple health care team environments.

2.4 Multi-disciplinary health care teams

Multi-disciplinary teams in health care are often used as a way to improve quality (Cashman et al., 2004) and lower costs (Mukamel et al., 2009) in clinical care. Considerable attention has been focused on the effectiveness of multi-disciplinary teams in health care, and studies have linked team performance to patient outcomes. However, studies into performance are largely qualitative and anecdotal in nature (Temkin-Greener et al., 2004). It can safely be concluded that working in multi-disciplinary teams improves effectiveness and efficiency in at least some cases, and more of these teams will populate the future of health care. It can be argued that, especially in health care, team work adds value to the outcome (Burke et al., 2004). Burke et al. (2004) show that through teamwork-oriented training, the same set of individuals can improve outcomes significantly. The concept of a multi-disciplinary team isn't irrefutably set (Lemieux-Charles & McGuire, 2006). However, Lemieux-Charles & McGuire (2006) found the following definition of a team, by Cohen & Bailey (1997): "A collection of individuals who are interdependent in their tasks, who share responsibility for outcomes, who see themselves and who are seen by others as an intact social entity embedded in one or more larger social systems (for example, business unit or corporation), and who manage their relationships across organisational boundaries". Multi-disciplinary teams in health care are sometimes known in the literature as patient care teams (Wagner, 2000) or health care teams (Lemieux-Charles & McGuire, 2006). Cashman et al. (2004) found that multi-disciplinary teams in health care are traditionally seen as a rational and effective approach to delivering health care services. This approach has been taken so seriously that the United States Institute of Medicine lists 'competency to practice as part of an interdisciplinary team' as one of the five core practice competencies. Similarly, Fleissig et al. (2006) found that in the UK, multi-disciplinary teams in health care are seen as the primary way to provide care, and Lemieux-Charles & McGuire (2006) found that the use of multi-disciplinary teams is growing. To understand this phenomenon better, it is useful to conceptualise multi-disciplinary health care teams into a model that defines such a team.

2.4.1 Conceptualising a multi-disciplinary health care team. The most cited recent model of multi-disciplinary health care teams is Lemieux-Charles & McGuire (2006)'s Integrated (Health Care) Team Effectiveness Model (ITEM). This model describes various aspects of a multi-disciplinary health care team that are relevant to the team's results. The model can, however, also be used to research a multi-disciplinary health care team's needs and goals, more specifically a team's information needs. The model can, for example, provide a clear definition of a team's composition, which in turn can be used to define the target users for an information system. The model is visualised in figure 6.

2.4.2 Composition. Arguably, the composition of a team is expected to be one of the most important aspects when designing collaborative visual analytics systems, as it defines, for example, the available skill-sets within a team (which are relevant when analysing data). Multi-disciplinary teams in health care are teams that contain at least two disciplines and consist of at least two professionals. These teams are characterised by all team members participating in the team's activities, sharing leadership and relying on each other to accomplish team goals. Interdisciplinary teams are empowered to make and implement their own decisions, thus having a potential for effecting change (Temkin-Greener et al., 2004). This does, however, create the necessity for a common platform to gain and share insights. An area that is often overlooked, according to Lemieux-Charles & McGuire (2006), is that teams have multiple and changing memberships. Team boundaries are often unclear and fluid, as team members usually belong to multiple work groups and move in and out of groups to achieve different goals (Sundstrom et al., 2000). Teams are also not always the same size, and organisational culture also affects team composition. This is relevant to this research (because it may affect the type of collaboration between team members), but due to the short time period, it cannot be researched in depth. The same is true for the following aspects:

• Size
• Professional tenure
• Team tenure
• Discipline
• Disciplinary diversity
• Team champion
• Age, age diversity, ethnic diversity
• Status
• Willingness to learn
• Stability over time

2.4.3 Team disciplinary roles. In the literature, a specific set of roles or individuals that a multi-disciplinary team (should) always consist of has not been found. Instead, a more general example list of medical roles which may exist in a team has been found:

• Nurse: nurses were often medical professionals with a general support role, sometimes leading the team (Jones et al., 2009; McCloskey & Johnston, 1990)

• Doctor/Physician: doctors were often medical professionals with a general role, often leading the team (Jones et al., 2009; McCloskey & Johnston, 1990)


Figure 6. Integrated (Health Care) Team Effectiveness Model (ITEM) by Lemieux-Charles & McGuire (2006)

• (Respiratory) Therapist: this is a specialist role found in the paper of McCloskey & Johnston (1990)

The main distinction in these lists is the amount of generality these functions have (from general functionality to expert functionality). A wider list of examples has been published by Mitchell et al. (2012). In that paper eleven multi-disciplinary team cases are discussed, along with their compositions, which show similarities to the list above. It is clear that teams aren't always composed in the same way. Instead, Wagner (2000) defines a list of types of roles in a medical team when discussing patient care teams:

• Nurse case managers
• Medical specialists
• Clinical pharmacists
• Social workers

• Lay health workers

These lists, along with their ambiguity, show that when researching a multi-disciplinary health care team, an in-depth questionnaire based on a proper model of a multi-disciplinary health care team is needed.

2.4.4 Processes within multi-disciplinary health care teams. Particularly in collaborative environments, it is important to define the processes and their characteristics. This is important because these processes are the processes any system would have to facilitate. Lemieux-Charles & McGuire (2006) listed the following processes in multi-disciplinary health care teams:

• Communication

• Coordination

• Interdisciplinary collaboration

• Cooperation


• Conflict

• Participation and perceived influence

• Leadership

• Process strategies

• Level of group development

• Team climate

This list of process characteristics will have to be considered extensively when researching the fit between a collaborative analytical system and multi-disciplinary health care teams.

2.4.5 Other multi-disciplinary health care team characteristics. The other characteristics as defined in the model by Lemieux-Charles & McGuire (2006) will also be included in the research. The two items from the model that will not be tested are Team effectiveness and Social and policy context. Team effectiveness is not researched because this type of data is often commercially sensitive to the suppliers as well as to any patients; such data is usually not permitted for use by anyone apart from the owners of the data. Due to time limitations it is not possible to include the social and policy context. The social and policy context is not easily testable, as theoretical work is still to be done on the subject in the context of multi-disciplinary health care teams.

2.4.6 Collaboration in multi-disciplinary health care teams. The model itself does not go into depth on the collaboration within multi-disciplinary health care teams. However, since collaboration is one of the primary subjects of this research, the collaboration in such a team is described here. This information can be used to gain an understanding of how a system could fit within a collaborative environment.

There are many types of collaboration possible, within many types of groups of agents. These agents can be multi-disciplinary health care team members, but also members of communities or even animals or robots in any type of relationship. The collaboration can be described by its task types (as seen in the model by Lemieux-Charles & McGuire (2006)) or by other characteristics (Anderson & Franks, 2003). For the purpose of gaining an understanding of how a system might fit, a low-level description is needed first.

Multi-disciplinary teams start by bringing together key professionals with all the necessary knowledge, skills, and experience for a specific problem or patient. The multiprofessional composition of teams should increase the likelihood that individual patients are offered the most appropriate treatment for their condition, because management plans would be based on a broad range of expert knowledge from the start, and all aspects that influence treatment options would be considered. Through regular meetings, team members discuss ways of treatment planning, referrals between professionals, and examinations and investigations. The term for this type of collaboration is cross-functional collaboration (Lemieux-Charles & McGuire, 2006). This type of collaboration happens on a regular basis with the goal of finding the best way to take care of a patient within the organisational and natural limits. (Collaborative) interactions between teams and across traditional boundaries (e.g., across patient care units; across organisations; and between hospitals, communities, and patients' homes) and across status hierarchies have not been systematically examined (Lemieux-Charles & McGuire, 2006). Digital professional collaboration in health care remains a challenge (Li, 2015; Manssour & Freitas, 2000); however, making good design decisions will help mitigate at least part of this challenge.

2.4.7 Concluding multi-disciplinary health care teams. Concluding the theory about multi-disciplinary health care teams, it is worth considering, from an information perspective, the team as a whole as well as its different components. The team members will have aligned goals as well as diverging goals (which still might serve the team as a whole), each resulting in a different need for insights. This argument also holds because multi-disciplinary team composition involves many changing variables, which influence the needed individual insights as well as the common team insights. The different types of settings raise the question of information needs in different settings. This paper is limited to long-term health care settings for the elderly and handicapped. The model by Lemieux-Charles & McGuire (2006) (displayed in figure 6) summarises many of the points made in the last few sections. The model was created with team performance in mind, but many of those characteristics also affect the way information is handled and shared within a team.

3 Collaborative visual analytics designs

3.1 Forming new theory

To be able to answer the research question, the following question must be asked: does the collaborative visual analytics design framework described in section 2.3 fit the concept of a health care team as described in section 2.4? To answer this question it is useful to see the entire environment as one system: "A collaborative analytics system can be seen as a cognitive system composed of analysts, reasoning artifacts and tools that are used to represent, communicate and manipulate the artifacts. Analysts' mental resources normally constitute a significant part of a distributed cognitive system and are responsible for complex or creative computations and coordination of cognitive resources (including themselves). They also act as part of the distributed memory of the system. However, they externalise the information stored in their memory to be able to cope with their complexity and volume. The externalised reasoning artifacts (e.g. evidence, notes, comments, causal networks, hypotheses, etc.) can be stored, shared, retrieved, and reused by other actors in the system." (Nobarany et al., 2012). The point of this quote is that analysts within the system communicate reasoning artifacts to one another with aligned purposes. The best manner to communicate these artifacts depends on environmental variables (in our case: team characteristics).

The theory is that the elements of the newly formed system can be related to one another in the following two ways:

1. "A degree of team characteristic X warrants the busi-ness case for design cluster Y."This may be translated to: the more there is of characteristic X (let’s say "co-hesion") the less necessity there would be for design clusters Y ("social aspects") because when cohesion is high, social statuses are likely to already be known without the use of this system.

When this relationship exists, the characteristic might warrant a choice within the design space cluster. This could be described as follows:

2. "Out of the choices in design space cluster X, choice Z makes the most sense because of a high degree of team characteristic Y". There are various choices available in design space cluster X (let’s say "prove-nance") and a high amount of characteristic Y ("com-munication"). The communication characteristic is an argument for the choice of capturing end-states rather than only using the interaction analysis. Be-cause a high degree of communication between team members is readily available, it is less neces-sary that the system captures every single interaction to communicate to the other user.

For this paper we are forming new theory using the paper by Meniuc (2014) as a basis, applying this particular kind of reasoning. As mentioned before, this means that we are going to map the reality of a multi-disciplinary health care team as defined by Lemieux-Charles & McGuire (2006) onto the design space for collaborative visual analytic systems.
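One way to make the intended cross-referencing tangible is to encode such relationships as data, for instance as a small rule table that maps a team characteristic and its observed degree to a suggested emphasis within a design space cluster. The sketch below merely restates the two example relationships given above in hypothetical Python form; the rules and names are illustrative, not validated findings.

```python
# Sketch: encoding "team characteristic -> design space consequence" rules as data.
# The two rules restate the examples above; they are illustrative, not validated.
rules = [
    {
        "characteristic": "cohesion",
        "degree": "high",
        "cluster": "social aspects",
        "consequence": "lower necessity (social statuses already known in the team)",
    },
    {
        "characteristic": "communication",
        "degree": "high",
        "cluster": "provenance",
        "consequence": "prefer capturing end-states over full interaction histories",
    },
]

def advise(team_profile: dict) -> list[str]:
    """Return the design consequences whose conditions match the team profile."""
    return [
        f"{r['cluster']}: {r['consequence']}"
        for r in rules
        if team_profile.get(r["characteristic"]) == r["degree"]
    ]

print(advise({"cohesion": "high", "communication": "low"}))
```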

3.2 Design choices

There are a number of design choices that can be made for every design space cluster. For every design space cluster and team characteristic, a choice can be made. With extended generalisable research, best practices for designing collaborative visual analytics systems for multi-disciplinary health care teams could be documented in a matrix that cross-references the dimensions (as we have done in section 6.1.4). The goal of this research is to find as many existing relationships as possible within the restrictions of this research. The result of this can be found in section 6.1.4 in table 3.

3.2.1 Available research dimensions. In this research the available dimensions (design space clusters and multi-disciplinary health care team characteristics) are researched. For every design cluster or design choice a certain amount of necessity exists, which may come from a certain multi-disciplinary team characteristic. To demonstrate this methodology, a few examples are shown in the paragraphs within this section. The examples show a systematic approach to organising the necessity for every design space cluster. For the purpose of this example, the team characteristics team processes and organisational context have been used for each design space to show a possible method of reasoning. This means that for every design space cluster, we will speculate about the connection for every element within the team processes and organisational context groupings. This method is also used to analyse the results in section 5. Literature by Schulz et al. (2013) has been used in the speculation for the examples.

Taxonomy. A clear taxonomy is important for all points, but especially when coordinating. When coordinating, it is important that the team members use terms and concepts in the same way, to make sure coordination runs smoothly. One could document that a conditional priority exists if a lot of coordination activities are present in a team.

Provenance. When looking at the goals and standards of a team, if a culture exists of speed over quality, a design choice might favour one that facilitates speed more than richness. In this design space, that could mean capturing end-states and linear paths only, rather than other analytical possibilities. This also holds for communication: if a highly communicative culture exists, there is less need for the visual analytics system to support the history of an analytical cycle. When conflict and decision-making are commonplace, a system should show the provenance behind arguments as richly as possible to improve thought and analytical processes. This means supporting non-linear analysis paths.

Semantics. Considering the goals and standards of a multi-disciplinary health care team, if a culture exists of speed over quality, a design choice might favour one that facilitates speed more than richness. In this design space, that could mean capturing manual annotations only, or no annotations at all, as that is the quickest way to work. This also holds for communication and collaboration: if a highly communicative culture exists, there will be less need for the visual analytics system to support the semantics of an analytical cycle. When conflict and decision-making are commonplace, a system should show the semantics behind arguments to improve thought and analytical processes.

Social aspects. The necessity for capturing the social aspects behind an analytical component comes from a need to know how reliable the source is. If we view this assumption in the model of a multi-disciplinary health care team, it may be argued that this necessity will be lower if the team is very involved within itself and all social aspects are already known without the system. Various characteristics by Lemieux-Charles & McGuire (2006) may indicate that this is the case; examples are characteristics such as participation and cohesion. In this way it may be argued that high participation and cohesion will lessen the need for including social aspects in a system. If the structure of a team is more hierarchical, it may make sense to design the social aspects of the system accordingly, meaning that persons higher in the hierarchy may automatically have a more positive social profile in the system. If a culture of incentives is already in place in the medical team, it may make sense to extend it to this system, meaning that the existing business rules of incentivising should also apply in the design. Meniuc (2014) discusses various incentives for the social aspects of collaboration.

Retrieval mechanisms. Again, if a culture exists of speed over quality, a design choice might favour one that facilitates speed more than richness. In this design space, that could mean opting for awareness mechanisms instead of retrieval mechanisms, or designing the retrieval mechanism in such a way that time cannot be wasted here (automated tagging).

Moving on to the amount of resources within a team: if there are more resources available, it is not unreasonable to assume that more reusable components would populate a system. This would argue for a well-designed retrieval mechanism. Tagging of components would have to be open and adaptable, so that the community of analysts can improve searchability.

Awareness mechanisms. If a team culture prefers efficiency over quality, one could make a design decision that focuses on speed rather than quality. In this design space, that could mean opting for awareness mechanisms instead of retrieval mechanisms. This saves time because the analyst doesn't need to be explicit about his or her intentions, although it might cause inefficiencies when consciously trying to retrieve a component without a retrieval mechanism.

Again, if there are more resources available, it is not unreasonable to assume that more reusable components would populate a system. This would argue for a well-designed awareness mechanism. This means, among other things, designing in a way which is suitable for proper profile set-up. This profile set-up would facilitate a wide variety of users, since a lot of resources are available.

Cognitive aspects. In this design space, when designing for efficiency rather than other analytical process aspects, one could design for an extremely low cognitive load, so that an analyst may browse faster, although not always more effectively.

Modification. A similar example as the previous one can be used in this design space cluster. Consider that in a multi-disciplinary health care team fast results are valued more than deep, custom and rich results. In this design space, that could mean designing for no or limited modification possibilities. When conflict and decision-making are commonplace, a system should be able to modify earlier analysis paths behind arguments to improve thought and analytical processes. This modification could be used to improve conflict and discussion handling.

Application. In this design space, if one would like to design for well-established standards within a team environment, one could design for very standardised preview possibilities. Of course, this design space is relatively unexplored and therefore doesn't provide a lot of material to use in a reasoning process.

3.3 Selection of testable design hypotheses

Of the possible design choices that can be found in section 2.3, a selection has to be made that can be tested in our real-life settings. This is due to time limitations and tool limitations (the tools to build proper mock-ups are, at the moment, not there). To do this we are going to look for the most relevant design choices to test when considering a multi-disciplinary health care team. Multi-disciplinary health care teams are more cohesive than average because they are multi-disciplinary and self-organised (Anderson & Franks, 2003). This arguably lowers the necessity for a shared taxonomy, since the shared taxonomy likely already exists outside information systems. Provenance, however, may be very important. Because of the multi-disciplinary nature of such teams, analytical styles may differ, resulting in a need for insight into any reusable components. This is also true for the semantics part of the collaborative analytics design space. As team members may not be familiar with each other's disciplines, the semantics of a component should be transparent to potential re-users. Because of the cohesion within a self-organising team, we argue that the social aspects (and social awareness in particular) are less important test subjects. Retrieval and awareness mechanisms represent a large part of the functionality of a collaborative system, and are therefore important to evaluate in this research. There is less research available on the modification and application design space clusters, and therefore it is not possible to reliably test these mechanisms yet.

This results in six available design elements to be included in the mock-ups (the four sub-bullet points below, plus Retrieval mechanisms and Awareness mechanisms). We view these elements as the most valuable when applying collaborative visual analytics, compared with the other available design elements. Design space clusters such as Social aspects and Taxonomy are likely to be less important because multi-disciplinary teams in health care tend to describe themselves as cohesive (Mickan & Rodger, 2005). A cohesive team already shares taxonomies more efficiently than a non-cohesive team, and social statuses tend to be well known in cohesive teams. The following choice of design space clusters and design choices has been made (a minimal data-model sketch illustrating these choices follows the list):

1. Provenance

   (a) End-state analysis
   (b) Full interactive analysis
   (c) Both

2. Semantics

   (a) Manual annotation (enabling semantic retrieval)
   (b) Automatic annotation (enabling non-semantic retrieval)
   (c) Both

3. Retrieval mechanisms

4. Awareness mechanisms
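As referenced above, the sketch below gives one possible data model for a stored, reusable component in which these design choices appear as explicit fields. The field and enum names are assumptions made for illustration only.

from dataclasses import dataclass, field
from enum import Enum

class Provenance(Enum):
    # Design choice 1: what is captured about an earlier analysis.
    END_STATE = "end-state analysis"
    INTERACTIONS = "full interactive analysis"
    BOTH = "both"

class Annotation(Enum):
    # Design choice 2: how a component gets its searchable meaning.
    MANUAL = "manual annotation (enables semantic retrieval)"
    AUTOMATIC = "automatic annotation (enables non-semantic retrieval)"
    BOTH = "both"

@dataclass
class StoredComponent:
    """Assumed record for one reusable analysis component.

    Components are later surfaced either through a retrieval mechanism
    (design choice 3) or an awareness mechanism (design choice 4)."""
    title: str
    provenance: Provenance
    annotation: Annotation
    keywords: list = field(default_factory=list)  # filled automatically (2b)
    tags: list = field(default_factory=list)      # added by an analyst (2a)

example = StoredComponent(
    title="Non-attendance Jan Jansen",
    provenance=Provenance.BOTH,
    annotation=Annotation.AUTOMATIC,
    keywords=["jan", "jansen", "verzuim"],
)
print(example.provenance.value, "/", example.annotation.value)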

3.4 Mock-ups

The design choices that are found in section 2.3 have to be visualised so the design space elements can be tested. We have chosen to create interactive dashboards using Tableau 9.0 software (http://www.tableau.com/). With these dashboards, we have used paint.NET (http://www.getpaint.net/) to illustrate components that Tableau 9.0 does not include. For every design space section that has been selected, all possible options are created in the mock-ups. By using all options, a preference of one over another can be found during the experiments. This leads to the mock-ups that are displayed in figures 7 to 13. Table 2 shows which mechanism can be found in every figure.

            1a   1b   2a   2b   3    4
Figure 7          X         X    X
Figure 8          X
Figure 9     X              X    X
Figure 10         X    X         X
Figure 11    X         X         X
Figure 12         X                   X
Figure 13    X                        X

Table 2. This table is a matrix that shows which design space element is portrayed in each figure. Descriptions of the design space elements can be found in section 2.3.

3.4.1 Mock-up 1. In figure 7, a mock-up can be seen that shows non-semantic retrieval results with an interaction view. The non-semantic element has been visualised as the search terms: 'Jan Jansen verzuim' ('Jan Jansen (a Dutch name) non-attendance') is a keyword-based search that is non-semantic in nature. An analytical system could recognise and connect these terms to a visualisation through automated annotation using the data that is being visualised. The retrieval element has been visualised as a direct search method: the analyst has to consciously enter search terms and confirm the search action. The interaction view can be seen in the thumbnails, which show a preview of the interactions that have occurred in a visualisation. A bigger view of this interaction view can be seen in figure 8.
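To indicate how the automated annotation behind mock-up 1 could work, the sketch below derives keywords directly from the data behind each visualisation and matches the typed search terms against them. This is an assumed, simplified approach, not the implementation of the mock-ups themselves.

def auto_annotate(records):
    """Derive keywords for a visualisation from the data it displays.

    This is the automatic annotation (2b) that makes a non-semantic,
    keyword-based search such as 'Jan Jansen verzuim' possible."""
    keywords = set()
    for record in records:
        for value in record.values():
            keywords.update(str(value).lower().split())
    return keywords

def non_semantic_search(query, annotated):
    """Return visualisations whose derived keywords contain every query term."""
    terms = query.lower().split()
    return [name for name, kws in annotated.items()
            if all(term in kws for term in terms)]

# Assumed sample data behind two stored visualisations.
annotated = {
    "Verzuim per medewerker": auto_annotate(
        [{"medewerker": "Jan Jansen", "type": "verzuim", "dagen": 4}]),
    "Medicatie-incidenten": auto_annotate(
        [{"afdeling": "Zorgteam West", "type": "medicatie", "aantal": 2}]),
}

print(non_semantic_search("Jan Jansen verzuim", annotated))
# -> ['Verzuim per medewerker']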

3.4.2 Mock-up 2. Mock-up 2 is very similar to mock-up 1, but differs in that it shows end states. Instead of interactions, the results of a retrieval action can be viewed directly in the thumbnails. Mock-up 2 can be seen in figure 9.

3.4.3 Mock-ups 3 and 4. Mock-ups 3 and 4 are very similar to mock-ups 1 and 2 in that they show retrieval results for both interaction views and end-state views. The difference is that these results are displayed for a semantic search. The search 'afwijkingen personeel' ('personnel deviations') is a keyword-based search that could not have been recognised and connected to a visualisation through automated annotation using the data that is being visualised. Instead, manual annotation is needed to clarify that this is indeed the topic one might search for. These mock-ups can be found in figures 10 and 11.
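By contrast, a semantic search such as 'afwijkingen personeel' can only succeed if that connection has been made by hand. A minimal sketch, assuming manually assigned tags per visualisation (the tag sets are invented examples):

# Manual annotation (2a): tags whose terms need not occur literally in
# the underlying data, attached by an analyst to convey the topic.
manual_tags = {
    "Verzuim per medewerker": {"afwijkingen", "personeel", "verzuim"},
    "Medicatie-incidenten": {"medicatie", "veiligheid"},
}

def semantic_search(query, tagged):
    """Return visualisations whose manual tags cover all query terms."""
    terms = set(query.lower().split())
    return [name for name, tags in tagged.items() if terms <= tags]

print(semantic_search("afwijkingen personeel", manual_tags))
# -> ['Verzuim per medewerker']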

3.4.4 Mock-up 5. Mock-up 5 in figure 12 shows an awareness mechanism. The results of the awareness mechanism are shown as a row of suggested visualisations beneath the primary visualisation that is currently being used at the top. In this case, the awareness mechanism suggests visualisations based on content-based or social algorithms. The title used for this functional element is 'Anderen analyseerden eerder...' (loosely translated: 'Others analysed before...'). In this mock-up an interaction view has been used.
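One possible social variant of the 'Anderen analyseerden eerder...' suggestion is sketched below: prior analyses are ranked by how many distinct colleagues opened them while working on the same data. The log format and the ranking rule are illustrative assumptions, not the algorithm behind the mock-up.

from collections import Counter

# Assumed log of (team member, visualisation) pairs recorded while
# colleagues worked on the same underlying data set.
colleague_views = [
    ("nurse_a", "Verzuim per maand"),
    ("physician_b", "Verzuim per maand"),
    ("physio_c", "Werkdruk vs. verzuim"),
    ("nurse_a", "Werkdruk vs. verzuim"),
    ("nurse_d", "Verzuim per maand"),
]

def others_analysed_before(views, limit=3):
    """Rank earlier analyses by the number of distinct colleagues who
    opened them, to fill the 'Others analysed before...' row."""
    counts = Counter(vis for _, vis in set(views))
    return [vis for vis, _ in counts.most_common(limit)]

print(others_analysed_before(colleague_views))
# -> ['Verzuim per maand', 'Werkdruk vs. verzuim']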

3.4.5 Mock-up 6. Mock-up 6 is very similar to mock-up 5. The difference is that mock-up 6 shows an end-state view of the analyses instead of an interaction view. Mock-up 6 can be found in figure 13.

Figure 7. Mock-up 1: Non-semantic retrieval (interactions)

Figure 9. Mock-up 2: Non-semantic retrieval (end states)

Figure 11. Mock-up 4: Semantic retrieval (end states)

Figure 13. Mock-up 6: Awareness (end states)

3.5 Testing methods/Experiment setup

The goal of this research is to gain deep, useful insights, since this is exploratory research in a relatively new field. That is why six in-depth experiments are held rather than a large, generalisable survey. To do this, the experiments are done in a workshop-like setting with a semi-structured script. The goal of the workshop-like setting is to obtain not only opinions, but also the arguments for the opinions and the theory behind them. Before the workshop session starts, descriptive questions are asked about the teams themselves. This allows the team to be classified in the model by Lemieux-Charles & McGuire (2006). During the workshop session, the team member present is shown the mock-ups in an order that mimics a real analytical process (as described by Keim et al. (2008), Pirolli & Card (2005) and Meniuc (2014)) and a task is assigned to the team member. When going through the mock-ups, the people present are asked: "Which of these mock-ups would you consider most helpful at this point?". This question leads to follow-up questions that can be useful to the research. The preference itself and the reason for the preference are discussed in section 5. The six experiments were conducted in June and July of 2015.

4 Results

In this results section, the variables that participants described as motivating a preference for a mock-up are reported. The results are shown per design space. Before every interview, participants were asked to fill out a survey describing their team, in their own view, according to the model of Lemieux-Charles & McGuire (2006). After the survey was filled out, an interview was held according to the description in section 3.5. From the interviews, the motivations for a choice have been grouped, along with explanations for those motivations that were given directly by participants. There are two types of results; we have grouped them under 'general results' and 'multi-disciplinary team results'. General results refer to results that can be viewed separately from the multi-disciplinary team aspects (such as personal preference compared with demographic or social dimensions). Multi-disciplinary team results are results that have been cross-referenced with multi-disciplinary team characteristics. The results are described per design space element; within each design space element, both general results and multi-disciplinary team results can be found.

4.1 Participants

For the experiments, six participants were involved in total. The participants were randomly chosen from multi-disciplinary teams working in Noord-Holland, the Netherlands. All interviewees are medical professionals in multi-disciplinary health care teams. The age of the interviewees varied from 22 to 50. The participants who were interviewed for this research can be described as follows:

• All are Dutch (raised in the Netherlands)
• 1 male and 5 females

• 3 of the participants were university educated (Bachelor's degree or higher) in their professional field; the other 3 were educated in their field but do not hold a university degree

• The average age of the participants was 33
• The average team size is 15.5 members

• All participants work in the Noord-Holland area of the Netherlands

• All participants work in care settings (rather than cure settings)

4.2 Results per design space

4.2.1 Provenance. When it comes to capturing provenance, one has three options:

1. End-state analysis
2. Full interactive analysis
3. Both previous types

Participants were shown all possibilities in different contexts using the mock-ups. The results focus on which of the three options the interviewees found most useful when looking for components to use.

General results. In general, most of the participants found the end-state analysis more useful (4 out of 6). The other 2 both preferred the full interactive analysis. In the interviews, this can be attributed to three main causes.

Familiarity refers to the fact that most participants were used to the type of analysis where an end state is shown. Of these participants, half expressed a preference for end states because the end states were known to them, and the others because the interactive analysis seemed unknown to them. In interviews where, after an initial preference for end-state analysis was expressed, a more in-depth look was taken at the interactive analysis, some participants changed their preference.

The most common reason for a preference for end-state analysis is that the participants perceived this type of provenance to lead to results more quickly. This is because previews of end-state analyses show the resulting visuals rather than the interactions within those visuals.

Priority was also a factor in the decision by the participants. Many said that both provenance methods probably have their use; however, many regarded the end-state option as necessary while the interaction option was merely useful.

A reason given for preferring the interactive analysis over the end-state analysis was that interaction analysis could be used to quickly identify useful components.

An interesting note is that many participants would prefer the end-state view when searching for items, but when looking at an analysis itself might use the interactions to validate someone else's earlier work.

Results cross-referenced with team properties. Participants who expressed a preference for interaction analysis differed in the following ways from most other participants:

• All participants who expressed a preference for interaction analysis have a team of 20 or more members, but not all participants with large teams (>15.5 members) prefer interaction analysis

• Participants who were less positive (than the average of all participants) about their team's problem-solving effectiveness expressed a preference for interaction analysis

• The participants who view their tasks as interdependent prefer the interaction analysis, whereas participants who did not view their tasks as interdependent did not express a preference for interaction analysis

4.2.2 Semantics. When it comes to semantics, there are three options:

1. Manual annotation (enabling semantic retrieval)
2. Automatic annotation (enabling non-semantic retrieval)
3. Both previous types

In the mock-ups, the implications of these types have been visualised rather than the process itself. This means that different ways of using semantics can be applied in practice. Participants were shown all possibilities in different contexts using the mock-ups. The results focus on which of the three options the interviewees found most useful when looking for components to use.

General results. In general, most of the participants found non-semantic retrieval more useful (4 out of 6). Of the other two, one found semantic retrieval more useful and the other preferred both. In the interviews, this can be attributed to three main causes.

Familiarity and ease of use again account for a major factor. People are used to searching with keywords and terms, and some participants explicitly stated that they preferred this method because they know how to use it.

Effectiveness is also a common answer. Participants often motivated their decision by stating that if they wanted to retrieve something, they would know exactly what, and keywords would yield the best results.

The opposite has also been argued; Exploration was a common argument in favour of using the semantic retrieval method. In use cases where the desired result was unknown or vaguely defined, semantic retrieval would be very useful according to the participants.

Results cross-referenced with team properties. Participants who expressed a preference for semantic retrieval or both types of retrieval differed in the following ways from most other participants:

• These participants stated that their teams 'often' execute care delivery tasks, whereas the other participants all stated that their teams 'always' execute delivery tasks

• All participants who expressed a preference for semantic retrieval or both types of retrieval have a team of 20 or more members, but not all participants with large teams (>15.5 members) prefer semantic retrieval or both types of retrieval

• These participants agree partially with the statement ’in my team, a lot of decisions are made’, whereas other participants had a variety of answers

4.2.3 Retrieval/awareness mechanisms. Within the retrieval and awareness mechanisms, no distinctive tests have been done. What has been tested is which of the two the participants preferred over the other. This is possible because both have similar functionality, namely referring the user to other elements in the system. When it comes to awareness or retrieval mechanisms, one has three options:

• Awareness mechanisms
• Retrieval mechanisms
• Both

The results focus on which of the three options the interviewees found most useful when looking for components to use.

General results. In general, most of the participants found the retrieval mechanism more useful (4 out of 6). The other 2 both preferred the awareness mechanism. In the interviews, this can be attributed to two main causes.

Effectiveness is the reason almost all participants gave when they stated they preferred the retrieval mechanisms. Effectiveness, in this sense, means that this is the best way to meet the need of the user; the need of the user would be to find another element of analysis.

The opposite has also been argued; Exploration was a common argument in favour of using the awareness mechanisms. In use cases where the desired result was unknown or vaguely defined, the awareness mechanisms would be very useful according to the participants.

Ease of use and familiarity have been used as arguments for both options, as both mechanisms are intuitive and used in many other applications.

Results cross-referenced with team properties. Participants who expressed a preference for awareness mechanisms differed in the following ways from other participants:

• They stated that their teams have fairly clear rule sets. Other participants gave varying answers (fairly clear rule sets or less clear).

• These participants stated that their teams 'always' execute care delivery tasks, whereas the other participants all gave varying answers.

• They stated that their teams have specialised tasks. Other participants gave varying answers (specialised tasks or less specialised).

• They stated that their teams were highly effective at problem solving. Other participants gave varying answers (highly effective or less effective).

• All participants who expressed a preference for awareness mechanisms have a team of 14 or fewer members, but not all participants with small teams prefer awareness mechanisms

4.2.4 Other observations. While studying the data, the following other observations were made:

• When using non-semantic searches, most participants (5 out of 6) preferred the full interaction analysis.

• When using awareness mechanisms, most participants (5 out of 6) preferred the end-state analysis. Most participants indicated that this was beneficial for the speed of the system.

• Generally, university-educated participants seemed to prefer more 'exploratory' search/awareness mechanisms. One of the participants stated that this might be because, in university education, people are trained in analysing data visually more so than in other types of education.

References
