
Human Interoperability in Different Stages of Healthcare Big Data Analytics: A Case Study

Max Baneke 10797564 August 17, 2018

MSc in Business Administration – International Management
Supervisor: Dr. M.P. Paukku


Statement of originality

This document is written by student Max Baneke who declares to take full responsibility for the contents of this document. I declare that the text and the work presented in this document is original and that no sources other than those mentioned in the text and its references have been used in creating it. The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.


Table of Contents

Abstract
1. Introduction
2. Literature review
2.1. Value-based healthcare
2.2. What is big data?
2.3. Big data analytic architecture
2.4. Role of big data in healthcare
2.5. Big data in Dutch healthcare
2.6. Human interoperability
2.7. Challenges for big data analytics
3. Methodology
3.1. Research design
3.2. Data collection
3.3. Data analysis
4. Results
4.1. Case information
4.2. Main finding
5. Discussion
5.1. Big data analytics in Amsterdam UMC
5.2. Human interoperability requirements
5.2.1. Harmonized strategy
5.2.2. Knowledge awareness
5.2.3. Aligned procedures
5.2.4. Aligned operations
5.2.5. Political objectives
5.2.6. Summary on human interoperability requirements
5.3. Human networks in different stages
5.3.1. Human networks in big data capture stage
5.3.2. Human networks in big data transformation stage
5.3.3. Human networks in big data consumption stage
5.4. Contribution to literature
5.5. Limitations and future research
6. Conclusion
7. Literature


Abstract

Increasing amounts of healthcare data are becoming available, characterized by heterogeneity and complexity. The ability of systems or components to exchange information and to use that information, known as interoperability, is therefore becoming more important. The human element in interoperability has received little attention in the literature. This paper uses a human interoperability model, following data from the source to the end point in healthcare big data analytics, to answer the research question: How do human networks influence big data analytic interoperability in healthcare? The single-case study uses descriptive and explanatory research methods to examine the phenomenon. Ten semi-structured interviews were conducted with people involved in healthcare big data analytic practices at all levels of the hospital, the Amsterdam UMC. Further, access was negotiated to a big data analytic report on nurses' documentation behavior in the electronic health record. The results of the interviews support the three propositions, meaning that human networks significantly influence healthcare big data analytic interoperability in the data capture, transformation, and consumption stages.


1. Introduction

The landscape of healthcare data is scattered. Mobile apps, patient records, home data sensors, R&D lab results, and clinical results all hold large amounts of healthcare data. A general tendency can be observed of systems becoming increasingly distributed and decentralized and progressively relying on information technology (Handley, 2014). New technological advances in healthcare provide not only increasing amounts of data but also an increase in the complexity and heterogeneity of healthcare data (Dinov, 2016). This data is also known as big data: data known for its sheer volume, complexity, diversity, and timeliness (Groves, Kayyali, Knott & Van Kuiken, 2013). Analyzing big data is a promising solution in healthcare for extracting insights from large datasets and improving outcomes while reducing costs (Raghupathi & Raghupathi, 2014). The role of interoperability, which is the ability of systems or components to exchange information and to use that information, is therefore becoming increasingly important and, not surprisingly, influences the performance of organizations (Geraci et al., 1991; Kadadi, Agrawal, Nyamful & Atiq, 2014).

Poor information management and inadequate integration of healthcare technologies and systems have led to an increase in medical costs (Wang, Kung, Ting & Byrd, 2015). Further, the healthcare industry has been slow to adopt big data practices relative to other industries (Wang, Kung, Ting & Byrd, 2015). Scholars mention reasons such as privacy and security, but the challenge that needs significantly more attention is providing high-quality data, which can be achieved by generating accurate and complete data (Adler-Milstein & Jha, 2013). A large fraction of data is entered by humans, and humans cause data entry errors. More important are the systemic problems with clinical data in electronic format, such as the widely varying physician documentation styles (Adler-Milstein & Jha, 2013). On top of that, humans also have to interpret the resulting information, which is frequently the weak point of business information systems (Kuilen & Jacques, 2015).


Big data analytics, if used well, leads to information, but as Einstein stated, "information is not knowledge" (Kuilen & Jacques, 2015).

Interoperability has generally been perceived as a problem for which a technical solution is needed, but humans are an integral part of network-enabled systems (Handley, 2014). Therefore, there is a need to take humans into account in healthcare big data analytics (Wang, Kung & Byrd, 2018). True interoperability can be achieved by a combination of solutions related to people, process, data, systems, and information (Handley, 2014). But research on the human role in big data analytics in healthcare is scarce. Identifying and establishing social components is critical to big data analytic success (Woodside, 2013). This leads to the following research question:

How do human networks influence big data analytic interoperability in healthcare?

To understand what role humans play in healthcare big data interoperability, and potentially answer the question of why healthcare handles information technology less well than other industries, this paper uses a human interoperability model to analyze human networks, the networks between the human view elements (role, training, system, task, and team), and their influence on big data analytic interoperability (Sahni, Huckman, Chigurupati & Cutler, 2017; Handley, 2014).

IT adoption in the healthcare industry usually lags behind other industries, and it is considered a significant challenge to find big data analytic projects in healthcare (Wang, Kung & Byrd, 2018). Combined with the privacy-sensitive nature of the information and the absence of published research on the topic, it takes a considerable amount of time and effort to get access to healthcare big data analytic projects. This paper provides a unique insight into healthcare big data analytics by covering everyone from the people who gather healthcare data to the person in the highest position responsible for the big data strategy of the largest hospital in the Netherlands.

By following the data from the source to the end point in the Amsterdam University Medical Center (Amsterdam UMC), and analyzing the influence of human networks on interoperability in big data analytic processes through 10 semi-structured interviews and a human interoperability model, this paper provides new insights into the challenges of big data analytics in healthcare. The contribution of this paper to the literature on big data analytics is twofold. First, the role of big data analytics in a Dutch hospital is examined, providing insights into big data developments in healthcare and fulfilling the need to look at and accumulate information regarding the practical applications of healthcare big data (Jee & Kim, 2013). Second, the paper looks into the role of human interoperability in big data analytics, a topic of significance on which the literature is scarce (Handley, 2014).

This paper is organized as follows. The next section provides a literature review of the relevant topics related to the human role in big data analytics in Dutch healthcare. Based on the literature, three propositions are introduced. This is followed by the method section, which explains the research methodology and elaborates on how data was collected and analyzed. After that, the results of the interviews are presented. Finally, the discussion and conclusion are provided, including an interpretation of the results, an analysis of the limitations, and suggestions for future research.

2. Literature review

The literature review describes the principal concepts relevant to the research field of human interoperability in big data analytics in Dutch healthcare. First, the concept of value-based healthcare is presented. The second section presents the characteristics of big data and is followed by a model that shows the steps of big data analytics in the healthcare industry, known as the healthcare big data architecture. After this, the literature on the benefits and opportunities of big data analytics in healthcare is reviewed, and more information is provided on the Dutch healthcare system. Then the topic of interoperability, and more specifically human interoperability, is examined, and a model is introduced to analyze human interoperability. Finally, challenges related to humans in the process of big data analytics are presented, leading to three working propositions.


2.1. Value-based healthcare

The Centraal Planbureau (CPB, the Netherlands Bureau for Economic Policy Analysis) estimates that by 2040, in the most negative scenario, healthcare costs in the Netherlands will rise to 31 percent of gross domestic product (CPB, 2011). This trend of rising healthcare costs is a worldwide problem (Kaplan & Porter, 2011; Feldman, Martin & Skotnes, 2012).

An idea that has gained much traction in the academic literature over the last decade as a solution for the problems in the healthcare industry is value-based healthcare, which has been described as the strategy that will fix healthcare (Lee & Porter, 2013). Its main principle is more affordable healthcare achieved by increasing the value delivered to patients. Value-based healthcare builds on information provided by measuring outcomes and helps increase efficiency and results (Porter, 2009). The idea of constantly measuring healthcare outcomes, analyzing the results, and considering methods of improvement is not new and stems from the early 20th century (Codman & Akerson, 1931).

Current healthcare models are based more on quantity than on quality, mainly because hospital reimbursement is tied to the number of patients a hospital provides care to (Raghupathi & Raghupathi, 2014). According to Porter (2009), the focus of more affordable healthcare should be on increasing value for patients, measured as the quality of healthcare delivered per dollar. A number of steps need to be taken to achieve a value-based healthcare system. The first step is the measurement and diffusion of health outcomes; data on results will help to improve efficiency and outcomes (Porter, 2009).

Since the introduction of the concept by Porter, there has been little progress in creating a value-based healthcare system (Pendleton, 2018). Big data can potentially play a role in the measurement and diffusion of healthcare outcomes. To achieve this, healthcare organizations will have to overcome the challenges that stand between big data investments and their benefits (Murdoch & Detsky, 2013).


Since there is ambiguity surrounding the term big data, the next section describes the characteristics of big data and how it differs from "normal" data.

2.2. What is big data?

The human brain is able to include only about five considerations in a decision (Abernethy et al., 2010). For some decisions we need far more considerations than the five our brain can process. Big data analytics is a solution that can take large numbers of considerations into account. Big data is often described as large amounts of data that come from an increasing number of sources and cannot be analyzed by traditional data analytic systems because of their complexity and volume (EY, 2014).

However, there is no consensus on the definition of big data; it is often defined by a number of characteristics (Gordon, 2013). Three characteristics are common to most articles describing big data: velocity, volume, and variety (Kaisler, Armour, Espinosa & Money, 2013). These three key characteristics distinguish big data from "normal" analytical data.

Velocity refers to the speed of data creation and to big data's potential to give real-time insights. Most healthcare data has traditionally been fairly static (x-ray films, paper files, etc.), but real-time data can be critical to providing good healthcare, especially in life-or-death situations (heart monitors, blood pressure monitors, etc.) (Kaisler et al., 2013).

Volume is about the quantity of available data, which is still growing exponentially, and about the ability of big data analytics to analyze large data sets (Kaisler et al., 2013).

Variety concerns the different forms in which the data can come: structured, unstructured, and semi-structured data (McAfee, Brynjolfsson, Davenport, Patil & Barton, 2012).

Some also consider a fourth dimension, describing big data using the term veracity. Veracity is about the quality, relevance, predictive value, and meaningfulness of big data. High variety and velocity have a negative impact on veracity (Feldman, Martin & Skotnes, 2012). The healthcare industry is known to have problems with data quality, especially with unstructured data. Data quality is important in healthcare because wrong information could mean the difference between life and death (Feldman, Martin & Skotnes, 2012).

Big data analytics is a complicated process. Data has to go through several steps before it can be used as actionable knowledge. The next section describes the process of turning healthcare data into knowledge using big data analytics.

2.3. Big data analytic architecture

An architecture framework is a model that gives a representation of a complex process and its interacting steps. An architecture framework often represents a perspective on a system and can be used to identify component functionalities and potential benefits (Handley, 2013; Wang, Kung, Ting & Byrd, 2015).

This section introduces the healthcare big data analytic architecture constructed by Wang, Kung and Byrd (2018) and the different steps that are necessary in healthcare big data analytics (see figure 1).

A big data analytic architecture in healthcare generally consists of five layers: data, data aggregation, analytics, information exploration, and data governance. Each layer represents a particular function in healthcare big data analytics. The model enables healthcare managers to understand how to transform healthcare data into meaningful information through big data implementations (Wang, Kung & Byrd, 2018).

The first element in the healthcare big data analytic architecture is the data layer. The data layer represents all sources of data that have the potential to be used for big data analytics. As can be seen in figure 1, there are three types of data: structured, unstructured and semi-structured data (Wang, Kung & Byrd, 2018).

The data extracted from the data layer ends up in the data aggregation layer, which acquires, transforms, and stores data. One of the main challenges of the data aggregation layer is handling incoming data that can possess different characteristics, influencing the next steps in the process. In healthcare, one of the most important but also most complicated sources to extract data from is the electronic health record (EHR) (Kleinikkink, 2018). An EHR is used to register patient information related to the patient's health. The next step in the process is to convert the data to standard formats, sort it by a criterion, and validate it. The last step in the data aggregation layer is to store the data in a database for further processing and analysis (Wang, Kung & Byrd, 2018).
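As a minimal illustration of this aggregation step (convert to a standard format, validate, sort, store), the Python sketch below normalizes two records from hypothetical source systems and stores the valid ones in a small database. The field names, date formats, and validation rules are assumptions made for the example, not details from any actual hospital system.

```python
import sqlite3
from datetime import datetime

# Incoming records in mixed formats (date notation differs per source system).
raw = [
    {"patient": "p1", "measured": "01-05-2018", "value": "142"},
    {"patient": "p2", "measured": "2018/05/02", "value": "n/a"},  # invalid value
]

def transform(record):
    # Convert to a standard format: ISO date, integer value.
    for fmt in ("%d-%m-%Y", "%Y/%m/%d"):
        try:
            date = datetime.strptime(record["measured"], fmt).date().isoformat()
            break
        except ValueError:
            continue
    else:
        return None  # no known date format matched
    try:
        value = int(record["value"])
    except ValueError:
        return None  # validation failed: drop or route to error handling
    return (record["patient"], date, value)

rows = sorted(filter(None, map(transform, raw)))  # sort by patient id

# Store the cleaned records for further processing and analysis.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE measurements (patient TEXT, measured TEXT, value INT)")
db.executemany("INSERT INTO measurements VALUES (?, ?, ?)", rows)
print(db.execute("SELECT * FROM measurements").fetchall())
```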

The analytics layer performs the processing and analysis of the data. Data analysis can be split into three major components: in-database analytics, stream computing, and Hadoop MapReduce (Wang, Kung & Byrd, 2018). Depending on the characteristics of the data and the purpose of the analysis, a method is chosen to analyze the data.

In-database analytics allows data to be processed within a data warehouse by using an analytics platform. It provides high speed processing, a secure environment, and scalability. It is used by healthcare organizations for improving pharmaceutical management and preventive healthcare practice support.

The second data analysis type, stream computing, can process data in (near) real time. The advantage of this method is the opportunity to react quickly to the information produced.

However, the most common tool for big data analytics is MapReduce. MapReduce enables processing of large volumes of data and has the ability to process both unstructured and structured batches of big data (Wang, Kung & Byrd, 2018).
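To make the map/reduce pattern concrete, the sketch below shows the idea in plain Python, outside any Hadoop cluster: a map step emits key-value pairs per record, and a reduce step aggregates all values per key. The record fields and diagnosis codes are hypothetical illustrations, not data from the thesis case.

```python
from collections import defaultdict

# Hypothetical EHR-style records; in a real cluster these would be
# distributed across many nodes and processed in parallel.
records = [
    {"patient": "p1", "diagnosis": "J18.9"},   # pneumonia
    {"patient": "p2", "diagnosis": "I10"},     # hypertension
    {"patient": "p3", "diagnosis": "J18.9"},
]

def map_phase(record):
    # Emit (key, value) pairs: here, one count per diagnosis code.
    yield (record["diagnosis"], 1)

def reduce_phase(pairs):
    # Aggregate all values that share a key.
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

pairs = [pair for record in records for pair in map_phase(record)]
print(reduce_phase(pairs))  # {'J18.9': 2, 'I10': 1}
```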

After the analytics layer, the data arrives in the information exploration layer. This layer creates outputs of healthcare data such as visualization reports, healthcare-related business insights derived from the analytics layer, and real-time information monitoring. Reporting is a critical feature in big data analytics to support daily operations and help managers make better and faster decisions. Real-time monitoring is one of the most important features of the information exploration layer (Wang, Kung & Byrd, 2018).


One layer is important during the whole healthcare big data analytics process: the data governance layer. It is related to all layers in the model and consists of three parts: master data management, data life-cycle management, and data security and privacy management. First, master data management concerns managing data through processes, tools, policies, standards, and governance to support the standardization, incorporation, and removal of data, leading to better data analysis. Second, data life-cycle management concerns managing data throughout its life cycle, from archiving data, through maintaining data warehouses and testing and delivering different application systems, to deleting and disposing of data. Effective data life-cycle management leads to more competitive offerings and supports business goals at lower cost and with fewer missed deadlines. Third, data security and privacy management is considered very important in healthcare, having the role of protecting the privacy of patient data (Wang, Kung & Byrd, 2018).


The healthcare big data analytic architecture (See figure 1) gives a representation of the general processes involved in healthcare big data analytics. The next section will elaborate on the role of big data analytics in healthcare.

2.4. Role of big data in healthcare

An industry in which data is produced with greater promise than in most others is healthcare (Andreu-Perez, Poon, Merrifield, Wong & Yang, 2015). In the healthcare industry, a growing number of structured and unstructured data sources can be observed. According to estimates, 30 percent of all the data in the world is generated by healthcare (Woodside, 2013). Big data in healthcare can be described using three terms: variety, security, and silo.

First, the variety of healthcare big data concerns the different types of data generated in the healthcare industry: structured, unstructured, and semi-structured. Historically, healthcare has produced predominantly unstructured data, such as medical records, handwritten doctor notes, paper prescriptions, images from scans, and admission and discharge records. Big data analysts prefer to work with structured data because its structured nature makes large data sets easier to analyze with big data analytics (Mathew & Pillai, 2015). Semi-structured data combines the two previous types and shares characteristics of both structured and unstructured data (McAfee et al., 2012).
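As an illustration of the three forms, the snippet below contrasts a structured record, a semi-structured record, and an unstructured clinical note. The field names and the note text are invented for the example and do not come from the case hospital.

```python
# Structured: fixed schema, directly analyzable (e.g., a database row).
structured = ("p1", "1954-03-02", 142, 88)  # id, birth date, blood pressure

# Semi-structured: self-describing but flexible (e.g., JSON from a device).
semi_structured = {
    "patient": "p1",
    "device": "home-bp-monitor",
    "readings": [{"systolic": 142, "diastolic": 88, "time": "2018-05-01T08:00"}],
}

# Unstructured: free text, hard to analyze without text mining.
unstructured = "Patient reports dizziness; found sitting on the floor next to the bed."
```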

Second, security of healthcare big data is about the care with which data in healthcare should be handled. The reason for this is the privacy-sensitive nature of the data.

Third, silo describes the current situation in which healthcare stakeholders maintain their own data warehouses of confidential or public healthcare-related information (Jee & Kim, 2013). This is a negative trend because the value of data increases when it is combined with other relevant datasets (Janssen, Estevez & Janowski, 2014). The potential of big data in healthcare is created by combining traditional data with new types of data, both at the individual and at the population level (Feldman, Martin & Skotnes, 2012).

Big data in healthcare has the potential to make health services more efficient and sustainable, shifting healthcare towards prevention, early intervention, and optimal management (Andreu-Perez et al., 2015). Healthcare data can be used to find which treatments have the best outcomes for patients with particular conditions, to identify patterns related to drug use and side effects, and to gain other important information that can improve care for patients and reduce costs. Recent technological progress in the industry has improved the ability to work with healthcare big data, which is characterized by large datasets and different database structures (Groves, Kayyali, Knott & Van Kuiken, 2013). The literature on big data analytics in healthcare identifies the following purposes for which it can be used:

Clinical operations

Big data analytics can be used to determine more relevant and cost-effective procedures to diagnose and treat patients (Raghupathi & Raghupathi, 2014). An example is the Martini hospital in the Netherlands, which achieved a cost reduction of €1,400 per patient by using big data analytics to lower the time patients stay in the hospital. Anonymized data from patients' EHRs is sent to a third-party databank, and a computer program analyzes the condition of the patient and compares it to data from the databank to see which treatments were successful for patients with similar conditions in the past (Waterval, 2018).

Research & Development

Big data analytics can provide three advantages to research and development (R&D). First, big data can support predictive modeling to create faster, leaner, and better targeted R&D in drugs and devices. Second, big data can help improve clinical trial design and patient recruitment to better match treatments to individual patients, reducing trial failures and speeding new treatments to market. Third, big data analytics can help analyze clinical trials and EHRs to identify follow-on indications and discover adverse effects before a product is introduced to the market (Raghupathi & Raghupathi, 2014).

Evidence-based medicine

By combining and analyzing structured and unstructured data from EHRs, financial and operational data, clinical data, and genomic data, a patient's risk of getting a disease can potentially be predicted, and treatments can be matched to outcomes to provide more efficient healthcare (Raghupathi & Raghupathi, 2014).

Genomic analytics

Big data analytics has enabled genomic analytics, which has led to more cost-effective and efficient gene sequencing and allows predictions of which treatment is best for the patient (Raghupathi & Raghupathi, 2014; Sie, 2017). The Hartwig Medical Foundation gathers clinical and genetic data from cancer patients in the Netherlands in one databank. Using genome sequencing technology, the foundation can offer targeted therapies that can stop tumor growth with fewer side effects than traditional chemotherapy (Hartwig Medical Foundation, n.d.).

Device/remote monitoring

Big data has enabled the possibility to capture and analyze patient data in real time, using in-hospital and in-home devices. These devices can be used for monitoring and prevention purposes (Raghupathi & Raghupathi, 2014). An example is the MS sherpa app, which will be released in late 2018. MS sherpa is a smartphone app built in collaboration with neurologists and multiple sclerosis specialists to help measure and analyze data of multiple sclerosis patients. It can predict how the disease will develop, and with this information it can offer tailored therapy (Rijt, 2018).


In summary, big data analytics serves a large number of purposes and allows for personalized healthcare: healthcare tailored to the patient. Until now, hospitals have used (relatively) standard procedures for operations and medication prescriptions. Personalized healthcare will lead to better health outcomes while reducing costs, and is therefore closely related to the popular concept of value-based healthcare, i.e. more affordable healthcare achieved by increasing the value for patients (Raghupathi & Raghupathi, 2014; Porter, 2009).

Healthcare industry characteristics are country-dependent. Because this paper examines big data analytics in a Dutch hospital, the next section provides more information on Dutch healthcare and the role of big data in the industry.

2.5. Big data in Dutch healthcare

One of the most important data sources for big data analytics in Dutch healthcare is the electronic health record. EHRs provide information about patients' health, the treatments they have undergone at health institutions, prescribed medicines, and more. Using EHRs for big data analytics is valuable for supporting clinical research and improving clinical knowledge (Andreu-Perez et al., 2015). For now, the value of EHRs in the Netherlands for big data analytics is limited because access to the records is only possible with the patient's authorization. Under certain circumstances the caregiver is allowed to assume that authorization is implicit, for example when informing a fellow practitioner. All infrastructure in Dutch healthcare has to satisfy these regulations (Nictiz, 2017). Moreover, the Netherlands lacks a country-wide EHR system, which means that when a patient is sent to a hospital in a different region, that hospital does not have access to the patient's EHR (Zorgvisie, 2017). A second problem is the difficulty of extracting data from the EHR, which is not built for reusing data (Kleinikkink, 2018).

Despite the growth of available data in healthcare, few organizations have introduced a strategy for handling big data (Woodside, 2013). The domain of big data-sharing agreements remains informal, poorly structured, manually enforced, and linked to isolated transactions (Koutroumpis & Leiponen, 2013). Sharing agreements in the healthcare industry are of great importance because of the confidentiality of the data.

Several parties are involved in the process of big data analytics in Dutch healthcare, often using different, incompatible systems. With the tendency of systems to become more and more distributed and decentralized, interoperability becomes more important than ever (Handley, 2014). Because software systems are unable to connect to each other, much of the data remains unused; the current healthcare systems are not yet ready to analyze big amounts of data (Stichting VUmc CCA, 2018).

The next section reviews the literature on the ability of systems to exchange and use information, also known as interoperability, and the role that humans play in it.

2.6. Human interoperability

Defining interoperability can be challenging because of its context- and perspective-dependent nature (Gasser, 2015). At the fundamental level, in the context of information technology, interoperability is "the ability of two or more systems or components to exchange information and to use the information that has been exchanged" (Geraci et al., 1991).

Different from Gasser (2015), who identified four layers of interoperability (data, human, technological, and institutional), Handley (2014) describes interoperability as a concept that consists of technological and human layers (see figure 2). In this paper the two-layer approach is used. This is motivated by the context of the research, which is focused on a single country's healthcare system, making institutional interoperability, such as the legal system, less relevant. Further, the data layer, which Gasser (2015) describes as "the ability of interconnected systems to understand each other", is already included in the technical layer of Handley's (2014) two-layer approach. The name two-layer approach can be confusing because the model has nine layers; it is called the two-layer approach because the top layers focus on human requirements whereas the lower layers address technical interoperability (Handley, 2014).


The human layer consists of five human interoperability requirement layers: knowledge awareness, aligned procedures, aligned operations, harmonized strategy, and political objectives (Handley, 2014).

Figure 2. Layers of Interoperability (Handley, 2014)

The human interoperability model shows that specific human networks influence one of the five human interoperability requirement layers that define human interoperability, and as a result influence interoperability (see table 1).

This section is structured as follows. First, it explains the importance of interoperability. Second, the concept of human interoperability is further explained. Third, a model is introduced to analyze the influence of human networks on human interoperability requirements.

Big data comes from many different sources and is saved in different information systems. The ability to use and bring together data from different sources is likely to determine the value that can be extracted from the data (Janssen, Estevez & Janowski, 2014). One of the main challenges of big data is to develop and support data interoperability of information systems. The "Raad voor de Volksgezondheid" (RVZ), an advisory body of the Dutch government, stated that there is 'insufficient interoperability of healthcare information systems because of a lack of standardization and willingness to share information' (RVZ, 2014). Both reasons can be explained by the absence of belief that interoperability and information sharing will positively influence healthcare outcomes (Zarzuela, Ruttan-Sims, Nagatakiya & DeMerchant, 2015).

Interoperability has generally been perceived as a problem for which solely a technical solution is needed; the human role in interoperability has therefore received little attention. But true interoperability can be achieved by a combination of people and technical solutions (Handley, 2014). Brown (2010) was one of the first to use the term human interoperability, and surprisingly is still one of the very few. He defines human interoperability as the system-level relationships that affect collaboration among operators interacting through technology environments. Paul, Brown-VanHoozer and Ghafoor (2009) mention the importance of human interoperability and the impact it can have on the effectiveness of systems that allow creation and recreation of knowledge among group members through sharing and interpreting information. Good human interoperability enhances information sharing in technological environments (Handley, 2014).

Human interoperability establishes sustainable human networks that are reliable, trusted, and effective (Brown, 2010). These characteristics are crucial for healthcare big data analytics, because reliable and trusted human networks are likely to lead to trusted and reliable data analytics, the lack of which is a severe problem in healthcare. Further, human interoperability is critical under conditions where technological compatibilities present constraints or team interactions happen across remote locations. Both conditions are relevant to healthcare big data analytics (Feldman, Martin & Skotnes, 2012).

The human interoperability model is used to assess the influence of human networks on each of the five human interoperability requirement layers that define human interoperability and, as a result, influence interoperability (see table 1). A comprehensive human interoperability model must fulfill three conditions (Handley, 2014). The first condition is that it must possess the components that construct the human system.


The human view fulfills this condition: it is used to understand how humans interact with other elements of a system, and it can be used to perform socio-technical analysis (Handley, 2013). Handley (2014) uses five elements to map the human view: role, training, system, task, and team.

The second condition of the human interoperability model is that the networks and connections within and/or to other organizations must be identified, which can be used to identify the associations between the five human view elements. The human networks included are the following (Carley, 2002):

1. Social network: Who knows who

2. Knowledge network: Who knows what

3. Capabilities network: Who has what resource

4. Assignment network: Who does what

5. Work network: Who works where

6. Information network: What informs what

7. Skills network: What knowledge is needed to use what resource

8. Needs network: What knowledge is needed to do what task

9. Competency network: What knowledge is where

10. Substitution network: What resources can be substituted for which

11. Requirements network: What resources are needed to do what task

12. Capital network: What resources are where

13. Precedence network: Which tasks must be done before which

14. Market network: What tasks are done where

15. Interorganizational network: Which organizations link with which

Specifying the relationships between the human elements fulfills the second requirement of a comprehensive human interoperability model. The networks capture not only the relationships that actually exist but also the relationships that should exist, in order to determine human interoperability shortcomings (Handley, 2014).

The third condition of the human interoperability model is that it should show how the human networks contribute to human interoperability. As proposed by Handley (2014), this can be done by linking the layers of interoperability (see figure 3) to the human view components and their networks. The layers of interoperability should be constructed so that they enhance the sharing of information (Handley, 2014). Therefore, the human networks have to be tested for whether they influence information sharing.

The final model (see table 1) is used to identify what influence the human networks between the human view elements (x-axis) have on the human interoperability requirements (y-axis) (Handley, 2014).

Figure 3. Human Interoperability Requirements / Layers of Human Interoperability (Handley, 2014). [Figure: a stack of the five human layers: political objectives, harmonized strategy/doctrines, aligned operations, aligned procedures, and knowledge/awareness.]

Table 1. Human Interoperability Model (Handley, 2014). Rows are the human interoperability requirements; the columns of the original table are the human view elements: role, training, system, tasks, and team.

Harmonized strategy: (1) social network, (2) knowledge network, (3) capabilities network, (4) assignment network, (5) work network
Knowledge awareness: (6) information network, (7) skills network, (8) needs network, (9) competency network
Aligned procedures: (10) substitution network, (11) requirements network, (12) capital network
Aligned operations: (13) precedence network, (14) market network
Political objectives: (15) interorganizational network
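For readers who want to work with the model programmatically, the sketch below encodes table 1 as a simple mapping from each human interoperability requirement to the networks that influence it, and shows how one could check which requirements a given set of observed networks touches. This is an illustrative data structure, not part of Handley's (2014) formalism.

```python
# Table 1 as a mapping: requirement -> networks that influence it.
HUMAN_INTEROPERABILITY_MODEL = {
    "harmonized strategy": ["social", "knowledge", "capabilities", "assignment", "work"],
    "knowledge awareness": ["information", "skills", "needs", "competency"],
    "aligned procedures": ["substitution", "requirements", "capital"],
    "aligned operations": ["precedence", "market"],
    "political objectives": ["interorganizational"],
}

def requirements_touched(observed_networks):
    """Return the requirements influenced by the networks observed in a case."""
    observed = set(observed_networks)
    return [req for req, nets in HUMAN_INTEROPERABILITY_MODEL.items()
            if observed & set(nets)]

# Example: interviews reveal evidence about the social and skills networks.
print(requirements_touched(["social", "skills"]))
# ['harmonized strategy', 'knowledge awareness']
```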

In summary, human interoperability uses a human view on the design of a system to include socio-technical aspects (Handley & Smillie, 2008). It can be used to understand the interactions between humans and other elements of a system, depicted by the human networks, and the impact of those networks on the human interoperability requirements (Handley, 2014). Taking human interoperability into account in a system design will lead to better integration and interaction among the humans involved, improving congruent behaviors for collaborative tasks (Handley, 2014).

Handley's (2014) model originates from a model by Tolk (2003), called Layers of Coalition Interoperability, which includes layers of both technical and organizational interoperability. The layers were originally focused on a military setting, in which the research on human interoperability was conducted. However, the different layers are also relevant in other industry settings, as shown by Handley (2014).

The next section examines challenges in big data analytics that are related to human networks. The challenges are structured by connecting them to the relevant stages in the healthcare big data analytic architecture, resulting in three propositions on the role of humans in big data analytics in healthcare.


2.7. Challenges for big data analytics

The challenges of big data analytics have been well described by scholars, but not all of them are related to human networks. This section reports the human-related big data analytic interoperability challenges described in the literature, or in other words, how humans influence the ability of systems or components to exchange information and to use that information (Geraci et al., 1991). The human challenges are connected to the steps of the healthcare big data analytic architecture to which they are relevant, leading to three working propositions. These general propositions are preferred over more specific ones because the limited research on the specific variables in the conceptual model (see figure 4) forms an obstacle to formulating propositions backed by the literature.

Capturing Data

Unlike most other industries that use big data successfully, in healthcare much data is entered into information systems by humans, which increases the chance of systematic errors (Adler-Milstein & Jha, 2013). Missing data can be the effect of a negative outcome, in which case there is nothing to document, or of a human error of not documenting the outcome in the EHR (Lee et al., 2017).

Further, current reimbursement policies require extensive documentation. Clinicians react by using copy-paste features, which increases the probability of mistakes or of filling in old information. Also, physician documentation styles vary greatly, making errors difficult to identify (Adler-Milstein & Jha, 2013).

It is therefore likely that human networks play a role in the ability of systems or components to exchange information and to use the information that has been exchanged in the data capture stage. This leads to the following proposition:

Proposition 1: Human networks influence healthcare big data analytic interoperability in the data capture stage.

Transforming Data

Transforming data involves two layers: the data aggregation layer and the data analytics layer (see figure 1). The data aggregation layer acquires, transforms, and stores data, and the data analytics layer analyzes the aggregated data (Wang, Kung & Byrd, 2018).

As previously mentioned, data in healthcare is fragmented (Handley, 2014). A problem of fragmented data is the difficulty of merging data from heterogeneous sources spread across labs, operating theaters, hospital systems, financial IT systems, and EHRs (Schultz, 2013). The human role in combining data sources and deciding which sources to use thus appears critical in healthcare big data analytics, because it influences the results.

In healthcare, most quantitative data is generated by machines, while qualitative data comes from human free text in the EHRs (Schultz, 2013). Human text in the EHR is used to provide additional information about a patient, and such unstructured, human-generated data is more difficult to analyze than structured data. It is therefore likely that humans influence the ability of systems or components to exchange information and to use the information that has been exchanged in the data transformation stage. This leads to the following proposition:

Proposition 2: Human networks influence healthcare big data analytic interoperability in the data transformation stage.

Consuming Data

After the data has been analyzed, it becomes information. Turning information into knowledge has been a problem with big data analytics in the past. This is a result of naivety: thinking that using big data does not require contextual or domain-specific knowledge for analysis and for getting the desired results (Mathew & Pillai, 2015). Working with big data requires certain skill sets (Schultz, 2013; Mathew & Pillai, 2015).

In addition, creating actionable knowledge out of collected data requires leadership support. Leaders should set expectations on how cultural and organizational guidelines will be structured to support these activities (Hamm, 2006). It is therefore likely that human networks play a role in the ability of systems or components to exchange information and to use the information that has been exchanged in the data consumption stage. This leads to the following proposition:

Proposition 3: Human networks influence healthcare big data analytic interoperability in the data consumption stage.


3. Methodology

This section discusses the research methodology. First, the research design is explained, describing the overall strategy to effectively address the research problem and the measures taken to ensure reliability and validity. Second, the data collection and analysis methods used in this research are elaborated on.

3.1. Research design

The paper examines the role of human networks in healthcare big data analytics in a Dutch hospital using a human interoperability model, answering the following research question:

How do human networks influence big data analytic interoperability in healthcare?

In order to identify the influence of human networks on interoperability, a case study approach is used, intended as an in-depth study examining a phenomenon using multiple data sources (Creswell, 1998). The research question is examined by dividing it into three propositions, which propose that human networks influence healthcare big data analytic interoperability in three different stages of healthcare big data analytics.

This qualitative research is both descriptive and explanatory. The benefit of qualitative methods is that they take context into account in a way that quantitative research cannot (Poblete & Grimsholm, 2010). The descriptive part, in which "this type of case study is used to describe an intervention or phenomenon and the real-life context in which it occurred" (Yin, 2003), describes the process of big data analytics in a Dutch hospital, the Amsterdam University Medical Center (Amsterdam UMC). The research is intended to give insights into the development of big data analytics in the Amsterdam UMC in its real context, but also to serve as an illustration of how big data analytics is used in hospitals. The explanatory part tries to explain the role of human networks in each stage of the big data analytic process in a healthcare setting using a human interoperability model.

The results of this thesis contribute to the literature by applying an existing model, an interoperability model, to the fast-changing, promising solution of healthcare big data analytics. More specifically, the focus is on human interoperability, a concept that has been overlooked in the past. By analyzing the influence of human networks on interoperability, the paper shows the relevance of humans to interoperability and the importance of taking human networks into account when using big data analytics as an organization.

Generalizability of results has been a problem in using case studies for research (Yin, 2003). However, the case of big data analytics in the Amsterdam UMC generalizes to the largest hospital in the Netherlands. Further, case studies rely on analytical generalization, which in this thesis is done by generalizing to the theory on the role of human networks in healthcare big data interoperability (Yin, 2003).

Another concern for explanatory case studies is internal validity; the techniques to achieve it are difficult to identify (Yin, 2003). Yin (2003) mentions pattern matching as the preferred analytic strategy in case study research. Therefore, pattern matching is executed by examining the interviewees' different answers on the role of human networks.

The single case is drawn from a Dutch hospital, the Amsterdam UMC. A single case study is preferred over a multiple case study when the research goal is to produce additional and better theory (Dyer & Wilkins, 1991). The case study is executed from the perspective of the human networks involved in big data analytics in the context of a hospital. A single case study helps answer the research question, bringing a better understanding of a complex issue and adding strength to what is already known about the role of humans in big data analytics. Further, a single case study supports contextual analysis, which is important in the field of big data analytic research (Andrade, 2009).

The necessary data was acquired by conducting 10 semi-structured interviews. A combination of predetermined questions and questions that arose during the interviews allowed for additional questions to the participants to gain a better understanding of the topic, thereby taking construct validity into account (Saunders & Lewis, 2012). Answers in earlier interviews influenced the questions asked in later interviews. The interviewees were approached using personal contacts and contact information on websites and LinkedIn, and were contacted by email and phone.

One of the drawbacks of interviews is that the data is known to be subject to biases. To limit bias, interviews were conducted with people from different departments at all levels of the organization (Eisenhardt & Graebner, 2007). The sources were chosen by purposive sampling, a non-probability sampling technique that allows focusing on the particular characteristics of a population best fit to provide an answer to the research question (Lærd Dissertation, n.d.).

As proposed by Patton (2015), an interview protocol was used to ask questions for specific information related to the aim of this paper's research question and to have conversations about relevant topics (see Appendix A). Fixed questions also help keep interaction with the interviewee to a minimum, minimizing researcher expectancies and bias. Finally, fixed questions increase reproducibility, the lack of which is a commonly heard criticism of qualitative research (Mays & Pope, 1995). However, (big) data analytics is expected to become increasingly common in organizations, and with this the human networks will probably change. It is therefore unlikely that conducting this research in the future would generate results similar to the ones in this paper.

3.2. Data collection

This study relies on four sources of information. First, the literature on human interoperability in healthcare big data analytics was examined. Second, a total of 10 interviews were conducted for this research, of which one was a pilot interview with a co-assistant (a medical intern). The interviews lasted approximately 40 minutes each. The number of interviews was chosen based on how much new information additional interviews brought. The research follows healthcare big data through the different big data analytic processes, and people involved in all stages of healthcare big data analytics were interviewed (see table 2). Third, field notes were used that were taken during a congress about Dutch healthcare innovations and big data in The Hague (see appendix B). Fourth, access was negotiated to a recent unpublished big data study in which 10 interviews were conducted with nurses working at the Amsterdam UMC. These interviews concern the nurses' documentation behavior in EHRs when describing fall incidents in the hospital in free text (Kahrel, 2018). The study gives valuable insights into the data capture phase of healthcare big data analytics. This gives a total of 20 interviews used for this thesis.

Multiple sources of evidence were used: an external healthcare data expert from Nictiz, an expertise centre for e-health; nurses; medical specialists working with big data analytics; managers; and a member of the board of the hospital (see table 2). Using data source triangulation increases the ability to interpret findings (Thurmond, 2001). The external healthcare consultant has general knowledge of big data analytic developments in healthcare; the medical specialists working with big data know about the current state of big data usage and specific big data projects; and the managers at the hospital's data department have knowledge of big data developments within the hospital and future plans. The final interview was with the board member of the Amsterdam UMC responsible for IT and big data.

The main distinctive characteristic of the respondents is that they operate at different levels of the big data analytic process. Whereas some have knowledge of the general topic, others know more about the practical implications. All interviewees have knowledge of the role of human networks in big data analytics.

The participants were selected based on two criteria. First, they had to work at the Amsterdam University Medical Center, a hospital located in Amsterdam, or have a relevant relation to it. This hospital is unique in regard to its status as a university medical center: doctors working at the hospital combine their work with doing research, which makes them a good fit for interviews on a fast-developing subject such as big data. Second, they needed to have a relevant connection to big data analytics in Dutch healthcare.


Ten interviews were conducted in person at the office of the participant. To avoid language barriers and misinterpretation of data, all interviews were conducted in the native language of the interviewees, which is Dutch. The other 10 interviews had already been conducted in a lab at the Amsterdam UMC, where the nurses were interviewed on their documentation behavior in the EHR; these interviews were also conducted in Dutch.

Before the interviews started, permission was asked to record them. The interviewees were asked about the human networks in healthcare big data analytics using the questions from Handley's (2014) research on human networks. The responses to these questions were used to identify patterns in the human networks and in how they influence the human interoperability requirements.

Table 2. Overview of the interviewees

3.3. Data analysis

When the interviews were finished, the audio was transcribed with the help of the free version of a program called Amberscript. NVivo 12 Pro was used to collect the text-based research data. The second step was to organize and reduce the data by coding it in NVivo 12 Pro. A combination of predefined codes and new codes that arose during the coding process was used to open code the data. Next, the data was organized by grouping data with similar themes. The final step was to extract meaning from the coded data and search for patterns in ideas, themes, and so on.
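As a toy illustration of this coding workflow (outside NVivo), the sketch below tags interview fragments with open codes and then groups them under higher-level themes. The fragments, codes, and themes are invented examples, not material from the actual transcripts.

```python
from collections import defaultdict

# Invented interview fragments tagged with open codes.
coded_fragments = [
    ("We only meet the data department once a week.", ["collaboration"]),
    ("Nobody told us how to log a fall incident.", ["training gap", "data quality"]),
    ("Free text is easier for us than the fixed fields.", ["data quality"]),
]

# Grouping step: map higher-level themes to the codes they cover.
themes = {
    "big data challenges": {"data quality", "training gap"},
    "collaboration": {"collaboration"},
}

grouped = defaultdict(list)
for fragment, codes in coded_fragments:
    for theme, theme_codes in themes.items():
        if theme_codes & set(codes):  # fragment carries a code in this theme
            grouped[theme].append(fragment)

for theme, fragments in grouped.items():
    print(theme, "->", fragments)
```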


4. Results

This section summarizes the results of the interviews. First, information is provided about the organization in which the research took place and about the big data projects identified during the interviews. This is followed by the results on the human networks, in which for each network details on the actual network are provided, as well as how the network should look to positively influence the human interoperability requirements. This gap between the actual and the "goal" human networks helps in assessing the influence of each human network on the human interoperability requirement.

4.1. Case information

The research took place at the Amsterdam University Medical Center (Amsterdam UMC). The Amsterdam UMC is unique in regard to its status as a university medical center: doctors combine their work as a doctor with doing research. Recently a merger took place in which the AMC and the VUmc, two academic hospitals in Amsterdam, became the Amsterdam UMC. The hospital has more than 15,000 professionals providing healthcare to 350,000 patients per year (Amsterdam UMC, 2018).

The number of big data analytic projects is still relatively limited considering the potential. Several projects using big data analytics were identified in the Amsterdam UMC. All projects, except the analysis of omics data in Cancer Center Amsterdam, are still in the research phase, which can take a considerable amount of time. During the interviews it was mentioned several times that the interviewees were not sure whether their project really uses big data analytics. Although some projects did not fulfill the conditions of the four V's (velocity, volume, variety, and veracity), they are likely to develop into big data analytic projects in the future.

The interviews were held with people working on four different projects, which are the following:


Project 1

"Right dose right now" is a project at the intensive care department that develops software serving as decision support for optimal antibiotic dosing for patients in intensive care. The project is still in the research phase, trying to find the best predictive model for antibiotic dosing for intensive care patients.

"Crazy enough, until recently it was common to give every patient the same standard dose of antibiotics."

The models can use approximately 30,000 data points from the EHR per patient per day. More projects like this are being developed at the intensive care department and will be used as decision support for doctors in the future. The reason the intensive care unit is conducting several (potential) big data analytic projects, and that its big data analytic research is relatively well developed compared to the rest of the hospital, is that it has had a digital patient information system similar to the EHR for approximately 15 years.

“Before we got a hospital wide EHR system in 2016, we at the intensive care had our own electronic health system that gathered data generated by machines and other information of the patient”
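To give a sense of what a dosing decision-support model could look like in its very simplest form, the sketch below fits a linear model that predicts an antibiotic dose from a few patient features. The features, training data, and choice of model are purely illustrative assumptions; the interviews do not describe the project's actual models at this level of detail.

```python
import numpy as np

# Hypothetical training data: one row per patient
# (weight in kg, creatinine clearance in ml/min) -> observed optimal dose (mg).
X = np.array([[70, 90], [80, 60], [60, 110], [90, 45]], dtype=float)
y = np.array([1000, 750, 1150, 600], dtype=float)

# Ordinary least squares with an intercept term.
X1 = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

def predict_dose(weight_kg, creatinine_clearance):
    # Apply the fitted coefficients to a new patient's features.
    return float(np.array([weight_kg, creatinine_clearance, 1.0]) @ coef)

print(round(predict_dose(75, 80)))  # suggested dose for a new patient
```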

Project 2

The second project tries to identify fall incidents of hospitalized patients using text mining techniques on the EHRs. The project is a collaboration with an IT company with a track record in machine learning.

Fall incidents in the hospital are often described in free text in the EHR, which makes them hard to identify. Machine learning can help to identify fall incidents in free text. The ultimate goal of the project is to use the EHR to identify people with an increased risk of falling before they actually fall.


"Approximately 100 fall incidents per year are reported by hospitals in the Netherlands. The literature describes that 2 to 15 percent of older adults fall. This means that a factor of 10 to 20 more incidents happen than are reported by the hospitals"

The project started in 2016 and is still in the research phase.
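As a rough sketch of the kind of text mining such a project might start from (not the project's actual pipeline, which the thesis does not document), the example below trains a bag-of-words classifier to flag nursing notes that describe a fall. The notes and labels are invented, in English rather than Dutch, and scikit-learn is assumed to be available.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training notes; real notes would be Dutch free text from the EHR.
notes = [
    "patient found on the floor next to the bed",
    "patient slipped in the bathroom and fell",
    "patient slept well, no complaints",
    "wound dressing changed, vitals stable",
]
labels = [1, 1, 0, 0]  # 1 = note describes a fall incident

# Bag-of-words features (unigrams and bigrams) + logistic regression.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(notes, labels)

print(model.predict(["patient fell while walking to the toilet"]))  # expected: [1]
```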

Project 3

The third project is building a high-performance computing expertise center. The project has received €100,000 to buy and work with a machine that can solve big data problems requiring large amounts of computing power. The goal of the project is to inform and bring together people who want to work with big data analytics and to facilitate their needs.

Project 4

Cancer Center Amsterdam, which is part of the Amsterdam UMC, works with omics: genomics, proteomics, and other biological information that generate overwhelming amounts of data. The data is used to decide what therapy and medicines to offer to patients.
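
Purely as a hypothetical illustration of how omics results could inform therapy choices, the sketch below matches mutations detected in a sample against a table of candidate therapies. The gene-therapy pairs are invented for the example, and no actual Cancer Center Amsterdam pipeline is implied.

# Toy illustration: real omics pipelines are far more complex, and the
# gene-therapy pairs below are invented for the example.
THERAPY_TABLE = {
    "GENE_A": "targeted therapy A",
    "GENE_B": "targeted therapy B",
    "GENE_C": "targeted therapy C",
}

def suggest_therapies(detected_mutations):
    """Return candidate therapies for the mutations found in a sample."""
    return {gene: THERAPY_TABLE[gene]
            for gene in detected_mutations if gene in THERAPY_TABLE}

print(suggest_therapies(["GENE_A", "GENE_X"]))  # {'GENE_A': 'targeted therapy A'}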

In summary, the projects differ in their stage of development, the data sources they use, and the goals they pursue. In terms of development, project 4, which analyzes omics to provide personalized healthcare, for example in cancer treatment, is the only project that is common practice at this point in time. The fall incident project, project 2, is in the research phase and not yet common practice. Project 1 is also in the research stage but is already testing its model on real patients at the intensive care.

In terms of data sources, projects 1 and 2 both use the EHR, in which many different sources of data come together, as the main source for big data analytics, whereas project 4 uses blood samples from approximately 600 people a week, resulting in 6 terabytes of machine-generated data.


In terms of goals and analytic methods, project 1 is creating a decision support system that, when finished, will use stream computing. Project 2 would also like to use stream computing in the future, as was mentioned during interview 6: “Goal of this project is in the end to get it in the live EHR”. The other projects do not use or need live data streaming. Project 2 is focused on identifying fall incidents using the EHR, not on decision support, and the goal of project 4 is to provide personalized medicines and healthcare.
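
To clarify what stream computing means in this context, the toy sketch below scores events as they arrive instead of in a nightly batch. The event feed, field name, and threshold rule are all invented; no actual hospital system or EHR interface is implied.

# Toy illustration of stream computing: score EHR events as they arrive.
import time

def ehr_event_stream():
    """Stand-in for a live feed of vital-sign measurements."""
    for heart_rate in [72, 75, 74, 130, 76]:
        yield {"heart_rate": heart_rate}
        time.sleep(0.1)  # simulate events arriving over time

def decision_support(event):
    """Placeholder rule; a trained model would replace this in practice."""
    return "ALERT: review patient" if event["heart_rate"] > 120 else "ok"

for event in ehr_event_stream():
    print(event, "->", decision_support(event))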

Although the projects differ in their characteristics, all of them fit the healthcare big data architecture (see figure 1).

4.2. Main finding

The open coding process led to 65 codes. During the coding process the results were constantly compared to see if the data kept supporting emerging categories.

When selectively coding the data, five main themes were identified: big data challenges, big data goals, sources of data, collaboration, and big data stimulation. The next section presents the results per human network. For each network an ideal goal was identified that would stimulate the specific human interoperability requirement. The difference between the actual networks and the potential networks can be used to analyze the influence of human networks on the human interoperability requirements. Compatible and reliable human networks promote effective information sharing and therefore better situational awareness, improved collaboration, and faster decision making (Handley, 2014).

1. Social network: Who knows who?

Goal: Know the people that are involved in big data analytics in the organization

Real situation:

Interview 7:​ “Once a week we take a look at the requests for data extracts of the data department and I show my incoming demands for data extracts.”


Talking about the intensive care big data project: “That is a coincidence, we had a talk with him yesterday”

Answering a question on the collaboration between the VUmc and AMC data departments: “It is not really… It is only on board level but that will also become more and more important in the future”

Interview 6:​ “We are not busy with the science that backs machine learning. To do that we work together with the mathematics and artificial intelligence department at the VU”.

“We are in the process of getting that data also from the AMC.”

Interview 5: “The ICT department is actively working with that”

Interview 8: “This is really an alliance project, so also in the AMC”

“Big data analytics knowledge in The Amsterdam UMC is very much scattered, as you have experienced.”

“By creating a platform we hope to make it easier to find those experts (in big data analytics).”

Conclusion: Most interviewees that work with big data analytics know each other, but improvements can be made. Recently the VU medical center (VUmc) and the Academic Medical Center (AMC) merged to become the Amsterdam University Medical Center (Amsterdam UMC). It is clear that the people in the VUmc working with big data analytics know most of each other's projects, but knowledge of big data projects in the AMC is limited.

Since most people in the VUmc who work with big data know other people involved in big data analytic projects, the social networks are considered to be decent.

2. Knowledge network: Who knows what?

Goal: Have both clinical and data scientific knowledge and a clear big data strategy

Real situation:

Interview 10: Asking about the big data strategy of the hospital: ​“No, it is in development but definitely not defined right now”

Interview 6: “To find the right expertise in big data analytics is difficult. You could see it as a specialism in itself. It is easier to find the expertise outside the hospital, but that leads to opposition from people within the hospital.”


“None of the doctors are educated to fit the challenges that you encounter when you want to inspect and compare different variables to find something in big data sets. We have a limited number of people who are educated as medical specialists but also have knowledge of artificial intelligence. You will need those people to perform these things”.

“I think the Amsterdam UMC has to use its own power and attract people that have knowledge about big data analytics. One of the things we do here is attracting people that have a background in artificial intelligence or have worked with deep learning or machine learning. Those people can be used to perform segmentation problems and predict clinical course.”

“A typical phenomenon you see happening, and I recognize this, is that clinicians who want to use deep learning or machine learning techniques to make predictions can't do it themselves because they miss the expertise, and because of this they approach other organizations. This makes sense, but I think that we have an increasing number of people with that expertise.”

“Researchers are often very much focused on their own piece of research and are super specialized in what they do”

Conclusion: The tendency is that big data analytics should take place within the hospital. Two types of people are needed for big data analytics: people with the “medical” knowledge and people with the “big data analytic” knowledge. The people who possess both skills are still very limited in number, but increasing.

3. Capabilities network: Who has what resource?

Goal: The hospital should have a strong single information system from which data can be extracted for big data analytics and which takes the privacy of patients into account. Further, there should be an in-house computing system that is able to perform big data analytics.

Real situation:

Interview 6: “Until two years ago we did not have electronic health records, we had paper records. This change has an enormous impact on all levels of the organization. And actually we are mostly focusing on making it work and less attention goes to the data that is entered and re-used.”

Interview 2: “The AMC and VUmc are now the Amsterdam UMC but patient data is still in principle separated.”

Interview 3: “The electronic health record is currently one system which is fuelled by other systems such as the lab system and the X-ray system.”

Interview 5:​ “You try to register as much as possible in a structured way but EPIC is not built to use for research.”


Interview 10: ​“We will need to build an infrastructure to facilitate big data analytics but we are not yet that far.”

Conclusion: EPIC as the main EHR information system makes it easier to perform big data analytics, since more people have knowledge of this information system and it is therefore easier to get help. Privacy, which is considered an important condition for information systems, is also easier to regulate when one main system is used. At the same time, however, the EHR is not built for research. The infrastructure to perform big data analytics is missing, and as a consequence different projects use different resources.

4. Assignment network & 5. Work network: Who does what and who works where?

Goal: Everyone works together on big data analytics and expertise is shared within the hospital.

Real situation:

Interview 2: ​”We work together with a number of companies in the area of machine learning. This collaboration works very well since in the team we have substantive knowledge of both machine learning and medicine.”

“So there is an interesting cross-talk between different expertises and then you get results.”

Interview 5: “We are now establishing a data platform on which SAS will run. SAS is an organization that does a lot of data analytics in different industries.”

Interview 6: “This is a co-creation project in which we bring the knowledge and they bring the technique”

“They also have an advisory role; the project is run by the researchers at the VU and the AMC. The role of the organization is to give advice in terms of software.”

Interview 4: “Big data is very good at finding irrelevant associations which can seem relevant, but it is important to have knowledge of the context”.

“To make chocolate of it [a Dutch idiom for making sense of it] you need context, which at this point in time is only offered by humans, and more specifically doctors and nurses.”

Conclusion: Data capturing for big data analytics is predominantly performed within the hospital, but the transformation of big data is most commonly performed by external organizations because of the missing expertise within the hospital. The consumption of big data happens within the hospital because of the doctors' expertise in their own field, which is often necessary to turn information into knowledge.


6. Information network: What informs what?

Goal: Different sources provide healthcare data that comes together in the EHRs. The EHRs are used for big data analytics by data analysts, and the data analysts provide the generated information to the doctor, who turns the information into knowledge and uses this knowledge to inform other doctors and patients.

Real situation:

Interview 8: “The AMC and VUmc are now the Amsterdam UMC but the patient data is in principle still separated.”

“We want to have one point of contact for high performance computing, big data types of analyses in the Amsterdam UMC, for the whole organization. So the board knows what resources the researchers actually need for big data analytics and the researchers know where to get information on big data analytics.”

Interview 3: “The electronic health record is currently one system which is fuelled by other systems such as the lab system and the X-ray system.”

Interview 4: ​”Standardization is the core business of Nictiz.”

Interview 5:​ “You try to register as much as possible in a structured way but EPIC is not built to use for research.”

Information from the noting behavior of nurses in EHRs: in the case of fall incidents, nurses used 21 different words/phrases to indicate a fall incident in free text. This makes identification of fall incidents using the EHR relatively difficult (Kahrel, 2018).

Conclusion: A critical problem in big data analytics in healthcare is the use of the EHR. Hospital workers add information about the patient to the EHR. Some information is standardized, such as heart rate, but a lot of information is entered as free text. Free text is perceived as difficult to analyze using big data analytics. For example, in the case of a patient falling in the hospital, 21 different words/phrases are used to describe the incident. EPIC, the EHR, is fuelled by other information systems, but extracting data from EPIC for research purposes is difficult.

But initiatives are emerging to improve the information networks: the information stored in the EHR is being improved, for example through standardization initiatives, to make it easier to extract data.
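
The sketch below illustrates why such standardization matters for extraction: free-text variants first have to be mapped to one standardized code before incidents can be counted reliably, and any unlisted variant is missed. The phrases are invented examples; the 21 actual words/phrases identified by Kahrel (2018) are not listed in this thesis.

# Invented examples: nurses describe the same event in many different
# ways; a standardization step maps known phrases to one code.
from typing import Optional

FALL_SYNONYMS = {
    "fell out of bed": "FALL_INCIDENT",
    "found on the floor": "FALL_INCIDENT",
    "stumbled and fell": "FALL_INCIDENT",
    "collapsed in the hallway": "FALL_INCIDENT",
}

def standardize(note: str) -> Optional[str]:
    """Return a standardized code if any known phrase occurs in the note."""
    lowered = note.lower()
    for phrase, code in FALL_SYNONYMS.items():
        if phrase in lowered:
            return code
    return None  # unlisted variant: the incident would be missed

print(standardize("Patient fell out of bed at 03:00"))    # FALL_INCIDENT
print(standardize("Patient had a tumble near the sink"))  # None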

7. Skills network & 8. Needs network: What knowledge is needed to use what resource and to do what tasks?
