
An assessment of frameworks and methods for comparing semantic interoperability standards in healthcare.

Academic year: 2021


An assessment of frameworks and methods for comparing semantic interoperability standards in healthcare

Denise van Helvoirt
June 6, 2019

Student
Denise van Helvoirt
Collegecard number: 10711813
E-mail: d.j.vanhelvoirt@amc.uva.nl

Mentor
O. C. Trauschke, MSc
Functional analyst and software developer
E-mail: OTrauschke@chipsoft.nl
Chipsoft

Tutor
R. Cornet, PhD
Associate professor
E-mail: R.cornet@amc.uva.nl
Faculty of Medicine, Department of Medical Informatics, AMC-UvA

Location of Scientific Research Project
Chipsoft
Orlyplein 10
1043 DP Amsterdam
The Netherlands

Research period
December 2018 – June 2019


Contents

Summary
Samenvatting
Keywords
1. Introduction
2. Data and methods
   2.1 Comparison of standards, frameworks and methods
      2.1.1 Literature search for comparison of interoperability standards
      2.1.2 Weight allocation to features
      2.1.3 Comparison of frameworks and methods
      2.1.4 Comparison of two interoperability standards
      2.1.5 Application of the framework to MedMij and the Argonaut project
   2.2 Use-case descriptions
      2.2.1 Description of situational differences between countries
      2.2.2 Assessment of two differences between MedMij and Argonaut
3. Results
   3.1 Comparison of standards, frameworks and methods
      3.1.1 Literature search
      3.1.2 Results of the Delphi study
      3.1.3 Comparison of Health Level 7 version 3 and DICOM
      3.1.4 Comparison of MedMij and Health Level 7 Argonaut project
   3.2 Use-case descriptions
      3.2.1 Situational differences between the United States and the Netherlands
      3.2.2 Possible solutions for two differences between MedMij and Argonaut
4. Discussion
Conclusion
Acknowledgements
References
Appendices
   Appendix A. Inventory of features, their information formats and definitions
   Appendix B. Medians and interquartile ranges for each feature obtained from the Delphi study
   Appendix C. Comparison of the standards Digital Imaging and Communications in Medicine and HL7 version 3


Summary

To achieve interoperability and assist healthcare providers with exchanging patient information, interoperability standards are developed. However, countries may use different sets of interoperability standards, so healthcare information systems may need adjustments for different countries. When adjusting a product, developers need insight into the ability to convert between standards in order to assess the amount of work involved. The ability to convert between standards reflects the impact of the changes that need to be made to implement a new standard. Because comparisons between standards in previous literature focused on different features and levels of detail, this study aims to determine a standard framework for comparing interoperability standards.

This study found that the optimal framework came from Mykkänen et al. (2008), with a weighted score roughly ten times as high as that of two other frameworks, eighteen times as high as that of three others and fourteen times as high as that of one other framework. The weighted score of this framework was 853 out of 1,150 points, and it covered 159 of the 214 features collected during the literature search and Delphi study. The features that make up this framework are divided into eight categories, instead of consisting of only six to ten features covering general information, as was more common in the other assessed frameworks. As an application, this framework was used to compare the standards Digital Imaging and Communications in Medicine and Health Level 7 version 3, which were discussed most often in the literature found, and the projects MedMij and Argonaut, which make use of interoperability standards. The comparison of Digital Imaging and Communications in Medicine and Health Level 7 version 3 with the framework of Mykkänen et al. (2008) yielded a difference score of 364 out of 853 points, with differences in 68 features. For the MedMij and Argonaut comparison the score was 371 out of 853 points, with differences in 69 features. These scores indicate that the difference between both the projects and the standards is slightly less than half of the maximum possible difference. For the comparison of the two interoperability standards the score is quite similar to the results of previous comparisons, in which the two standards also differed in half of the features assessed, though those comparisons assessed only basic information.

The results demonstrate that a framework that assesses more than twenty features and covers a range of subjects is better suited for assessing differences and measuring the ability to convert between interoperability standards. Assessing only basic information, such as the type of standard or degree of adoption, does not reveal differences in areas such as data, security or communication protocols, which can be just as important, if not more so. These results could be applied to support the selection of standards during the development of healthcare systems. Future research should focus on reaching consensus on the definition of each feature, as such definitions were not provided in the literature that was found. Additionally, the inter-rater reliability could be measured to assess whether the results of the optimal framework are consistent.


Samenvatting

To achieve interoperability and support healthcare providers in exchanging information, standards for interoperability have been developed. However, countries may use different combinations of interoperability standards. Because of these differences, healthcare information systems may need adjustments for different countries. To make adjustments to a product, product developers need insight into the ability to convert one standard into another, so that the amount of work required can be estimated. This is the extent of the changes needed to implement a new standard when another standard has already been implemented. Because comparisons between standards in the literature focused on different features and levels of detail, this study aims to determine a framework with which interoperability standards can be compared.

This study found that the optimal framework came from Mykkänen et al. (2008), with a weighted score that was generally ten times as high as that of two other frameworks, eighteen times as high as that of three others and fourteen times as high as that of one other framework. The weighted score of this framework was 853 out of 1,150 and it consisted of 159 of the 214 features collected during the literature study and the Delphi study. The features that make up this framework are divided into eight categories, instead of consisting of six to ten features covering basic information, which was more common in the other assessed frameworks. This framework was used to compare the standards Digital Imaging and Communications in Medicine and Health Level 7 version 3, which occurred most often in the literature found, and the projects MedMij and Argonaut, which make use of interoperability standards. The comparison of the standards Digital Imaging and Communications in Medicine and Health Level 7 version 3 with the framework of Mykkänen et al. (2008) yielded 364 of the 853 points, with differences in 68 features. For the comparison between MedMij and Argonaut the score was 371 of the 853 points, with differences in 69 features. The scores show that the difference between the two projects and between the interoperability standards is just under half of the total possible difference. For the comparison of the two interoperability standards the score was quite similar to the results of earlier comparisons, in which the two standards also differed in half of the features, although the earlier comparisons evaluated basic information.

The results show that a framework that assesses more than twenty features and covers a variety of subjects is better suited to comparing interoperability standards. Assessing only basic information, such as the type of standard and the degree of adoption, does not reveal differences in areas such as data, security or communication protocols, while these can be just as important, if not more so. These results can be used to support the selection of standards during the development of healthcare systems. Future research can focus on reaching consensus on the definitions of features, since such definitions were not provided in the literature found. Furthermore, the inter-rater reliability could be measured to assess whether the results of the optimal framework are consistent.

Keywords


1. Introduction

Interoperability is important to facilitate health information exchange. Healthcare providers require patient information to treat patients, and this information may come from other providers. When care providers have access to more complete information about a patient, they can improve the care they provide: they can work more efficiently and the number of medical errors may be reduced [1, 2]. According to the Healthcare Information and Management Systems Society (HIMSS), interoperability in healthcare can be defined as ‘the ability of different information systems to connect in a coordinated manner across organizational boundaries to access, exchange and cooperatively use data among stakeholders’ [3]. Its aim is to optimize the health of individuals and populations. The HIMSS describes four levels of interoperability: foundational, structural, semantic and organizational. Foundational interoperability pertains to the inter-connectivity requirements necessary for two systems to exchange data. The structure or format of data exchange is covered by structural interoperability; the meaning and purpose of the data stays unaltered at this level. Semantic interoperability means that two or more systems can exchange, interpret and use data, focusing on both the structuring and codification of data. The last level, organizational interoperability, concerns technical, social and organizational components and policies that facilitate the effective exchange and use of data between organizations and individuals.

To achieve interoperability and assist healthcare providers with exchanging patient information, interoperability standards and programs are developed. However, there are numerous standards, and each country or health information system may use a different set of them [4, 5]. For example, the Argonaut project is used in the United States of America (USA) while the MedMij standard is used in the Netherlands. Both of these programs enable the exchange of information, though they take different approaches: the former retrieves information through resources, while the latter focuses on services and service providers through which information can be retrieved. Another example is the use of the xDT standards in Germany and the Standard Structured Medical Information eXchange in Japan for the exchange of patient data [6]. Additionally, there are other attempts and initiatives in healthcare to achieve interoperability [7, 8]. All of this serves to communicate patient information clearly among healthcare providers and patients.

The use of different interoperability standards in different countries poses a problem not only for hospitals but also for businesses that develop products such as electronic health records and medication systems. If a business develops a product that supports a certain set of standards, it may be required to support another set when the product is used in another country. For example, a product that implements a country-specific standard would need to implement another standard when used elsewhere. The differences between countries may lie, for instance, in the standards that are commonly used or required by the government, or in the language used for concepts in the product. In addition to standards, businesses would also need to take into account aspects such as a country's legislation on information exchange. The changes that developers need to make depend on which of these differences the business takes into account.

When making adjustments to a product, developers need to know what changes are required. To determine these changes, it would benefit software developers if the ability to convert between standards were made explicit, so that it can be taken into account when developing a product. The ability to convert between standards reflects the impact of the changes needed to implement a new standard. For example, differences between two standards in one third of the assessed features that each have a small impact would mean the convertibility is low. Similarly, differences in a tenth of the assessed features that have large impacts also mean low convertibility. The fewer differences there are between two standards, and the smaller their impact, the higher the convertibility. Having a clear overview of the


type and extent of the differences between standards, that is, the ability to convert between standards, would assist developers in assessing the amount of work to be done when converting. Additionally, it may help them in selecting standards to implement in their products.

To illustrate the situation described above, imagine the following case. A business sells an electronic health record system to healthcare institutions in country A. This country uses standards of the xDT family, which the company has implemented in its product. To reach more customers, the company starts to offer its system in country B. In contrast to country A, this country uses, for example, the standard Health Level 7 version 2. These two standards differ in areas such as the information models used and their scope [9, 10]. To find these differences, developers would have to read the specifications of the standards. Once the differences have been found and written down, the necessary changes need to be assessed. However, if the features to examine and their impact were made clear beforehand, developers could perform such a comparison in a more structured way.

While comparisons have been made between standards in the literature, these comparisons focus on different features and levels of detail. They are performed using frameworks or methods. Frameworks are structures that underlie a system, concept or text, while methods are systematic procedures that describe steps to be followed. For instance, a method for comparisons can state categories of information that users can freely fill in with information about a standard. In the literature, one framework or method may look at features such as content structure, security and multimedia support; a second at the level of standardization and the architectural concept; and a third at the degree of adoption, the number of message types and the use of an ontology [4, 11, 12].

While the frameworks and methods in these articles looked at only a limited number of features, another article shows that a framework that covers a large number of aspects and a high level of detail can be developed [13]. The fact that previous articles compare standards in different ways makes it difficult to compare their results or to assess new standards, as there is no standard way of comparing features and convertibility [13, 14]. For this reason, a standard framework or method should be chosen for assessing and comparing interoperability standards for information exchange in healthcare. To determine such a framework or method, we aim to answer the following questions:

How can the difference between semantic interoperability standards be measured?

1. What methods and frameworks exist that can be used to compare semantic interoperability standards?

2. How do methods and frameworks that can be used for evaluating interoperability standards differ with respect to the features they assess?

3. How do the results of comparisons of standards differ between a selected method and previous publications?

4. How can differences between compared standards be overcome?

To answer these questions, the literature will be searched for such frameworks and methods. Following that, the Delphi method will be applied to assess the frameworks and methods and their contents. The results will be used to evaluate the frameworks and methods and to choose one based on a calculated score. Using this approach we expect to obtain an overview of frameworks and methods and their suitability for comparing standards, represented by a score that reflects how much information each framework or method assesses relative to the total set of features obtained from the literature study. The optimal framework will then be used to compare two standards. The result of this comparison will be assessed by comparing it with the results of previous


studies. The result of the comparison between two standards will also be a score; however, this score will indicate the amount of difference between the two compared standards.

The next section, data and methods, first describes in more detail the terms and sources used during the literature search, how participants for the Delphi study were recruited and how the study was set up. The latter part of the section describes how the scores were established for the assessment of the frameworks and methods and for the comparison of interoperability standards. The results section describes the frameworks and methods that were found, the opinions gathered from participants during the Delphi study and applications of a framework through comparisons of standards. The discussion summarizes the principal findings, addresses the strong and weak points of this study and the application and impact of its results, and finally outlines the next steps to be taken.

2. Data and methods

The approach used for this study was a literature search for articles and other sources, followed by a Delphi study to make evaluation of frameworks and methods possible. This was followed by comparisons of interoperability standards and of interoperability projects that make use of such standards.

2.1 Comparison of standards, frameworks and methods

2.1.1 Literature search for comparison of interoperability standards

To create an inventory of features with which standards can be described, Google Scholar, PubMed and the Cochrane Library were searched on January 8, 2019. For Google Scholar the first fifty results were considered. Only a few terms were used, so that the search was not more specific than necessary and more results could be found. The following terms were used:

Google Scholar: healthcare AND standards interoperability AND message AND comparison

PubMed: standards[All Fields] comparison[All Fields] interoperability[All Fields]

Cochrane Library: "interoperability" in Title Abstract Keyword AND "standards" in Title Abstract Keyword AND "comparison" in Title Abstract Keyword (word variations have been searched)

As interoperability standards often involve the exchange of information or messages, the term ‘message’ was included for Google Scholar to find more articles about interoperability standards. From the results, articles that described and compared two or more interoperability standards in a structured way were included, as were articles that discussed frameworks for assessing standards. Articles that focused on how to bridge differences between standards were excluded, and duplicate articles were removed from the selection. In addition to the articles found, two more known sources, the Nictiz comparison webpage and the Interoperability Standards Advisory, were included [15, 16]. During the construction of the inventory, synonyms were merged where possible; to determine whether two features were synonyms, their definitions were looked up in the online Cambridge Dictionary [17]. The whole process of the literature search is shown in Figure 1.

Figure 1: Overview of the literature search and the exclusion criteria used.

2.1.2 Weight allocation to features

The Delphi method was applied to reach consensus among experts on interoperability standards about the impact of the features identified during the literature search. Impact was indicated by weights allocated by the experts during the Delphi study. A low weight indicates that the subject a feature covers has a low impact when a change has to be made to convert one standard to another; a high weight indicates a high impact. We recruited as many experts on healthcare interoperability standards as possible [18, 19]. This was done through the network of experts and through organizations focused on interoperability standards, such as Nictiz. The study consisted of three rounds in which participants were requested to fill in an online form. To provide a better overview of the features and improve readability, the features were sorted into nine categories in the forms: ‘basic information and scope of the standard’, ‘information and semantics’, ‘functionality and interactions’, ‘application infrastructure aspects’, ‘technical aspects’, ‘flexibility, accuracy and extensibility’, ‘maturity, usage and official status’, ‘system lifecycle’ and ‘domain-specific features’ [13]. Each round of the Delphi study lasted two weeks, and a reminder e-mail was sent to the participants halfway through each round.

In the first round, experts were asked to review the inventory of features to assess its completeness and to add any features they considered relevant that may not have been present in the literature. Suggestions could be written down in text boxes. Suggested features that were not synonyms of features already present in the inventory were included in the following rounds; suggested synonyms were not. To determine whether two features were synonyms, the online Cambridge Dictionary was used again [17]. Participants were also asked to state which interoperability standards they had knowledge of.

In the second round the experts allocated weights to the features to indicate their impact. A seven-point Likert scale was used to provide a range for the possible weights: a one indicated that the subject a feature covers has a low impact when a change has to be made to convert one standard to another, while a seven indicated a high impact. With each feature, an example of the expected format of the information was provided, based on the articles found during the literature search. The most common formats were ‘yes’, ‘no’, ‘a description’ and ‘not applicable’; the last applied when a feature was outside the scope of the standard. With the results from this round, histograms were made for each feature to show the distribution of allocated weights.

In the last round the same form was used, with the addition of histograms showing, for each feature in the inventory, the distribution of opinions in the previous round. This feedback allowed the experts to compare their opinion to that of others and adjust it if they so desired. Aside from the expected information format, a definition was added to some features using the Cambridge Dictionary; the formats and definitions are listed in Appendix A. Additionally, for features on which opinions were divided, participants were asked to clarify their input. These clarifications would indicate whether differences between opinions were caused by differences in the interpretation of the meaning of features. Opinions about a feature were considered divided if the interquartile range was three or higher, in which case half of the answers were spread across four points or more. The interquartile ranges were determined for both the second and the third round to assess changes in consensus. The final weight for each feature was calculated by taking the median of the weights in the final round.
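The round-level statistics described above can be sketched in a few lines of Python. This is an illustrative sketch, not the scripts used in the study: the responses below are hypothetical, and the quartile convention used here (median of the lower and upper halves) is one of several in common use and may differ from the one applied in the thesis.

```python
from statistics import median

def quartiles(values):
    """Lower and upper quartiles of a list, using the median-of-halves method."""
    s = sorted(values)
    mid = len(s) // 2
    lower = s[:mid]
    upper = s[mid + 1:] if len(s) % 2 else s[mid:]
    return median(lower), median(upper)

def summarize_feature(weights):
    """Final weight (median) and interquartile range of the Likert weights
    allocated to one feature; opinions count as divided when the IQR >= 3."""
    q1, q3 = quartiles(weights)
    iqr = q3 - q1
    return {"weight": median(weights), "iqr": iqr, "divided": iqr >= 3}

# Hypothetical responses from eight experts for one feature (1-7 Likert scale)
responses = [2, 3, 3, 4, 5, 6, 7, 7]
print(summarize_feature(responses))  # {'weight': 4.5, 'iqr': 3.5, 'divided': True}
```

Applied per feature and per round, the `iqr` values would populate a table like Appendix B, and the `divided` flag would select the features for which clarification was requested.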

2.1.3 Comparison of frameworks and methods

For this study, frameworks were defined as structures that underlie a system, concept or text, while methods are systematic procedures that describe steps to be followed. An example of a framework is a table; a method for comparisons can state categories of information that users can freely fill in with information about a standard. To compare the different frameworks and methods found, a weighted score was created based on the features each framework or method considered and the weights assigned to these features. If a feature was considered by a method or framework, this was indicated by a one; absent features were indicated by a zero. As the distance between responses on a Likert scale is not measurable, the median was used to determine the final weight of each feature [20, 21]. The final weighted score for each framework and method was calculated by summing, over all features, the product of the presence and the weight of each feature. The framework or method with the highest score was used in the rest of the study to compare interoperability standards. The number of features that each method or framework assessed was also counted.
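The weighted framework score can be expressed as a short sketch. The feature names, weights and presence values below are hypothetical; only the scoring rule itself, the sum of presence times weight over all inventory features, follows the description above.

```python
def weighted_score(presence, weights):
    """Sum of presence (1 if the framework assesses the feature, else 0)
    times the feature's Delphi weight, over all inventory features."""
    return sum(presence.get(feature, 0) * w for feature, w in weights.items())

# Hypothetical final weights (medians from the Delphi study)
weights = {"information model": 6, "security": 5, "message types": 4}
# Hypothetical framework that assesses two of the three features
framework = {"information model": 1, "message types": 1}
print(weighted_score(framework, weights))  # 6*1 + 5*0 + 4*1 = 10
```

The framework with the highest such score relative to the 1,150-point maximum would be the one selected for the comparisons.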

2.1.4 Comparison of two interoperability standards

To determine the two standards that would be used to validate the chosen framework, the number of occurrences of each standard in the articles found during the literature search was counted. Whether the specification for the standard was freely available was also taken into account. The framework was filled in with information from the specifications, according to the formats found in the


respective framework, which were also used in the Delphi study. The respective specifications were accessed in February 2019. To confirm the information gathered from the specifications, members of the organizations that maintain the respective standards were asked to review the filled-in framework. To measure the differences between interoperability standards, a weighted score was created based on the features in which they differ and the weights of those features. The unweighted score, the number of features in which the two standards differ, was also determined. If the standards differed with respect to a certain feature, this was indicated with a one; no difference was indicated by a zero. The presence or absence of a difference was then multiplied by the respective weight of the feature, and these products were summed to obtain the final weighted score of the comparison. This score indicates how much the standards differ from each other, and can be compared to the maximum possible score to show how much two standards can differ in total. To assess the results of the comparison, the score was compared with the findings of the articles found during the literature search; this could reveal differences between the conclusions drawn and the types of features used.
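The weighted and unweighted difference scores for a pair of standards can be sketched in the same way. The feature names, weights and filled-in values below are hypothetical and serve only to illustrate the scoring rule, not the actual comparison results reported later.

```python
def difference_scores(std_a, std_b, weights):
    """Weighted and unweighted difference between two filled-in frameworks:
    a feature contributes its weight when the two standards' entries differ."""
    diffs = [f for f in weights if std_a.get(f) != std_b.get(f)]
    weighted = sum(weights[f] for f in diffs)
    return weighted, len(diffs)

# Hypothetical weights and filled-in feature values for two standards
weights = {"information model": 6, "security": 5, "multimedia support": 3}
std_a = {"information model": "model A", "security": "yes", "multimedia support": "no"}
std_b = {"information model": "model B", "security": "yes", "multimedia support": "yes"}
print(difference_scores(std_a, std_b, weights))  # (9, 2)
```

With the framework of Mykkänen et al. (2008), the weighted score would be reported out of the 853-point maximum, as in the comparisons of HL7 version 3 with DICOM and of MedMij with Argonaut.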

2.1.5 Application of the framework to MedMij and the Argonaut project

As an additional application, the chosen framework was applied to compare interoperability projects that make use of interoperability standards, to show that the framework is not limited to standards alone. The two projects, both developed in the past three years, were MedMij and the Argonaut project. Their specifications were accessed in March 2019. The scoring for this comparison was similar to that of the previous comparison.

2.2 Use-case descriptions

2.2.1 Description of situational differences between countries

In addition to using a framework or method to compare standards, the different situations in countries were assessed in a case study. These situations were examined because differences in legislation or data items may make one standard favorable or unsuitable. Because the situation in a country is separate from the standards that are used, this part is not included in the scores of the comparisons; instead, the use case is a separate description. The case used was a situation in which a product developed in a smaller country, the Netherlands, was to be made available in the USA. First, legislation concerning the exchange of patient data was retrieved from government websites and MedMij on April 9, 2019 [22-32]. To illustrate differences in country-specific data items, unique patient identifiers and diagnosis-related groups were taken as examples. On April 23, 2019, Google Scholar was searched on the subject of diagnosis-related groups, and Google for information about unique patient identifiers, of which the Dutch citizen service number is an example. Sources that covered the identification of patients, the use of identifiers, the various diagnosis-related group systems or their differences were investigated. The following search terms were used:

Google Scholar: DBC and DRG


2.2.2 Assessment of two differences between MedMij and Argonaut

To answer the last research question, two differences between the standards that were found in the features were taken as an example, and recommendations on how they might be resolved were searched for. So as to have information about the differences, only features for which the format of the information was a description were selected; features that covered whether a specification included certain information or not were excluded. To find information about how to resolve the differences, search strings were established and implementation pages were investigated where available. Figure 2 illustrates the methods used in this study step by step, along with substeps where possible.


3. Results

3.1 Comparison of standards, frameworks and methods

3.1.1 Literature search

The literature search gave 23,500 results from Google Scholar, 31 from PubMed and six from the Cochrane Library. Of the first fifty Google Scholar results, nine articles approached the subject of standards and comparisons and were selected for closer examination; in PubMed and the Cochrane Library no such articles were found. Of these nine, four articles were excluded because they did not concern a comparison or did not compare standards in a structured way. Including the two sources of Nictiz and the Interoperability Standards Advisory, this resulted in seven sources [4, 11-13, 15, 16, 33]. As these sources used structures in the form of tables or forms to compare standards, instead of describing a procedure with steps, they are frameworks. From these sources a total of 223 features were identified. Of these features, 22 were synonyms of nineteen other features; the inventory therefore consisted of 201 features after merging the synonyms.

Table 1 below depicts the standards that were discussed in each source. They are all standards that focus on the exchange of patient information, and each standard was assessed at least once in one of the sources. A black, closed bullet indicates that a source addressed that specific standard, while a white, open bullet indicates the absence of that standard in a source. The more specific standard names were used where possible. The standards Health Level 7 version 3 (HL7 version 3) and Digital Imaging and Communications in Medicine (DICOM) occurred most often in the articles, five and four times respectively. The standard specifications for HL7 version 3 and DICOM were also freely accessible. For these reasons, these two standards were used for validating the optimal method or framework.

Table 1: Overview of the sources and which interoperability standards they discuss (black) and which they do not discuss (white).

Standard                                 [11]  [13]  [12]  [4]   [33]  [15]  [16]
HL7 version 2                             •     o     o     o     o     •     •
HL7 version 3                             •     •     o     o     •     •     •
HL7 FHIR                                  •     o     o     o     o     •     •
HL7 CDA                                   o     o     o     •     o     o     •
HL7 CCOW                                  o     o     •     o     o     o     o
DICOM                                     o     o     o     •     •     •     •
CEN/TC 251                                o     o     •     •     •     o     o
CEN EN 13606                              o     o     o     •     o     o     o
CEN EN/ISO 12967                          o     o     o     o     •     o     o
Integrating the Healthcare Enterprise     o     o     o     •     o     o     •
CORBAmed                                  o     o     o     o     •     o     o
OpenEHR                                   o     o     o     •     o     •     o

Columns: [11] Bender, D. et al. (2013); [13] Mykkänen, J.A. et al. (2008); [12] Spyrou, S.S. et al. (2002); [4] Eichelberg, M. et al. (2005); [33] Kitsiou, S. et al. (2006); [15] Nictiz; [16] Interoperability Standards Advisory.

3.1.2 Results of the Delphi study

Through the network of experts and Nictiz, twenty experts were approached for participation in the Delphi study, of whom fifteen agreed to participate. Table 2 gives an overview of the interoperability standards the experts had knowledge of. Standards of Health Level Seven International are the most common, with frequencies of seven, six, six and ten. Of the participants willing to take part in the Delphi study, thirteen, nine and eight responses were received for the first, second and last round respectively. From the suggestions made by the participants in the first round, thirteen features were included. Suggestions that were synonyms may have been caused by participants overlooking some features in the first round, as the inventory was extensive. This brought the total number of features in the list up to 214. The results of the second round showed that for 39 features the opinions of the participants were divided. Providing definitions for the features in the third round had no clear effect: the interquartile ranges for the second and third round showed that no additional consensus was achieved in the third round. For the majority of the features with an interquartile range of three or more, the interquartile range stayed the same. Similarly, clarifications given by participants did not indicate differences in interpretation of the features. The results of both rounds are listed in Appendix B.
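The consensus criterion described here, an interquartile range of three or more indicating divided opinions, can be sketched as follows. The ratings and the exact threshold handling are illustrative and not taken from the study data.

```python
from statistics import quantiles

def interquartile_range(ratings):
    """Q3 - Q1 of a list of Delphi ratings for one feature."""
    q1, _, q3 = quantiles(ratings, n=4)  # default 'exclusive' method
    return q3 - q1

def consensus_reached(ratings, threshold=3):
    # Mirrors the cut-off used above: an IQR of `threshold` or more
    # means the participants' opinions are considered divided.
    return interquartile_range(ratings) < threshold

consensus_reached([7, 7, 8, 8, 8, 9])   # tightly clustered ratings
consensus_reached([2, 3, 5, 8, 9, 9])   # divided ratings
```

Applied per feature and per round, comparing the resulting IQRs between rounds shows whether an extra Delphi round produced additional consensus.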

Table 2: Overview of the interoperability standards participants of the Delphi study had knowledge of.

Standard                                                  Frequency
Health Level 7 version 2                                  7
Health Level 7 version 3                                  6
Health Level 7 Clinical Document Architecture             6
Health Level 7 FHIR                                       10
openEHR                                                   1
Edifact                                                   2
Systematized Nomenclature of Medicine – Clinical Terms    3
DICOM                                                     1
ISO 13606                                                 3
ContSys                                                   2
Logical Observation Identifiers Names and Codes           1
Integrating the Healthcare Enterprise                     4
Cross-enterprise Document Sharing                         1
ASTM 1238-88                                              1
Harmonie et Promotion de l’informatique Médicinale        1
Health and care information models                        2


For each framework from Table 1, the weighted score and the number of features it assessed were determined, as described in the methods section. The weighted scores of the frameworks were 98.5 for Bender et al. (2013), 853 for Mykkänen et al. (2008), 44.5 for Spyrou et al. (2002), 46 for Eichelberg et al. (2005), 78.5 for Kitsiou et al. (2006), 47.5 for Nictiz and 59.5 for the Interoperability Standards Advisory. The total possible weighted score was 1,150. The frameworks consisted of nineteen, 159, eight, eight, fourteen, eight and ten features respectively. Adding these together brings the total number of features up to 223 when synonyms are not removed, as stated earlier. These scores indicate how much information, in the form of features, the frameworks cover compared to the number of features in the inventory. As the framework of Mykkänen et al. (2008) had the highest score and number of features, it was chosen to compare interoperability standards.
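The scoring described above, a framework's weighted score as the sum of the Delphi weights of the inventory features it covers, alongside an unweighted feature count, can be sketched as follows. The feature names and weights are invented for illustration; the real inventory contains 214 weighted features.

```python
# Hypothetical Delphi weights for a few inventory features (illustrative only)
inventory_weights = {
    "information model": 8.0,
    "data types": 6.5,
    "security": 9.0,
    "workflow descriptions": 5.5,
}

def framework_scores(framework_features, weights):
    """Return (weighted score, unweighted feature count) for a framework."""
    covered = [f for f in framework_features if f in weights]
    return sum(weights[f] for f in covered), len(covered)

framework_scores({"information model", "security"}, inventory_weights)
```

Ranking the frameworks by the first element of this pair reproduces the selection criterion used to pick Mykkänen et al. (2008).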

3.1.3 Comparison of Health Level 7 version 3 and DICOM

As DICOM and HL7 version 3 were the standards discussed most often in the found sources, the framework with the highest score was applied to these standards. Differences between DICOM and HL7 version 3 were mostly found in the categories ‘basic information and scope of the standard’, ‘information and semantics’, ‘functionality and interactions’, ‘technical aspects’ and ‘domain-specific features’. As shown in Table 3, the weighted and unweighted differences for each category are near half of the total possible points for that category. The total weighted score for the comparison, 364 out of 853 points, shows that the difference between DICOM and HL7 version 3 is slightly less than half the total amount of difference that is possible. The unweighted score, 68 out of 159 features, is somewhat less than half the total number of features. The following paragraphs summarize, per category, the differences from the comparison in Appendix C.

Table 3: The total scores and scores for each category indicating the amount of weighted and unweighted differences compared to the total possible amount of difference.

                                             DICOM – HL7 v3     MedMij – Argonaut   Total possible
Category                                     unw.   weighted    unw.   weighted     unw.   weighted
Basic information and scope of the standard   15     81           9     51           33     191
Information and semantics                     12     72          17     98.5         29     165.5
Functionality and interactions                15     71.5        15     66.5         29     141.5
Application infrastructure aspects             4     21           6     31           14     72
Technical aspects                              5     23.5         1      5           12     51
Flexibility, accuracy, extensibility           2     10           5     27            7     39


The first category, in which fifteen features differ, consists of basic information such as the names, versions and organizations of the standards. Other differences concerned the scope of each standard and the level of detail of the aspects specified by the standards. In the area of information and semantics, the two standards differ in the information models used, which differ in the scope of the real world covered and the concepts used. HL7 version 3 encompasses healthcare as a whole, while the scope of DICOM is smaller, with concepts such as patients, studies, equipment and frame of reference. Differences are also found in the naming and types of data elements and parameters. Both DICOM and HL7 version 3 utilize basic data types as well as data types unique to each standard. HL7 version 3 and DICOM both use nullFlavors for missing or empty values; for DICOM, however, this only applies to clinical reports.

With respect to functions and interactions, the extent of the services offered by HL7 version 3 is larger than that of DICOM. Aside from the exchange of patient information, HL7 version 3 also offers decision support, location and updating services and identification. Workflows that show interactions between systems, and between users and systems, also differ between the standards. Workflows are not covered by DICOM, while HL7 version 3 utilizes storyboards to illustrate such interactions, including the responsibilities of participants and the sequence of actions. DICOM does, however, note functionality and information in the same format and covers exceptions and error conditions, such as timeout and abortion of an association, in detail.

For the fifth category, DICOM and HL7 version 3 both specify network protocols and encryption. However, while DICOM specifies TCP/IP and HTTP services, HL7 version 3 is less specific about the protocols that can be or are used. For encryption DICOM uses AES and TLS, while HL7 version 3 uses ebXML message wrappers for secure transport. Another difference is that DICOM does specify that application entities can have, for example, multiple TCP ports, but states nothing about session and transaction management. HL7 version 3 does specify session and transaction management, such as acknowledgements, sequence numbers and stateless sessions, but not addressing or discovery. Unlike HL7 version 3, DICOM also offers data transformation to and from HL7 CDA and the National Cancer Institute Annotation and Image Markup.

The differences in the area of domain-specific features mostly concern the supportive functions specified by the standards. HL7 version 3 offers services such as decision support, scheduling, records management, administration and billing. While DICOM does not offer these services, it does support reporting, as its real-world model has concepts such as studies and reports. For interchange agreements HL7 version 3 uses a Model Interchange Format, while DICOM only uses negotiations for each communication instance. Appendix C contains the full comparison of Health Level 7 version 3 and DICOM, including whether the two standards differed with respect to each feature.

Table 3 (continued):

Category                                     unw.   weighted    unw.   weighted     unw.   weighted
Maturity, usage, official status               3     14           4     23            9     47
System lifecycle                               1      5           2     10            9     49
Domain-specific features (healthcare)         11     64          10     59           17     97
Total score                                   68    364          69    371          159    853

To assess how the results of the comparison relate to previous comparisons, the literature found during the literature search was examined. Three of the sources that were found included results for both HL7 version 3 and DICOM. The first was the Nictiz webpage, where the two standards differed in four of the seven features: the business domain, the healthcare professionals that use the standard, the degree of adoption and the necessity of a license [15, 34]. The Interoperability Standards Advisory was the second source [16, 35]. Here the two standards differed in three of the seven features: the standards process maturity, the implementation maturity and the degree of adoption. The last source was the article of Kitsiou et al. (2006) [33]. Of the eleven features, the four layers of integration and the complexity and maturity of the standards differed. Overall, the found sources showed that DICOM and HL7 version 3 often differ on half or more of the compared features, so with respect to the scores the results from previous comparisons are quite similar to the comparison in Table 3. However, while the scores are similar, the types of features differ: none of the three sources addressed features concerning data or more technical aspects, while these were present in the comparison in this study.

3.1.4 Comparison of MedMij and Health Level 7 Argonaut project

For MedMij and the Argonaut project, the differences were mostly found in the categories ‘information and semantics’, ‘functionality and interactions’, ‘application infrastructure aspects’, ‘flexibility, accuracy, extensibility’, ‘maturity, usage, official status’ and ‘domain-specific features’. The score indicating the difference between the two was 371 of the 853 possible points. The following paragraphs summarize, per category, the differences from the comparison in Appendix D.

Of MedMij and the Argonaut project, only the former uses an information model. Data in the Argonaut project represents patients or their clinical documents, whereas MedMij has a more generic model with concepts that represent services, service users and service providers. For the data elements and parameters, MedMij uses XML data types and does not specify restrictions on the possible values of data elements. The data types used by Argonaut are based on FHIR data types such as string and complex types. These data elements may have a limited set of possible values, which may be accompanied by a definition, and for these data elements Argonaut may also make use of terminologies such as SNOMED CT and LOINC. MedMij also has version management for the XML lists it uses and logging of retrieved patient data; the Argonaut project does not specify such management measures.
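The binding of a data element to a terminology can be illustrated with a JSON-style FHIR resource. This is a generic FHIR-style sketch, not an excerpt from the Argonaut profiles; the values are invented.

```python
# A FHIR-style Observation whose `code` element binds to a LOINC concept
# (29463-7, "Body weight") and whose value uses the complex Quantity type.
body_weight = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "29463-7",
            "display": "Body weight",
        }]
    },
    "valueQuantity": {"value": 72.5, "unit": "kg"},
}
```

The `coding.system` URI makes the terminology explicit, so a receiving system can interpret the code without out-of-band agreements.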

To specify functionality, Argonaut describes RESTful operations to retrieve resources and documents through GET and POST functions and workflows. MedMij is less specific, describing the actions to be taken by systems or participants rather than specifying functions. Both MedMij and Argonaut specify workflows between applications, though for the latter only for SMART security and scheduling. The workflows also describe how users interact with the systems, for example which actions or functions they perform. For MedMij, each action or function is allocated to a participant, such as the personal health environment server. In contrast to Argonaut, MedMij describes transactions and triggers. On the other hand, Argonaut specifies pre- and post-conditions in the form of preparations to be taken for use cases, which are described in UML diagrams. With regard to exception and error conditions, MedMij does not describe the handling of the stated conditions, while Argonaut specifies both conditions and their handling, though the conditions the two standards specify differ.
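The RESTful style described for Argonaut can be sketched as URL construction for FHIR read and search interactions. The base URL is a placeholder and no request is actually sent; the parameter names follow the generic FHIR pattern rather than any specific Argonaut profile.

```python
from urllib.parse import urlencode

BASE = "https://fhir.example.org"  # placeholder server address (assumption)

def read_url(resource_type, resource_id):
    # GET [base]/[type]/[id] retrieves a single resource instance
    return f"{BASE}/{resource_type}/{resource_id}"

def search_url(resource_type, **params):
    # GET [base]/[type]?name=value performs a search
    return f"{BASE}/{resource_type}?{urlencode(params)}"

read_url("Patient", "123")
search_url("DocumentReference", patient="123")
```

A client would issue these URLs as HTTP GET requests against a FHIR server; POST is used for operations such as creating resources.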

One of the differences in the category of application infrastructure is that Argonaut specifies no measures for multiple users, such as communal databases, or for a single user. MedMij, however, not only has user-specific sessions but also XML lists as a communal database that multiple users utilize. Another difference concerns the timing of communication. Both MedMij and Argonaut specify synchronous communication, though for the latter only for decision support. Argonaut also specifies asynchronous communication.


For the category of flexibility, accuracy and extensibility, MedMij offers less freedom than Argonaut concerning optional features. Argonaut allows variation in parameters and data elements, as some may be optional. MedMij does require one specific feature for applications: a screen for authorizing the sharing and retrieving of patient data, which all participants must use. In addition to required features, MedMij also states Dutch laws that need to be adhered to. Products and implementations can receive a MedMij label when it has been confirmed that they adhere to MedMij; the Argonaut project does not state such a service.

In the category of domain-specific features, Argonaut provides a greater variety of services than MedMij. Examples are clinical decision support, referring patients and managing appointments through portals. This may benefit developers if the product they develop has to offer such services. Through MedMij, records can be managed, as patient data can be retrieved to update current records. Argonaut and MedMij also differ in the area of authorization and authentication. For MedMij, an identification number is retrieved through DigiD for authentication, which is a system specific to the Netherlands, and security measures are specified for each component. Argonaut leaves the policies for the security of the authorization server mostly to the institutions; only a few measures are stated by Argonaut itself. As with the comparison of DICOM and Health Level 7 version 3, Appendix D contains the full comparison of MedMij and the Argonaut project.

3.2 Use-case descriptions

3.2.1 Situational differences between the United States and the Netherlands

To describe situational differences between countries, which are external to standards, laws about health information exchange and diagnosis-related groups were examined. In the Netherlands, the laws about exchanging health information are quite extensive. For example, when information is exchanged between care providers, they have to inform the patient and request their permission. The data exchanged has to be limited to information relevant to the intended purpose and can only be used for that purpose. Patients are also identified by a unique citizen service number [29-32].

The USA has implemented the Health Insurance Portability and Accountability Act, which applies at the federal level [22-25, 27]. Table 4 gives a complete overview of the relevant laws and the items they cover. As shown in the table, at the federal level there are no laws for patient permission, data minimization, data retention and processing of data. However, some states have implemented laws and policies on their own initiative, such as consent policies, laws for disclosing mental health information and laws that apply a minimum necessary standard [26].

Table 4: Overview of the laws concerning the exchange of health information respective to the United States of America and the Netherlands.

United States of America (laws apply to the whole country, but states may implement more detailed policies):
• Health Insurance Portability and Accountability Act. Covers:
  - Right of access to records
  - Correctness of data
  - Unique identification for care providers
  - Transparency of data processing for patients
  - Protection of administrative, care and billing information
  - National standards for securing patient data
• No federal law for patient permission for data exchange between care providers
• No federal law for data minimization with respect to treatments
• No federal law about data retention
• No federal law about processing of data

The Netherlands (laws apply to the whole country):
• General Data Protection Regulation. Covers:
  - Transparency of data processing for patients, including permission of patients
  - Purpose limitation
  - Data minimization
  - Correctness of data
  - Data retention
  - Protection of data against loss, destruction and unauthorized persons
  - Responsible parties must be able to prove adherence to the stated rules
  - Right of access to records
  - Data portability
• The Act on the Medical Treatment Agreement. Covers:
  - Duty of providers to maintain health records
  - Right of patients to destruction of their records
• Processing of Personal Data in Healthcare Supplementary Provisions Act. Covers:
  - Care providers need to register the citizen service number
  - Permission of the patient for electronic data exchange
  - Right of access to records
• Directive on electronic commerce. Covers:
  - Transparency of data processing between care providers of member states of the European Union for patients

The literature searches for information about diagnosis-related groups and unique patient identifiers gave 1,000 and 77,400,000 results respectively. With respect to unique patient identifiers, two sources were found that examine how patients are identified and how identifiers can be used. Of the results for diagnosis-related groups, two sources were found that discussed the various diagnosis-related group systems and their differences.

Two areas where the Netherlands and the USA differ with respect to data are unique patient identifiers and diagnosis-related groups (DRG). The Netherlands uses a citizen service number, called the Burgerservicenummer (BSN) in Dutch, to uniquely identify patients. It can be used for multiple purposes, two examples being registration for health insurance and registration for healthcare. In contrast to the Netherlands, the USA does not have a unique identifier for patients. This causes difficulties for care providers when they try to obtain a picture of a patient's healthcare, because without a unique personal identifier system it is difficult to connect data from different sources. Instead of a unique personal identifier, the USA relies on demographic data such as name, birth date and sex [36, 37]. Nowadays it is common practice for providers to use different identifiers for the same patient. In addition, the information about personal attributes that can be used to identify patients is rarely recorded in the same way by each entity [37].
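The linkage problem described above can be sketched by contrasting matching on a unique identifier with matching on demographic attributes. All field names and values below are invented for illustration.

```python
# Two records for the same person from different providers: the identifier
# matches, but the name is recorded differently by each entity.
record_a = {"id": "123456782", "name": "J. Jansen",  "dob": "1980-05-01", "sex": "F"}
record_b = {"id": "123456782", "name": "Jansen, J.", "dob": "1980-05-01", "sex": "F"}

def same_patient_by_id(r1, r2):
    # Deterministic linkage on a unique, BSN-style identifier
    return r1["id"] == r2["id"]

def same_patient_by_demographics(r1, r2):
    # Exact matching on demographic attributes, as used without a unique
    # identifier; brittle when attributes are recorded differently
    return (r1["name"], r1["dob"], r1["sex"]) == (r2["name"], r2["dob"], r2["sex"])
```

With a shared identifier the two records link deterministically; the demographic match fails on the differently formatted name, which in practice motivates probabilistic record-linkage techniques.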

Differences between the DRG systems mainly concern the number of groups in a system, how patients are categorized and the number of systems in use. The Netherlands has utilized Diagnose Behandel Combinaties (DBCs) since 2005; the system contained about 30,000 DBCs in 2010. Patients are categorized into these groups according to the specialty by which they are treated. The pricing of DBCs is split into two lists: the A list and the B list. The A list contains DBCs that have a fixed price, while the prices of DBCs on the B list can be negotiated [38, 39].

DRG systems, which are used in the USA, categorize patients according to their principal diagnosis or main procedure. Multiple systems are used, each containing between 650 and 2,300 groups. These DRG systems originate from the Diagnosis Related Groups system of the Health Care Financing Administration. Examples of systems in use are the Medicare Severity-Diagnosis Related Groups (MS-DRG), the International All Patient Diagnosis Related Groups (IAP-DRG) and the International Refined Diagnosis Related Groups (IR-DRG). The Medicare Severity-Diagnosis Related Groups system reflects treatments offered to patients of 65 years or older and, as of 2008, consists of 745 groups. The International All Patient Diagnosis Related Groups system was developed with regard to the European market; its groups were partially adapted to European practice and numbered 1,046. This system was renamed in 2000 to International Refined Diagnosis Related Groups, which in 2006 consisted of 1,175 groups. For surgical treatments, diagnostic categories were based on the main procedure instead of the principal diagnosis [39].
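The difference in grouping principles can be sketched as the key each system derives for an encounter. The field names and logic below are simplified illustrations, not actual grouper algorithms, and the encounter values are invented.

```python
def dbc_key(encounter):
    # A Dutch DBC is opened per treating specialty, combined with the
    # diagnosis/treatment combination the name refers to
    return (encounter["specialty"], encounter["diagnosis"], encounter["treatment"])

def drg_key(encounter):
    # DRG systems categorize by main procedure (for surgical cases)
    # or otherwise by principal diagnosis
    return encounter.get("main_procedure") or encounter["principal_diagnosis"]

encounter = {
    "specialty": "cardiology",
    "diagnosis": "angina pectoris",
    "treatment": "outpatient care",
    "principal_diagnosis": "angina pectoris",
    "main_procedure": None,
}
```

The same encounter thus maps to different grouping keys in the two systems, which is one reason Dutch DBC data cannot be exchanged one-to-one with US DRG data.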

3.2.2 Possible solutions for two differences between MedMij and Argonaut

As an example of how differences between standards can be overcome, advice on how to resolve two differences between MedMij and Argonaut was examined more closely. The two differences selected were the security of authorization servers and the clinical documents offered by MedMij and Argonaut. These aspects came from the features “security (e.g. authentication, secure data exchange)” and “Functions or operations offered by one system to another”, which satisfied the criteria stated in the methods section. For the clinical documents, implementation pages specifying this component were available for both MedMij and Argonaut [30, 31]; these pages were searched on April 25, 2019. For security measures and risks, the implementation pages of MedMij and Argonaut, the Open Web Application Security Project and the specification of the OAuth 2.0 standard were utilized [40-44]. Additionally, Google was searched for common security risks. For this, the following search terms were used on April 12, 2019:

Security dangers common attacks

In healthcare it is important for the privacy of patients that their information is accessed responsibly, and for this security is necessary. Common security risks are phishing, Denial of Service, hijacking of sessions, SQL injection and reuse of credentials [42, 44]. The security standard that MedMij and Argonaut use, OAuth 2.0, specifies countermeasures for these risks: for each possible security risk, multiple countermeasures are described that users can implement. However, while MedMij and Argonaut use the same standard, they approach the specification of required countermeasures differently [40, 41]. MedMij states in an overview exactly which measures are to be taken and which risks they negate; for each countermeasure, the risks it negates are stated along with a reference to the relevant parts of the OAuth 2.0 standard. Argonaut leaves the decisions mostly up to the institution of the authorization server and the app developers. The security measures that Argonaut does state for applications are:


1. Transport Layer Security;
2. not storing bearer tokens in cookies;
3. not executing input as code;
4. generating an unpredictable state parameter for each user session;
5. not forwarding values passed to the redirect URL to any other URL.
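Measure 4 above, an unpredictable state parameter for each user session, can be sketched with Python's secrets module. This is an illustrative fragment, not code from either specification.

```python
import secrets

def new_state():
    # ~256 bits of cryptographic randomness, URL-safe for use as the
    # OAuth 2.0 `state` query parameter
    return secrets.token_urlsafe(32)

def verify_state(stored, received):
    # Constant-time comparison when the authorization response returns,
    # binding the response to the session and countering CSRF
    return secrets.compare_digest(stored, received)
```

The client stores the generated value in the user's session before redirecting to the authorization server and rejects any response whose returned state does not verify.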

Almost all of the risks and countermeasures in Table 5 are used and specified by MedMij; the other risks and countermeasures come from the Open Web Application Security Project and the OAuth standard. Implementing these countermeasures, or a subset thereof, for Argonaut would reduce the difference in security measures between the two standards. Additionally, it would streamline the security of authorization servers that are part of an Argonaut implementation. While there may also be suggestions for countermeasures for MedMij, these have been kept to a minimum, as MedMij states that it does not implement measures that go against its principles or those of other participants.

Table 5: Overview of security risks and possible countermeasures for authorization servers.

Online guessing of authorization ‘codes’:
• Client id and secret
• Short expiry time for tokens
• Bind authorization code to URI

Resource owner impersonation:
• CAPTCHAs
• One-time secrets

Denial of Service that exhausts resources:
• Limited number of access tokens per user

Password phishing by counterfeit authorization server:
• Education of users about phishing attacks
• Make it possible for users to confirm authenticity of the site
• Transport Layer Security

Denial of Service using manufactured authorization ‘codes’:
• Rate-limit or disallow connections from clients that exceed a threshold for invalid requests

Code substitution:
• Validate whether the code has been allocated to the client

Obtaining authorization ‘codes’ from the authorization server database:
• Enforce standard SQL injection countermeasures
• Store access token hashes only
• Enforce system security measures
• Utilize best practice for credential storage protection

Obtaining access tokens from the authorization server database:
• Enforce standard SQL injection countermeasures
• Store access token hashes only
• Enforce system security measures

Eavesdropping or leaking authorization ‘codes’:
• One-time usage restriction
• Short expiry time for authorization ‘codes’
• Revoke all tokens if multiple attempts to redeem a code are observed

Authorization ‘code’ leakage through counterfeit client:
• Server associates the authorization ‘code’ with the redirect URI and validates it against the redirect URI passed to the token endpoint

The second difference between MedMij and Argonaut concerned clinical documents. To place information in document files and communicate it between care providers, Nictiz specifies for MedMij the use of PDF/A documents [45]; files of this format are unstructured. Argonaut utilizes the DocumentReference resource to retrieve clinical documents [46]. While the Argonaut specification does not provide a link to the description of the document structures, the FHIR standard, which also makes use of the DocumentReference resource, does [47, 48]. The document structure has three main parts, which are contained in a bundle. The first part is a subject resource with information about the object of the document. The second part is a composition resource, a set of healthcare-related information that is divided into sections and comes from resources that are referenced. The composition resource consists of multiple components, which are references to resources. Those components are:

1. Subject,
2. Encounter,
3. Author,
4. Attester,
5. Custodian,
6. Event,
7. Author of a certain section,
8. Subject of a section,
9. Reference to the resource that contains the information of the section.

The third part of the document structure consists of the author, subject and resource reference of each section. There are multiple sections that contain narratives representing the content of resources, along with a reference to the resource. These components shall be included in the document; however, depending on the cardinality specified in the composition resource, some information components may not be present. For example, one provided example document has no attester component, though a confidentiality component is present. In that example document, the sections in the composition resource are ‘reason for admission’, ‘medications on discharge’ and ‘allergies’, each with references to their resources.

After the composition resource come narratives of resources that were referenced directly or indirectly, such as Observation, AllergyIntolerance and Practitioner. As an addition to the document, a binary resource that contains a stylesheet may be included; such a stylesheet can only take the form of Cascading Style Sheets. Records about entities or processes that are involved in or influence a certain resource, called provenance resources, may also be stated. The document may also contain identifiers and dates. The required items are the document instance identifiers, the date the document was assembled from the resources and the date the author logically wrote the document. Optional information is an identifier for the composition, the date the document was witnessed by attesters and the date the document was last modified.

The first two main parts, having a fixed set of possible information items, are the most constant: there is always a composition resource with at least the date, status, author, document type and title stated. The third part and the information following the composition resource are more variable, as the composition of sections and resources depends on the resources that were directly and indirectly referenced. For instance, a document not focused on allergies may not contain information from an AllergyIntolerance resource. Another possibility is implementing FHIR documents and the DocumentReference resource instead of giving structure to PDF/A files, as MedMij already uses FHIR resources.
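The structure described above can be sketched as a minimal FHIR document: a Bundle of type "document" whose first entry is the Composition, with sections referencing the other entries. All identifiers and content are placeholders, not taken from either specification.

```python
document = {
    "resourceType": "Bundle",
    "type": "document",
    "entry": [
        # The Composition must be the first entry of a FHIR document bundle
        {"resource": {
            "resourceType": "Composition",
            "status": "final",
            "date": "2019-04-25",
            "title": "Discharge summary",
            "subject": {"reference": "Patient/example"},
            "author": [{"reference": "Practitioner/example"}],
            "section": [{
                "title": "Allergies",
                # Each section references the resource holding its data
                "entry": [{"reference": "AllergyIntolerance/example"}],
            }],
        }},
        {"resource": {"resourceType": "Patient", "id": "example"}},
        {"resource": {"resourceType": "AllergyIntolerance", "id": "example"}},
    ],
}
```

The date, status, author, type and title on the Composition correspond to the always-present items named above, while the section entries carry the variable part of the document.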

The Argonaut Document Query Implementation Guide defines how patients and providers can retrieve clinical documents. One of the types specified is the Continuity of Care Document [49], which follows the same structure as described earlier. Basic information such as author, date, title and subject is specified. The sections can include the following information, of which the first six are mandatory:

• Allergies,
• Problems,
• Social history,
• Medications,
• Vital signs,
• Results,
• Procedures,
• Family history,
• Payers,
• Advance directives,
• Immunizations,
• Medical equipment,
• Functional status,
• Encounters,
• Plan of care,
• Nutrition,
• Mental status,
• Goals.
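A check that a Continuity of Care Document carries the six mandatory sections listed above could look like the following sketch; the section titles are used as plain strings for illustration.

```python
# The six mandatory CCD sections named in the list above
MANDATORY_SECTIONS = {
    "Allergies", "Problems", "Social history",
    "Medications", "Vital signs", "Results",
}

def missing_mandatory_sections(section_titles):
    """Return the mandatory CCD sections absent from a document."""
    return MANDATORY_SECTIONS - set(section_titles)

missing_mandatory_sections(["Allergies", "Problems", "Medications"])
```

An empty result means the document satisfies the mandatory-section requirement; any remaining titles identify exactly what is missing.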

4. Discussion

In this study seven frameworks were found, the majority of which assessed ten or fewer features. Of these frameworks, the optimal choice for assessing differences and measuring the ability to convert between interoperability standards was the framework of Mykkänen et al. (2008), which had the highest scores [13]. The weighted score of this framework, the weighted sum of the features it contains, was around ten times as high as that of two other frameworks, around eighteen times as high as that of three others and fourteen times as high as that of one other framework. For both the comparison between DICOM and HL7 version 3 and the comparison between Argonaut and MedMij, the ability to convert between the two standards was low: the weighted scores for each comparison were slightly less than half of the 853 possible points, and the unweighted scores slightly less than half of the 159 possible features. The results of the DICOM and HL7 version 3 comparison were confirmed by the articles that examined these standards.

Comparisons in previous literature showed that the majority of comparisons are limited: they often covered no more than ten features, and these features focused on basic information such as the adoption of a standard, the type of standard and the information model. In contrast to the comparisons of standards found in the literature, the comparisons in this study were more extensive [4, 11-13, 33-35]. Assessing multiple different aspects of the standards revealed differences that would not have been found if only general information had been assessed. This confirms that including fewer than twenty features would not be sufficient to get a clear overview of the differences between standards.


As anticipated, a large number of features were involved in the framework and inventory. As a consequence, if these sets of features were used as they are, without adjustments, performing a detailed comparison would be difficult to accomplish. Mykkänen et al. (2008), the article describing the framework used for the comparisons in this study, confirms this: it states that a detailed analysis of each feature is not reasonable and recommends that users take a subset of the features in the framework. Such a subset could contain the features users are most interested in or the features of a certain category. Following this recommendation, the comparisons in this study have not been done in great detail. A second limitation of this study is that the articles found during the literature search focused only on standards for exchanging information. Consequently, other types of standards, such as terminology standards, could not be involved in the comparison of standards.

One downside of the methodology of this study is that the framework has not been validated. The results of the comparison between DICOM and HL7 version 3 were compared with results from previous literature to assess how much the results differed. While this gives an indication of how the results relate to each other, it does not show the consistency of the framework used in this study. Variations in repeated comparisons may still occur as a result of different interpretations of the features or of users' varying ability to find specific information in the specifications. We are aware of the limitation that some features suggested in the first round of the Delphi study were not present in any framework or method. Although these were not used, the suggestions revealed information that experts considered important. Among the suggestions were features such as the frequency of updates, backward compatibility, access to data for patients, the governance process used for the standard, and whether Uniform Resource Identifiers are utilized. These all had a weight of 5.5 or higher. Users of the framework may still choose to add them.
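The consistency concern raised above is commonly quantified with an agreement statistic such as Cohen's kappa, computed over two raters' present/absent judgements for the same set of features. A minimal sketch, with invented ratings:

```python
# Minimal sketch of Cohen's kappa for two raters scoring the same set of
# features as present/absent, to quantify inter-rater reliability.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of categorical ratings."""
    n = len(rater_a)
    # Observed proportion of agreement.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["present", "present", "absent", "present"]
b = ["present", "absent",  "absent", "present"]
print(round(cohens_kappa(a, b), 2))  # → 0.5
```

A kappa close to 1 would indicate that two independent users of the framework reach essentially the same feature assessments, which is the consistency this study could not yet demonstrate.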

This approach to comparing standards has the potential to support the development of healthcare systems. When businesses develop a product they may have long-term goals, such as offering the product in multiple countries. Taking these goals into account, a set of interoperability standards may be considered for implementation, and with the framework from this study the standard that aligns best with these goals can be chosen. Another possibility is to choose a second or third standard as a fallback: if the first standard turns out to be unsuitable, the developers can fall back on the next standard, for which the differences and the ability to convert have already been made clear. Additionally, to make a better distinction between compared standards, the type of each standard, such as a terminology or a structure standard, should be stated. For instance, comparing two fundamentally different standards, such as a terminology and a standard that defines structures for the exchange of information, will not give relevant results; such standards cannot replace each other, as their aims and functions differ. This research could also be useful for uniting standards that show few differences. A comparison between two standards can be made to reveal the differences, after which the organizations responsible for the standards could come together and discuss how to resolve these differences and merge the standards. Consequently, the number of existing standards could be reduced, and the field could work towards a smaller set of standards.

It is recommended that further research be undertaken with respect to definitions for the features. As the articles did not provide definitions along with the features they specified, the interpretation of some features may be ambiguous. To ensure that features have a single meaning and to improve the comparison process, consensus about the definitions of the features should be reached. Future studies could also target inter-rater reliability. In this study the results of the comparison were assessed by examining sources that also compared DICOM and HL7 version 3. To determine whether the results of comparisons are consistent, the inter-rater reliability should be examined. If the consensus between raters is high, this would mean that the framework used in this study can be applied reliably to compare standards. A third focus can be on how the modularity of systems influences the ability to convert between interoperability standards. If a standard has been
