Risk assessment of technology-induced errors in health care


by

Tien-Sung (David) Chio

BSc, National Defense University, Taiwan, 1988
MSc, National Defense University, Taiwan, 1993
MSc, Washington University in St. Louis, 2000
DSc, Washington University in St. Louis, 2002

A Thesis Submitted in Partial Fulfillment

of the Requirements for the Degree of MASTER OF SCIENCE

in the School of Health Information Science

© Tien-Sung (David) Chio, 2016
University of Victoria

All rights reserved. This thesis may not be reproduced in whole or in part, by photocopy or other means, without the permission of the author.


Supervisory Committee

Risk Assessment of Technology-Induced Errors in Health Care

by

Tien-Sung (David) Chio

BSc, National Defense University, Taiwan, 1988
MSc, National Defense University, Taiwan, 1993
MSc, Washington University in St. Louis, 2000
DSc, Washington University in St. Louis, 2002

Supervisory Committee

Dr. Andre Kushniruk, School of Health Information Science (Supervisor)

Dr. Elizabeth Borycki, School of Health Information Science (Departmental Member)


Abstract

Supervisory Committee

Dr. Andre Kushniruk, School of Health Information Science (Supervisor)

Dr. Elizabeth Borycki, School of Health Information Science (Departmental Member)

This study demonstrates that hybrid methods can be used for measuring the risk severity of technology-induced errors (TIE) that result from use of health information technology (HIT).

Objectives:

The objectives of this research study include:

1. Developing an integrated conceptual risk assessment model to measure the risk severity of technology-induced errors.

2. Analyzing the criticality and risk thresholds associated with TIE’s contributing factors.

3. Developing a computer-based simulation model that could be used to undertake various simulations of TIE’s problems and validate the results.

Methods:

Using data from published papers describing three sample problems related to usability and technology-induced errors, hybrid methods were developed for assessing the risk severity and thresholds under various simulated conditions.

Results:

A risk assessment model (RAM) and its corresponding steps were developed. A computer-based simulation of risk assessment using the model was also developed, and several runs of the simulation were carried out. The model was tested and found to be valid.

Conclusion:

Based on assumptions and published statistics obtained from publicly available databases, we measured the risk severity and analyzed its criticality to classify the risks of contributing factors into four different classes. The simulation results validated the efficiency and efficacy of the proposed methods on the sample problems.


Table of Contents

Supervisory Committee ... ii

Abstract ... iii

Table of Contents ... iv

List of Tables ... vi

List of Figures ... viii

Acknowledgments ... x

Dedication ... xi

Chapter 1: Introduction ... 1

1.1 Health Information Systems... 1

1.2 Technology-Induced Errors ... 2

1.3 Risk Assessment of Technology-Induced Errors ... 2

1.4 Structure of Thesis ... 3

Chapter 2: Literature Review ... 5

2.1 Medical Errors ... 5

2.2 Health Information Systems... 6

2.3 Human Factors and Usability ... 9

2.4 Decision Support Systems, Safety of HIT and Risk ... 11

2.5 Technology-Induced Errors ... 13

2.6 Risk Assessment ... 14

2.7 Summary of Literature Review ... 18

Chapter 3: Objectives and Research Questions ... 19

3.1 Objectives ... 19

3.2 Research Questions ... 19

Chapter 4: Methods ... 20

4.1 Conceptual Model of Risk Assessment for Technology-Induced Errors ... 20

4.2 Mathematical Method for Calculating Risk Level ... 22

4.2.1 A Modified Risk Assessment Matrix for Classifying Risk Level ... 22

4.2.2 Mathematical Formula for Calculating Risk Rating ... 24

4.2.3 Criticality Analysis, Risk Thresholds, and Risk Classification ... 26

4.3 Analytic Hierarchy Process (AHP) Method ... 28

4.3.1 AHP Method for Evaluating Impact Value ... 32

4.3.2 AHP Method for Determining Boundary Condition of Risk Impact ... 37

4.4 Computer-based Simulation Method ... 39

Chapter 5: Research Design ... 42

5.1 Phase 1: Model Development ... 43

5.2 Phase 2: Model Elaboration ... 44

5.3 Phase 3: Model Testing ... 46

Chapter 6: Sample Problems and Results ... 48

6.1 Data Collection and Data Cleansing ... 48

6.2 Sample Problem 1: Technology-induced Errors and Usability Problem ... 49

6.2.1 Problem Description and Assumptions ... 49

6.2.2 Simulation Model ... 51


6.3 Sample Problem 2: Computer-related Patient Safety Incidents ... 57

6.3.1 Problem Description and Assumptions ... 57

6.3.2 Simulation Model ... 60

6.3.3 Results ... 61

6.4 Sample Problem 3: Technology-related Medication Errors in a Tertiary Hospital ... 66

6.4.1 Problem Description and Assumptions ... 66

6.4.2 Simulation Model ... 68

6.4.3 Results ... 69

Chapter 7: Discussion ... 75

7.1 Rigorous Approach for Evaluating Risk Severity of TIE ... 75

7.2 Improving Patient Safety with the Safety Integrity Matrix ... 77

7.3 Importance of Considering Risk Criticality as a Continuous Span ... 79

7.4 Extension Framework for Risk Assessment of TIE Problems ... 80

7.5 Study Limitations ... 83

Chapter 8: Conclusion ... 85

8.1 Implications of the Research Study ... 85

8.1.1 Implications for preventing latent errors in healthcare ... 86

8.1.2 Implications for selecting vendors of health information technologies ... 86

8.2 Future Work ... 86

Bibliography ... 88

Appendix ... 93

Appendix A: Raw Data of Sample Problem 1 ... 93

A-1: Detailed Steps ... 93

A-2: Simulation Model ... 98

A-3: Program Codes ... 101

A-4: Simulator and Simulation Outputs ... 104

Appendix B: Raw Data of Sample Problem 2 ... 107

B-1: Detailed Steps ... 107

B-2: Simulation Model ... 110

B-3: Program Codes ... 115

B-4: Simulator and Simulation Outputs ... 119

Appendix C: Raw Data of Sample Problem 3 ... 122

C-1: Detailed Steps ... 122

C-2: Simulation Model ... 126

C-3: Program Codes ... 132

C-4: Simulator and Simulation Outputs ... 136


List of Tables

Table 1 A modified risk classification and definitions proposed in thesis ... 27

Table 2 Judgment matrix with symbolic elements in Example 1. ... 31

Table 3 AHP pair-wise comparison scale ... 32

Table 4 Judgement matrix of Example 2 ... 34

Table 5 A judgment matrix with given comparison weights of Example 2 ... 34

Table 6 Composite weight for contributing factors in Example 2 ... 36

Table 7 Impact values of contributing factors ‘slip, lapse, and mistake’ in Example 2 ... 36

Table 8 Computation of the priority vector for given judgment matrices of different modes ... 37

Table 9 Composite weights for ‘criterion-mode’ combinations ... 38

Table 10 Risk boundary for the contributing factors ... 38

Table 11 Basic building blocks of STELLA© ... 40

Table 12 Risk scores in the sample problem 1 ... 52

Table 13 Criticality analysis and risk thresholds of the sample problem 1 ... 54

Table 14 Risk boundaries and classification of the sample problem 1 ... 55

Table 15 Risk scores in the sample problem 2 ... 62

Table 16 Criticality analysis and risk thresholds of the sample problem 2 ... 63

Table 17 Risk boundaries and classification of the sample problem 2 ... 65

Table 18 Risk scores in the sample problem 3 ... 70

Table 19 Criticality analysis and risk thresholds of the sample problem 3 ... 71

Table 20 Risk boundaries and classification of the sample problem 3 ... 72

Table 21 Judgement matrix of criteria level ... 93

Table 22 Composite weights of criteria level ... 94

Table 23 Judgement matrix of contributing factors for criterion ‘Interface’ ... 95

Table 24 Composite weights of contributing factors for criterion ‘Interface’ ... 96

Table 25 Judgement matrix of contributing factors for criterion ‘Content’ ... 97

Table 26 Composite weights of contributing factors for criterion ‘Content’ ... 97

Table 27 Judgement matrix of criteria level in the sample problem 2 ... 107

Table 28 Composite weights of criteria level in the sample problem 2 ... 107

Table 29 Judgement matrix of contributing factors for ‘Information-input problems’ ... 108

Table 30 Composite weights of contributing factors for ‘Information-input problems’ ... 108

Table 31 Judgement matrix of contributing factors for ‘Information transfer problems’ .... 108

Table 32 Composite weights of contributing factors for ‘Information transfer problems’ .. 108

Table 33 Judgement matrix of contributing factors for ‘Information output problems’ ... 108

Table 34 Composite weights of contributing factors for ‘Information output problems’ .... 108

Table 35 Judgement matrix of contributing factors for ‘Machine technical problems’ ... 109

Table 36 Composite weights of contributing factors for ‘Machine technical problems’ ... 109

Table 37 Judgement matrix of criteria level in the sample problem 3 ... 122

Table 38 Composite weights of criteria level in the sample problem 3 ... 122

Table 39 Judgement matrix of contributing factors for ‘CPOE’ ... 123

Table 40 Composite weights of contributing factors for ‘CPOE’ ... 123

Table 41 Judgement matrix of contributing factors for ‘Bar-coded identification labels’ ... 123

Table 42 Composite weights of contributing factors for ‘Bar-coded identification labels’ ... 123

Table 43 Judgement matrix of contributing factors for ‘Computer dispensing labels’ ... 124

Table 44 Composite weights of contributing factors for ‘Computer dispensing labels’ ... 124

Table 45 Judgement matrix of contributing factors for ‘Infusion pump related errors’ ... 124

Table 46 Composite weights of contributing factors for ‘Infusion pump related errors’ ... 124


Table 47 Judgement matrix of contributing factors for ‘Other errors’ ... 125

Table 48 Composite weights of contributing factors for ‘Other errors’ ... 125

Table 49 Average consistencies of random matrices (R.I. values) ... 141


List of Figures

Figure 1 Structure of thesis ... 4

Figure 2 Schematic representation of a risk management process ... 15

Figure 3 Tolerable risk and ALARP (IEC61508-5:1998, Annex B) ... 18

Figure 4 Flowchart of risk assessment (trimmed from Figure 2) ... 20

Figure 5 A conceptual Risk Assessment Model (RAM) of technology-induced errors ... 21

Figure 6 Matrix for the risk assessment ... 23

Figure 7 Modified matrix for risk assessment ... 24

Figure 8 Boundary conditions of impact value and risk severity ... 27

Figure 9 Three-tier hierarchical structure with top-down decomposition technique ... 30

Figure 10 An example of medical errors ... 33

Figure 11 Basic diagram of STELLA© model ... 41

Figure 12 Flowchart of a research design ... 42

Figure 13 Hierarchy structure of the sample problem 1 ... 50

Figure 14 Risk synthesis of the sample problem 1 ... 51

Figure 15 Simulation model of the sample problem 1 (see Appendix A-2) ... 51

Figure 16 Hierarchy structure of the sample problem 2 ... 59

Figure 17 Risk synthesis of the sample problem 2 ... 60

Figure 18 Simulation model of the sample problem 2 (see Appendix B-2) ... 61

Figure 19 Hierarchy structure of the sample problem 3 ... 67

Figure 20 Risk synthesis of the sample problem 3 ... 68

Figure 21 Simulation model of the sample problem 3 (see Appendix C-2) ... 69

Figure 22 Top-down approach for constructing the structure of TIE’s problem ... 76

Figure 23 Bottom-up approach for synthesizing risk levels of TIE’s problem ... 76

Figure 24 Risk assessment matrix proposed in thesis ... 78

Figure 25 Safety integrity matrix associated with risk regions ... 79

Figure 26 An extension framework for assessing risk levels of TIE’s problem ... 81

Figure 27 Risk severity reduced by improvements through iterations in use of RAM ... 82

Figure 28 Sub-model for criterion ‘Interface’ ... 99

Figure 29 Sub-model for criterion ‘Content’ ... 100

Figure 30 Simulator of the sample problem 1 ... 104

Figure 31 Simulation results of the criterion ‘Interface’ and its contributing factors ... 105

Figure 32 Simulation results of the criterion ‘Content’ and its contributing factors ... 106

Figure 33 Sub-model for criterion ‘Information input problems’ ... 111

Figure 34 Sub-model for criterion ‘Information transfer problems’ ... 112

Figure 35 Sub-model for criterion ‘Information output problems’ ... 113

Figure 36 Sub-model for criterion ‘Machine general technical problems’ ... 114

Figure 37 Simulator of the sample problem 2 ... 119

Figure 38 Simulation results of ‘Information input problem’ and its contributing factors .. 120

Figure 39 Simulation results of ‘Information transfer problem’ and contributing factors ... 120

Figure 40 Simulation results of ‘Information output problem’ and its contributing factors ... 121

Figure 41 Simulation results of ‘Machine technical problem’ and its contributing factors ... 121

Figure 42 Sub-model for criterion ‘CPOE’ ... 127

Figure 43 Sub-model for criterion ‘Bar-coded patient identification labels’ ... 128

Figure 44 Sub-model for criterion ‘Computer generated dispensing labels’ ... 129

Figure 45 Sub-model for criterion ‘Infusion pump related errors’ ... 130

Figure 46 Sub-models for criterion ‘Other errors’ ... 131


Figure 47 Simulator of the sample problem 3 ... 136

Figure 48 Simulation results of ‘CPOE’ and its contributing factors ... 137

Figure 49 Simulation results of ‘Bar-coded patient ID labels’ and its contributing factors ... 138

Figure 50 Simulation results of ‘Computer dispensing labels’ and its contributing factor ... 138

Figure 51 Simulation results of ‘Infusion pump related errors’ and its contributing factors ... 139

Figure 52 Simulation results of ‘Other errors’ and its contributing factor ... 139


Acknowledgments

This thesis would not have been possible without the support and guidance of many individuals. I would like to express my sincere thanks and appreciation to:

My supervisor:

Dr. Andre Kushniruk, for his valuable insights and support. His expert guidance and encouragement helped keep up my motivation throughout the study.

Departmental Committee Member:

Dr. Elizabeth Borycki, for her suggestions which increased the quality of this thesis.

The School of Health Information Science (HINF):

I express my gratitude to the faculty members and staff for helping me with my coursework and academic research during my graduate years at UVic. Special thanks go to Mr. Dave Hutchinson, for his invaluable support during my work terms with the healthcare professionals; to Ms. Sandy Polomark and Ms. Sandra Boudewyn, for their help and support throughout this academic exploration; and also to HINF, for providing the Denis and Pat Protti Student Enrichment Award for a short-term study.

Oral Defense Committee Member:

Dr. Hossein Nassaji and Dr. Alex Thomo, whose thoughtful questions and comments were greatly appreciated.

Finally, I would like to thank my parents, my wife Mindy, daughters Wendy and Lindy, my parents-in-law, and my friends Mr. George Fraser and Mrs. Berte Fraser who have encouraged me during my journey.


Dedication

I would like to dedicate this thesis to my family. To my parents and my wife (Mindy), who have always supported my goals and aspirations. Your love, support, and


Chapter 1: Introduction

1.1 Health Information Systems

There is an increasing body of knowledge indicating that errors can result from health information systems' (HIS) failure to support the inherent complexity and dynamism of clinical workflow. Consequently, it is essential to develop a measurement process for the risk assessment of technology-induced errors in order to improve patient safety in healthcare. Technology-related errors have been defined as any error related to a technology used in the medication use process that would not have happened if not for the use of this technology (Samaranayake, Cheung, Chui, & Cheung, 2012). Technology-induced errors (TIE) are defined as errors that result from the use of health information technology (HIT) and are similar to technology-related errors. More specifically, TIE are medical errors that arise from the "design and development of a technology; implementation and customization of a technology; and interactions between the operation of a new technology and the new work processes that arise from the technology use" (Borycki, 2010, page 47).

Health information systems are an essential part of clinical decision making and have contributed to increased levels of accuracy and correct information in the patient record. However, despite the benefits of health information systems in healthcare, there are a number of challenges in preventing technology-induced errors, which may occur in the form of latent errors that are much harder to detect and may require considerable analysis to discover, whereas active errors are immediately discernible (Reason, 1990). One of the most critical issues is the lack of effective and efficient ways to analyze and predict latent errors related to human factors. Even though much work has been done on preventing active errors, latent errors remain difficult to prevent due to the complexity of healthcare processes and work activities (Borycki & Keay, 2010).

1.2 Technology-Induced Errors

Technology-induced errors may be seen and measured across the full range of healthcare, from government health systems down to the level of individual care. The optimal solution to these problems is a sound understanding of how to measure technology-induced errors in a systematic way. It is important to find an effective approach to measuring the risk severity of technology-induced errors, since a growing number of increasingly complex systems provide healthcare professionals with an increasing volume and complexity of information. Therefore, an understanding of the risk analysis and evaluation of health information systems is becoming increasingly important.

1.3 Risk Assessment of Technology-Induced Errors

Risk is a combination of the probability of occurrence of harm and the severity of that harm (BS EN ISO:14971, 2012). The IEC 61508-5 International Standard (1998) defined the tolerable region of potential risks as the range between the intolerable and acceptable risk regions. Evaluating these tolerance regions is important because it provides evidence of safety integrity. Over the last 50 years, the aviation industry in particular has reduced the consequences of error significantly through a process of critical incident analysis. The principle involved is to analyze any untoward incident in depth to identify any potential cause of harm. Health care can be improved by applying the routine process of continuous risk awareness, assessment, and mitigation that has become a core part of aviation culture at all levels in order to manage technology-induced errors (Müller, Wittmer, & Drax, 2014).

Risk management is the application of management policies, procedures, and practices to the activities of analyzing, evaluating, and controlling risk. Based on the concept of risk management, there is a need to develop a conceptual risk assessment model and rigorous processes to evaluate the risk severity of technology-induced errors as a decision support tool. Such risk assessment would improve the effectiveness, efficiency, and safety of healthcare and would provide standardized methods and guidelines for assessing the potential risk values of technology-induced errors.

1.4 Structure of Thesis

This thesis is organized as follows. In Chapter 2, we present a literature review and summarize the current state of knowledge on the topic. Papers in several areas are reviewed for their potential to lead to risk assessment processes for reducing technology-induced errors. In Chapter 3, we identify the problems and research questions. In Chapter 4, a risk assessment model (RAM) for the problem of technology-induced errors is proposed. Three methods are applied to measure the risk level, to analyze the risk criticality, and to validate the results with a simulation tool. In Chapter 5, the detailed steps of the research design are elaborated. Three sample problems and computer simulations with the STELLA© modeling and simulation software are included in Chapter 6. A discussion of the implications of the research is provided in Chapter 7. Finally, the conclusion is provided in Chapter 8. The raw data and detailed stages of the sample problems are presented in Appendices A-C. The flow of chapters through the thesis is shown in Figure 1.


Chapter 2: Literature Review

In this chapter, an extensive search and review of articles on technology-induced errors was conducted in several distinct areas. The search strategy combined the following terms: usability, errors induced by health information systems, risk management, and simulation, using Medline and the Cochrane Database of Systematic Reviews, covering 1977 to 2015.

2.1 Medical Errors

A report on medical error from the Institute of Medicine (IOM) has greatly increased people's awareness of the frequency, magnitude, complexity, and seriousness of medical errors (Kalra, 2004). As the eighth leading cause of death in the US, medical errors may account for as many as 98,000 preventable deaths per year (Zhang, Patel, Johnson, & Shortliffe, 2004).

The nature of medical errors can be analyzed along four facets: errors in medications, treatment procedures, diagnosis, and clerical functions. This naturally led to the extension of the IOM classification of medical errors. The report found that two-thirds of the taxonomies classified systemic factors of medical errors and only a third utilized theoretical error concepts (Taib, McIntosh, Caponecchia, & Baysari, 2011). A cognitive taxonomy of medical error has been proposed at the level of individuals and their interactions with technology (Zhang, Patel, Johnson, & Shortliffe, 2004). They used four criteria: execution slips, evaluation slips, execution mistakes, and evaluation mistakes.

Reason (1990) defined violations as the deliberate deviation of acts from a safe operating procedure. Deliberate violations include practices that are not deemed to be against the written rules but have the potential to cause problems. Moreover, Reason proposed the image of "Swiss cheese" to explain the occurrence of system failures, and that model has become the dominant paradigm for analyzing medical errors and patient safety incidents (Reason, 2000). Perneger showed that among quality and safety improvement professionals, the meaning of the Swiss cheese model of medical error is far from univocal; interpretations of specific features of the model varied considerably among professionals (Perneger, 2005). In the medical field all kinds of errors are significant and can have potentially devastating effects. Mistakes can involve errors in planning, although action may proceed as intended. The situation could have been inadequately assessed, and could have involved a lack of knowledge (Kopec, Kabir, Reinharth, Rothschild, & Castiglione, 2003). Therefore, unless errors involving technology are analyzed and the findings put to meaningful use, it is dangerous to argue that health information systems will reduce error, as there is such a wide range of approaches and ideas regarding the development, design, testing, customization, and operation of the electronic health record (Borycki et al., 2009).

2.2 Health Information Systems

Health information science is a discipline at the intersection of information science, computer science, and health care. It deals with the resources, devices, and methods required to optimize the acquisition, storage, retrieval, and use of information in health and biomedicine. Health informatics tools include not only computers but also clinical guidelines, formal medical terminologies, and information and communication systems (Sullivan & Weiss, 2001). Health informatics has been applied to the areas of nursing, clinical care (Brass, 2003), dentistry, pharmacy, public health (Brandeau, Sainfort, & Pierskalla, 2004), occupational therapy, and medical research (Bottomley et al., 2005; Gonzalez & Vrbin, 2007).

Healthcare systems can be regarded as complex supply chain systems in which all dependent functions may fail when one component serving multiple functions fails (Sarimveis, Patrinos, Tarantilis, & Kiranoudis, 2008). Evaluation of information technology is usually performed in the real and complex health care environment, with its different professional groups such as physicians, nurses, patients, administration, IT staff, hospital management, and funding bodies, and its high dependency on external influences such as legislation, economic constraints, or patient clientele. This increases the difficulty and complexity of the successful adoption of electronic health records in Canada (Protti, 2002). As the dependency on technology in complex systems increases and systems become more tightly coupled in time and sequence, so does the likelihood of accidents (Kopec et al., 2003).

Penuel, Statler, and Hagen (2013) argued that both human-induced and natural crises are more tightly interconnected and interdependent than in the past. Whenever systems break down unexpectedly, crises can arise. If the initial breakdown is not managed effectively, crises can spread over space and time to threaten not only the effective functioning of the system but also its very existence (Penuel et al., 2013). Hence, health information system (HIS) developers should decrease HIS errors and increase technology suitability for tasks, self-descriptiveness, controllability, conformity with user expectations, error tolerance, suitability for individualization, and suitability for user learning (Ehtesham, Isfahani, & Sadoughi, 2013).

From the experiences of other jurisdictions and countries, it has been noted that successful implementations are often service-oriented, take an evolutionary approach, and entail a "good fit" among user, technology, and task (Health Chief Information Officer Council, 2003; Ammenwerth, Iller, & Mahler, 2006). Elaine (2010) recorded personal accounts of errors made by several healthcare providers. The complexity of some systems has the potential to obstruct automatic aspects of decision-making.

Another potential source of information about health information technology (HIT) problems comprises reports about equipment failures and hazards submitted by users and vendors. One such source is the US Food and Drug Administration (FDA) Manufacturer and User Facility Device Experience (MAUDE) database, which contains reports of events involving medical devices (Chotikawanich, Korman, & Monga, 2011). Under the Federal Food, Drug, and Cosmetic Act, HIT is a medical device. However, the FDA does not currently enforce its regulatory requirements with respect to HIT. Nevertheless, some manufacturers have voluntarily listed their systems, and the FDA has received reports of events involving HIT (Magrabi, Ong, Runciman, & Coiera, 2010). In the near future, more and more complex healthcare devices and information systems will appear. Some metaphors for mobile health and pervasive healthcare have been discussed (Borycki, Kushniruk, & Alofayson, 2011). The invisible, wireless, networked, interoperable computational infrastructure may add to the quality of life of patients; however, new HIT and HIS also increase the probability of technology-induced errors occurring.


Although the literature has shown that information systems may decrease error in medicine, the idiosyncratic and varied nature of the design of health information applications argues for usability testing of each of these systems and devices, to predict aspects of design that may be related to error prior to their widespread dissemination (Kushniruk, Triola, Borycki, Stein, & Kannry, 2005). Some of these errors are related to human factors and can come in many forms, such as slips and mistakes (Borycki & Keay, 2010). Although some active medical errors may be detected, current approaches may not address the problem of dealing with latent errors.

2.3 Human Factors and Usability

When the performance limitations of operators or workers are exceeded, the risk of human error, and with it the probability of injury, increases. Therefore, human factors engineering strives to find a balance between system safety and productivity (France, Throop, Walczyk, Allen, Parekh, & Parsons, 2005). The human error problem can be viewed in two ways: the person-centered approach and the system-centered approach. In the system-centered approach, errors are seen as consequences rather than causes. These include recurrent error traps in the workplace, into which errors tend to fall in recurrent patterns, and the organizational processes that give rise to them (Reason, 2000).

Some evaluation models used in checking human errors can be considered. A modeling process that helps decision makers overcome these perceptual and cognitive challenges was proposed to improve decision effectiveness (Marquard & Robinson, 2008). The issues related to the impact of human factors on decision quality in healthcare are considered in these models (Chio, Kushniruk, & Borycki, 2013). Chio proposed an error prevention model (EPM), with improvement tools and techniques, that can be used to analyze complex errors that may be considered latent.

In recent years usability engineering has become increasingly recognized as being important in guiding the design, evaluation and implementation of information systems (Nielsen, 2000). The idea of usability has been represented in Traditional Software Quality Models for at least three decades. For example, McCall, Richards, and Walters (1977) described one of the earliest of these models referred to as the GE (General Electric) model.

Usability refers to the ease of use and learnability of a human-made object. The object of use can be a software application, website, book, tool, machine, process, or anything a human interacts with. Usability includes methods for measuring usability, such as needs analysis and the study of the principles behind an object's perceived efficiency or elegance (Nielsen, 1993). In human-computer interaction and computer science, usability studies are aimed at enhancing the elegance and clarity of user interaction with a computer program or application. One of the most important types of studies is evaluation of usability involving usability testing (Seffah, Donyaee, Kline, & Padda, 2006).

ISO 9241 is a multi-part standard from the International Organization for Standardization (ISO) covering the ergonomics of human-computer interaction. This standard presents usability guidelines and is used for evaluating usability according to the context of use of the software (Oppermann & Reiterer, 1997). ISO 9241-11 recommends a process-oriented approach to usability, by which a usable interactive system is one that specified users can use to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use (2015). Although patient care can be improved by using patient-centered methods and interventions (Kushniruk & Borycki, 2008), the principles of patient-centered design do not always align with the principles of safety-centered design from a traditional human factors engineering perspective (France, Throop, Walczyk, Allen, Parekh, & Parsons, 2005).

2.4 Decision Support Systems, Safety of HIT and Risk

The ultimate goal of care delivery organizations is to improve health-related outcomes while controlling costs. To achieve this, policy makers and planning administrators must be able to quantify and compare the benefits and costs of interventions. It has been argued that the utility of traditional methods such as clinical trials in health economics is limited (Kemm, 2008).

Berner and Lande (2007) summarize current data on the use and impact of clinical decision support systems (CDSS). A clinical decision support system is an interactive decision support system designed to assist physicians and other health professionals with decision-making tasks, such as determining a diagnosis from patient data. A decision involves making a selection from a set of alternative choices in clinical activities. Broadly speaking, a decision support system is simply a computer system that helps us make a decision by leveraging a multi-criteria decision-making model. A decision support system (DSS) provides a means for decision makers to make decisions on the basis of more complete information and analysis (Kim, 2000).

The true goal of integrated decision-making support is to provide the decision maker with the ability to look into the future and to make the best possible decision based on past and present information and future predictions (Grosan, Abraham, & Tigan, 2007). In the case of sustainable development, this means the decision maker should be able to predict in advance the risk and vulnerability of the health system and infrastructure to hazards, both natural and man-made. This requires that data be transformed into knowledge, and that the consequences of information use, as well as of decision-making by practitioners in the healthcare system, be considered, as noted by Kapiriri and colleagues (Kapiriri & Bondy, 2006). However, the framework Kapiriri proposed lacks an illustrative example.

In order to bridge the gap between process and decision modeling, a formalized linkage model has been proposed to facilitate the integration of strategic, decision, and process objectives within a single framework (Neiger, Churilov, & Flitman, 2009). Furthermore, a number of frameworks and models for considering technology-induced errors have been compared to show how they are practically employed in the prediction and prevention of such errors (Borycki et al., 2012).

In some research, computer programs have been shown to be a useful tool for risk management. A computerized system aimed at reducing the number of human errors was developed by Tourassi (2005). The system was designed to alert physicians to the presence of any abnormal findings, emphasizing sensitivity over specificity, and to require their acknowledgment. Computer-based simulation is a new trend in the analysis and evaluation of technology-induced errors (Aronson, 2009).

There are many traditional purposes for performing simulations. Simulations are desirable when experiments would otherwise be impossible, too dangerous, or too time-consuming to perform. A simulation approach allows us to control one parameter at a time, which might not be possible in the real world (Steed, 1992). As predicted by Borycki et al. (2011), more and more complex healthcare devices and information systems are appearing today. The trend of computer-based simulation in conjunction with system dynamics provides insights, progressively built up, that show healthcare practitioners and researchers how structure determines behavior. Some of these insights could be used to perform virtual experiments as a way to learn more rigorously about complex interventions (Forrester, 2007).

2.5 Technology-Induced Errors

As the rate of technology use in healthcare continues to increase in response to the changing demographic and healthcare needs of patients, so does the potential for technology-induced errors. Therefore, technology-induced error as a patient safety issue has become a source of increasing concern for system designers, developers, implementers, and users (Borycki, Kushniruk, Bellwood, & Brender, 2012; Kushniruk et al., 2005). Using information in the incident record form, Samaranayake et al. (2012) identified errors that were related to technology.

Kushniruk and Borycki (2013) proposed a classification scheme which separates usability problems into two broad categories: interface problems and content problems. In further work, the occurrence of technology-induced errors was predicted using a hierarchical structure with links between usability problems and medication errors (Anderson, Keay, Borycki, Kushniruk, Nicoll, & Anderson, 2009).


The objective of health information systems is undoubtedly to reduce the factors that lead to medical errors. Health information systems may appear to provide accurate, useful information for decision makers. However, they may not reflect true information due to induced errors. As described above, technology-induced errors (TIE) are medical errors that arise from "the design and development of a technology; implementation and customization of a technology; and interactions between the operation of a new technology and the new work processes that arise from the technology use" (Borycki et al., 2009).

2.6 Risk Assessment

Rubin (2001) illustrates the need for a clear and common understanding of the terminology of risk management. Some argue that investments will improve quality of care, access, and patient satisfaction, and help reduce costs by improving administrative efficiencies and reducing demand for acute care services and the incidence of unnecessary and duplicate prescriptions, lab tests, and imaging tests. However, few papers incorporate risk assessment models of latent error in their studies. High-risk industries, such as aviation, chemical manufacturing, and nuclear power, have long been studying errors and ways of preventing their occurrence. These industries have developed reliable classification systems to describe and quantify errors and near misses (Henneman, Blank, Gattasso, Williamson, & Henneman, 2006).

Risk management can be defined as the systematic application of management policies, procedures, and practices to the tasks of analyzing, evaluating, and controlling risk (Cheng & World Health Organization, 2003). Lander (2007) outlined different methodologies for the various phases of the risk management process. He found that the higher the level of risk in each phase of the process, the more detailed the methodology should be. With respect to the risk management of medical devices, medical device standards outline the requirements for developing medical devices. These standards, however, do not outline how the requirements should be implemented, causing difficulties for organizations entering the medical device domain (Flood, McCaffery, Casey, McKeever, & Rust, 2015).

ISO 14971 is an ISO standard, of which the latest revision was published in 2012, that details the requirements for application of a risk management system for medical devices (BS EN ISO:14971, 2012). This standard defined the relationship between risk management and risk assessment (Figure 2). This standard establishes the requirements for risk management to determine the safety of a medical device by the manufacturer during the product life cycle (Lander, 2007). Figure 2 is a generalized risk model that illustrates the risk management process.

Figure 2 Schematic representation of a risk management process (source: BS EN ISO:14971, 2012)


When a hazard has been identified and the risk estimated, the first question to be asked is whether the risk is already so low that there is no need to consider it, and therefore no need to proceed to risk reduction. This decision is made once for each hazard (BS EN ISO:14971, 2012). If the risk is that low, we are prepared to live with it because of the associated benefits and the impracticality of reducing it. Otherwise, risk reduction should be considered. In ISO 14971, risk evaluation is a judgment, on the basis of risk analysis, of whether an acceptable risk has been achieved in a given context, based on the current values of society. In 2011 the European Commission raised a concern around the legal text supporting the presumption of conformity to the Medical Device Directives in BS EN ISO 14971:2009. The standard deals with processes for managing risks, primarily to the patient, but also to the operator, other persons, other equipment, and the environment (BS EN ISO 14971).

Moreover, the IEC 61508-5 International Standard (1998) defined the ‘tolerable region’ as the range between the intolerable risk (Rit) and acceptable risk (Ra) regions, referred to as "As Low As Reasonably Practicable (ALARP)" (IEC 61508-5 International Standard, 1998). ALARP is the tolerability region where an activity is allowed to take place provided the associated risks have been made as low as reasonably practicable. Tolerable here is different from acceptable: it indicates a willingness to live with a risk so as to secure certain benefits, while expecting the risk to be kept under review and reduced as soon as this can be done. Here, a cost-benefit assessment is required, either explicitly or implicitly, to weigh the cost against the need for additional safety measures (IEC 61508-5 International Standard, 1998).


Tolerable risk concepts are the following (IEC 61508-5 International Standard, 1998):

a) The risk is so great that it must be refused altogether; or

b) The risk is, or has been made, so small as to be insignificant; or

c) The risk falls between the two states specified in a) and b) above and has been reduced to the lowest practicable level, bearing in mind the benefits resulting from its acceptance and taking into account the costs of any further reduction in risk.

The concept of criticality assessment in risk management has emerged as an important evaluation tool. Franks (2013) defined a criticality assessment as a process that serves as a tool for helping identify those assets most critical to the continuity of operations. He further stated that criticality assessment is different from risk and vulnerability assessments, but incorporates them into the matrix of consideration. Criticality assumes limited resources, meaning that there are not enough to harden against every perceivable potential loss. A criticality assessment may help make determinations about where and how to deploy available resources and assets (Franks, 2013). A structured approach to evaluating criticality is important for any metric. The criticality analysis is aimed at classifying risks as intolerable, undesirable, tolerable, or negligible. From a metrics perspective, the purpose is to have a selection guideline for identifying the critical metrics and determining risk severity.

The intolerable region is for those risks in the unacceptable range; immediate actions are required. The tolerability region is for those risks in the ALARP region; steps must be taken to reduce these risks as far as reasonably practicable, remedial action is to be given high priority, and remedial actions are taken at an appropriate time. The acceptable region is for those risks of lesser importance or urgency; remedial action is discretionary, and procedures are to be in place to ensure that this risk level is maintained (Figure 3).

Figure 3 Tolerable risk and ALARP (IEC61508-5:1998, Annex B)

2.7 Summary of Literature Review

In this chapter, papers related to healthcare information systems, medical errors, human factors, technology-induced errors, and risk assessment were surveyed. Although considerable research has been conducted on preventing decision errors in healthcare, most studies do not consider quantitative assessment of the potential risks of technology-induced errors. Given the complexity of healthcare systems, there is a need for a systematic way to evaluate and measure the risk severity of technology-induced errors.


Chapter 3: Objectives and Research Questions

3.1 Objectives

This thesis describes a measurement approach to risk assessment of technology-induced errors involving mathematical modeling and computer-based simulations. This approach can be linked to the forecasting of the impact of system design features upon technology-induced errors in healthcare information systems.

3.2 Research Questions

This thesis aims to develop a risk assessment model to measure the risk severity, and to analyze the criticality and risk thresholds, of technology-induced errors. Five questions will be answered:

1. Can a systematic approach for measuring the risk severity of technology-induced errors be developed?

2. Can the risk level of technology-induced errors be formulized by mathematical equations?

3. Can a useful model for assessing the risk value of technology-induced errors be developed that employs quantitative methods?

4. Can a risk criticality analysis be performed and quantified in a rigorous way?

5. Can measurable risk be simulated by computer simulation software?


Chapter 4: Methods

The concept of risk is a combination of the probability of occurrence of harm and the severity of that harm (ISO 14971, 2012). Risk assessment is part of the overall risk management cycle illustrated in Section 2.6. This thesis focuses on risk assessment, defined as the process comprising a ‘risk analysis’ and a ‘risk evaluation’, as shown in the highlighted block (Figure 4).

Figure 4 Flowchart of risk assessment (trimmed from Figure 2)

4.1 Conceptual Model of Risk Assessment for Technology-Induced Errors

In this section, a measurable model of technology-induced error (TIE) problems is introduced. We propose a model for evaluating the risk level associated with technology-induced errors and develop a systematic approach using mixed methods for modeling and simulating their risk severity. A relevant review was conducted to describe frameworks, models, and theories in the area of technology-induced errors (Borycki, Kushniruk, Bellwood, & Brender, 2012). In order to extend these frameworks, our proposed model incorporates both quantitative and qualitative measurement. To evaluate the risk level of technology-induced errors, an integrated conceptual model is proposed that combines three methods to measure, analyze, and simulate the risk severity of technology-induced errors.

The international standard ISO 14971 on the application of risk management to medical devices is used in the risk assessment model. Based on the flowchart for risk assessment, a risk assessment model (RAM) for TIE problems was proposed to measure and simulate their potential risk severity (Figure 5). This model involves techniques adapted from the area of risk management, the concepts of criticality analysis, and the application of simulation of real work contexts.

Figure 5 A conceptual Risk Assessment Model (RAM) of technology-induced errors

In Part A, the risk analysis process comprises the analysis and calculation of the risk likelihood, risk impact, and risk level. In this stage, a mathematical method and the Analytic Hierarchy Process (AHP) method are applied. Part B illustrates the risk evaluation within the risk assessment process; this stage comprises the evaluation of the boundary conditions, risk thresholds, and risk classification. The AHP method is applied again in this stage.

The re-evaluation stage, Part C, validates the assessment results. The computer-based simulation method is introduced in this stage to validate the model and to evaluate the research designs. Based on the risk degree assessed in stages A, B, and C of the model, the application of relevant standards as part of health information system design criteria might constitute risk control activities, which were excluded from the scope of the thesis. In the next sections, the three methods are applied to the proposed model.

4.2 Mathematical Method for Calculating Risk Level

4.2.1 A Modified Risk Assessment Matrix for Classifying Risk Level

A risk assessment matrix assigns numbers to risk levels and demonstrates the risk severity, composed of the occurrence and impact of the contributing factor (Hart, Colley, & Harrison, 2005). Traditionally, the numbers in the matrix are qualitative, not quantitative. It is natural to turn the risk probability/impact chart into a three-by-three matrix (Figure 6).


Figure 6 Matrix for the risk assessment

However, the problem is where the lines dividing the quadrants of the matrix lie. For example, if we ignore a 25 percent probability risk that lies in the low-level quadrant, it may cause 25 percent of the maximum loss. In contrast, if this low-level risk is not ignored, we must pay maximum attention to a risk that has only a 25 percent probability of occurring. In real practice, it is difficult to divide the quadrants evenly, not to mention the intolerable ambiguity at the boundaries between two different quadrants.

Therefore, it is necessary to establish a rigorous process using mathematical methods to measure both the occurrence probability and the risk impact, in order to assess the risk severity of a contributing factor and its criticality. In this thesis, the risk probability/impact chart has been modified to avoid these ambiguous conditions in assessing risks (Figure 7). More details about the risk classifications follow in a later section.


Figure 7 Modified matrix for risk assessment

Figure 7 shows the modified matrix for risk assessment, which incorporates the concepts of tolerable risk described in Section 2.6. The arc line, which is also the risk boundary between two different risk classes, demonstrates the risk criticality. Note that the arc line is not fixed but depends on the relative risk impact.

4.2.2 Mathematical Formula for Calculating Risk Rating

By definition, probability is a number between 0 and 1 that expresses the likelihood that a current state exists or a future event will occur (Borovkov, 2013). Probability is a quantitative means of expressing uncertainty so as to avoid ambiguity. In Figure 7, the probability of each undesired event occurring is identified at each step of the risk analysis. Three approaches are commonly employed to estimate probabilities: use of relevant historical data, prediction of probabilities using analytical or simulation techniques, and use of expert judgment (BS EN ISO:14971, 2012).


Representing uncertainty about technology-induced errors as a probability is an essential step. We will use the term "probability of the contributing factor" to represent uncertainty about the technology-induced error. The term "contributing factor" refers to the hypothesized factor that the technology-induced error is intended to measure. Published reports are an important influence on estimates of probability. Statistical analyses in published studies may organize the findings in a form that is useful for decision-making (Edwards, 1965).

A risk always has an impact by its nature. Risk impact, the potential loss in the event of risk occurrence, is used to evaluate the severity of the consequences. The size of the risk impact varies in terms of cost and impact on its contributing factors. Both the risk impact and its criticality will be discussed in Section 4.3.

The idea of assessing the risk level of technology-induced errors involves risk indices and thresholds. We adopted the risk metric as a systematic method to assess the risk severity and level of major errors induced in the use of health information technologies or health information systems. In the following research, the risk value is defined as the product of the occurrence probability and the risk impact (IEC 61508-5, 1998).

$$\text{(Risk value)} = \text{(Probability value)} \times \text{(Impact value)} \quad (1)$$

and equation (1) can be simplified mathematically as:

$$R = P \times I \quad (2)$$

where the risk index is denoted by $R$, the probability by $P$, and the impact by $I$.
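For illustration, with hypothetical values, a contributing factor with occurrence probability $P = 0.25$ and impact value $I = 0.6$ yields a risk score of $R = 0.25 \times 0.6 = 0.15$.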


In order to calculate the occurrence probability of technology-induced errors, descriptive statistics are used to measure the frequency of errors induced by the use of information technology. The collected data are summarized and compressed by grouping them into classes and recording how many data points fall into each class. The details of estimating the risk impact are elaborated in a later section.

The occurrence probability (P) of errors is multiplied by the impact value (I) to obtain the risk score (R), which ranks the severity of the risk level of technology-induced errors (TIE). The equation provides a useful measure that helps decision makers decide which risks need their attention, via the following steps (a code sketch follows the list):

1. List all of the likely risks of contributing factors for TIE’s problem.

2. Assign the probability (P) of each risk occurring based on the statistical data analysis.

3. Estimate the risk impact (I) of each contributing factor (discussed in the next section).

4. Calculate the risk score (R) using equations (1) and (2).

5. Map the ratings and risk scores onto the modified risk assessment matrix.
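A minimal sketch of steps 1-5 in Python, assuming illustrative incident counts and impact values; the factor names and all numbers below are hypothetical, not data from the thesis:

```python
# Hypothetical contributing factors for a TIE problem (step 1), with
# made-up incident counts and impact values for illustration only.
incident_counts = {"wrong dose entry": 40, "alert ignored": 25, "label misread": 10}
impact_values = {"wrong dose entry": 0.60, "alert ignored": 0.25, "label misread": 0.15}

total = sum(incident_counts.values())

# Step 2: occurrence probability P of each factor from its frequency.
probabilities = {f: n / total for f, n in incident_counts.items()}

# Steps 3-4: risk score R = P * I, per equations (1) and (2).
risk_scores = {f: probabilities[f] * impact_values[f] for f in incident_counts}

# Step 5: rank the factors so the highest-severity risks can be mapped
# onto the modified risk assessment matrix first.
for factor, r in sorted(risk_scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{factor}: P = {probabilities[factor]:.2f}, R = {r:.3f}")
```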

4.2.3 Criticality Analysis, Risk Thresholds, and Risk Classification

In this section, we explore the meaning of criticality analysis and its importance in measuring the risks of technology-induced error problems. The risk threshold is introduced to analyze the criticality of the risk severity of technology-induced errors. The key function of a risk threshold is its ability to demarcate the boundaries of the impact value.


As mentioned in Chapter 2, the ‘tolerable region’ is the range between the intolerable risk (Rit) and acceptable risk (Ra) regions, and is referred to as "As Low As Reasonably Practicable (ALARP)" (Moriyama et al., 2009). A safety-related system implements the required safety functions necessary to achieve or maintain a safe state for the equipment under control (Langeron, Barros, Grall, & Bérenguer, 2008).

Criticality determines the boundary conditions of the risk impact and severity. The purpose of criticality analysis is to provide guidance for deciding on TIE risk classes. In order to analyze the criticality, three bounds were defined in this thesis: the upper bound (Ub), the middle bound (Mb), and the lower bound (Lb). These bounds serve as risk thresholds to classify a risk into the intolerable, undesirable, tolerable, or negligible class (Figure 8).

Figure 8 Boundary conditions of impact value and risk severity

To extend the ALARP concepts of Section 2.6, four risk classes are proposed in this thesis and defined in Table 1.

Table 1 A modified risk classification and definitions proposed in thesis

Class I (Intolerable risk): the risk score is greater than the upper bound, i.e., Risk score > (P × Ub)
Class II (Undesirable risk): the risk score is less than the upper bound and greater than the middle bound, i.e., (P × Ub) > Risk score > (P × Mb)
Class III (Tolerable risk): the risk score is less than the middle bound and greater than the lower bound, i.e., (P × Mb) > Risk score > (P × Lb)
Class IV (Negligible risk): the risk score is less than the lower bound, i.e., Risk score < (P × Lb)

Table 1 summarizes the four risk classes: intolerable, undesirable, tolerable, and negligible risk, which are indicated in the modified risk assessment matrix in Figure 7. Note that the arcs dividing the risk levels are the risk thresholds in this measurement. Based on the definitions, the risk classes are interpreted as follows (Blockley, 2003), with a code sketch after the list:

 Class I: Intolerable Risk, a risk judged to be in the intolerable range, which cannot be reduced.

 Class II: Undesirable Risk, a risk judged to be in the undesirable range, where risk reduction is impracticable or the costs are grossly disproportionate to the improvement gained.

 Class III: Tolerable Risk, a risk judged to be in the tolerable range, where the cost of risk reduction would exceed the improvement gained.

 Class IV: Negligible Risk, a risk judged to be in the negligible range, which can be ignored.
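A small sketch of the Table 1 classification in Python, reading the table's (PUb), (PMb), and (PLb) as the products P × Ub, P × Mb, and P × Lb; the threshold values in the example call are hypothetical:

```python
def classify_risk(r: float, p: float, ub: float, mb: float, lb: float) -> str:
    """Map a risk score r onto the four classes of Table 1.

    The thresholds scale with the occurrence probability p, matching the
    definitions 'Risk score > (P x Ub)' and so on in the table.
    """
    if r > p * ub:
        return "Class I: intolerable"
    if r > p * mb:
        return "Class II: undesirable"
    if r > p * lb:
        return "Class III: tolerable"
    return "Class IV: negligible"

# Hypothetical example: P = 0.3 with bounds Ub = 0.7, Mb = 0.4, Lb = 0.1.
print(classify_risk(r=0.15, p=0.3, ub=0.7, mb=0.4, lb=0.1))
# -> Class II: undesirable (0.12 < 0.15 <= 0.21)
```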

4.3 Analytic Hierarchy Process (AHP) Method

In the risk assessment of technology-induced errors, we take into account not only the ‘probability’ but also the ‘impact’ when errors happen. Normally, the probability of TIE problems can be calculated from the frequency of incidents using descriptive statistical methods. However, it is a challenge to measure the impact of adverse events such as medical errors. In this study, the impact is measured by introducing the Analytic Hierarchy Process (AHP) method. This section introduces the AHP method for evaluating the relative risk impact, its boundary conditions, and the risk thresholds in terms of criticality analysis. This approach incorporates the benefits of subjective judgment and objective measurement, and hence quantifies the TIE risk in the risk assessment process.

AHP is one of the Multi-Criteria Decision Making (MCDM) methods, originally developed by Saaty (1990). Saaty provided the theoretical foundation for AHP, a decision support tool that can be used to solve complex decision problems while taking both tangible and intangible aspects into account. AHP is an effective quantitative tool for measuring risk impact.

The basic step of the AHP method is to construct a hierarchical structure for the evaluation goal. In this structure, the first level (the goal level) contains the evaluation goal and is used to formulate the total weighting for the risk analysis. The second level is the criteria level, which categorizes the criteria under the evaluation goal. In some cases, each criterion is further broken down into sub-criteria at a third level; in this thesis these sub-criteria are expressed as contributing factors.

Using this top-down decomposition technique, the hierarchical structure is built as shown in Figure 9.


Figure 9 Three-tier hierarchical structure with top-down decomposition technique
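
As an illustration only, the three-tier structure of Figure 9 can be represented as a simple nested mapping; the goal, criteria, and factor names below are placeholders and not those used elsewhere in this thesis.

```python
# A minimal sketch of a three-tier AHP hierarchy:
# goal -> criteria -> contributing factors (sub-criteria).
hierarchy = {
    "goal": "Risk impact of a TIE problem",
    "criteria": {
        "Criterion A": ["Factor A1", "Factor A2"],
        "Criterion B": ["Factor B1", "Factor B2", "Factor B3"],
    },
}

# Walk the hierarchy top-down, as in the decomposition technique.
for criterion, factors in hierarchy["criteria"].items():
    print(criterion, "->", ", ".join(factors))
```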

This hierarchical structure supports a decision maker in applying experience, knowledge, and intuition to construct a judgment matrix, i.e., a pair-wise comparison matrix, which is formulated in equations (3) to (5).

$$
A = [a_{ij}]_{n \times n} =
\begin{pmatrix}
w_1/w_1 & w_1/w_2 & \cdots & w_1/w_n \\
w_2/w_1 & w_2/w_2 & \cdots & w_2/w_n \\
\vdots & \vdots & \ddots & \vdots \\
w_n/w_1 & w_n/w_2 & \cdots & w_n/w_n
\end{pmatrix}
\qquad (3)
$$

where

$$ a_{ij} = \frac{w_i}{w_j}, \qquad i, j = 1, 2, \ldots, n \qquad (4) $$

$$ a_{ji} = \frac{1}{a_{ij}}, \qquad i, j = 1, 2, \ldots, n \qquad (5) $$

Here aij represents the pairwise comparison rating between element i and element j of a level with respect to the upper level, whose weights are denoted by w. The entries aij are governed by the following rules: aij > 0; aij = 1/aji; and aii = 1 for each i and j. Example 1 shows the construction of such a judgment matrix with three criteria.


Example 1: Judgment matrix with three criteria

If there is a three-criteria hierarchical structure, the judgment matrix with symbolic elements can be shown as in Table 2. An element aij represents the comparison weighting (denoted by w) between criteria i and j, aij = wi/wj, i.e., the relative importance of criterion i over criterion j.

Table 2 Judgment matrix with symbolic elements in Example 1

Criterion | A           | B           | C
A         | a11 = 1     | a12         | a13
B         | a21 = 1/a12 | a22 = 1     | a23
C         | a31 = 1/a13 | a32 = 1/a23 | a33 = 1

Note that the numerical value is always equal to 1 for the diagonal elements a11, a22, and a33, owing to self-comparisons. Once the elements a12, a13, and a23, which represent the comparisons A vs. B, A vs. C, and B vs. C, are given, the elements a21, a31, and a32 are determined immediately because the matrix is reciprocal symmetric (i.e., a21 = 1/a12, a31 = 1/a13, and a32 = 1/a23).
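
The reciprocal structure described above is straightforward to mechanize. The sketch below, which assumes NumPy (the thesis itself prescribes no particular software), fills a complete judgment matrix from only the upper-triangular comparisons a12, a13, and a23 of Example 1; the numerical values are illustrative.

```python
import numpy as np

def judgment_matrix(upper, n):
    """Build an n x n reciprocal pairwise comparison matrix from
    the upper-triangular entries {(i, j): a_ij} with i < j."""
    A = np.eye(n)                 # a_ii = 1: self-comparisons
    for (i, j), a_ij in upper.items():
        A[i, j] = a_ij
        A[j, i] = 1.0 / a_ij      # reciprocal symmetry: a_ji = 1 / a_ij
    return A

# Illustrative values for A vs. B, A vs. C, and B vs. C
A = judgment_matrix({(0, 1): 3, (0, 2): 5, (1, 2): 3}, n=3)
print(A)
```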

Pair-wise comparison of the matrix in equation (3) is a crucial step in creating a relative measurement. AHP is a method of relative measurement, which is useful where few standard scales of measurement exist. The vector of priorities is the principal eigenvector of the matrix; it gives the relative priority of the criteria measured on a ratio scale (Saaty, 1990). Hence, a relative scale is essential for representing priority or importance.

In the AHP method, the weights can be calculated by introducing pair-wise comparisons between the criteria and contributing factors. According to Saaty (1990), the priorities of the elements can be estimated by finding the principal eigenvector w of the matrix A, that is:

$$ A W = \lambda_{\max} W \qquad (6) $$

When the vector W is normalized, it becomes the vector of priorities of the elements of one level with respect to the upper level; λmax is the largest eigenvalue of the matrix A. When the pair-wise comparison matrix satisfies transitivity for all pair-wise comparisons, it is said to be consistent and verifies the following relation (see Appendix D):

$$ a_{ij} = a_{ik} \, a_{kj} \qquad \forall \, i, j, k \qquad (7) $$
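
The following sketch shows equations (6) and (7) numerically, again assuming NumPy. The priority vector is the normalized principal eigenvector of A, and λmax can be compared with n: a perfectly consistent matrix satisfying equation (7) has λmax = n, so the gap between the two indicates inconsistency.

```python
import numpy as np

def ahp_priorities(A):
    """Return the normalized principal eigenvector of A and lambda_max."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)           # principal (largest) eigenvalue
    lam_max = eigvals[k].real
    w = np.abs(eigvecs[:, k].real)
    return w / w.sum(), lam_max

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])
w, lam_max = ahp_priorities(A)
print(w)        # priority vector, summing to 1
print(lam_max)  # close to n = 3 for a near-consistent matrix
```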

Table 3 presents the pair-wise comparison scale used in AHP, developed by Saaty (1990). It allows qualitative judgments, including intangible attributes, to be converted into numerical values.

Table 3 AHP pair-wise comparison scale

Numerical value | Verbal scale                                        | Explanation
1               | Equal importance of both elements                   | Two elements contribute equally
3               | Moderate importance of one element over another     | Experience and judgment favour one element over another
5               | Strong importance of one element over another       | An element is strongly favoured
7               | Very strong importance of one element over another  | An element is very strongly dominant
9               | Extreme importance of one element over another      | An element is absolutely dominant

4.3.1 AHP Method for Evaluating Impact Value

In this section, an example is given of applying the AHP method to evaluate the risk impact value of a TIE. The risk impact is converted to a numerical value so that it can be quantified in the TIE risk assessment process. In this thesis, we assume the judgment matrix is given, i.e., that it represents questionnaire results. Our emphasis is on introducing the AHP method for evaluating the impact values of a TIE problem rather than on collecting questionnaire results.

In equation (6), the normalized principal eigenvector of the resulting comparison matrix provides a measure of each criterion’s relative importance with respect to the overall objective. Similarly, pairwise comparisons are made at the next level, the contributing factors, with respect to each criterion, and the corresponding pairwise comparison matrices are obtained. Example 2 is a simple case that explains the steps for measuring the impact values of a TIE problem with contributing factors but without criteria.

Example 2: Measurement of impact values by AHP method

In order to elaborate the process for calculating the impact value using the AHP method, we take a partial example of ‘medical errors due to unsafe acts’ (Nielsen, 1993). Figure 10 shows its hierarchical structure.

Figure 10 An example of medical errors

For simplicity, the intended action for violation of safe procedures was omitted. The basic errors are broken down into three contributing factors: ‘slip’, ‘lapse’, and ‘mistake’.


At level 1, ‘errors due to unsafe acts’ is the evaluation goal. At level 2, the three basic error types are the contributing factors being measured. A ‘slip’ is an action that is not carried out as intended and is observable. A ‘lapse’ is similar to a slip, except that the error is not observable. A ‘mistake’ is an action that proceeds as planned but fails to achieve its intended outcome. Using the concept of absolute measurement in the AHP method, the calculation of the relative risk impact for the contributing factors is depicted in the form of a judgment matrix, as shown in Table 4.

Table 4 Judgment matrix of Example 2

Contributing Factor | Slip | Lapse | Mistake
Slip                | a11  | a12   | a13
Lapse               | a21  | a22   | a23
Mistake             | a31  | a32   | a33

In real practice, the judgment matrix is built by collecting questionnaire results from domain experts. For simplicity, the numerical value of each element is assumed to be given; the matrix of pair-wise comparisons of the contributing factors is shown in Table 5.

Table 5 A judgment matrix with given comparison weights in Example 2

Contributing Factor | Slip             | Lapse            | Mistake
Slip                | a11 = 1          | a12 = 3          | a13 = 5
Lapse               | a21 = 1/3 = 0.33 | a22 = 1          | a23 = 3
Mistake             | a31 = 1/5 = 0.2  | a32 = 1/3 = 0.33 | a33 = 1
Sum                 | 1.53             | 4.33             | 9

Given the upper-triangular comparisons a12 = 3, a13 = 5, and a23 = 3, the lower-triangular elements follow immediately from the reciprocal property of equation (5).



Applying equations (3) to (5) to the comparison weights in Table 5, we obtain the numerical judgment matrix, a square, reciprocal symmetric matrix, shown with fractional and decimal entries in equations (8) and (9):

$$
A =
\begin{pmatrix}
1 & 3 & 5 \\
1/3 & 1 & 3 \\
1/5 & 1/3 & 1
\end{pmatrix}
\qquad (8)
$$

$$
A \approx
\begin{pmatrix}
1 & 3 & 5 \\
0.33 & 1 & 3 \\
0.20 & 0.33 & 1
\end{pmatrix}
\qquad (9)
$$

As noted, the priorities of the elements can be estimated by finding the principal eigenvector w of the matrix A (Saaty, 2003), that is:

$$ A W = \lambda_{\max} W \qquad (10) $$

When the vector W is normalized, it becomes the vector of priorities of the elements of one level with respect to the upper level; λmax is the largest eigenvalue of the matrix A.

Let us show how to obtain the priority vector:

The composite weights of elements a11, a21, and a31 are calculated as follows:

Composite weight of element a11 = a11 ÷ (a11 + a21 + a31);

Composite weight of element a21 = a21 ÷ (a11 + a21 + a31);

Composite weight of element a31 = a31 ÷ (a11 + a21 + a31).

The composite weights of elements a12, a22, and a32 are calculated as follows:

Composite weight of element a12 = a12 ÷ (a12 + a22 + a32);

Composite weight of element a22 = a22 ÷ (a12 + a22 + a32);

Composite weight of element a32 = a32 ÷ (a12 + a22 + a32).

The composite weights of elements a13, a23, and a33 are calculated as follows:

Composite weight of element a13 = a13 ÷ (a13 + a23 + a33);

Composite weight of element a23 = a23 ÷ (a13 + a23 + a33);

Composite weight of element a33 = a33 ÷ (a13 + a23 + a33).

For example, the composite weight of element a11 is calculated as 1 ÷ 1.53 = 0.65, that of a21 as 0.33 ÷ 1.53 = 0.22, and that of a31 as 0.2 ÷ 1.53 = 0.13. Similar calculations apply to the other composite weights. The composite weights for all contributing factors are shown in Table 6 below.

Table 6 Composite weight for contributing factors in Example 2

Contributing Factor | Slip               | Lapse               | Mistake
Slip                | 1 ÷ 1.53 = 0.65    | 3 ÷ 4.33 = 0.69     | 5 ÷ 9 = 0.56
Lapse               | 0.33 ÷ 1.53 = 0.22 | 1 ÷ 4.33 = 0.23     | 3 ÷ 9 = 0.33
Mistake             | 0.2 ÷ 1.53 = 0.13  | 0.33 ÷ 4.33 = 0.08  | 1 ÷ 9 = 0.11
Sum                 | 1.00               | 1.00                | 1.00

In the next step, the priority vector entry for the contributing factor ‘slip’ is calculated as 0.65 + 0.69 + 0.56 = 1.90. Similarly, the entries for ‘lapse’ and ‘mistake’ can be shown to be 0.78 and 0.32, respectively.

Finally, the impact value of the contributing factor ‘slip’ is calculated as 1.90 ÷ 3 = 0.6333, while those of ‘lapse’ and ‘mistake’ are 0.78 ÷ 3 = 0.2605 and 0.32 ÷ 3 = 0.1062, as shown in Table 7 below.

Table 7 Impact values of contributing factors ‘slip’, ‘lapse’, and ‘mistake’ in Example 2

Contributing Factor | Composite Weight (Slip) | Composite Weight (Lapse) | Composite Weight (Mistake) | Priority Vector | Impact Value
Slip                | 0.65                    | 0.69                     | 0.56                       | 1.90            | 63.33%
Lapse               | 0.22                    | 0.23                     | 0.33                       | 0.78            | 26.05%
Mistake             | 0.13                    | 0.08                     | 0.11                       | 0.32            | 10.62%
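
The column-normalization procedure of Example 2 can be reproduced in a few lines, again assuming NumPy: each column of the Table 5 matrix is divided by its column sum (Table 6), the rows are summed to give the priority vector, and the sums are divided by n, recovering the impact values of Table 7 up to rounding.

```python
import numpy as np

# Judgment matrix of Example 2 (Table 5), with entries rounded
# as in the thesis (0.33 for 1/3, 0.2 for 1/5).
A = np.array([[1.00, 3.00, 5.00],
              [0.33, 1.00, 3.00],
              [0.20, 0.33, 1.00]])

composite = A / A.sum(axis=0)      # Table 6: divide each column by its sum
priority = composite.sum(axis=1)   # row sums: about 1.90, 0.78, 0.32
impact = priority / A.shape[0]     # divide by n = 3

for factor, value in zip(["slip", "lapse", "mistake"], impact):
    print(f"{factor}: {value:.2%}")  # ~63.4%, ~26.0%, ~10.6%
```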
