Systematic quality improvement in healthcare: clinical performance measurement and registry-based feedback - Chapter 1: General introduction and outline


UvA-DARE (Digital Academic Repository), University of Amsterdam (https://dare.uva.nl)

Citation for published version (APA):
van der Veer, S. N. (2012). Systematic quality improvement in healthcare: clinical performance measurement and registry-based feedback.



Chapter 1


General introduction and outline

This thesis addresses the subject of systematic quality improvement in healthcare. Specifically, we investigated the following three research topics: quality improvement strategies, patient experience as a clinical performance indicator, and the impact of registry-based feedback on the quality of care. The first two topics were explored within the clinical domain of renal replacement therapy; the Dutch National Intensive Care Evaluation (NICE) registry was used as a practical context to examine the impact of registry-based feedback.

This chapter introduces the three research topics, and then briefly describes the contexts in which they were explored. We conclude Chapter 1 by formulating the research questions and outlining the remaining chapters.

Introducing the research topics

Quality improvement strategies

There is persistent room for improvement in healthcare.1-3 This may partly be explained by the complexity of the healthcare system, which hampers the achievement of change.4-6 One approach to changing complex systems is systematic quality improvement (QI).7-11 This approach is characterized by its focus on solving problems in the system’s underlying processes rather than focusing on correcting the mistakes of individuals. It relies on data from healthcare professionals’ own setting to guide practice improvement, and it encourages working in multidisciplinary QI teams. The task of a QI team is to identify QI strategies. In this thesis we defined a QI strategy as a systematic attempt to improve the way care delivery is organized. It concerns interventions that need adaptation to the local setting by means of an iterative development process. This process is known as the Plan-Do-Study-Act (PDSA) cycle, and includes small-scale evaluations of the impact of a proposed strategy.7-11 QI strategies can be distinguished from best practices, which we defined as (a set of) clinical actions that are considered to improve outcomes in patients, regardless of where they are treated. For example, to increase the uptake of the best practice of prescribing prophylactic aspirin to patients hospitalized after acute myocardial infarction, implementing a computerized reminder system is a potentially effective QI strategy.12 Alternatively, in settings without a robust information technology infrastructure in place, providing comparative feedback reports on adherence rates combined with educational elements might be considered.13

Many systematic reviews have evaluated the impact of QI strategies on the quality of healthcare across medical domains.14-19 However, even though different clinical contexts may require different strategies to achieve change,20 reviews focusing on a specific clinical domain are sparse. Healthcare professionals may not be familiar with the concepts underlying systematic quality improvement,21 or may be unaware of which QI strategies apply to their particular setting. This may explain part of the existing opportunity for improvement in healthcare.

Clinical performance measurement – patient experience as an indicator

Measurement has traditionally been part of quality improvement in healthcare. The pivotal role of clinical performance data was already acknowledged by Ernest Codman one hundred years ago, when he started to record medical errors and to link these errors to patient outcomes in order to improve the care delivered in his “End Result Hospital”.22 Since then, continuous measurement of clinical performance has become increasingly integrated into healthcare systems worldwide.23-25

Before introducing the topic of patient experience as an indicator of clinical performance, this section describes two types of performance measurement systems, and some general background on clinical performance indicators.


FORMATIVE VERSUS SUMMATIVE SYSTEMS

In the literature, two main performance measurement systems are distinguished: formative systems focusing on internal quality control, and summative systems focusing on external accountability.5;26 In formative systems, performance measurement is primarily a tool for healthcare providers to monitor and improve their care processes without external interference or direct negative consequences for payment and reputation. The National Intensive Care Evaluation (NICE) foundation, which aims to improve the quality of intensive care, is an example of a formative initiative from the Netherlands.27 Pay-for-performance28 and public reporting programs29 are typical summative systems, mostly used by governments, payers, and patient organizations. They link low performance to reduced financial resources or reputational harm; once low performance has been established, care providers have little opportunity to change their practice in order to avert these consequences. Examples from the Netherlands are the public reporting of hospital care quality using the Healthcare Inspectorate’s performance indicator set30 or the hospital standardized mortality ratio.31

CLINICAL PERFORMANCE INDICATORS

Regardless of the formative or summative nature of the system, clinical performance indicators form the core of any performance measurement initiative.5;23;32 Indicators are proxies of performance that indicate potential opportunities for improvement.5;33 Three classic categories can be distinguished34: structure, process, and outcome indicators. Structure indicators refer to factors associated with the healthcare setting, e.g., the availability of equipment. They are linked to performance by the assumption that the proper settings will result in high quality care. Process indicators refer to the care that is actually being delivered, and the extent to which this is in line with established clinical standards; for instance, the percentage of eligible patients who receive β-blockers after an acute myocardial infarction. Outcome indicators involve the ultimate status of the patient after having received treatment, such as the mortality rate among coronary artery bypass surgery patients, or quality of life after kidney transplantation.

When composing an indicator set one should strike a balance between covering all the important aspects of performance, and keeping data collection robust and feasible.5;32;35 This includes identifying reliable indicator data that are readily electronically available as byproducts of routine processes, or easily made available with minimal extra resources.32;36 Once reliable performance data are collected, they need to be translated into interpretable and actionable information; for example, by adjusting for case-mix factors, and developing a feedback strategy that matches stakeholders’ needs and preferences.32;37
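Translating raw indicator data into interpretable information can be illustrated with a small sketch. The snippet below is not taken from the thesis or any registry; it is a minimal, hypothetical example of case-mix adjustment via a standardized mortality ratio (SMR), where each patient record carries a death probability predicted by some prognostic model, and the SMR compares observed with expected deaths.

```python
# Illustrative sketch of case-mix adjustment (hypothetical data, not from
# the thesis). Each patient is a tuple of the observed outcome (died:
# True/False) and a predicted death probability from a prognostic model.
# SMR = observed deaths / expected deaths; an SMR above 1 means more
# deaths occurred than the patients' severity of illness would predict.

def standardized_mortality_ratio(patients):
    """patients: list of (died, predicted_probability) tuples."""
    observed = sum(1 for died, _ in patients if died)
    expected = sum(prob for _, prob in patients)
    if expected == 0:
        raise ValueError("expected deaths is zero; SMR undefined")
    return observed / expected

# Hypothetical unit: 3 deaths among 6 admissions, while the model
# expected about 2.4 deaths given the patients' case mix.
unit = [
    (True, 0.80), (True, 0.70), (True, 0.50),
    (False, 0.20), (False, 0.10), (False, 0.10),
]
smr = standardized_mortality_ratio(unit)
print(round(smr, 2))
```

The same raw mortality count can thus yield very different conclusions for units with different case mixes, which is why registries adjust before comparing providers.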

PATIENT EXPERIENCE AS A CLINICAL PERFORMANCE INDICATOR

Patient experience is considered an important and relevant patient outcome by many stakeholders involved in the care delivery process,34;37-39 and, therefore, is an outcome indicator with high face validity.40 Several public reporting initiatives have incorporated the patient perspective as a part of clinical performance, such as the Consumer Assessment of Healthcare Providers and Systems (CAHPS) in the USA,41 and the Consumer Quality (CQ) Index initiative in the Netherlands.42

However, whereas the outcome indicator ‘death’ is objective, unmistakable, and therefore relatively easy to measure, patient experience is not. Moreover, patient experience has been shown to also be influenced by factors that are not attributable to healthcare.43-45 Hence, to use patient experience as an accurate indicator of clinical performance, and to facilitate its use for quality improvement, a validated measurement instrument and knowledge of its determinants are needed.

Impact of registry-based feedback on clinical performance

Clinical performance measurement and feedback have always been closely knit: besides collecting medical error data, Ernest Codman also made them publicly available to patients and other hospitals by publishing an annual report.22 Nowadays, performance feedback is considered a common QI strategy to change clinical practice.46

Providing performance feedback reports is a standard service offered by medical quality registries. A medical quality registry is a systematic and continuous collection of a standardized set of health and demographic data for a specific patient population, submitted by multiple users, held in a central database, and subjected to a data quality assurance protocol.35;47 Registry-based feedback often comprises data on a broad range of performance indicators, benchmarked against external standards or peer performance. The underlying assumption is that reports of inferior or inconsistent care are an incentive for healthcare providers to change their routine practice.48 Yet, in general, the impact of feedback on the quality of care has been shown to be small to moderate, and it remains unclear how this impact can be further optimized.49
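The core comparison behind such benchmarked feedback can be sketched as follows. The unit names, rates, and thresholds below are invented for illustration and are not drawn from any actual registry; the sketch merely shows one way a report might flag a unit whose indicator rate falls short of both an external standard and the peer average.

```python
# Hypothetical sketch of registry benchmarking logic: compare one unit's
# indicator rate (e.g., adherence to a best practice) with an external
# standard and with the mean of its peers, and flag potential improvement
# opportunities. All names and numbers are illustrative assumptions.

def benchmark(rates, unit, external_standard):
    """rates: dict mapping unit name -> indicator rate in [0, 1]."""
    peers = [r for u, r in rates.items() if u != unit]
    peer_mean = sum(peers) / len(peers)
    own = rates[unit]
    return {
        "unit": unit,
        "rate": own,
        "peer_mean": round(peer_mean, 3),
        "meets_standard": own >= external_standard,
        # Flag only units below both the standard and their peers.
        "flagged_for_improvement": own < external_standard and own < peer_mean,
    }

rates = {"ICU-A": 0.92, "ICU-B": 0.78, "ICU-C": 0.85, "ICU-D": 0.70}
report = benchmark(rates, "ICU-D", external_standard=0.80)
print(report)
```

Real registry reports are of course richer (trends over time, confidence intervals, case-mix adjustment), but the provider-versus-benchmark comparison above is the common core.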

Introducing the research contexts

The topics of systematic quality improvement strategies and of patient experience as a clinical performance indicator were explored within the clinical domain of renal replacement therapy. We used the feedback provided by the Dutch NICE registry as a practical example to address the topic of registry-based feedback effectiveness. Both contexts are briefly described below.

Renal replacement therapy

End-Stage Renal Disease (ESRD) is a chronic condition in which the kidney function can no longer sustain life; ESRD patients require renal replacement therapy (RRT). RRT care comprises chronic dialysis and kidney transplantation. The essence of dialysis is removing toxins and excess water from the body, which can be done by a machine holding an artificial filtering device (hemodialysis), or via a dialysis solution (dialysate) that is infused into the patient’s abdominal cavity (peritoneal dialysis). Most hemodialysis patients receive their treatment at a dedicated outpatient dialysis center, which they visit three to four times a week; one dialysis session takes three to eight hours. Peritoneal dialysis requires renewal of the dialysate in the abdomen either manually four to five times during the day, by a machine at night, or by a combination of both; these procedures can be performed at home. ESRD patients who have received a kidney transplant no longer need dialysis. Although this (eventually) results in a lower treatment intensity, transplanted patients must take immunosuppressive drugs, and frequently visit the transplant clinic and other caregivers for the rest of their lives.50

The long-term character of ESRD treatment and the intensive interaction between patient and healthcare provider make patient experience an important outcome indicator of the quality of RRT care. They also make the delivery of RRT care complex, especially considering that patients frequently have comorbidities and that treatment involves healthcare professionals from multiple disciplines. As stated earlier, this complexity hampers the achievement of change, which may partly explain the persisting room for improvement in the delivery of RRT care.51-53


The National Intensive Care Evaluation (NICE) registry

The NICE registry is entrusted with collecting and reporting data on the quality of care delivered at Dutch intensive care units (ICUs). ICUs are complex organizational units within hospitals providing multidisciplinary and expensive care to a heterogeneous population; patients admitted to the ICU are usually in need of intensive monitoring and some form of mechanical or pharmacological support, and have a relatively high mortality and morbidity risk.54;55 In the intensive care domain, systematic QI and clinical performance measurement are ubiquitous,56-60 which is reflected by the many performance indicator sets61-64 and numerous ICU quality registries.65-69 In the Netherlands, the intensive care profession founded the NICE registry in 1996 with the aim to systematically and continuously monitor, compare, and improve the quality of ICU care.69 Data collection started with the outcome indicators case-mix adjusted hospital mortality and length of ICU stay. In 2006, the Netherlands Society for Intensive Care (NVIC) extended the indicator set to a total of eleven structure, process, and outcome measures, adding items such as nurse-to-patient ratio, proportion of out-of-range glucose measurements, and unplanned extubation rate.61 Currently, almost 90% of all Dutch ICUs voluntarily submit their data to the registry. Until recently, they received quarterly and annual benchmark reports on the indicators as a regular NICE service.

Research questions and outline of the thesis

We formulated one research question per topic, and explored the answer in one of the two research contexts.

Quality improvement strategies in RRT care

Despite the many literature reviews evaluating the impact of QI strategies, there had been no attempts to create an overview of the systematic QI strategies reported within the domain of RRT care. Still, we anticipated that many initiatives had been undertaken aiming to change the delivery of care to ESRD patients. Sharing the experiences from these initiatives was expected to accelerate the improvement of RRT care. This triggered our first research question.

Research question 1 – Which quality improvement strategies have been reported within the domain of RRT care, and what was their impact on the quality of care?

We address this research question in Chapter 2. This chapter describes the results of a systematic review of the literature on initiatives that aimed to increase the uptake of best RRT practice in daily care. We present a categorized overview of the identified QI strategies, and report on their impact on the quality of RRT care.

Patient experience as an indicator of the clinical performance of dialysis centers

The Consumer Quality (CQ) index initiative in the Netherlands publicizes data on the experience patients have with a broad range of healthcare services.70-73 This initiative provides a standardized method comprising criteria for developing the measurement instruments, and for subsequent analysis and reporting of patient experience data. Until now there was no CQ index instrument for chronic dialysis care. In 2002, the Dutch Kidney-patient federation (NVN) developed a survey to measure patient satisfaction with dialysis care, which was employed as part of the certification scheme for Dutch dialysis centers.74 In 2006, the NVN decided to revise the survey according to the CQ index criteria. This formed the basis for our second research question.


Research question 2 – How can patient experience be used as an indicator of the clinical performance of dialysis centers?

This question is addressed in Chapter 3. This chapter describes the development and validation of two CQ index instruments to measure patient experience with in-center hemodialysis care, and with peritoneal dialysis and home hemodialysis care, respectively.

In Chapter 4 we explore the relationship between characteristics of dialysis patients and the experience they have with their care.

The impact of the NICE registry feedback reports on ICU performance

To investigate the potential of the NICE registry reports to prompt healthcare providers to change their daily care, and to explore how the impact of feedback could be further optimized, we formulated the third research question.

Research question 3 – How can the impact of the NICE registry feedback reports on the quality of intensive care be increased?

To answer this research question, we first systematically reviewed the literature on how medical quality registries in general provide performance feedback to healthcare professionals in Chapter 5. In this chapter, we additionally investigated the effect of registry-based feedback on the quality of care, and identified the factors that were suggested as moderators of this effect.

Chapter 6 describes the development of a new multifaceted feedback strategy within the context of the NICE registry, and the study protocol for the quantitative and qualitative evaluation of the strategy’s effectiveness. The results of the quantitative evaluation are presented in Chapter 7, where we conducted a cluster randomized controlled trial to assess the impact of the feedback strategy on ICU patient outcomes compared to standard NICE feedback reports. In Chapter 8 we report on the results of the qualitative study, in which we explored potential explanations for why the intervention was or was not effective.

Finally, in Chapter 9 we synthesize and discuss the main findings presented in this thesis, and provide suggestions for future research.


Reference List

(1) Lenfant C. Shattuck lecture--clinical research to clinical practice--lost in translation? N Engl J Med 2003; 349:868-874.

(2) McGlynn EA, Asch SM, Adams J et al. The quality of health care delivered to adults in the United States. N Engl J Med 2003; 348:2635-2645.

(3) Landrigan CP, Parry GJ, Bones CB, Hackbarth AD, Goldmann DA, Sharek PJ. Temporal trends in rates of patient harm resulting from medical care. N Engl J Med 2010; 363:2124-2134.

(4) Plsek PE, Greenhalgh T. Complexity science: The challenge of complexity in health care. BMJ 2001; 323:625-628.

(5) Freeman T. Using performance indicators to improve health care quality in the public sector: a review of the literature. Health Serv Manage Res 2002; 15:126-137.

(6) Coiera E. Why system inertia makes health reform so difficult. BMJ 2011; 342:d3693.

(7) Lynn J, Baily MA, Bottrell M et al. The ethics of using quality improvement methods in health care. Ann Intern Med 2007; 146:666-673.

(8) Shortell SM, Bennett CL, Byck GR. Assessing the impact of Continuous Quality Improvement on clinical practice: what will it take to accelerate progress. The Milbank Quarterly 1998; 76:593-624.

(9) Langley GJ, Moen RD, Nolan KM, Nolan TW, Norman CL, Provost LP. The improvement guide: a practical approach to enhancing organizational performance. 2nd ed. San Francisco: Jossey-Bass Publishers, 2009.

(10) Berwick DM. Developing and testing changes in delivery of care. Ann Intern Med 1998; 128:651-656.

(11) Plsek PE. Quality improvement methods in clinical medicine. Pediatrics 1999; 103:203-214.

(12) Dexter PR, Perkins S, Overhage JM, Maharry K, Kohler RB, McDonald CJ. A computerized reminder system to increase the use of preventive care for hospitalized patients. N Engl J Med 2001; 345:965-970.

(13) Ferguson TB Jr., Peterson ED, Coombs LP et al. Use of continuous quality improvement to increase use of process measures in patients undergoing coronary artery bypass graft surgery: a randomized controlled trial. JAMA 2003; 290:49-56.

(14) Grol R, Grimshaw JM. From best evidence to best practice: effective implementation of change in patients' care. Lancet 2003; 362:1225-1230.

(15) Oxman AD, Thomson MA, Davis DA, Haynes B. No magic bullets: a systematic review of 102 trials of interventions to improve professional practice. Can Med Assoc J 1995; 153:1423-1431.

(16) Grimshaw JM, McAuley LM, Bero LA et al. Systematic reviews of the effectiveness of quality improvement strategies and programmes. Qual Saf Health Care 2003; 12:298-303.

(17) Grimshaw JM, Shirran L, Thomas R, Mowatt G, Fraser C, et al. Changing provider behavior. An overview of systematic reviews of interventions. Medical Care 2001; 39:II-2-II-45.

(18) Grimshaw J, Eccles M, Thomas R et al. Toward evidence-based quality improvement. Evidence (and its limitations) of the effectiveness of guideline dissemination and implementation strategies 1966-1998. J Gen Intern Med 2006; 21 Suppl 2:S14-S20.

(19) Garg AX, Adhikari NK, McDonald H et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA 2005; 293:1223-1238.

(20) Shekelle PG, Pronovost PJ, Wachter RM et al. Advancing the science of patient safety. Ann Intern Med 2011; 154:693-696.

(21) Davies HTO, Powell AE, Rushmer RK. Healthcare professionals' views on clinician engagement in quality improvement: a literature review. London: The Health Foundation, 2007.

(22) Neuhauser D. Ernest Amory Codman MD. Qual Saf Health Care 2002; 11:104-105.

(23) Damberg CL, Sorbero ME, Lovejoy SL et al. An evaluation of the use of performance measures in health care. Santa Monica (CA): RAND Corporation, 2011. Available at http://www.rand.org/pubs/technical_reports/TR1148. Accessed 6 January 2012.

(24) Groene O, Skau JK, Frolich A. An international review of projects on hospital performance assessment. Int J Qual Health Care.

(25) Roski J, Kim MG. Current efforts of regional and national performance measurement initiatives around the United States. Am J Med Qual 2010; 25:249-254.

(26) Solberg LI, Mosser G, McDonald S. The three faces of performance measurement: improvement, accountability, and research. Jt Comm J Qual Improv 1997; 23:135-147.

(27) Van der Voort PHJ, Bakshi-Raiez F, De Lange DW, Bosman RJ, De Jonge E, et al. Trends in time: results from the NICE registry. Neth J Crit Care 2009; 13:8-15.

(28) Stecher BM, Camm F, Damberg CL et al. Toward a culture of consequences: performance-based accountability systems for public services. Santa Monica (CA): RAND Corporation, 2010. Available at http://www.rand.org/pubs/monographs/MG1019.html. Accessed 6 January 2012.

(29) Hibbard JH, Stockard J, Tusler M. Does publicizing hospital performance stimulate quality improvement efforts? Health Aff (Millwood) 2003; 22:84-94.

(30) Berg M, Meijerink Y, Gras M et al. Feasibility first: developing public performance indicators on patient safety and clinical effectiveness for Dutch hospitals. Health Policy 2005; 75:59-73.

(31) Jarman B, Pieter D, van der Veen AA et al. The hospital standardised mortality ratio: a powerful tool for Dutch hospitals to assess their quality of care? Qual Saf Health Care 2010; 19:9-13.

(32) Loeb JM. The current state of performance measurement in health care. Int J Qual Health Care 2004; 16 Suppl 1:i5-i9.

(33) Sheldon T. The healthcare quality measurement industry: time to slow the juggernaut? Qual Saf Health Care 2005; 14:3-4.

(34) Donabedian A. Evaluating the quality of medical care. 1966. Milbank Q 2005; 83:691-729.

(35) Arts DG, De Keizer NF, Scheffer GJ. Defining and improving data quality in medical registries: a literature review, case study, and generic framework. J Am Med Inform Assoc 2002; 9:600-611.

(36) Pringle M, Wilson T, Grol R. Measuring "goodness" in individuals and healthcare systems. BMJ 2002; 325:704-707.

(37) Lilford R, Mohammed MA, Spiegelhalter D, Thomson R. Use and misuse of process and outcome data in managing performance of acute medical care: avoiding institutional stigma. Lancet 2004; 363:1147-1154.

(38) Blumenthal D. Part 1: Quality of care--what is it? N Engl J Med 1996; 335:891-894.

(39) Improving the 21st-century health care system. In: Institute of Medicine, ed. Crossing the quality chasm: a new health system for the 21st century. 6th ed. Washington DC: National Academy Press; 2005:39-60.

(40) Pronovost PJ, Lilford R. Analysis & commentary: A road map for improving the performance of performance measures. Health Aff (Millwood) 2011; 30:569-573.

(41) Darby C, Crofton C, Clancy CM. Consumer Assessment of Health Providers and Systems (CAHPS): evolving to meet stakeholder needs. Am J Med Qual 2006; 21:144-147.

(42) Delnoij DM, Rademakers JJ, Groenewegen PP. The Dutch consumer quality index: an example of stakeholder involvement in indicator development. BMC Health Serv Res 2010; 10:88.

(43) Crow R, Gage H, Hampson S et al. The measurement of satisfaction with healthcare: implications for practice from a systematic review of the literature. Health Technol Assess 2002; 6:1-244.

(44) Hall JA, Dornan MC. Patient sociodemographic characteristics as predictors of satisfaction with medical care: a meta-analysis. Soc Sci Med 1990; 30:811-818.

(45) Cohen G. Age and health status in a patient satisfaction survey. Soc Sci Med 1996; 42:1085-1093.

(46) Grol R, Wensing M. Selection of strategies. In: Grol R, Wensing M, Eccles M, eds. Improving patient care: the implementation of change in clinical practice. London: Elsevier Butterworth Heinemann; 2005:122-134.

(47) Drolet BC, Johnson KB. Categorizing the world of registries. J Biomed Inform 2008; 41:1009-1020.

(48) Kiefe CI, Allison JJ, Williams OD, Person SD, Weaver MT, Weissman NW. Improving quality improvement using achievable benchmarks for physician feedback: a randomized controlled trial. JAMA 2001; 285:2871-2879.

(49) Jamtvedt G, Young JM, Kristoffersen DT, O'Brien MA, Oxman AD. Audit and feedback: effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2006; (2):CD000259.

(50) Stein A, Wild J. Kidney failure explained. 2nd ed. London: Class publishing, 2002.

(51) O'Hare AM, Rodriguez RA, Hailpern SM, Larson EB, Kurella TM. Regional variation in health care intensity and treatment practices for end-stage renal disease in older adults. JAMA 2010; 304:180-186.


(52) Tangri N, Moorthi R, Tighiouhart H, Meyer KB, Miskulin DC. Variation in fistula use across dialysis facilities: is it explained by case-mix? Clin J Am Soc Nephrol 2010; 5:307-313.

(53) Wetmore JB, Mahnken JD, Mukhopadhyay P et al. Geographic variation in cardioprotective antihypertensive medication usage in dialysis patients. Am J Kidney Dis 2011; 58:73-83.

(54) Bennett D, Bion J. ABC of intensive care: organisation of intensive care. BMJ 1999; 318:1468-1470.

(55) Higgins AM, Pettila V, Bellomo R, Harris AH, Nichol AD, Morrison SS. Expensive care - a rationale for economic evaluations in intensive care. Crit Care Resusc 2010; 12:62-66.

(56) Garland A. Improving the ICU: part 2. Chest 2005; 127:2165-2179.

(57) Garland A. Improving the ICU: part 1. Chest 2005; 127:2151-2164.

(58) McMillan TR, Hyzy RC. Bringing quality improvement into the intensive care unit. Crit Care Med 2007; 35:S59-S65.

(59) Curtis JR, Cook DJ, Wall RJ et al. Intensive Care unit quality improvement: A "how-to" guide for the interdisciplinary team. Crit Care Med 2006; 34:211-18.

(60) Kahn JM, Fuchs BD. Identifying and implementing quality improvement measures in the intensive care unit. Curr Opin Crit Care 2007; 13:709-713.

(61) De Vos M, Graafmans W, Keesman E, Westert G, Van der Voort P. Quality measurement at intensive care units: which indicators should we use? J Crit Care 2007; 22:267-74.

(62) Pronovost PJ, Berenholtz SM, Ngo K et al. Developing and pilot testing quality indicators in the intensive care unit. J Crit Care 2003; 18:145-155.

(63) Martin MC, Cabre L, Ruiz J et al. [Indicators of quality in the critical patient]. Med Intensiva 2008; 32:23-32.

(64) Berenholtz SM, Pronovost PJ, Ngo K et al. Developing quality measures for sepsis care in the ICU. Jt Comm J Qual Patient Saf 2007; 33:559-568.

(65) Harrison DA, Brady AR, Rowan K. Case mix, outcome and length of stay for admissions to adult general critical care units in England, Wales and Northern Ireland: the Intensive Care National Audit & Research Centre Case Mix Programme Database. Critical Care 2004; 8:R99-111.

(66) Stow PJ, Hart GK, Higlett T et al. Development and implementation of a high-quality clinical database: the Australian and New Zealand Intensive Care Society Adult Patient Database. J Crit Care 2006; 21:133-41.

(67) Cook SF, Visscher WA, Hobbs CL, Williams RL, the Project IMPACT Clinical Implementation Committee. Project IMPACT: Results from a pilot validity study of a new observational database. Crit Care Med 2002; 30:2765-70.

(68) Render ML, Freyberg RW, Hasselbeck R et al. Infrastructure for quality transformation: measurement and reporting in veterans administration intensive care units. BMJ Qual Saf 2011; 20:498-507.

(69) Bakshi-Raiez F, Peek N, Bosman RJ, De Jonge E, De Keizer NF. The impact of different prognostic models and their customization on institutional comparison of intensive care units. Crit Care Med 2007; 35:2553-60.

(70) Stubbe JH, Gelsema T, Delnoij DM. The Consumer Quality Index Hip Knee Questionnaire measuring patients' experiences with quality of care after a total hip or knee arthroplasty. BMC Health Serv Res 2007; 7:60.

(71) Stubbe JH, Brouwer W, Delnoij DM. Patients' experiences with quality of hospital care: the Consumer Quality Index Cataract Questionnaire. BMC Ophthalmol 2007; 7:14.

(72) Berendsen AJ, Groenier KH, de Jong GM et al. Assessment of patient's experiences across the interface between primary and secondary care: Consumer Quality Index Continuum of care. Patient Educ Couns 2009; 77:123-127.

(73) Damman OC, Hendriks M, Sixma HJ. Towards more patient centred healthcare: A new Consumer Quality Index instrument to assess patients' experiences with breast care. Eur J Cancer 2009; 45:1569-1577.

(74) van der Sande FM, Kooman JP, Ikenroth LJ, Gommers EP, Leunissen KM. A system of quality management in dialysis.
