
Performance improvements in health care through the implementation of value-based quality indicators and dashboards

Master Thesis Supply Chain Management

University of Groningen, Faculty of Economics and Business

Paulineke Lumer

Student number: s2356333

Paulinekelumer@hotmail.com

Supervisor, University of Groningen:

Prof. dr. ir. C.T.B. Ahaus

Co-assessor, University of Groningen:

A.C. Noort, MSc

Co-examiner, University of Groningen:

Prof. dr. J.T. van der Vaart

April 16, 2018


Performance improvements in health care through the implementation of value-based quality indicators and dashboards

ABSTRACT

Purpose – The purpose of this research is to investigate how disease teams implementing value-based health care select indicators, and how quality information presented on a dashboard can contribute to performance improvement.

Design/methodology/approach – A multiple case study was performed by investigating six disease teams from four different hospitals. Two researchers conducted 20 semi-structured interviews with interviewees located in both the Netherlands and the United States of America. The data were then transcribed and coded in order to perform within-case and cross-case analyses.

Findings – The findings reveal that disease teams incorporate the patients’ perspective in order to identify focal indicators. Moreover, limiting the number of indicators to a minimum is more effective when it comes to initiating improvements. Furthermore, performance improvements are implemented across all cases, but dashboards stimulate continuous improvement by providing real-time information. Sharing both real-time quality information and the results of improvement initiatives triggers improvements and motivates health care professionals. Lastly, it was found that including a data analyst in the multidisciplinary team significantly improves the analysis and presentation of quality information.

Originality/value – This paper contributes to the existing value-based health care literature by gaining a greater understanding of how health care professionals select and use quality indicators in practice. Furthermore, this research focuses on how the resulting quality information, in particular when visualised on dashboards, affects performance improvement initiatives and how this impacts health care professionals.

Keywords – Indicators, Value-based health care, Dashboards, Performance improvement


1. INTRODUCTION

Every health care system aims to improve the health of its patients, with quality being of utmost importance (Cinaroglu & Baser, 2016). Over the years, sets of outcome measures (or indicators) have been introduced in order to ensure and improve quality in health care. However, the number of indicators that health care professionals are required to publicly report is immense and continues to grow (Meyer et al., 2012). In response to this overload of quality measures, frameworks have been established to capture the essentials of what needs to be measured. Furthermore, existing health care literature depicts approaches towards the implementation of improvements using indicators (Boivin, Lehoux, Lacombe, Burgers & Grol, 2014; Deber & Schwartz, 2016; Freeman, 2002; Grol, Wensing, Eccles & Davis, 2013; Levesque & Sutherland, 2017; Smith, Mossialos & Papanicolas, 2008; Solberg, Mosser & McDonald, 1997). Yet a debate has emerged about the relevance of specific quality measures and indicators, and even widely accepted critical quality indicators are being questioned (“Are Today’s Providers Overwhelmed by Quality Reporting?” 2017; Meyer et al., 2012).

Moreover, Porter introduced the concept of value-based health care (VBHC) in response to the growing need for hospitals to reform their health care delivery (Abbasi, 2007). In VBHC, the underlying objective of health care delivery is to attain high value for patients, with value defined as “health outcomes achieved per dollar spent” (Porter, 2010: 2466). Therefore, when practising VBHC, indicators should focus on measuring value and health outcomes for patients. However, at present there is no study on how health care professionals decide upon focal indicators among the array of available measures, or on how theory is put into practice when it comes to improving health care using indicators. It remains ambiguous how the decision on which indicators to measure is made; hence the first research question is proposed as follows:

How do health care professionals practicing value-based health care select indicators?


[…] (Spitzmueller, Petersen, Sawhney, & Sittig, 2013; Wilson, 2001). In order to benefit from relevant information, it was suggested to improve tools to visualise and organise information (Bird et al., 2003), and over the years such tools have been introduced and have helped realise improvements in either processes or outcomes in the health care setting (Dowding et al., 2015; Green, 2011; Walburg, 2006). Moreover, Elg, Palmberg Broryd, & Kollberg (2013) argue that measuring performance can stimulate improvement initiatives. Accordingly, organising quality information on dashboards using quality and performance measures has been shown to enhance outcomes and improve performance in health care (Buttigieg, Pace, & Rathert, 2017; Dowding et al., 2015; Kroch et al., 2006). However, Dowding et al. (2015) argue that research still needs to be done on the consequences and effectiveness of data presentation when implementing such tools. Combining their argument with the intended performance improvement, the second research question is proposed as follows:

How can quality information presented using dashboards contribute to the performance of disease teams involved in value-based health care?


2. THEORETICAL FRAMEWORK

The concepts of interest mentioned in the previous section will be defined in the following sections. First, the characteristics of VBHC will be specified and the importance of value in current health care practice will be elaborated upon. Following that, the subject of performance improvement in health care will be discussed. Furthermore, the concept of indicators and the selection thereof will be reviewed. Thereafter, the concepts of quality information and dashboards will be defined, after which the conceptual model is presented and explained.

2.1 Value-Based Health Care

With the current movement of hospitals adopting a VBHC approach towards their health care delivery system, the definition of value in health care needs to be clarified. Quality of care is “the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge” (Lohr, 1990: 21). The National Academy of Medicine (formerly known as the Institute of Medicine) established this definition in response to the increasing health care quality problems in the United States (“Crossing the Quality Chasm: The IOM Health Care Quality Initiative,” 2013). However, value in health care encompasses more than just the quality of health care (Katz, Franken, & Makdisse, 2017). People involved in health care all have different perspectives on the meaning of value (Abbasi, 2007). As stated earlier, Porter (2010: 2466) defines value in health care as the “health outcomes achieved per dollar spent”.

Similarly, value has been described as the outcomes delivered “to patients relative to the cost of achieving those outcomes” (The Economist Intelligence Unit, 2016: 5).
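Schematically, both definitions reduce to the same ratio; the display below is a stylised summary added here for clarity rather than a formula taken from the thesis:

\[
\text{Value} \;=\; \frac{\text{Health outcomes achieved for the patient}}{\text{Costs incurred to achieve those outcomes}}
\]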

In order to create a functioning and practical definition of quality and value, measures or indicators are established (Kamal, 2016). Quality indicators are defined as “measurable elements of practice for which there is evidence or consensus that they reflect quality and hence help change the quality of care provided” (Lawrence & Olesen, 1997: 104). Given this definition of VBHC, Katz et al. (2017) assume that different institutions need to embrace the same standardised measures, due to the value-based competition that will arise between health care providers as a result of adopting VBHC. At this moment, clinical quality measures such as in-hospital mortality or complication rates are collected by the vast majority of hospitals (Katz et al., 2017; Lane-Fall & Neuman, 2013; “Types of Quality Measures,” 2015). Yet, for delivering value to patients, outcome measures other than, for example, the in-hospital mortality rate would be more relevant.

Accordingly, high value is not only defined by the quality of care, but also by the patient’s understanding and experience of quality care (Katz, Franken & Makdisse, 2017). In such a patient-centred care system with VBHC, identifying and measuring the preferences and priorities of the individual patient is important and a valid strategy for improving health outcomes (Luxford, Safran, & Delbanco, 2011; Mangin, Stephen, Bismah, & Risdon, 2016; Ruland, 1998). Health measures such as Patient Reported Outcome Measures (PROMs) can determine the outcome of care received by patients by comparing a patient’s health at different points in time, whereas the aspects of humanity of care are the focus of Patient Reported Experience Measures (PREMs) (Black, 2013). Furthermore, Porter, Larsson, & Lee (2016) describe how the International Consortium for Health Outcomes Measurement (ICHOM) combines well-validated outcome measures (including PROMs) in order to create efficient standard sets that track the minimum of measures that needs to be tracked. Nearly half of the global disease burden is covered by an ICHOM standard set (“ICHOM standard sets,” 2017). Although the previous examples of outcome measures are both comprehensive and well-known in VBHC, their practical implementation is still in its infancy.

2.2 Performance Improvement in Health Care

Batalden & Davidoff (2007) emphasise the importance of quality improvement in health care, as it benefits patients’ outcomes, system performance and professional development. Furthermore, Porter (2010) states that progress in the health care system is driven by disciplined measurement and improvement of value. VBHC lends itself perfectly as a basis for performance improvement: “Value-based health care worked as a trigger for initiating improvements related to processes, measurements and patients’ health outcomes” (Nilsson, Bååthe, Erichsen Andersson, & Sandoff, 2017: 10).

Integrating the patient perspective and the care process perspective in order to create value is both a quality improvement tool and a strategy for improving an organisation’s performance (Porter & Teisberg, 2006). Realising this strategy can only be done through consistency of action (Mintzberg, 1978). Moreover, to improve performance, the performance measures and the performance measurement system must be transparent. This paper adopts the definition of Neely, Gregory & Platts (1995: 80) of a performance measure as “a metric used to quantify the efficiency and/or effectiveness of an action”. Furthermore, Neely, Gregory & Platts (1995: 81) define a performance measurement system as “the set of metrics used to quantify both the efficiency and effectiveness of actions”. Walley, Silvester, & Mountford (2006) found that in practice there is a lack of understanding of how decisions affect performance when performance measurement systems are used. Therefore, Höög, Lysholm, Garvare, Weinehall, & Nyström (2016) argue that quality improvement needs a close inter-connection with core organisational processes and activities, as well as disciplined follow-up of organisational results and the constant monitoring of health care (Benn et al., 2009; Elg, Palmberg Broryd, & Kollberg, 2013; French et al., 2009).

Benchmarking hospitals against each other can stimulate improvements, as well as create awareness of performance measurement (Elg et al., 2013). Engaging management and clinical leaders in this quality improvement process will also make a high-quality health care delivery system more likely (Elg, Palmberg Broryd, & Kollberg, 2013; Kroch et al., 2006). However, it remains crucial for organisations to not only measure specific outcomes but rather learn how to utilise this measurement information as support for decision making (Murdoch & Detsky, 2013; Nordin, Kork, & Koskela, 2017).

Dashboards are an example of a visualisation tool for a performance measurement system, as defined by Neely, Gregory, & Platts (1995), that could be used by multidisciplinary disease teams in VBHC. Engaging users in the improvement of the dashboard and letting them decide upon new indicators will lead to a more thorough consideration of current measurement practices. A study by Porter, Baron, Chacko, & Tang (2012) showed that a disease team using a dashboard to share performance data had superior clinical outcomes. Moreover, all health care professionals associated with this disease team were able to propose new measures or new ways of analysing existing data, both to optimise the dashboard and to increase critical thinking and keep the health care professionals engaged. Health care professionals are the main users of the dashboard (Porter et al., 2012), and this interactive approach towards using and adapting the dashboard will most likely make users more willing to work with it, ultimately improving performance. However, the answer to how specific measures are chosen to be visualised on the dashboard remains ambiguous.

2.3 Selecting Indicators

Externally imposed indicators are mainly focused on accountability and on benchmarking hospitals. This could in turn possibly lead to a decrease in the financial resources available for monitoring indicators in order to determine strategies and improvement opportunities (Solberg, Mosser & McDonald, 1997). Additionally, when a large number of indicators must be measured, it may take time to determine which outcome measures determine value and improve the overall performance and quality of health care delivery, and which do not. Moreover, one must be careful with assessing specific indicators, as “incentivising certain behaviours or outcomes at the expense of others can have consequences, such as tunnel vision and measurement fixation (Mannion & Braithwaite, 2012; Rambur et al, 2013; Lester, Hannon & Campbell, 2011)” (Dowding et al., 2015: 89). In turn, this can result in health care professionals ignoring the other aspects of care delivery that are not being evaluated (Berwick, James, & Coye, 2003).

The process of selecting indicators differs depending on whether they serve accountability or improvement purposes. Therefore, the aim of measuring should be clear among the decision-makers (Freeman, 2002). Furthermore, the number of indicators should be balanced: too few measures can again lead to neglect of the aspects of care that are not measured, and too many indicators may result in confusion or apathy (Raleigh & Foot, 2010). Additionally, those who decide upon the focal measures should include the individuals who do the improving, as otherwise the indicators may not be used or could be deemed irrelevant (Solberg, Mosser, & McDonald, 1997). Moreover, Raleigh & Foot (2010) argue that performing well on a given set of indicators will not guarantee good care for the individual patient. However, Boivin, Lehoux, Lacombe, Burgers, & Grol (2014) found that involving patients can change improvement priorities, although no evidence was found that patient involvement resulted in changes to health care delivery.

Patients can thus be involved in determining what adds value for them (Mangin, Stephen, Bismah, & Risdon, 2016; Porter, 2009), and this is done in numerous ways. For example, with the pervasive use of social media amongst patients, online feedback on patient-perceived quality can reveal improvement opportunities (Hawkins et al., 2016). However, due to the interdependent relation between care activities, value for patients can sometimes only be uncovered over time, as it embodies longer-term outcomes such as sustainable recovery (Institute of Medicine of the National Academies, 2006). Accordingly, patient satisfaction surveys remain one of the essential sources of information for hospitals in determining performance improvement initiatives (Al-Abri & Al-Balushi, 2014).

Even though the current literature describes strategies for implementing improvements in health care (Boivin, Lehoux, Lacombe, Burgers & Grol, 2014; Deber & Schwartz, 2016; Freeman, 2002; Grol, Wensing, Eccles & Davis, 2013; Levesque & Sutherland, 2017; Smith, Mossialos & Papanicolas, 2008; Solberg, Mosser & McDonald, 1997), there is still a gap in the literature on how exactly health care professionals select the focal indicators in practice.

2.4 Presenting Quality Information

As a patient, one is subject to the care of several service providers, e.g. physicians, nurses, specialists, surgeons, pharmacists and so on. Looking at the entire health care experience, one may argue that it resembles a supply chain, which Mentzer et al. (2001: 4) define as “a set of three or more entities (organizations or individuals) directly involved in the upstream and downstream flows of products, services, finances, and/or information from a source to a customer”. Pitta & Laric (2004) argue that, besides being based on a supply chain, the health care delivery system is a value chain in which value is created for the patient at each stage. In supply chains, improved channel coordination and responsiveness of a partnership result from the exchange of high-quality information between partners, which will eventually lead to overall improved market performance (Kim, Cavusgil, & Calantone, 2006). Consequently, having the different health care providers in disease teams effectively exchange high-quality information will ultimately lead to improved health care delivery. Moreover, the overall performance of disease teams will be significantly improved when an improvement initiative has the exchange of high-quality information on its agenda (Bartlett, Julien, & Baines, 2007).

A vital tool to promote quality improvement within hospitals has emerged in the form of dashboards (Buttigieg et al., 2017; Dowding et al., 2015; Kroch et al., 2006). An article by Ahli (2017) showed that using software to visualise and analyse process indicators on a dashboard helped shorten the time between the diagnosis of a stroke and the required treatment by twenty minutes. As doctors often phrase it, in stroke treatment ‘time is brain’: every moment the patient is not treated, vast amounts of brain tissue are irretrievably lost (Saver, 2006). Implementing and using such dashboards permits users to analyse, per individual patient, where time has been lost, in order to recognise where operations can be optimised and improved later on (Ahli, 2017), which ultimately leads to improved performance. However, Koopman et al. (2011) question whether having information more easily available will contribute to quality improvement of patient care, safety and health outcomes.
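To make the kind of per-patient analysis described by Ahli (2017) concrete, the sketch below shows one minimal way such a breakdown could be computed. It is a hypothetical illustration only: the process steps, timestamps and field names are invented and are not taken from the software discussed in that article.

```python
# Hypothetical sketch: compute where time is lost between consecutive process
# steps of a stroke care pathway, per patient and aggregated across patients.
from datetime import datetime
from statistics import median

# Invented example data: timestamps (HH:MM) per patient for consecutive steps.
patients = {
    "patient_a": {"arrival": "10:02", "ct_scan": "10:21", "treatment": "10:58"},
    "patient_b": {"arrival": "14:10", "ct_scan": "14:25", "treatment": "14:47"},
}

steps = [("arrival", "ct_scan"), ("ct_scan", "treatment")]


def minutes_between(start: str, end: str) -> float:
    """Minutes elapsed between two HH:MM timestamps on the same day."""
    fmt = "%H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).seconds / 60


# Aggregate view: which step loses the most time across patients?
for start, end in steps:
    delays = [minutes_between(p[start], p[end]) for p in patients.values()]
    print(f"{start} -> {end}: median {median(delays):.0f} min, worst {max(delays):.0f} min")

# Drill-down view: where did an individual patient lose time?
for name, record in patients.items():
    breakdown = {f"{s}->{e}": minutes_between(record[s], record[e]) for s, e in steps}
    print(name, breakdown)
```

A dashboard built on such a breakdown would let a team see both the aggregated bottleneck and the individual cases behind it.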

Nevertheless, it can be expected that implementing a system for exchanging high-quality information, such as a dashboard, will lead to overall improved performance of the health care professionals.

2.5 Conceptual Model

A conceptual model can be derived from the previous sections of the literature review and is proposed in the following figure:

Figure 2.1. Conceptual model: selection of indicators → presentation of quality information → performance in value-based health care.

The previous sections concluded that health care professionals purposefully visualise particular quality measures on dashboards, but the question remains how they establish a prioritisation given the overload of quality measures (Kroch et al., 2006). The first research question is embedded in the first variable, ‘selection of indicators’. The decision on the strategy, which includes attaining high value for patients (Porter, 2010), and the choice of specific indicators are expected to influence the usage of quality information: inadequate indicators will be difficult to present in a relevant way, considering that even suitable indicators are difficult to present well. Moreover, there may be different ways of sharing and presenting the quality information obtained from the measures. This results in the variable ‘presentation of quality information’, on which the second research question focuses. The use and communication of quality information has a direct relationship with performance, but the nature of this relationship depends on how quality information is presented and used (Kim, Cavusgil, & Calantone, 2006). When improvements are initiated as a result of the quality information, this will have a positive impact on performance, visualised as the variable ‘performance in value-based health care’.



3. RESEARCH METHODOLOGY

As the theoretical background depicted, there is minimal knowledge of how indicators are selected, and no empirical research has analysed the relationship between the use of quality information on dashboards and performance in VBHC. In order to explore this relationship, this research required an inductive approach, which entailed semi-structured interviews with health care professionals experiencing this phenomenon. The following sections clarify why this design was chosen, which methods were used and how the data were collected.

3.1 Research Design

In order to further expand the knowledge and understanding of indicator selection and to evaluate the relationship between the use of quality information and performance in VBHC, an inductive approach was chosen (Kovács & Spens, 2005). Theory refinement is needed to better structure the theories in light of the results of this research (Handfield & Melnyk, 1998), which was done by conducting an in-depth investigation of the relationships among the concepts of indicators, ways to present quality information and performance in VBHC.

Moreover, the refinement of existing theory was investigated by using multiple case studies (Ketokivi & Choi, 2014). This methodological approach is the most suitable, as it would be difficult to generalise results based on a single case study. Because the practical implementation of VBHC is still in its infancy, multiple cases give a deeper understanding, as different health care professionals in different disease teams across hospitals might have different opinions on the matter being investigated. Furthermore, the aspects of the relationship have limited empirical substantiation (Eisenhardt, 1989; Eisenhardt & Graebner, 2007; Myers, 2009), as previous literature did not discuss the reasoning as to why health care professionals opt to measure certain indicators.

A single case study was considered as well. Nevertheless, multiple cases both “augment external validity” and “help guard against observer bias” in this research (Voss, Tsikriktsis, & Frohlich, 2002: 203). Therefore, this research is a multiple case study.

3.2 Research Setting

The context of this research is the implementation of VBHC in multidisciplinary disease teams. The urge to reform and restructure the health care delivery system comes from the increasing pressure that costs have put on health care. More and more hospital departments are reinventing their traditional way of working and are moving towards increasing value for patients. Disease teams value not only health outcomes, but also the patient experience of their health care delivery, and the use of measurement systems plays an important role in that. Therefore, collecting and measuring data on patients and health care professionals is considered important in the improvement process of health care delivery, where the vital users of these outcome measurements are the care providers (Murdoch & Detsky, 2013; Porter, Baron, Chacko, & Tang, 2012). This research therefore focuses on health care professionals.

The concepts in the research questions were operationalised as follows. Interviewees were asked to what extent they use value measurement systems in their daily work and how the outcomes of these measurements are displayed, e.g. on dashboards. Furthermore, they were asked how they decide which measures to include in dashboards and how they use and communicate quality information in order to deliver value in practice. The use of quality information was examined by letting the interviewee explain how they deal with the overload of available indicators, and how they collect and communicate information regarding the value that matters to patients. Lastly, the impact of the usage of these indicators on performance, and how performance improvements are initiated, was questioned.

3.3 Case Selection


Case | Country | Interviewee | Interview duration (hh:mm:ss) | Transcript word count
Hospice | The Netherlands | 1. Medical specialist | 00:32:39 | 5,919
Mamma | The Netherlands | 1. Manager | 00:44:35 | 7,605
 | | 2. Medical specialist | 00:50:51 | 9,949
 | | 3. Medical specialist | 00:39:55 | 7,447
 | | 4. Medical specialist | 00:11:44 | 2,636
 | | 5. Coordinator | 00:59:40 | 11,390
 | | 6. Data analyst | 00:57:33 | 10,685
 | | 7. Medical specialist | 00:42:14 | 8,718
Medicine | The Netherlands | 1. Manager | 00:49:10 | 6,671
 | | 2. Coordinator sales team | 00:35:21 | 6,849
 | | 3. Medical specialist | 00:42:30 | 8,096
 | | 4. Medical specialist | 00:27:16 | 3,957
 | | 5. Data analyst | 00:43:46 | 7,456
 | | 6. Manager | 00:53:31 | 10,938
 | | 7. Pharmacist | 00:49:31 | 9,583
Blood | The Netherlands | 1. Medical specialist | 00:42:00 | 7,213
Heart | United States of America | 1. Clinical manager | 01:12:58 | 10,039
 | | 2. Operations director | 00:36:43 | 5,543
 | | 3. Medical specialist | 00:53:27 | 7,209
Cancer | The Netherlands | 1. Quality manager, 2. Clinical manager | 00:57:09 (combined interview) | 10,875

Table 3.1. Case selection.

3.3.1 Hospice

3.3.2 Mamma

‘Mamma’ is short for ‘Mamma care’ and illustrates a breast cancer care team. The implementation of VBHC in this breast cancer team has changed the standards of health care practice in this hospital. There is a team revolving around patients, where the patients do not need to visit different departments of the hospital to receive the information they need. Instead, the health care professionals visit the patient in the doctor’s office. The interviewees consist of the manager of the team, the coordinator of the team, four medical specialists and one data analyst.

3.3.3 Medicine

The ‘Medicine’ case depicts a team concerned with expensive, or high-priced, medicine. In VBHC the facilitating staff may be overlooked, but this high-priced medicine team is making the implementation of certain aspects of VBHC work. This team does not revolve around a certain disease, but is nevertheless indispensable when it comes to VBHC implementation. From this team the interviewees consist of two managers, two medical specialists prescribing high-priced medicine, the coordinator of the sales team, one data analyst and one pharmacist.

3.3.4 Blood

VBHC calls for standardisation of indicators and uses ICHOM as a reference. ‘Blood’ represents a haematology team that is part of a hospital that is a true pioneer in VBHC due to its close cooperation with ICHOM. This disease team uses VBHC to implement indicators and to improve surveys so that they are understandable for low-literate patients as well. A medical specialist focusing on this particular project was interviewed.

3.3.5 Heart

3.3.6 Cancer

The interviewees of this team consist of a clinical manager and a quality manager. Both managers are responsible for multiple disease teams, but answered the questions with regard to their oncology team. Therefore, this case is abbreviated as ‘Cancer’. Even though implementing a real-time dashboard incorporated with VBHC practices is a complicated process, the oncology team from this Dutch hospital possesses this system. Advantages of the usage of a dashboard in VBHC come to light, and initial improvement opportunities seen on the dashboards have already been implemented.

3.4 Data Collection

As the goal of this research is to discover insights, relevant features and/or issues that might elaborate on existing knowledge of VBHC, data were collected by conducting 20 semi-structured interviews with 21 interviewees from six different multidisciplinary disease teams. This method combines the consistency across interviews of structured interviews with the opportunity for the interviewee to talk freely, as in unstructured interviews (Myers, 2009). A case study protocol was developed and can be found in the appendix. In order to improve the reliability of the study, the interviews were conducted with the use of a topic-based interview guide. The questions are based on the theoretical background of this paper (Yin, 2014), with the topics of the guide based upon the research questions and the underlying tentative conceptual model. The interview guide can be found in the appendix. During the interviews, answers were evaluated for their usability and for whether they were valid, complete, relevant and clear; when these conditions were not met, probing questions were asked.

The transcripts of the interviews were checked by a different researcher than the original transcriber, and were then sent to the corresponding interviewee for review. Results could be revised when found incorrect by the interviewee. Despite being time-consuming, this did increase the reliability of the data.

3.5 Data Analysis, Reliability and Validity

Reducing the data into categories is crucial for effectively analysing the data (Miles & Huberman, 1994). The fundamental analytic process used in this paper is theoretical coding, whereby the data need to be deciphered and interpreted in order to name the concepts and explain them in more detail (Böhm, 2004). Simplified versions of the coding trees of this research can be found in the appendix. Strauss & Corbin (1990) describe the three basic types of coding used (a schematic illustration follows the list):

• open coding – trying to find new insights by breaking down the data analytically
• axial coding – identifying relationships between the open codes
• selective coding – reducing the axial codes into core categories
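As a purely hypothetical illustration of how these three levels nest (the code names below are invented for the example and are not taken from the actual coding trees in the appendix), the structure can be pictured as follows:

```python
# Hypothetical example of a coding tree: open codes roll up into axial codes,
# which are reduced into a selective core category. All names are invented.
coding_tree = {
    "selecting indicators": {              # selective (core) category
        "patients' perspective": [         # axial code
            "patient feels heard",         # open codes
            "patient representatives join team meetings",
        ],
        "data retrieval": [
            "survey length",
            "indicator fatigue",
        ],
    },
}

# Walk the tree from selective to axial to open codes.
for core_category, axial_codes in coding_tree.items():
    print(core_category)
    for axial_code, open_codes in axial_codes.items():
        print("  " + axial_code)
        for open_code in open_codes:
            print("    " + open_code)
```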

In order to gain a deeper understanding of the data, seemingly similar cases were checked for differences, and seemingly different cases were searched for similarities, using both within-case analysis and cross-case analysis (Eisenhardt, 1989). Coding the 20 interviews revealed patterns across the different disease teams, and explanations about the cases could be built to examine ‘how’ and ‘why’ certain indicators were chosen to be measured, and how this information is used. The cross-case analysis revealed that the usage of quality information differs between the cases that use dashboards and the cases that do not. Moreover, patient engagement in the decision-making on which measures to prioritise was found, as well as other, unanticipated means of presenting quality information on value in health care.


4. RESULTS

The presence of key elements was researched through within-case analyses, whereas the cross-case analyses compared cases to distinguish commonalities and dissimilarities (Ayres, Kavanaugh, & Knafl, 2003). Due to the extensiveness of this research, the six cases are addressed as single entities, and the findings are discussed in relation to what came forward in the coding trees during analysis. This chapter separates the results into different sections in order to convey the findings of the interviews in a clear and structured manner.

4.1 Selecting Indicators

Analysing the data uncovered which factors are important when it comes to selecting indicators, and how the decision to choose particular indicators is made. The categories that emerged from the coding tree can be seen in the following figure:

Figure 4.1. Categories ‘selecting indicators’.

Extended versions of the coding trees can be found in appendix B.

4.1.1 Within-Case Analysis

There is consensus amongst all cases that the philosophy of VBHC should be fundamentally embedded in the indicators that are being measured. The cases agreed that the patients’ perspective is therefore important to incorporate in the indicator selection process. In order to understand what adds value for the patients during the entire care process, one must listen to the patients’ needs:

“The real value of value-based health care is that the patient feels heard and really feels that they are the central aspect of care. That the patient indicates what he/she needs in that specific moment, and that we can deliver that care.” (Mamma – 3)

Collecting information on this matter is done, for instance, via surveys (Mamma, Medicine, Blood & Heart), mirror conversations (Mamma) or patient focus groups (Heart). Furthermore, Mamma, Blood, Medicine and Cancer include representatives of patient associations in their team meetings to incorporate the patients’ view on what adds value, because health care professionals often overlook aspects that could be of importance to patients. An interviewee explained:

“Health care is evidence-based, but the soft side like guidance, psychosocial care and aftercare turned out to be very valuable for the patients”. (Mamma-5)

When focal indicators are chosen, pre-defined indicators are taken into account by most cases as well. Mandatory indicators are measured by all cases for accountability reasons, such as assessing processes, outcomes, safety and other quality-related aspects. However, requirements from governments, insurers, inspections or other organisations are also analysed to base the selection of the focal indicators on. For example, Cancer draws inspiration from ICHOM, Mamma consults the CQ index, Medicine measures indicators provided by its health care specialist association, and Blood checks the QL index.

Furthermore, there are several conditions that an indicator should meet. According to Medicine, Heart and Cancer, an indicator should be measurable in an accurate way, meaning that the data source should be measured objectively and, where possible, standardised. For example, Hospice, Mamma, Medicine and Cancer argue that there is a lot of bias when using PROMs, making it hard to compare the outcomes of the measurements. Not only can the interpretation of questions vary among patients, but subjective indicators such as ‘pain measurement’ are also perceived differently by each patient. This subject is connected to the category ‘usefulness of indicator data’ as well: one should ensure that the purpose and meaning of an indicator are clear to everyone involved. Blood explained that there are patients who have literacy problems and cannot comprehend the meaning of the questions; when entire patient groups experience this problem, the results are not trustworthy. In order to decrease these variances, Heart employed a data analyst who meets with respondents on a monthly basis. Test cases are performed on the indicators where they “come to an agreement on what the questions mean” (Heart – 1), which mitigates the reliability issue. Even though indicators should be measured objectively, defining measures is difficult when different patients have different needs or preferences concerning health care. It is hard to generalise a patient group, as there are always exceptions. Therefore, Hospice, Mamma and Medicine argue that it is not possible to standardise:

“[…] naturally every human is different, therefore you cannot standardise.” (Mamma – 3)

Several cases argue that an indicator should be formed in such a way that it is able to explain causal relationships or the reasoning behind, for example, mortality rates:

“Let's take heart valve surgery: When we have a mortality, we look at those [indicators] very carefully, but it's not good enough to explain why this person died. We want to know: what is it that is creating an environment that allows a person to die? Therefore, a leading measure for our heart surgery is the risk evaluation prior to the surgery.” (Heart – 1)

The category ‘data retrieval’ is another important factor when it comes to choosing indicators. Interviewee 3 of Mamma points out that “patients could feel patronised when asked to fill out a survey of which a patient feels he/she does not see the need to fill it out”. Hence, the length and number of patient surveys should be taken into account according to Blood and Cancer in order to keep a high response rate. Furthermore, the number of indicators that health care professionals already need to measure is substantial:

“We have to deliver 5800 indicators every year […] to insurers, patient associations or the institute of health care inspection. As a matter of fact, we are indicator-tired.” (Mamma - 5)

Furthermore, as measuring value indicators is one of the focal points in VBHC, one must be vigilant not to lose sight of the core of VBHC: improving the value that matters to patients. “You should not measure only for the sake of measuring”, as Mamma, Blood and Cancer pointed out. This comes together in the category ‘detect weaknesses and improvement opportunities’, where all cases agree that the data from the indicators should indicate where improvements can be made.

Lastly, selecting specific indicators to focus on for performance improvement or for increasing value for patients is done individually per team. When deciding which indicators to focus on, most cases explained that this decision is a mutual agreement among the different stakeholders. In all four hospitals the disease teams hold multidisciplinary team meetings to discuss the question ‘what adds value’. This way, focal indicators are established that are deemed important by both health care specialists and patients, and that capture value in health care:

“Everyone has his/her own angle of approach and adds to the discussion on his/her own way and what he/she thinks that is important, […] and together you decide what the important outcome measures are.” (Blood – 1)

4.1.2 Cross-Case Analysis

It is not unexpected that all cases agreed that the VBHC approach should be the foundation when selecting indicators. Yet both Heart and Cancer are at a more mature stage than the other cases when it comes to implementing indicators in a VBHC manner. Still, Mamma stands out from the remaining four cases in its implementation of VBHC. Therefore, the cross-case analysis is performed by comparing Mamma, Heart and Cancer to Hospice, Medicine and Blood.

This contrasts with cases such as Blood, where the indicators are not selected based on specific strategies; there, indicators are only measured so that actions can be taken once variations occur.

Another important finding is that Mamma, Heart and Cancer employ a data analyst dedicated to that specific team. The data analyst of Medicine only evaluates costs concerning the expensive medicine usage, whereas Hospice and Blood do not have a data analyst at all. The advantage of having a dedicated data analyst is that they are specialised in extracting the right information from the systems. The overload of data can then be an opportunity to identify improvements, whereas without a data analyst the data can be perceived as an administrative burden. Furthermore, data analysts can help clarify the purpose of the indicators, and they are able to help determine whether the proposed indicators effectively retrieve the cause of certain events.

Furthermore, when it comes to involving patients in the selection process, it is surprising that both Hospice and Medicine do not include the patients’ view during team meetings as much as the other cases do. Mamma should be highlighted, as this case tries to involve patients in numerous ways.

The indicators that are selected should also enable benchmarking among health care specialists and hospitals; Mamma, Heart and Cancer argue that this is an important factor for detecting weaknesses. Moreover, Mamma, Heart and Cancer state that process indicators are important to include in indicator sets, as well as PREMs. Blood agrees with the latter, but Hospice and Medicine do not take these into account. Another remarkable finding is that Medicine’s main focus is on costs and health outcomes, with less attention for the value part of VBHC.

Lastly, Heart specifically selects indicators to eliminate both waste and variation. Mamma and Cancer agree with this view, and argue that during multidisciplinary team meetings it is also very important to ask the question ‘what does not add value’:

“[…] test that yields no results for their care, that is non-value added. Having a test done that doesn't make sense, is non-value added. So when we talk about value-based care, we're talking about the right care, in the right setting.” (Heart – 1)

4.2 Presenting Quality Information

After choosing and implementing the indicators, data are collected. It became evident that health care professionals present this quality information in various ways in order to improve their performance and increase the value for their patients. The following figure shows the main categories that emerged from the coding tree:

Figure 4.2. Categories ‘presenting quality information’.

Extended versions of the coding trees can be found in appendix C.

4.2.1 Within-Case Analysis

[…] are received daily. Finally, Cancer shares quality information in real time via a dashboard, as Heart does.

The purpose of presenting quality information is considered important by most cases. Heart, for example, eliminates non-value-adding issues by examining the quality information carefully, and argues that eliminating waste and variation will lead to standardised care. They have implemented decision trees for treatment criteria, which saves costs as expensive treatments are used only on those patients who really need them. Furthermore, Medicine decreases costs by standardising medicine preparation and treatment criteria, and eventually by checking whether medical specialists honour existing commitments regarding the prescription of medicines. However, interviewee 5 from Mamma claims that “it is very difficult to determine the cost of something, and it is very difficult to determine what the results of the entire treatment were”. Yet Heart is able to track how much cost was saved when implementing improvements:

“Whenever we look at any quality improvement or effort we've made, whether it's treating heart attacks or through blood pressure, we can easily look back how this has saved money.” (Heart – 3)

When initiating an improvement project, Heart first collects baseline data to determine exactly what they are going to measure. They then run a pilot in one of their hospitals to recognise the main problems and barriers and to measure how successful the improvement was. When the test is successful, the improvement is implemented in the other hospitals in their system, and the results are communicated to the relevant teams. This relates to the fact that some health care specialists are resistant to change; oftentimes they do not know the reasoning behind the importance of the improvement or change. Mamma, Blood, Heart and Cancer agree that sharing such information stimulates improvements and increases transparency. This in turn increases the support of the health care specialists, as they are more engaged with the indicators and the results they generate.

Moreover, quality information can motivate the health care specialists when data from before and after improvements can be shown:

“So you can’t do anything without knowing what you do or without showing it as change. And then you feed it back to the local teams, because you want to really implement locally, but inspire system wide and give them tools and directions to go in.” (Heart – 3)

Hospice, Mamma and Medicine clarified the need for a real-time system in order to monitor health care and to use the information to steer their strategies. However, medical specialists from Medicine argued that a supporting department should undertake the implementation and support of such a dashboard, highlighting the administrative burden on medical specialists, as explained in the previous sections:

“We would like to apply [a dashboard], but you need good IT-support to collect data and this is still underdeveloped. Therefore we do a lot of things still manually.” (Mamma – 1)

Only Heart and Cancer are currently using dashboards, which are designed to show combined overviews of a bundle of indicators while also being able to zoom in on individual patients. Moreover, Cancer holds meetings in order to incorporate user feedback into its dashboard, but also to create support among users. During these meetings outcomes are discussed to seek room for improvement:

“Having such a dashboard that shows more real-time results helps the improvement cycle as well.” (Cancer – 2)

Heart uses similar meetings to present the data from dashboards to the medical specialists in order to evaluate and strategize what needs to be improved. Furthermore, all cases agreed that visualising information on dashboards can positively influence the performance of the team:

“[…] they’re progressing, and then they work on different things to improve. […] They look at the areas where they’re not doing well and then try to do improvement efforts at the local level to improve their patient engagement scores.” (Heart – 2)

Furthermore, Mamma argues that the biggest force behind performance improvement is providing individual feedback on quality information to medical specialists:

“[…] it only helps when you see information real-time, and see short-term improvements as well.” (Mamma – 5)

This is exactly what Heart implemented. The performance of the medical specialists can be reviewed individually:

“We blind it so that [medical specialists] only see their own data. But I can see everything. So when I see that mortality is creeping up on an individual surgeon, I go and meet with that surgeon, and say: What's going on? Why is this happening?” (Heart – 1)

Thus, reviewing quality information personally can stimulate personal development. Accordingly, Mamma, Heart and Cancer highlight the most efficient way of improving performance using individual results: benchmarking medical specialists against each other. Medical specialists are highly competitive, and personal feedback on who is scoring well or poorly on particular indicators triggers them to improve themselves, which ultimately adds value to the department and, in the end, the patient. Doctors do not necessarily wish to be the best performer, but they most certainly do not want to be the worst:

“[...] physicians are a funny bunch, they're highly competitive. So, all you need to do is show them the outlier, that they're way out there, and then they start to fall in line very, very quickly.” (Heart – 1)

An important condition is that the data are blinded so that medical specialists can only see their own results, and that only the managers are able to look at everyone’s data.

4.2.2 Cross-Case Analysis

This cross-case analysis is performed by comparing two groups of cases. The choice of comparing Heart and Cancer versus Hospice, Mamma, Medicine and Blood was determined by the fact that both Heart and Cancer have implemented a dashboard, whereas the other cases have not.

The first interesting finding is that only Heart and Cancer possess a system through which real-time quality information can be shared. This enables these cases to see direct results of improvement efforts and to increase the engagement of the medical specialists. The four other cases receive quality information only periodically, which could lead to negligence towards improvement opportunities. Mamma explains that they analyse their performance on (required) outcome measures only at the end of the year, as that is the moment when they need to deliver the mandatory data. Once delivered, the disease teams receive the results of these outcome measurements relatively late, oftentimes a year later. This makes it difficult to improve on outcome measures:

“Now we look back at one year and then we say ‘but we have improved that [indicator] already’, or ‘we’ve been working on that for a long time’. […] Subsequently, you will not think about it anymore and continue working as you were used to. The next year you find out ‘oh actually we weren’t [improving]’.” (Mamma – 5)

However, this does not mean that the cases that do not receive real-time information do not improve performance. Implementing VBHC has resulted in “every employee looking for possibilities to improve” (Mamma – 4). It was found that all six cases gather in team meetings in order to exchange information for improvement purposes.

Both determining the input of indicators and successfully analysing the resulting data are important elements. Therefore, selecting the right indicators is important in order to be able to present the quality information in a good manner. Data analysts are specialised in guiding this process and are therefore key.


5. DISCUSSION

This chapter presents a discussion of the interpretation of the findings in comparison to the theoretical background. Furthermore, answers to the research questions are given, as well as propositions and the resulting managerial implications. The chapter concludes with the limitations of the research and directions for future research.

5.1 Selecting Indicators

VBHC is said to revolve only around quality measures that capture the value that matters to patients. In order to do so, Porter, Larsson, & Lee (2016) claim that hospitals should move towards short and standardised indicator sets, such as those of ICHOM. This is in line with the findings, where it was found that the focal indicator set should not be lengthy. Furthermore, Cellissen et al. (2017) argue that outside the VBHC setting, quality indicators are not routinely used to improve care, as there is a lack of awareness of the potential benefits. There is some debate amongst the cases about the usefulness of certain specific indicators, but there seems to be no evidence of a lack of awareness concerning the benefits they entail. Furthermore, the overload of indicators that hospitals need to report to several organisations still exists (Meyer et al., 2012), and the administrative burden on health care professionals that Blumenthal et al. (2015) discussed is found across all cases. Therefore, the first proposition is stated as follows:

P1. One should use a small set of focal indicators, since this receives more attention than a large set and helps health care professionals to better focus on their strategies.

[…] to decide upon the focal indicators. This aligns with the findings of Batalden & Davidoff (2007), who indicate that the joint and continuous effort of all stakeholders is essential to improve health care. Therefore, the next proposition is proposed:

P2. Involving patients in the indicator selection process can bring unforeseen aspects to light and helps identify the most important values for patients.

Furthermore, it was found that it is crucial to have a clear strategy or goal when selecting indicators. It is essential that what is being measured aligns with what one wants to manage. Additionally, the meaning of the indicators should be clear to all stakeholders. This is in line with the findings of Raleigh & Foot (2010), suggesting that the purpose, aims and goals should be specified in order to find the right indicators. From this, the following proposition is stated:

P3. Indicators should be measured objectively and accurately, and have a clear meaning for everyone involved. They should measure only those aspects that can be affected by the health care specialist.

5.2 Presenting Quality Information

P4. Employing data specialists will lead to a more effective and successful way of presenting quality information, which enables health care specialists to better analyse their processes and improvement initiatives.

Heart and Cancer do have dashboards implemented within their teams and argue that medical specialists can benchmark against colleagues while only seeing their own data. The findings of this research correlate with the findings of Elg et al. (2013) that measuring performance helps initiate bottom-up improvements: doctors are very competitive, and whoever is performing poorly will adjust their own practice and improve performance in order to stay in line with the rest. Furthermore, it was found that visualising indicators helps determine improvement opportunities. These findings support the proposed conceptual model and existing theory that implementing dashboards improves performance (Buttigieg, Pace, & Rathert, 2017; Dowding et al., 2015; Koopman et al., 2011; Porter, Baron, Chacko, & Tang, 2012; Zaydfudim et al., 2009). Thus, the fifth proposition is as follows:

P5. Sharing quality information real-time can drive improvement opportunities and trigger initiatives.

Furthermore, the findings support the theory of Porter, Baron, Chacko, & Tang (2012) that engaging users in the improvement of the dashboard and letting them decide upon new indicators will make them think thoroughly about current measurement practices. Cancer uses meetings to create support amongst the medical specialists who use the dashboard and to improve its visualisation. Moreover, Heart reduces variability in terms of processes and decision making as well. The findings support that it remains crucial for organisations to not only measure specific outcomes but also to use them as decision-making support (Murdoch & Detsky, 2013; Nordin, Kork, & Koskela, 2017). Heart has created several decision trees according to what they found useful, in order to improve efficiency within their team. From this, the following proposition is stated:

Performance improvements were implemented across the six different cases, with positive outcomes for e.g. patient and employee satisfaction. Results vary from developing apps to inform patients better, to decreasing diagnosis waiting times and better communication and efficiency within the disease teams (Kim, Cavusgil, & Calantone, 2006). All cases had some form of monitoring, but the follow-up on the results differs. It was found that continuous real-time monitoring stimulates performance improvements (Benn et al., 2009; Elg, Palmberg Broryd, & Kollberg, 2013; French et al., 2009). Furthermore, feedback on the results of improvement initiatives was found to be of utmost importance by the cases. This relates to the findings of Berwick et al. (2003: 135), where “to improve performance, organizations and individuals need the capability to control, improve, and design processes, and then to monitor the effects of this improvement work on the results. Measurement alone will not suffice.” Moreover, dashboards enable benchmarking of medical specialists in a continuous way. This creates a strong stimulus for doctors to improve their performance, as well as awareness amongst the entire team of where improvement initiatives should be focused. Lastly, it is important to highlight that, contrary to the findings of Walley, Silvester, & Mountford (2006), the cases using dashboards can see actual results of the efforts made by medical specialists to improve themselves, which in turn increases their motivation and satisfaction. This can be concluded in the seventh and last proposition:

P7. When implementing improvements, one should monitor and present the effects on the results. This leads to a better understanding of the change, stimulates further improvements and keeps health care professionals motivated.

5.3 Managerial Implications

First, indicators need to be measured in an objective manner, and the meaning behind an indicator has to be clear to everyone involved.

Second, managers can consider increasing patient involvement in the indicator selection process. This study revealed that, for example, including patients in meetings can bring unforeseen aspects to light, leading to the detection of improvement opportunities that health care professionals had not thought of before. Furthermore, patients, or patient representatives, can help identify the matters that patients value most. This could also influence the strategy and goals set out by management.

Furthermore, presenting quality information in real time can drive bottom-up improvements, as it enables health care specialists to analyse their processes carefully. However, successfully presenting quality information can be a complex and time-consuming task. This study found that employing a dedicated data analyst benefits health care professionals, as quality information is presented more effectively. Moreover, these analysts are specialised in analysing data, which in turn could uncover additional improvement opportunities.


5.4 Limitations and Directions for Future Research

This research provided insights into the decision-making process of disease teams and the indicators they use to improve performance, but it has some limitations as well. The interviewed teams’ respective hospitals are mainly at an early stage of implementing VBHC. Therefore, the indicator selection process and the way quality information is presented may deviate from teams that have fully integrated VBHC into their health care delivery. Furthermore, the number of obligatory measures that have to be reported to several organisations is still high. Thus, it was difficult to determine whether the disease teams were measuring certain indicators for quality improvement purposes or for accountability reasons. Moreover, the number of interviewees is not evenly balanced among the cases, due to the convenience and availability of the interviewees. Additionally, because the improvements interviewees reported based on the focal indicators are non-exhaustive, there is no clear view of which specific improvements can be attributed to particular ways of presenting information. Lastly, it is difficult to establish whether particular improvements were really implemented because of the way quality information is presented, or because of the urgency of the situation.


6. CONCLUSION

Contributions to existing literature are twofold. Firstly, this research contributes to the VBHC literature, as little has been written about the selection of indicators, the presentation of quality information, the impact this has on health care specialists, and whether this stimulates improvements. This study explains how disease teams practising VBHC select the indicators they should measure in order to capture the value that matters most to patients. Three propositions are provided in the findings, which reveal that the focus for improvement opportunities aligned with VBHC should lie on a small number of indicators that are measured objectively. Furthermore, it is important to incorporate the patients’ perspective, preferably by including them in team meetings, and to find consensus amongst the stakeholders on the indicator selection.


7. REFERENCES

Abbasi, K. 2007. How to get better value healthcare. Journal of the Royal Society of Medicine, 100(10): 480.

Ahli, S. 2017. Ketenoptimalisatie leidt tot kostbare tijdswinst bij beroerte [Chain optimisation leads to valuable time gains in stroke care]. Skipr. https://www.skipr.nl/actueel/id31620-ketenoptimalisatie-leidt-tot-kostbare-tijdswinst-bij-beroerte.html.

Al-Abri, R., & Al-Balushi, A. 2014. Patient satisfaction survey as a tool towards quality improvement. Oman Medical Journal, 29(1): 3–7.

Are Today’s Providers Overwhelmed by Quality Reporting? 2017. Definitive Healthcare. https://www.definitivehc.com/news/are-todays-providers-overwhelmed-by-quality-reporting.

Ayres, L., Kavanaugh, K., & Knafl, K. 2003. Within-Case and Across-Case Approaches to Qualitative Data Analysis. Qualitative Health Research, 13(6): 871–883.

Bartlett, P. A., Julien, D. M., & Baines, T. S. 2007. Improving supply chain performance through improved visibility. The International Journal of Logistics Management, 18(2): 294–313.

Batalden, P. B., & Davidoff, F. 2007. What is “quality improvement” and how can it transform healthcare? Quality and Safety in Health Care, 16(1): 2–3.

Benn, J., Burnett, S., Parand, A., Pinto, A., Iskander, S., et al. 2009. Studying large-scale programmes to improve patient safety in whole care systems: Challenges for research. Social Science & Medicine, 69(12): 1767–1776.

Berwick, D. M., James, B., & Coye, M. J. 2003. Connections between Quality Measurement and Improvement. Medical Care, 41(1): 130–138.

Bird, L., Heard, S., & Warren, J. 2003. Knowledge Management in Healthcare. Journal of Research and Practice in Information Technology, 35(2): 65–66.

Black, N. 2013. Patient reported outcome measures could help transform healthcare. BMJ, 346: f167.

Blumenthal, D., Malphrus, E., & McGinnis, J. M. 2015. Vital Signs: Core Metrics for Health and Health Care Progress. Washington D.C.: National Academy Press. https://www.ncbi.nlm.nih.gov/books/NBK316114/.

Böhm, A. 2004. Theoretical Coding: Text Analysis in Grounded Theory. A Companion to Qualitative Research: 270–275. London: Sage Publications.

Boivin, A., Lehoux, P., Lacombe, R., Burgers, J., & Grol, R. 2014. Involving patients in setting priorities for healthcare improvement: a cluster randomized trial. Implementation Science, 9: 24.

Boyce, M. B., Browne, J. P., & Greenhalgh, J. 2014. The experiences of professionals with using information from patient-reported outcome measures to improve the quality of healthcare: a systematic review of qualitative research. BMJ Quality & Safety, 23(6): 508–518.

Buttigieg, S. C., Pace, A., & Rathert, C. 2017. Hospitals performance dashboards: a literature review. Journal of Health Organization and Management, 31(3): 385–406.

Cellissen, E., Franx, A., & Roes, K. C. B. 2017. Use of quality indicators by obstetric caregivers in the Netherlands: A descriptive study. European Journal of Obstetrics & Gynecology and Reproductive Biology, 211: 177–181.

Cinaroglu, S., & Baser, O. 2016. Understanding the relationship between effectiveness and outcome indicators to improve quality in healthcare. Total Quality Management & Business Excellence, 1–18.

Crossing the Quality Chasm: The IOM Health Care Quality Initiative. 2013. The National Academies. http://www.nationalacademies.org/hmd/Global/News Announcements/Crossing-the-Quality-Chasm-The-IOM-Health-Care-Quality-Initiative.aspx.

Deber, R., & Schwartz, R. 2016. What’s Measured Is Not Necessarily What Matters: A Cautionary Story from Public Health. Healthcare Policy, 12(2): 52–64.

Dowding, D., Randell, R., Gardner, P., Fitzpatrick, G., Dykes, P., et al. 2015. Dashboards for improving patient care: Review of the literature. International Journal of Medical Informatics, 84(2): 87–100.

Egan, M. 2006. Clinical Dashboards: Impact on Workflow, Care Quality and Patient Safety. Critical Care Nursing Quarterly, 29(4): 354–361.

Eisenhardt, K. M. 1989. Building Theories from Case Study Research. Academy of Management Review, 14(4): 532–550.

Eisenhardt, K. M., & Graebner, M. E. 2007. Theory Building from Cases: Opportunities and Challenges. Academy of Management Journal, 50(1): 25–32.

Elg, M., Palmberg Broryd, K., & Kollberg, B. 2013. Performance measurement to drive improvements in healthcare practice. International Journal of Operations & Production Management, 33(11/12): 1623–1651.

Embury, S. M., & Missier, P. 2014. Forget Dimensions: Define Your Information Quality Using Quality View Patterns. In L. Floridi & P. Illari (Eds.), The Philosophy of Information Quality. Cham: Springer.

English, L. P. 2009. Information quality applied: best practices for improving business information, processes, and systems. Indianapolis, IND: Wiley.

Freeman, T. 2002. Using performance indicators to improve health care quality in the public sector: a review of the literature. Health Services Management Research, 15(2): 126–137.

French, B., Thomas, L. H., Baker, P., Burton, C. R., Pennington, L., et al. 2009. What can management theories offer evidence-based practice? A comparative analysis of measurement tools for organisational context. Implementation Science, 4(1): 28.

Garg, A. X., Adhikari, N. K. J., Beyene, J., Sam, J., & Haynes, R. B. 2005. Effects of Computerized Clinical Decision Support Systems on Practitioner Performance and Patient Outcomes. JAMA, 293(10): 1223–1238.

Grol, R., Wensing, M., Eccles, M., & Davis, D. (Eds.). 2013. Improving Patient Care: The Implementation of Change in Health Care (2nd ed.). West-Sussex, UK: BMJ Books.

Handfield, R., & Melnyk, S. A. 1998. The scientific theory-building process: a primer using the case of TQM. Journal of Operations Management, 16(4): 321–339.

Hawkins, J. B., Brownstein, J. S., Tuli, G., Runels, T., Broecker, K., et al. 2016. Measuring patient-perceived quality of care in US hospitals using Twitter. BMJ Quality and Safety, 25(6): 404–413.

Höög, E., Lysholm, J., Garvare, R., Weinehall, L., & Nyström, M. E. 2016. Quality improvement in large healthcare organizations: Searching for system-wide and coherent monitoring and follow-up strategies. Journal of Health Organization and Management, 30(1): 133–153.

ICHOM standard sets. 2017. The International Consortium for Health Outcomes Measurement. http://www.ichom.org/medical-conditions.

Institute of Medicine of the National Academies. 2006. Performance Measurement: Accelerating Improvement. Washington D.C.: The National Academies Press. https://www.nap.edu/read/11517/chapter/2#15.

Jinpon, P., Jaroensutasinee, M., & Jaroensutasinee, K. 2011. Business Intelligence and its Applications in the Public Healthcare System. Walailak Journal, 8(2): 97–110.

Kamal, R. N. 2016. Quality and Value in an Evolving Health Care Landscape. Journal of Hand Surgery, 41(7): 794–799.

Ketokivi, M., & Choi, T. 2014. Renaissance of case research as a scientific method. Journal of Operations Management, 32(5): 232–240.

Kim, D., Cavusgil, S. T., & Calantone, R. J. 2006. Information System Innovations and Supply Chain Management: Channel Relationships and Firm Performance. Journal of the Academy of Marketing Science, 34(1): 40–54.

Koopman, R. J., Kochendorfer, K. M., Moore, J. L., Mehr, D. R., Wakefield, D. S., et al. 2011. A Diabetes Dashboard and Physician Efficiency and Accuracy in Accessing Data Needed for High-Quality Diabetes Care. Annals Of Family Medicine, 9(5): 398–405.

Kovács, G., & Spens, K. M. 2005. Abductive reasoning in logistics research. International Journal of Physical Distribution & Logistics Management, 35(2): 132–144.

Kroch, E., Vaughn, T., Koepke, M., Roman, S., Foster, D., et al. 2006. Hospital Boards and Quality Dashboards. Journal of Patient Safety, 2(1): 10–19.

Lane-Fall, M. B., & Neuman, M. D. 2013. Outcomes measures and risk adjustment. International Anesthesiology Clinics, 51(4): 10–21.

Lawrence, M., & Olesen, F. 1997. Indicators of Quality in Health Care. European Journal of General Practice, 3(3): 103–108.

Levesque, J.-F., & Sutherland, K. 2017. What role does performance information play in securing improvement in healthcare? A conceptual framework for levers of change. BMJ Open, 7(8): e014825.

Lohr, K. N. 1990. Medicare: A Strategy for Quality Assurance. Institute of Medicine, vol. I. Washington D.C.: National Academy Press. https://doi.org/10.17226/661.

Luxford, K., Safran, D. G., & Delbanco, T. 2011. Promoting patient-centered care: A qualitative study of facilitators and barriers in healthcare organizations with a reputation for improving the patient experience. International Journal for Quality in Health Care, 23(5): 510–515.
