How do health care professionals select targets for improving their care when confronted with performance feedback?

A laboratory and clinical study in cardiac rehabilitation.

Master’s thesis of

Wouter Thomas Gude

Medical Informatics

October 2013 – June 2014

Academic Medical Center

University of Amsterdam

Scientific research project:

How do health care professionals select targets for improving their care when confronted with performance feedback? A laboratory and clinical study in cardiac rehabilitation.

Student: W.T. Gude, BSc (Hons)
Student number: 6046266
E-mail: w.t.gude@amc.uva.nl

Supervisors:
N. Peek, PhD
S.N. van der Veer, PhD
M.M. van Engen-Verheul, MSc

Scientific research project coordinator: Prof. A. Abu-Hanna, PhD

Location:
Academic Medical Center / University of Amsterdam
Department of Medical Informatics
Meibergdreef 15, 1105 AZ Amsterdam

Period: October 2013 – June 2014

Examination committee:
Prof. J.H. Ravesloot, MD PhD
N.F. de Keizer, PhD
N. Peek, PhD
S.N. van der Veer, PhD


TABLE OF CONTENTS

Abstract
1. Introduction
2. Materials and methods
   2.1. Study context
        Cardiac rehabilitation in the Netherlands
        CARDSS Online audit and feedback intervention
        Randomised controlled trial evaluating the effectiveness of CARDSS Online
   2.2. Theoretical framework
        Mechanism of feedback according to control theory
        Application of the theoretical model to CARDSS Online
   2.3. Study design and data collection
        Laboratory study
        Clinical study: secondary analysis of CARDSS Online trial data
   2.4. Data preparation
   2.5. Statistical analysis
3. Results
   3.1. Participant characteristics
        Laboratory study
        Clinical study
   3.2. General description of the observed data
        Laboratory study
        Clinical study
   3.3. Determinants of selecting quality indicators as targets for improvement
        Analysis 1: Determinants' effect sizes in laboratory setting
        Analysis 2: Comparison of determinants' effect sizes in laboratory setting to clinical practice
        Analysis 3: Full analysis of health care professionals' selection process in clinical practice
4. Discussion
   4.1. Summary of main results
        Determinants of selecting quality indicators as targets for improvement
        Reasons to deviate from feedback's recommendations for selecting indicators
   4.2. Strengths and limitations of the study
   4.3. Comparison to related studies
   4.4. Meaning of the study
   4.5. Implications for practice
   4.6. Unanswered questions and future research
5. Conclusion
References
Appendix A: Figures describing the observed data in the laboratory study
Appendix B: Figures describing the observed data in the clinical study


ABSTRACT

Background: Audit and feedback is widely used as a strategy to improve quality of care. It provides health care professionals with a summary of their clinical performance, often in combination with benchmark information. However, studies of audit and feedback interventions in health care have reported variable effectiveness in improving quality of care. To date, there has been little progress in understanding the underlying feedback mechanism and how audit and feedback can best be employed to maximise its effect.

Objectives: (i) To identify determinants of health care professionals' selection of quality indicators as targets for improvement when they are confronted with performance feedback; and (ii) to explore reasons for those professionals to disregard benchmark information when selecting these targets.

Methods: We used data extracted from CARDSS Online, a web-based audit and feedback system currently used, in combination with outreach visits, in an intervention among Dutch cardiac rehabilitation (CR) centres; the intervention is being evaluated in a randomised controlled trial (RCT). CARDSS Online provides CR professionals with feedback on eighteen quality indicators, reporting for each indicator a score and a colour (green, yellow, or red) that represents benchmark information. We first undertook a laboratory study among individual CR professionals to study the determinants of indicator selection in the absence of organisational and social disturbances, and asked participants to motivate selections that were at odds with the benchmark information provided. Second, we performed a secondary analysis of the RCT data to study the determinants of indicator selection by multidisciplinary CR teams in clinical practice. All analyses used multivariate mixed-effects logistic regression.

Results: In the laboratory study, red and yellow indicators were selected much more often than green indicators. This association between colour and indicator selection was also present in the clinical study, but much weaker. The clinical study additionally showed that outcome indicators were selected less often than process indicators. Nevertheless, indicator selection varied substantially between indicators (more so in the clinical study than in the laboratory study) and between participants (more between individuals in the laboratory study than between teams in the clinical study). Twelve percent of the indicator values that showed a red or yellow colour were not selected as targets for improvement by individual professionals, because they considered improvement not feasible, gave other indicators higher priority, regarded the indicator as an unimportant aspect of care quality, or judged the score to be high enough. Conversely, twelve percent of the indicator values that showed a green colour were selected, primarily because participants considered them important aspects of quality.

Conclusions: When confronted with performance feedback, health care professionals primarily base their selection of improvement targets on the reported colours. However, outcome indicators are selected less often than structure or process indicators. Because health care professionals want insight into their clinical performance on all quality aspects, they often select indicators for which no feedback could be reported due to insufficient data. Nevertheless, perceptions about quality can induce substantial variation in the selection of indicators between different indicators and different professionals. To filter out variation between individual professionals, feedback should be discussed within teams.

1. INTRODUCTION

The number of chronically ill patients is increasing, requiring hospitals to reconsider their role and responsibility in chronic disease management [1–3]. At the same time, health care organisations are under public pressure to increase their accountability, and to deliver optimally efficient and effective care [4–7]. In response, health care organisations increasingly adopt audit and feedback strategies, often in combination with educational meetings or reminders, to monitor and improve their quality of care [8–10]. Audit and feedback consists of providing health care professionals with an objective summary of their clinical performance over a specified period of time [11]. Clinical performance is typically measured by a set of quality indicators derived from clinical guidelines or expert opinion – each indicator representing a quality aspect of care (e.g., use of a multidisciplinary patient record, proportion of patients receiving a treatment according to guideline recommendations, or mortality rates). Indicator-based performance feedback aids health care professionals in identifying what quality indicators may require improvement. As there are often multiple indicators eligible for quality improvement, professionals can use the feedback as a guide to select the indicators that are most relevant to focus their improvement efforts on.

Audit and feedback interventions have been applied in various areas of care, aiming to improve the clinical performance of physicians, nurses, and pharmacists [11]. For example, one audit and feedback study focused on increasing physicians' prescription rates of secondary prevention medications (e.g., aspirin, β-blockers) at discharge of patients who had undergone coronary artery bypass graft surgery [12]. Another intervention aimed at increasing internists' compliance with preventive health and chronic disease management practice guidelines [13]. Despite the widespread use of audit and feedback and the efforts and costs put into its development and application, systematic reviews and meta-analyses of audit and feedback interventions have reported variable effectiveness in improving quality of care. Grimshaw and colleagues [14] reported a median effect size of audit and feedback of +7% (range, 1.3% to 16%) compared to no intervention on dichotomous process measures (e.g., proportion of patients adhering to their therapy plan). However, the same review reported non-significant effects of audit and feedback on continuous process measures (e.g., time between referral and intake). Ivers and colleagues [11] reported a median absolute increase in compliance with desired practice of +4.3% (interquartile range (IQR): 0.5% to 16%) on dichotomous measures and +1.3% (IQR: 1.3% to 28.9%) on continuous measures. Previous studies attributed much of this observed variability in effect to feedback design characteristics and contextual factors. They suggested that audit and feedback is most effective if provided by a supervisor or colleague, more than once, both verbally and in writing, if baseline performance is low, and if it includes explicit targets and an action plan [11,15,16]. Other effect modifiers are the perceived quality of the feedback data, the motivation and interest of the recipient, organisational support for quality improvement [17,18], and how performance targets or benchmarks are derived [19].

Whereas those studies contribute to our knowledge about what factors may influence the impact of feedback, they do not help us understand the underlying mechanisms of how feedback interventions affect the quality of health care [20]. An essential first step in any successful feedback intervention is health care professionals' selection of specific quality indicators as targets for improvement actions. Although audit and feedback interventions commonly attempt to highlight which indicators are most relevant for improvement, health care professionals are at all times free to disregard such recommendations when setting their improvement targets. Little progress has been made in understanding this selection process because most randomised controlled trials (RCTs) of audit and feedback interventions did not explicitly build on previous research or extant theory [21–23]. Instead, they treated the feedback intervention as a 'black box', focusing solely on the box's output (i.e., a change in quality of care) while ignoring the mechanism inside. An investigation of health care professionals' selection process is imperative to improve our understanding of how, when, and why audit and feedback achieves large effects [24]. We therefore formulated the following research questions:

1. What are determinants of health care professionals selecting specific quality indicators as targets for improvement when confronted with feedback on their clinical performance?
2. What are reasons for health care professionals to disregard feedback's recommendations when setting improvement targets?

The context of our study is a theory-informed audit and feedback intervention called CARDSS Online; its effectiveness is currently being evaluated in an RCT whose results will be published elsewhere. CARDSS Online provides multidisciplinary cardiac rehabilitation (CR) teams in the Netherlands with periodic feedback on a set of quality indicators in combination with benchmark comparisons and educational outreach visits. We answered our research questions by studying CR professionals' selection of improvement targets in a laboratory setting, as well as by performing a secondary analysis of the data collected during the CARDSS Online RCT. We used a theoretical framework based on Carver and Scheier's control theory [25] to inform our methods and interpret the findings of our study.

2. MATERIALS AND METHODS

2.1. Study context

Cardiac rehabilitation in the Netherlands

CR is a multidisciplinary outpatient intervention aimed at physical and psychosocial recovery and lifestyle change for coronary heart disease patients after a cardiac event (such as a myocardial infarction) or coronary revascularization [26]. CR can reduce mortality, future cardiovascular risk, and symptoms of depression and anxiety, improve quality of life, and facilitate work resumption [27–32]. CR has been shown to be cost-effective in economic evaluations in North America and Europe [30]. However, in many Western countries CR practice is poorly standardised and does not follow the available scientific evidence [30,33,34]. A recent study in the Netherlands showed that only a minority of eligible patients actually receive CR (28.5% among patients with an acute coronary syndrome or coronary intervention) [35]. Furthermore, a recent survey showed that the content of exercise training programmes varies widely between CR centres [36], and another study in the Dutch CR context suggested that some centres organise their services differently than others [37].

To stimulate evidence-based CR services, the Netherlands Heart Foundation (a patients' interest organisation) and the Netherlands Society for Cardiology (a professional organisation) have published national guidelines for CR [38]. Consistent with international standards [29,30,39], the national guidelines state that patients should be offered an individualised rehabilitation programme consisting of four types of therapy: exercise training, education therapy (education about the consequences of the patient's disease), lifestyle change therapy (risk-related behavioural adjustment), and relaxation and stress management training. During a needs assessment procedure at the onset of rehabilitation, generally two weeks after hospital discharge, 80 to 130 data items concerning the patient's medical, physical, and psychosocial needs are collected [40]. Based on this information, the multidisciplinary CR team decides on the content of the patient's individual rehabilitation programme during their weekly meeting. The team, which usually includes physical therapists, nurses, psychologists, dieticians, social workers, rehabilitation physicians, and cardiologists, is jointly responsible for the execution of this programme during the next eight to twelve weeks. All outpatient CR services act under the responsibility of cardiologists.

Two commercial vendors in the Netherlands offer an electronic patient record system for CR with computerised decision support functionalities, called CARDSS (Cardiac Rehabilitation Decision Support System). The CARDSS systems show which therapies the guidelines recommend, based on the patient information entered during the needs assessment procedure. In a multi-centre cluster-randomised trial, CARDSS proved effective in improving the concordance of multidisciplinary team decisions with the clinical guidelines [41]. Yet large variation in CR programmes remained between the centres [42]. One explanation for this finding was that clinical computerised decision support does not facilitate organisational change, such as creating capacity for exercise training or making test results available to the physiotherapists who compose the exercise training programme [37].

CARDSS Online audit and feedback intervention

To further improve concordance with the guidelines and facilitate organisational change, our research group developed an audit and feedback intervention for CR centres in the Netherlands. Part of the intervention was a web-based quality management system called CARDSS Online. CARDSS Online incorporates a Plan-Do-Study-Act (PDSA) cycle, which forms the core of the Model for Improvement [43]. The PDSA cycle promotes a continuous process of systematic quality improvement, focusing on improving the underlying care processes and structures rather than on correcting mistakes of individuals. Performance feedback is a crucial element within the Plan and Study steps of the PDSA cycle. A Cochrane review concluded that such feedback may be more effective when it includes both an action plan and explicit goals [11]. This matches goal-setting theory, which states that feedback and goals are indeed a successful combination, especially when the goals are well specified [44]. The theory also suggests that people tend to be more committed to attaining a certain goal if they are involved in setting it. Goal commitment further increases if (the outcome of) goal attainment is seen as important, and if people believe they are capable of accomplishing it. CARDSS Online uses principles of the Model for Improvement and goal-setting theory to create an effective way of involving and guiding CR teams in improving their practice, by supporting CR teams in (i) monitoring their performance based on quality indicators for CR, (ii) identifying and selecting aspects of care that need improvement, (iii) developing a quality improvement plan, and (iv) periodically monitoring and adjusting the quality improvement plan [45]. During the educational outreach visits, CARDSS Online is used to actively involve the team in planning and monitoring its improvement efforts.

CR teams are provided with performance feedback on a set of eighteen quality indicators for CR (Table 1). The indicator set was developed using a combination of literature searches, a review of clinical guidelines, and the knowledge of CR experts and patients in a consensus procedure [46]. Of these eighteen quality indicators, five address structures (e.g., presence of preconditions for adherence to CR guidelines), eight address processes (e.g., frequency with which clinical measurement instruments are employed), and five pertain to outcomes of CR (e.g., changes in patient health status after rehabilitation). For all quality indicators the required data are automatically derived from the CARDSS electronic patient record system. Figure 1 displays an example feedback report. Performance results on structure indicators are represented by 'yes' or 'no' values (centre-level dichotomous). Results on process and outcome indicators are percentages (patient-level dichotomous) or median numbers (patient-level continuous). To assist CR teams in selecting the indicators that may require improvement, coloured icons are shown next to the indicator results, indicating whether the performance is acceptable (green checkmark), borderline (orange checkmark), or poor (red exclamation mark) compared to the benchmark. Thresholds for the colours were predetermined at the time CARDSS Online was developed, based on literature, peer performance, and clinical experience (Table 2). If insufficient data are available from a CR centre to determine the score on a specific indicator, no score is displayed and a grey colour is shown.

CARDSS Online enables CR teams to develop a quality improvement plan, consisting of improvement actions that are linked to specific quality indicators (i.e., specific quality aspects of care). To develop the plan, the team first selects quality indicators as targets for improvement based on the feedback information. The number of indicators that can be selected is unlimited, but during the outreach visits teams are encouraged to focus on a limited number instead of trying to improve everything at once. Second, the team can specify in free text the problem, presumed causes, improvement goal, and concrete actions on how to reach that goal. For each action, responsible team members can be appointed and a deadline can be defined. During, but also in between, follow-up feedback moments the team can access CARDSS Online to revise the existing quality improvement plan based on updated results on the quality indicators. For each action the team can enter whether it was completed, cancelled, or is to be continued. Actions marked as 'to be continued' are automatically transferred to the revised quality improvement plan, entering a new PDSA cycle.

Table 1. Characteristics of the quality indicators used in the CARDSS Online feedback intervention.

ID   Type       Description                                                             Measured    Arm†
1    Process    Average time between hospital discharge and start of rehabilitation    Median      A & B
1a   Process      time between hospital discharge and CR intake                         Median      A & B
1b   Process      time between CR intake and start of rehabilitation                    Median      A & B
2    Process    Complete data collection during needs assessment for rehabilitation     Percentage  A & B
2a   Process      concerning physical functioning                                       Percentage  B
2b   Process      concerning psychological functioning                                  Percentage  A
2c   Process      concerning social functioning                                         Percentage  A
2d   Process      concerning cardiovascular risk factors                                Percentage  B
2e   Process      concerning lifestyle factors                                          Percentage  A
3    Process    Patients are offered a rehab programme tailored to their needs          Percentage  A & B
4    Structure  Rehab professionals work with a multidisciplinary patient record        Yes / no    A & B
5    Structure  Specialised education for patients with chronic heart failure           Yes / no    A
6    Process    Patients finish their rehabilitation programme                          Percentage  A & B
6a   Process      education programme                                                   Percentage  A
6b   Process      exercise therapy                                                      Percentage  B
6c   Process      relaxation and stress management training                             Percentage  B
6d   Process      lifestyle change therapy                                              Percentage  A
7    Process    Rehabilitation goals are evaluated afterwards                           Percentage  A & B
8    Process    Cardiovascular risk factors are evaluated after rehabilitation          Percentage  B
9    Outcome    Patients improve their exercise capacity during rehabilitation (watts)  Median      B
10   Outcome    Patients improve their quality of life during rehabilitation            Median ‡    A
11   Outcome    Patients successfully resume work                                       Percentage  B
12   Outcome    Patients quit smoking                                                   Percentage  A
13   Outcome    Patients meet the physical activity norms                               Percentage  B
13a  Outcome      exercise norm                                                         Percentage  B
13b  Outcome      fit norm                                                              Percentage  B
14   Process    Rehabilitation goals are evaluated afterwards                           Percentage  A & B
15   Process    Cardiologists receive a report after the rehabilitation                 Percentage  A & B
16   Structure  Long-term patient outcomes are assessed                                 Yes / no    A & B
17   Structure  Clinics perform internal evaluations and quality improvement            Yes / no    A & B
18   Structure  Patients participate in patient satisfaction research                   Yes / no    A & B

† CR centres enrolled in the RCT evaluating the effectiveness of CARDSS Online were randomly allocated to one of two study arms (A and B) and received performance feedback on only a subset of all quality indicators. ‡ For indicator 10, scores were based on patients' absolute difference in quality of life scores after rehabilitation, measured with a validated questionnaire.

Table 2. Thresholds for assignment of colours based on indicator scores in CARDSS Online.

Centre-level dichotomous indicators (IDs 4-5, 16-17):
  Green:  score 'yes'
  Yellow: score 'no', and ≥ 50% of peers also have score 'no'
  Red:    score 'no', and < 50% of peers also have score 'no'

Patient-level dichotomous indicators (IDs 2-3, 6-8, 14-15):
  Green:  score ≥ 66%, OR score 33%-66% and score ≥ peer average score + 10
  Yellow: score 33%-66% and score < peer average score + 10
  Red:    score < 33%

Patient-level continuous indicator (ID 1a):
  Green:  score < 28 days
  Yellow: score 28-42 days
  Red:    score ≥ 42 days

Patient-level continuous indicator (ID 1b):
  Green:  score < 14 days
  Yellow: score 14-28 days
  Red:    score ≥ 28 days

Patient-level continuous indicators (IDs 9-10):
  Green:  score ≥ 20, OR score 10-20 and peer average score < 10
  Yellow: score 10-20 and peer average score ≥ 10
  Red:    score < 10
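As an illustration of how these thresholds combine, the sketch below re-implements the rules of Table 2; the function and parameter names are hypothetical, and the actual CARDSS Online implementation may differ in detail.

```python
# Illustrative re-implementation of Table 2's colour rules; names are
# hypothetical and not taken from the CARDSS Online source.

def colour_centre_dichotomous(score: str, peer_no_fraction: float) -> str:
    """Structure indicators (IDs 4-5, 16-17): score is 'yes' or 'no'."""
    if score == "yes":
        return "green"
    # score 'no': yellow when at least half of the peers also score 'no'
    return "yellow" if peer_no_fraction >= 0.5 else "red"

def colour_patient_dichotomous(score: float, peer_average: float) -> str:
    """Process/outcome indicators with percentage scores (IDs 2-3, 6-8, 14-15)."""
    if score < 33:
        return "red"
    if score >= 66 or score >= peer_average + 10:
        return "green"
    return "yellow"  # 33%-66% and below peer average + 10

def colour_waiting_time(days: float, green_below: int, red_from: int) -> str:
    """Continuous waiting times: indicator 1a uses (28, 42), 1b uses (14, 28)."""
    if days < green_below:
        return "green"
    return "yellow" if days < red_from else "red"
```

Indicators 9 and 10 follow an analogous pattern, with cut-offs at 10 and 20 and a comparison against the peer average in the middle band.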


Randomised controlled trial evaluating the effectiveness of CARDSS Online

An RCT evaluating the effectiveness of the CARDSS Online feedback intervention is currently being conducted among eighteen CR centres in the Netherlands. The intervention is multifaceted, combining three-monthly periodic performance feedback on quality indicators with educational outreach visits; CR teams developed and revised their quality improvement plans during those visits. All Dutch CR centres that used the CARDSS electronic patient record system with computerised decision support during the CR needs assessment procedure, and that were willing to make their data available for research, were eligible to participate in the CARDSS Online audit and feedback intervention study. To promote participation and avoid volunteer effects, all CR centres that used the CARDSS patient record system received a written invitation and, upon acceptance, were visited by a member of the CARDSS research team. During this introduction meeting the intervention and study protocol were explained to all CR team members, including the manager and cardiologist. Furthermore, the study was announced at national CR meetings of professional associations involved in CR and at the biennial national CR congress (attended by more than 350 Dutch CR professionals). The eighteen CR centres included in the trial were randomly divided into two study arms, each receiving feedback on a subset of all quality indicators (incomplete block design) (Table 1).

2.2. Theoretical framework

Mechanism of feedback according to control theory

Our basic understanding of how audit and feedback may lead to improvement in quality of care can be derived from Carver and Scheier’s control theory [25]. Control theory, or cybernetics, is a general approach to understanding self-regulatory systems. It has had major impact on areas of work as diverse as engineering, applied mathematics, economics, and medicine [47]. The key assumption of control theory is that feedback recipients will be prompted to change behaviour when confronted with a discrepancy between their actual performance and a certain target performance.

The basic unit of control theory is a negative feedback loop consisting of four elements: an input function, a reference value, a comparator, and an output function (Figure 2). The input function is a sensor and brings information on the environment into the feedback loop. Next, the comparator compares this information to a certain reference value. This comparison yields one of two outcomes: the values are discriminably different from one another or they are not. What follows is an output function, or behaviour. If the comparator observed a discrepancy, the output function is activated, aiming to reduce the discrepancy (hence a negative feedback loop).

Feedback processes occur in diverse physical systems. A popular example is a thermostat-controlled heating system, in which the thermostat acts as comparator and ancillary devices such as a heat pump form the output function. The system continuously samples the current air temperature (input function). As long as this temperature matches the thermostat's setting (reference value), nothing happens. As soon as the thermostat detects a difference between its setting and the air temperature, it turns on the heater in order to increase the air temperature (effect on environment). Once the thermostat can no longer differentiate between air temperature and setting, it turns the heater back off. The air temperature is not influenced by the heater alone, but also by, for example, cold wind or sunlight (external disturbances with an adverse or favourable effect, respectively).

Figure 2. The feedback loop mechanism according to Carver and Scheier’s control theory [25].
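As a toy illustration (our own analogy in code, not part of the thesis or of CARDSS Online), the loop can be simulated in a few lines:

```python
# Toy simulation of a negative feedback loop; all values are made up.
def simulate_thermostat(setting=20.0, temperature=17.0, steps=8):
    for _ in range(steps):
        sampled = temperature                 # input function: sense the environment
        heater_on = sampled < setting         # comparator vs. reference value
        heating = 0.8 if heater_on else 0.0   # output function
        disturbance = -0.3                    # external disturbance, e.g. cold wind
        temperature += heating + disturbance  # effect on environment
        print(f"temperature={temperature:.1f} heater={'on' if heater_on else 'off'}")

simulate_thermostat()
```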

Application of the theoretical model to CARDSS Online

When applying the same principles to audit and feedback interventions in health care settings, clinical performance is audited using quality indicators (input function) and fed back to health care professionals (comparator). Health care professionals compare their performance with a certain benchmark (reference value). If they observe a discrepancy, they will plan and execute actions (output function) to improve their clinical performance (effect on environment). Examples of external disturbances include organisational factors, such as lack of time, resources, or organisational support for quality improvement, and patient-related factors, such as a more complex patient population. An important difference from the original theoretical model is that, when applied to audit and feedback in health care settings, the input function commonly consists of multiple parameters (i.e., multiple quality indicators) rather than a single one (e.g., air temperature in the thermostat example). In such cases, each quality indicator forms the input for its own feedback loop, rather than there being one overall loop for clinical performance as a whole.

We can extend Carver and Scheier's model by adding a two-step comparison that applies to most audit and feedback interventions, including CARDSS Online (Figure 3). Audited performance levels are first objectively compared to benchmarks (derived from clinical practice guidelines or peer performance data). The feedback reports provided to health care professionals include not only the audited clinical performance (scores on quality indicators) but also the results of this objective comparison (represented by the colours). In relation to our first research question, we assumed that both the clinical performance levels and the objective comparison as presented in the feedback held relevant determinants of health care professionals' selection of indicators as targets for improvement (i.e., the subjective comparison).

Kluger and DeNisi argued that, whereas objective benchmark comparisons are designed to make comparisons by health care professionals straightforward, subjective comparisons are complex processes. For instance, health care professionals may compare feedback to more than one internal target, such as a prior expectation, past performance levels, performance of others, or an ideal goal or norm [48]. As a consequence, they may assess their clinical performance differently than the feedback system does and may decide to disregard the feedback’s recommendations. To this end we have made an adaptation to Carver and Scheier’s model by including the use of internally defined targets as a potential determinant of the subjective comparison (the dotted element in Figure 3).

Figure 3. Extended feedback loop mechanism applied to audit and feedback interventions in health care settings based on a two-step comparison and potential use of multiple (internal) targets.


2.3. Study design and data collection

To answer our research questions we undertook two studies. First, we conducted a laboratory study among individual CR professionals. The laboratory setting enabled us to study the determinants of CR professionals’ selection of quality indicators as improvement targets in the absence of organisational and social context. Second, we performed a secondary analysis of data collected during the CARDSS Online trial. The purpose of this analysis was to investigate determinants of CR professionals’ selection of quality indicators as targets for improvement in clinical practice, and compare these to the findings of our laboratory study. By comparing the study findings, we could investigate to what extent the selection process of CR professionals in clinical practice is influenced by external factors.

Laboratory study

The laboratory study was conducted among individual CR professionals. Professionals were eligible if their centre participated in the CARDSS Online trial. They received an invitation by e-mail, with up to two reminders (after 2.5 and 5 weeks). The invitation included a personal account for CARDSS Online, where we asked them to evaluate two real performance feedback reports. We randomly selected these two reports from all earlier reports from CR centres within the CARDSS Online trial (one from each study arm) and presented them in random order; respondents did not know from which CR centre or outreach visit a report originated. Respondents were asked to select the quality indicators that they thought should be included in the quality improvement plan of a random CR centre other than their own. We did not limit the number of indicators that could be selected, but encouraged respondents to compose a feasible improvement plan that was to be evaluated after three months. For the determinant analysis (research question 1), we recorded data on all potential determinants of CR professionals selecting a specific indicator according to our theoretical framework (Figure 3). For each indicator this included the type of quality indicator (structure, process, or outcome), the reported score, and the reported colour (green, yellow, red, or grey).

To explore reasons for disregarding the feedback’s recommendations for selecting improvement targets (research question 2), we asked respondents to motivate any deviations from the feedback’s recommendations for indicator selection that resulted from the objective benchmark comparison. We considered this the case if (i) respondents chose not to select an indicator while it showed a red or yellow colour, or if (ii) respondents did select an indicator while it already showed a green colour. Respondents could state their motivation using a drop-down menu showing statements that aimed to identify the origin of their deviation. Table 3 lists the statements used in the laboratory study. If no statement applied, respondents could give their motivation in a free-text field.

Table 3. Statements, and their origin, offered to respondents for motivating deviations from the feedback's recommendations for selecting improvement targets in the laboratory study.

Indicator not selected as target while showing a red or yellow colour:
  A: This quality indicator does not belong in the set of quality indicators for cardiac rehabilitation. (Origin: internal target: ideal goal/norm)
  B: Improving the score of this quality indicator is not feasible. (Origin: internal target: prior expectation)
  C: I think the indicator score is high enough; further improvement is not necessary. (Origin: internal target: ideal goal/norm)
  D: I would wish to improve upon this quality indicator, but other indicators have higher priority. (Origin: prioritisation of multiple improvement targets)

Indicator selected as target while showing a green colour:
  A: This quality indicator belongs in every quality improvement plan; it is an essential component of cardiac rehabilitation quality. (Origin: internal target: ideal goal/norm)
  B: It is easy to improve this quality indicator. (Origin: internal target: prior expectation)
  C: I think the indicator score is too low; improvement is necessary. (Origin: internal target: subjective norm)

Clinical study: secondary analysis of CARDSS Online trial data

For the second part of our study we performed a secondary analysis of the data collected during the CARDSS Online trial. The practical setting of CARDSS Online involved decision making (including the ‘subjective comparison’ step in our framework) by CR teams rather than individual professionals, and exposed those teams to potential influencing factors originating from the organisation, the team, or the patient population. For the analysis, we used data on the same determinants as in the laboratory study. Additionally, we included data items regarding the outreach visits and centre characteristics. These included the outreach visit number (discrete value ranging from 1 to 4; which is equivalent to the PDSA cycle number); whether an indicator was previously included in the quality improvement plan (yes or no); the number of CR team members present during the outreach visit; centre type (two categories: (i) university hospital, teaching hospital, or independent rehabilitation centre, or (ii) non-teaching hospital); centre size (based on annual CR patient volume during the trial). At the time we performed the secondary data analysis, the CARDSS Online trial was still running.

2.4. Data preparation

Prior to the statistical analysis we prepared the data for the determinant analyses in three steps. First, we removed the hierarchical structure of the quality indicator set used in the feedback (Table 1) from our dataset. In this hierarchical structure, the high-level indicators (indicators 1, 2, 6, and 13) are aggregations of a combination of inter-related low-level indicators (e.g., 13a and 13b). Although the high-level indicators are useful in the feedback process to facilitate interpretation, they complicate our statistical analysis because they contain (partially) the same information as the low-level indicators. For example, if a CR team selects both a low-level indicator and its corresponding high-level indicator for their quality improvement plan, they may merely aim to improve a single quality indicator rather than two. Alternatively, if CR professionals select a high-level indicator and none of its corresponding low-level indicators, they may target multiple quality indicators rather than a single one. For these reasons, we removed all high-level indicator values from our dataset. For those high-level indicator values that had been selected as targets for improvement, two researchers (WG and MvE) independently reviewed the improvement goals described in free text in the quality improvement plan, to elicit which low-level indicator or indicators were targeted for improvement. We then set the dichotomous variable indicating whether those low-level indicator values were selected to 'yes' in our copy of the dataset. After these preparation steps our dataset contained observed values on a non-hierarchical set of 27 quality indicators.

Second, we excluded all observations of indicator 1b, due to a bug in the CARDSS software that caused an erroneous export of its data.

Third, we excluded all observations of indicators 6a through 15 pertaining to the first feedback moment in CR centres in the clinical study, because by definition these indicators display no score and show a grey colour at that moment.
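Expressed as code, the three preparation steps could look as follows; this is a sketch with hypothetical column names, and the `reviewed_targets` mapping stands in for the manual review of the free-text improvement goals:

```python
# Sketch of the three data preparation steps; column names are hypothetical.
import pandas as pd

HIGH_LEVEL = {"1", "2", "6", "13"}  # aggregated indicators removed in step 1
GREY_AT_FIRST_VISIT = {"6a", "6b", "6c", "6d", "7", "8", "9", "10",
                       "11", "12", "13a", "13b", "14", "15"}

def prepare(df: pd.DataFrame, reviewed_targets: dict) -> pd.DataFrame:
    # Step 1: selections of a high-level indicator are re-assigned to the
    # low-level indicator(s) identified in the manual review of the free-text
    # goals (reviewed_targets maps report id -> low-level indicator ids).
    for report_id, low_level_ids in reviewed_targets.items():
        mask = (df["report"] == report_id) & df["indicator"].isin(low_level_ids)
        df.loc[mask, "selected"] = True
    df = df[~df["indicator"].isin(HIGH_LEVEL)]
    # Step 2: drop indicator 1b (erroneous data export due to a software bug).
    df = df[df["indicator"] != "1b"]
    # Step 3 (clinical study only): drop first-visit observations of
    # indicators 6a-15, which by definition show a grey colour then.
    grey_by_definition = (df["visit"] == 1) & df["indicator"].isin(GREY_AT_FIRST_VISIT)
    return df[~grey_by_definition]
```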

2.5. Statistical analysis

In order to identify the determinants of health care professionals' selection of quality indicators as targets for improvement in both the laboratory and the clinical study, we performed three multivariate logistic regression analyses. In all analyses the dependent variable was whether a quality indicator was selected as a target for improvement, by an individual CR professional in the laboratory study or by a CR team in the clinical study. For Analyses 1 and 2 we developed a logistic regression model using the characteristics of the feedback (indicator type, reported colour, reported score) as independent variables. In Analysis 1 we applied the model to the dataset of the laboratory study, whereas in Analysis 2 we applied the same model to the dataset of the clinical study, allowing a direct comparison between the laboratory setting and clinical practice. For Analysis 3 we extended the regression model by also adjusting for variables solely applicable to the clinical study (outreach visit number; whether the indicator was previously included in the quality improvement plan; number of CR team members present during the outreach visit; centre type; centre size). We used this model to fully analyse the dataset of the clinical study.

To take into account repeated measures on each quality indicator (fed back in each feedback report), and on each participant in the laboratory study and each CR centre in the clinical study (responding to the feedback in each report), we fitted random effects in our regression models (also known as multi-level models [49]) to adjust for potential correlations between the repeated measures. To this end, our models included random intercepts for quality indicator and for CR professional (in the laboratory study) or CR team (in the clinical study).

Because there were quality indicators for which no score was available (due to insufficient data), and because the measurement of scores varied considerably between different types of indicators (patient-level dichotomous, patient-level continuous, and centre-level dichotomous measures), we performed all three analyses (i) on the entire set of quality indicators while leaving out 'reported score', and (ii) on the subsets of indicators for which scores were reported (hence excluding indicator values showing a grey colour) and measured similarly.

For all determinants, we calculated the odds ratios (ORs) and 95% confidence intervals (CIs). From the models we assessed the baseline probability and its 95% CI for selecting any quality indicator as target for improvement. We assessed cluster-level variations of this baseline probability using the random effects to investigate selection behaviour that related to the participant (individual CR professional or CR team) and the individual quality indicator. Furthermore, we conducted Chi-square tests to determine whether selection frequency was associated with quality indicators themselves or with participants. All analyses were performed using R version 3.0.1 (R Foundation for Statistical Computing; Vienna, Austria).

3. RESULTS

3.1. Participant characteristics

Laboratory study

For the laboratory study, a total of 42 individual CR professionals (response rate 31%) accepted our invitation by providing their selections of improvement targets based on the two feedback reports; Table 4a summarises their characteristics. The majority of respondents were CR nurses or physiotherapists (74%), had a coordinating function (52%), and reported spending more than half of their time on direct patient care (64%). They had attended a mean of 1.83 educational outreach visits in the CARDSS Online trial. Two of the eighteen CR centres from the clinical study (see below) were not represented in the study sample; one was a teaching hospital and one a non-teaching hospital.

Clinical study

Eighteen CR centres from different parts of the Netherlands participated in the CARDSS Online trial; Table 4b shows their characteristics. Most were part of a non-teaching hospital (83%). As the trial was still running at the time we conducted our analysis, centres had completed different numbers of outreach visits, ranging from 1 to 4 times (mean 2.18). On average, five CR professionals of various disciplines attended the outreach visits.

3.2. General description of the observed data

Laboratory study

In the laboratory study, the 84 evaluated feedback reports (two by each of the 42 participants) contained a total of 1300 measured quality indicator values (after the data preparation steps; see paragraph 2.4). For 515 indicator values (40%) too few data were available to calculate a score, so no score was reported and a grey colour was shown. Of the remaining 785 values, 349 (27%) concerned centre-level dichotomous measures (structure indicators; score 'yes' or 'no'), 330 (25%) concerned patient-level dichotomous measures (process or outcome indicators; score in percentage), and 106 values (8%) concerned patient-level continuous measures (process or outcome indicators; numerical score).

Table 4a. Characteristics of the laboratory study participants.

Characteristic                                     CR professionals (n=42)
Male gender                                        19 (45%)
Mean age in years (SD)                             45.6 (10.7)
Mean medical experience in years (SD)              23.3 (10.5)
Mean visits attended (min-max)                     1.83 (0-4)
Discipline
  CR nurse                                         18 (43%)
  Physiotherapist                                  13 (31%)
  Psychologist                                     3 (7%)
  Social worker                                    2 (5%)
  Dietician                                        1 (2%)
  Sports physician                                 1 (2%)
  Cardiologist                                     2 (5%)
  Manager / head of CR department                  2 (5%)
Coordinating function                              22 (52%)
Time spent on direct patient care
  Less than 25%                                    6 (14%)
  25% to 50%                                       9 (21%)
  50% to 75%                                       14 (33%)
  More than 75%                                    13 (31%)

Table 4b. Characteristics of CR centres included in the CARDSS Online trial; used in our clinical study.

Characteristic                                     CR centres (n=18)
Centre type
  University hospital                              1
  Teaching hospital                                4
  Non-teaching hospital                            11
  Independent rehabilitation centre                2
Mean CR patients per year (SD)                     433 (190)
Mean outreach visits received (min-max)            2.18 (1-4)
Mean number of attendants in outreach visits (SD)  5.23 (1.12)


On average, participants selected 62% (range 35% to 100%) of all quality indicators presented within the two feedback reports as targets for improvement (Figure 5a); this proportion varied between participants (p < 0.001; Chi-square test). How often a specific indicator was selected ranged from 19% to 100% (Figure 5b) and varied between indicators (p < 0.001; Chi-square test). The indicators most often selected were: the proportion of patients who finish their relaxation and stress management training (process indicator 6c), the proportion of patients whose cardiovascular risk factors are evaluated after rehabilitation (process indicator 8), and the proportion of patients who successfully resume work after rehabilitation (outcome indicator 11). These were also the indicators that most often showed a grey colour (Figure 7). The least often selected indicators were: the proportions of patients for whom all data are collected during the needs assessment concerning psychological functioning and lifestyle factors (process indicators 2b and 2e), whether the team works with a multidisciplinary patient record (structure indicator 4), whether long-term patient outcomes are assessed (structure indicator 16), and whether centres perform internal evaluations and quality improvement (structure indicator 17). Those were also the indicators that most often showed a green colour (Figure 7).

Clinical study

In the clinical study, the eighteen participating CR centres evaluated a total of 50 feedback reports containing 850 quality indicator values included in our analyses. As part of our data preparation, we excluded 138 values (16%) of indicators 6a through 15 that by definition showed a grey colour at the first feedback moment; of these excluded values, 46% had been selected by the CR team as targets for improvement. Of the remaining 712 indicator values, 172 (19%) showed a grey colour because too few data were available to calculate a score. Of the 540 indicator values showing a non-grey colour, 216 (24%) concerned centre-level dichotomous measures, 260 (29%) patient-level dichotomous measures, and 64 (7%) patient-level continuous measures.

On average, CR teams selected 36% of all quality indicators as targets for improvement. The proportion of indicators selected during an outreach visit varied between CR centres (p < 0.001; Chi-square test), ranging from 20% to 70% (Figure 6a). As in the laboratory study, how often an indicator was selected varied between indicators (p = 0.01; Chi-square test), ranging from 0% to 85% (Figure 6b). The most frequently selected indicators were: the proportions of patients for whom all data are collected during the needs assessment concerning physical functioning and cardiovascular risk factors (process indicators 2a and 2d), and the proportion of patients who finish their education programme (process indicator 6a). These indicators were among those that most often showed a yellow or red colour (Figure 7). The least often selected indicators were: process indicators 2b and 2e (as in the laboratory study), the proportion of patients who successfully resume work (outcome indicator 11), and the proportion of patients who meet the physical activity fit norm (outcome indicator 13b).

Table 5a shows the results of Analysis 1 (laboratory study dataset) and Analyses 2 and 3 (clinical study dataset) on the entire set of quality indicators, without adjusting for 'reported score'. Table 5b shows the results for the subset of indicators with centre-level dichotomous measures, and Table 5c for the subset with patient-level dichotomous measures. Due to the low number of observations for indicators with patient-level continuous measures (indicators 1a, 9, and 10), we were unable to perform analyses on that subset to investigate the effect size of 'reported score'. The baseline probabilities reported in the tables concern process indicators with a reported score of 66% and a green colour (Analyses 1 and 2), fed back during visit number 1, in the presence of five CR team members, in a non-teaching hospital that treats 450 patients annually (Analysis 3). We will refer to an indicator with these characteristics as a 'reference indicator'.

3.3. Determinants of selecting quality indicators as targets for improvement

Analysis 1: Determinants’ effect sizes in laboratory setting

The multivariate logistic regression analysis of the laboratory study dataset explored the effect of feedback characteristics on the selection of quality indicators as targets for improvement by individual CR professionals in a laboratory setting. In our analysis of the entire set of quality indicators (Table 5a, column 'Analysis 1'), the probability for a reference indicator to be selected was 12% (95% CI 6% to 22%). The estimated variation of this probability as a result of the participant-level random effect ranged from 1% to 73%, whereas the estimated variation as a result of the indicator-level random effect ranged from 4% to 29%. Both estimates were corrected for the variation in feedback characteristics (indicator types, reported scores, and colours) that occurred in the feedback reports. We identified only the reported colour as having a significant impact on the selection of indicators as targets for improvement. All respondents selected 100% of the indicators that showed a grey colour (i.e., for which no score was reported). Indicators with a red or yellow colour were more likely to be selected than those with a green colour, with red having a larger effect size than yellow (OR 37.67 and 17.84, respectively). Indicator type and reported score showed no association with indicator selection.

Figure 5a. Percentage of selected indicators per participant in the laboratory study.

Figure 5b. Percentage frequency with which a quality indicator was selected by participants in the laboratory study.

Figure 6a. Percentage of selected indicators per CR team in the clinical study.

Figure 6b. Percentage frequency with which a quality indicator was selected by participants in the clinical study.

Figure 7. Distribution of colours reported in all feedback reports in the CARDSS Online trial.

Table 5a. Determinants of selecting quality indicators for CR as targets for improvement in the laboratory and clinical study; all quality indicator values included in the analysis.

                                              Analysis 1 (n=1300)    Analysis 2 (n=712)    Analysis 3 (n=712)
                                              Laboratory study       Clinical study        Clinical study

Intercept: baseline probability (95% CI)      12% (6%, 22%)          26% (15%, 40%)        17% (10%, 29%)

Random effects: cluster-level variation
  Random intercept: participant               1% to 73%              12% to 47%            10% to 30%
  Random intercept: indicator                 4% to 29%              5% to 68%             8% to 36%

Determinants: odds ratio (95% CI)
  Indicator type
    Structure vs. process                     0.73 (0.37, 1.69)      0.98 (0.34, 2.77)     1.15 (0.58, 2.28)
    Outcome vs. process                       0.52 (0.21, 1.27)      0.19 (0.06, 0.57)     0.24 (0.10, 0.57)
  Reported colour
    Red vs. green                             37.67 (18.35, 77.32)   4.31 (2.31, 8.01)     4.09 (2.09, 8.01)
    Yellow vs. green                          17.84 (9.35, 34.01)    2.00 (1.13, 3.57)     2.58 (1.41, 4.74)
    Grey vs. green                            ∞ †                    4.85 (2.67, 8.83)     4.36 (2.33, 8.14)
  Visit number                                -                      -                     0.54 (0.42, 0.69)
  Selected in previous QI plan                -                      -                     19.01 (11.12, 32.50)
  No. of team members present                 -                      -                     1.03 (0.91, 1.16)
  University hospital, teaching hospital,
    or rehabilitation centre                  -                      -                     1.65 (0.84, 3.26)
  Patients/year (per 100 increase)            -                      -                     0.88 (0.73, 1.06)

Abbreviations: CI, confidence interval; QI, quality improvement. † In the laboratory study (Analysis 1), all quality indicators with a grey colour were selected by all participants. To prevent our regression model from failing due to this phenomenon, we imputed three observations (one for each indicator type) of non-selected indicators that showed a grey colour.

Table 5b. Determinants of selecting quality indicators for CR as targets for improvement in the laboratory and clinical study; only centre-level dichotomous measures (structure indicators; score 'yes' or 'no') with reported scores included in the analysis.

                                              Analysis 1 (n=349)     Analysis 2 (n=216)    Analysis 3 (n=216)
                                              Laboratory study       Clinical study        Clinical study

Intercept: baseline probability (95% CI)      8% (3%, 19%)           30% (15%, 51%)        23% (11%, 41%)

Random effects: cluster-level variation
  Random intercept: participant               0 to 72%               6% to 72%             23% to 23%
  Random intercept: indicator                 3% to 22%              9% to 64%             9% to 46%

Determinants: odds ratio (95% CI)
  Reported colour †
    Red vs. green                             83.14 (25.41, 272.05)  2.38 (0.96, 5.90)     2.31 (0.89, 6.02)
    Yellow vs. green                          11.81 (3.28, 42.52)    0.69 (0.20, 2.34)     1.12 (0.32, 3.95)
  Visit number                                -                      -                     0.51 (0.33, 0.79)
  Selected in previous QI plan                -                      -                     26.49 (9.72, 72.19)
  No. of team members present                 -                      -                     1.22 (1.02, 1.45)
  University hospital, teaching hospital,
    or rehabilitation centre                  -                      -                     1.46 (0.62, 3.44)
  Patients/year (per 100 increase)            -                      -                     0.79 (0.63, 1.01)

Abbreviations: CI, confidence interval; QI, quality improvement. † Reported score is not used in the analyses of centre-level dichotomous measures, as the score can be derived from the reported colour: indicators with score 'no' may have a red or yellow colour as a result of the benchmark comparison, whereas indicators with score 'yes' are always green.

Table 5c. Determinants of selecting quality indicators for CR as targets for improvement in the laboratory and clinical study; only patient-level dichotomous measures (process or outcome indicators; score in percentage) with reported scores included in the analysis.

                                              Analysis 1 (n=330)     Analysis 2 (n=260)    Analysis 3 (n=260)
                                              Laboratory study       Clinical study        Clinical study

Intercept: baseline probability (95% CI)      13% (5%, 27%)          28% (12%, 53%)        32% (12%, 60%)

Random effects: cluster-level variation
  Random intercept: participant               1% to 71%              10% to 59%            13% to 60%
  Random intercept: indicator                 6% to 24%              4% to 81%             10% to 67%

Determinants: odds ratio (95% CI)
  Indicator type
    Outcome vs. process                       0.75 (0.27, 2.11)      0.25 (0.04, 1.69)     0.49 (0.10, 2.31)
  Reported colour
    Red vs. green                             9.39 (1.16, 76.07)     2.97 (0.26, 33.56)    2.40 (0.20, 28.96)
    Yellow vs. green                          17.06 (5.83, 49.94)    2.12 (0.66, 6.81)     2.23 (0.69, 7.21)
  Reported score (per 10% increase)           0.97 (0.74, 1.26)      0.84 (0.61, 1.15)     0.84 (0.61, 1.15)
  Visit number                                -                      -                     0.51 (0.33, 0.77)
  Selected in previous QI plan                -                      -                     8.72 (3.56, 21.35)
  No. of team members present                 -                      -                     0.81 (0.26, 2.51)
  University hospital, teaching hospital,
    or rehabilitation centre                  -                      -                     0.90 (0.73, 1.09)
  Patients/year (per 100 increase)            -                      -                     1.03 (0.76, 1.40)


Analysis 2: Comparison of determinants’ effect sizes in laboratory setting to clinical practice

Analysis 2 applied the same multivariate logistic regression model as Analysis 1 to the clinical study dataset. It enabled a comparison of the effect of feedback characteristics on indicator selection by individual professionals in a laboratory setting with selection by CR teams in clinical practice. In our analysis of the entire set of quality indicators (Table 5a, column 'Analysis 2'), the baseline probability for a reference indicator to be selected was 26% (95% CI 15% to 40%). The estimated variation of this probability as a result of the team-level random effect ranged from 12% to 47%, whereas the estimated variation as a result of the indicator-level random effect ranged from 5% to 68%. Again, these estimates were corrected for the variation in feedback characteristics that occurred in the clinical study feedback reports (in contrast to the unadjusted variations displayed in Figures 6a and 6b). The baseline probabilities and estimated cluster-level variations were similar in the analyses of the subsets of centre-level and patient-level dichotomous measures (Tables 5b and 5c, column 'Analysis 2').

In our analysis of the entire set of indicators (Table 5a, column ‘Analysis 2’), we identified the characteristics ‘indicator type’ and ‘reported colour’ as having impact on the selection of indicators as targets for improvement. Outcome indicators were less likely to be selected than process indicators (OR; 0.19, 95% CI; 0.06 to 0.57), whereas structure indicators were about as likely to be selected (OR; 0.98, 95% CI; 0.34 to 2.77). Indicators showing grey, red, or yellow colours were more likely to be selected than indicators showing green colours, with grey having the largest impact (OR; 4.85), followed by red (OR; 4.31) and yellow (OR; 2.00). In our analysis of the centre-level dichotomous measures for which scores were reported (Table 5b, column ‘Analysis 2’), a yellow colour had no effect on indicator selection. In our analysis of the patient-level dichotomous measures for which scores were reported (Table 5c, column ‘Analysis 2’), no characteristics showed impact on indicator selection.

Analysis 3: Full analysis of health care professionals’ selection process in clinical practice

In Analysis 3 we extended the multivariate logistic regression model from Analyses 1 and 2 to explore additional characteristics that may act as determinants of the selection of indicators as targets for improvement. In our analysis of the entire set of quality indicators (Table 5a, column ‘Analysis 3’), the baseline probability for an indicator to be selected was 17% (95% CI; 10% to 29%), varying from 10% to 30% as a result of the team-level random effect and from 8% to 36% as a result of the indicator-level random effect. In our analysis of the centre-level dichotomous measures with reported score (Table 5b, column ‘Analysis 3’), the baseline probability for an indicator to be selected increased; the team-level variation disappeared, being entirely explained by the independent variables in the model, whereas the indicator-level variation increased (9% to 46%). In our analysis of the patient-level dichotomous measures with reported score (Table 5c, column ‘Analysis 3’), the baseline probability and the variations on both cluster levels increased.

Similar to Analysis 2, ‘indicator type’ and ‘reported colour’ were identified as having impact on indicator selection in our analysis of the entire set of quality indicators, but not in the analyses of the centre-level and patient-level dichotomous measures with reported scores (Tables 5b and 5c, column ‘Analysis 3’). With each successive outreach visit (or feedback moment), the odds that a quality indicator was selected as a target for improvement nearly halved (OR; 0.54, 95% CI; 0.42 to 0.69). However, indicators that had been selected as a target in a quality improvement plan at a previous visit had much higher odds of being selected again (OR; 19.01, 95% CI; 11.12 to 32.50); combined with the visit effect, this amounts to roughly ten-fold higher odds. A calculation example can be found in Box 1. None of the centre-level characteristics (team size, centre type, and centre size) showed impact on indicator selection. Our analyses of the centre-level and patient-level dichotomous measures with reported score (Tables 5b and 5c, column ‘Analysis 3’) revealed similar findings.

Box 1. Calculation example for Analysis 3 in the overall analysis.

In our overall analysis (Table 5a, column ‘Analysis 3’), the baseline probability that a reference indicator (a green process indicator with a reported score of 66%) was selected as target for improvement was 17%, i.e. an odds of 0.205. For indicators that were not selected at a previous feedback moment, the odds are reduced by approximately 50% at each subsequent moment (i.e., odds 0.111; probability 10%). This means that 10% of the reference indicators are selected at the second feedback moment (instead of 17% at the first). For indicators that were selected previously, the combined odds ratio becomes 0.54 × 19.01 ≈ 10. Thus, the probability that a reference indicator is selected again is 68%. In other words, two thirds of previously selected indicators are selected again, whereas one third is not.
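As a minimal sketch of the arithmetic in Box 1 (using the rounded odds ratios reported above, so the final figures differ marginally from the unrounded model output), the conversion between probabilities and odds can be expressed in a few lines of Python:

    def to_odds(p: float) -> float:
        """Convert a probability to an odds."""
        return p / (1.0 - p)

    def to_prob(odds: float) -> float:
        """Convert an odds back to a probability."""
        return odds / (1.0 + odds)

    baseline_odds = to_odds(0.17)      # reference indicator at the first visit: odds ~0.205

    # Second visit, not selected in the previous plan: only the visit effect applies.
    odds_not_selected = baseline_odds * 0.54
    print(to_prob(odds_not_selected))  # ~0.10, i.e. 10%

    # Second visit, selected in the previous plan: visit effect x previous-selection effect.
    odds_selected = baseline_odds * 0.54 * 19.01
    print(to_prob(odds_selected))      # ~0.68, i.e. roughly two thirds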

3.4. Reasons to deviate from feedback’s recommendations for selecting indicators

For 785 of the 1,300 quality indicator values that were fed back in the laboratory study, sufficient data were available to calculate a score and make a recommendation for the indicator’s selection by showing a red, yellow, or green colour. In 190 (24%) cases, respondents deviated from those recommendations, either by not selecting indicator values that showed a red or yellow colour (n = 98; 52%) or by selecting values that showed a green colour (n = 92; 48%). Appendix C shows for each quality indicator how many times its selection was not in line with the feedback’s recommendations, and what reasons respondents reported for deviating from them.
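As an illustration only, the sketch below shows how such deviations could be tallied; the table layout and the column names ‘colour’ and ‘selected’ are hypothetical and do not reflect the actual CARDSS Online data structure.

    import pandas as pd

    # Hypothetical example data: one row per fed-back quality indicator value,
    # with the feedback colour and whether the respondent selected it.
    values = pd.DataFrame({
        "colour":   ["red", "yellow", "green", "green", "red", "grey"],
        "selected": [False, True, True, False, True, True],
    })

    # Only red/yellow/green values carry a recommendation (grey means no score).
    recommended = values[values["colour"].isin(["red", "yellow", "green"])].copy()

    # A deviation is either ignoring a red/yellow recommendation or
    # selecting a green indicator despite the feedback not flagging it.
    ignored_flag = recommended["colour"].isin(["red", "yellow"]) & ~recommended["selected"]
    green_selected = (recommended["colour"] == "green") & recommended["selected"]
    deviations = ignored_flag | green_selected

    print(f"{deviations.sum()} of {len(recommended)} cases "
          f"({deviations.mean():.0%}) deviate from the recommendations")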

Respondents’ reasons for not selecting the 98 indicator values that showed a red or yellow colour are depicted in Figure 7a. In 21 (21%) cases, respondents reported that they thought the indicator did not represent a relevant quality aspect of CR; this concerned in particular structure indicators 16 (whether long-term patient outcomes are assessed, 29%) and 18 (whether patients participate in patient satisfaction research, 24%). In 28 (29%) cases, respondents reported that they believed improving the indicator was not feasible; this concerned in particular indicators 16 (whether long-term patient outcomes are assessed, 36%) and 14 (proportion of patients whose rehabilitation goals are evaluated after rehabilitation, 11%), and outcome indicators 12 (proportion of patients who quit smoking, 14%) and 9 (median increase in patients’ exercise capacity after rehabilitation, 11%). In 17 (17%) cases, 15 of which concerned process indicators, respondents reported that they considered the reported score high enough; these cases included fourteen yellow and three red indicators. Twenty-four (24%) indicator values were not selected because they lacked priority; ten of those pertained to structure indicators 16 (whether long-term patient outcomes are assessed) and 18 (whether patients participate in patient satisfaction research). Other reasons for deviating from the feedback’s recommendations (provided as free-text comments) included the need for more detailed information to assess the quality indicator score, and that the indicator’s goal was already met but not registered electronically.

Respondents’ reasons for selecting the 92 quality indicator values that showed a green colour are depicted in Figure 7b. In 73 (80%) cases, respondents reported that they thought the indicator belonged in every quality improvement plan, as it constituted an essential component of CR quality. In 6 (7%) cases, structure and process indicators were selected because further improvement would be easily achievable. For 12 (13%) indicator values, respondents assessed the score as too low and deemed improvement necessary. Five of those concerned indicator 1a (average time between hospital discharge and CR intake) and reported a mean score of 17.3 days (±6.5 SD), whereas non-selected green values of indicator 1a reported a mean score of 16.5 days (±4.9 SD). Three concerned centre-level dichotomous measures (i.e. structure indicators) that reported a ‘yes’ score.

Figure 7a. Statements for not selecting a red or yellow quality indicator as target for improvement in the laboratory study (n = 98).

Figure 7b. Statements for selecting a green quality indicator as target for improvement in the laboratory study (n = 92).

4. DISCUSSION

4.1. Summary of main results

We explored CR professionals’ decision-making behaviour with regard to selecting targets for quality improvement when confronted with indicator-based performance feedback, both in a laboratory setting (among individual CR professionals, in an environment without influences of organisational and social context or patient-related factors) and in a clinical setting (among multidisciplinary CR teams in clinical practice).

Determinants of selecting quality indicators as targets for improvement

In a laboratory setting, individual CR professionals seemed to consider data completeness important, as they always selected grey indicators (i.e., indicators for which insufficient data were available to report a score). Individual clinicians were also primarily guided by the feedback’s objective benchmark comparison – represented with green, yellow, or red colours – when selecting indicators as targets for improvement: red and yellow indicators were more likely to be selected than green indicators. We found a similar relationship between indicator selection and reported colour in the clinical study, but it was weaker than in the laboratory setting. We also found that outcome indicators were less likely to be selected than process indicators in clinical practice, although no association between indicator type and selection was found in the laboratory study. The probability that specific quality indicators were selected as targets for improvement varied substantially between indicators, even after correction for feedback characteristics such as colour. Furthermore, we observed large variations in how many indicators were selected by the different participants. The variation between participants was larger in the laboratory study than in the clinical study, whereas the variation between indicators was larger in the clinical setting. Part of those variations in the clinical study was explained by the outreach visit number (or feedback moment) and by whether the quality indicator had been selected in a previous quality improvement plan: with each visit the odds that a quality indicator was selected as a target for improvement nearly halved, whereas indicators selected in a previous plan were far more likely to be selected again.
