
Understanding the use of an electronic audit and feedback intervention by health professionals in Dutch intensive care

A mixed-methods study

Macy Johanna Sharoubim, BSc


Student
Macy Johanna Sharoubim, BSc
Student ID: 10440348
E-mail: m.j.sharoubim@amc.uva.nl

Traineeship address
Academic Medical Center (AMC), Department of Medical Informatics
Meibergdreef 9, 1105 AZ Amsterdam-Zuidoost
Period: November 2016 - June 2017

Mentor
W.T. Gude, MSc
Department of Medical Informatics, Academic Medical Center (AMC)

Tutor
Prof. Dr. N.F. de Keizer
Department of Medical Informatics, Academic Medical Center (AMC)


Acknowledgements

I would like to thank everyone who helped me in any way so that I could complete my thesis. I would like to extend special thanks to Wouter Gude for his daily supervision, his valuable feedback and suggestions on my work, and the helpful weekly meetings we had during my project. I would also like to thank Marie-José Roos-Blom for her useful feedback on my thesis and all the help she provided me with during my project. And lastly, I would like to thank Nicolette de Keizer, also for her useful feedback on my thesis.


Contents

Abstract
Samenvatting
1. Introduction
2. Methods
   2.1 Study context
      2.1.1 ICUs in the Netherlands
      2.1.2 NICE dashboard audit and feedback intervention
   2.2 Theoretical framework
   2.3 Study design and data collection
   2.4 Participants
   2.5 Outcomes of interest
   2.6 Data analysis
      2.6.1 Quantitative evaluation
      2.6.2 Qualitative evaluation
3. Results
   3.1 Participant characteristics
   3.2 Use of intervention
      3.2.1 Interaction
      3.2.2 Information received
      3.2.3 Decision changed
      3.2.4 Care process altered
4. Discussion
   4.1 Main findings
   4.2 Interpretation of results
   4.3 Comparison with other studies
   4.4 Strengths and limitations
   4.5 Impact of results and future work
References
Appendix A
Appendix B


Abstract

Introduction: Health care organisations increasingly adopt audit and feedback (A&F) to gain insight into their clinical practice and to improve the quality of care they deliver. Nevertheless, the effectiveness of A&F interventions has been found to be highly variable. A poor understanding of the mechanisms of use behind an intervention's success or failure may explain the lack of progress made towards effective A&F.

Aim: To understand how health professionals use an electronic A&F intervention in clinical practice, with or without an action implementation toolbox containing predefined improvement actions.

Methods: We studied the use of an A&F intervention implemented in Dutch intensive care units (ICUs), in the form of an online dashboard with four indicators in the domain of pain management. The dashboard provides the user with feedback on the four indicators and the possibility to create improvement actions. The intervention group had access to an action plan with a toolbox of predefined improvement actions, while the control group had access only to an empty action plan template. We evaluated the ICUs' use of the intervention quantitatively by analysing log data of the dashboard and qualitatively by carrying out monthly semi-structured phone calls with every ICU.

Results: ICUs logged in an average of 4.1 (± 3.0 SD) times per month: intervention ICUs 5.1 (± 3.6 SD) times per month and control ICUs 3.3 (± 2.3 SD) times per month. ICUs created on average 4.9 (± 3.0 SD) actions per ICU: intervention ICUs 6.0 (± 2.9 SD) and control ICUs 4.0 (± 3.0 SD). They completed on average 2.2 (± 3.3 SD) actions per ICU: intervention ICUs 3.6 (± 4.2 SD) and control ICUs 1.0 (± 2.0 SD). Facilitators for the use of the intervention were: feedback on process indicators, the usability of the dashboard, team meetings, an action plan, predefined actions, and staff involvement and commitment. Barriers were: the different working shifts of ICU professionals, a busy clinical practice, staff absence, low (perceived) data quality and the dependency on people from outside the ICU to alter the care process.

Conclusions: ICU professionals are more inclined to change decisions in their practice in response to feedback based on process indicators than to feedback based on outcome indicators. Access to a toolbox with predefined actions leads ICU professionals to create and complete more actions than they do without such access. Team meetings are essential for ICU professionals to view information from the dashboard and to change decisions in their practice. There is often one ICU professional who is most dedicated to the intervention. Timeliness and good quality of the data on which the feedback is based contribute positively to changing decisions in practice. Involvement and commitment of ICU professionals, and good communication networks amongst them, facilitate altering the care process.


Samenvatting

Introduction: Health care organisations increasingly use audit and feedback (A&F) to gain insight into the care they deliver and to improve its quality. Nevertheless, the effectiveness of A&F interventions in health care proves to be highly variable. The lack of knowledge about how the intervention is used and how that influences its success may be one reason that little progress has been made in the effectiveness of A&F interventions.

Aim: To understand how health professionals use an electronic A&F intervention in their clinical practice, with or without a toolbox with predefined improvement actions.

Methods: We studied an A&F intervention implemented in Dutch intensive care units (ICUs). The intervention consisted of an online dashboard with four indicators in the domain of pain management. The dashboard provides the user with feedback on these four indicators and offers the possibility to define actions to improve on the indicators. The intervention group had access to the action plan with a toolbox of possible, predefined actions, while the control group had access only to an empty action plan template. We studied the use of the dashboard quantitatively by analysing log data of the dashboard and qualitatively by conducting monthly semi-structured telephone calls with each participating ICU.

Results: ICUs logged in an average of 4.1 (± 3.0 SD) times per month: intervention ICUs 5.1 (± 3.6 SD) times per month and control ICUs 3.3 (± 2.3 SD) times per month. ICUs created an average of 4.9 (± 3.0 SD) actions per ICU: intervention ICUs 6.0 (± 2.9 SD) and control ICUs 4.0 (± 3.0 SD). They completed an average of 2.2 (± 3.3 SD) actions per ICU: intervention ICUs 3.6 (± 4.2 SD) and control ICUs 1.0 (± 2.0 SD). Facilitators for the use of the intervention were: feedback on process indicators, the usability of the dashboard, team meetings, an action plan, the predefined actions, and involvement and commitment of ICU professionals. Barriers were: the different working shifts of ICU professionals, a busy clinical practice, staff absence, low data quality, and the dependency on people from outside the ICU to change care processes.

Conclusions: ICU professionals are more inclined to change their care in response to feedback based on process indicators than to feedback based on outcome indicators. Access to the toolbox with predefined actions leads ICU professionals to create and complete more actions than when they do not have such a toolbox. Team meetings are essential for ICU professionals to view information from the dashboard and to proceed to changes in their practice. Often, one ICU professional is the most dedicated to the intervention. Timeliness and good quality of the data on which the feedback is based contribute positively to changing decisions in practice. Involvement and commitment of ICU professionals and good communication amongst them facilitate changing the care process.


1. Introduction

Health care organisations increasingly adopt audit and feedback (A&F) about their clinical practice in order to improve the quality of care and to increase accountability [1-4]. An A&F intervention provides health professionals with an objective summary of their clinical performance on different quality indicators over a specified period of time [1, 5]. When an A&F intervention is implemented in health care, health professionals are expected to make changes in their practice when the feedback indicates that their clinical practice is inconsistent with accepted guidelines or with the performance of their peers. These changes should lead to better health outcomes and therefore to improved quality of care for patients.

Feedback reports that used to be provided on paper can nowadays also be provided electronically through a web-based quality dashboard. The interactive interface of such a dashboard creates new possibilities for health professionals to act upon their feedback, for example by registering actions to improve their clinical performance and by viewing their performance over time [6]. Moreover, feedback appears to be more effective when it is provided in a timely manner [7-9]. With a web-based quality dashboard, feedback about clinical performance and benchmark comparisons can be updated as soon as new data become available.

A Cochrane review of A&F that included 140 randomised controlled trials (RCTs) found that the effectiveness of A&F in health care is highly variable: it reported a median 4.3% absolute improvement (interquartile range (IQR) 0.5% to 16%) [1]. Different factors have been identified that might contribute to the effectiveness of an A&F intervention. For example, A&F appears to be more effective when it includes explicit targets and an action plan, is based on underlying data of good quality, and is provided more than once, both verbally and in writing [1, 10, 11]. Health professionals' intention to improve their practice might be influenced by a low baseline performance and by benchmark comparisons that they consider realistic to pursue [1, 12-15]. Furthermore, health care organisations should encourage health professionals to have a positive attitude towards change, and the organisation should be structured so that feedback is provided by a supervisor or a senior colleague [1, 10].

Despite the identification of these factors, there has been little progress in increasing A&F's effectiveness [16, 17]. Previous studies have provided little understanding of the mechanisms through which A&F influences quality of care, which may be one cause of the lack of progress made towards effective A&F [16, 18, 19]. Once the success factors of the use of an A&F intervention are known, better A&F interventions can be designed, which could contribute to improvements in the quality of care [17]. The challenge, therefore, is to increase our understanding of the mechanisms of use of A&F interventions, so that success factors and barriers to implementation can be identified.

Process evaluations are typically used to achieve this understanding: they can clarify causal mechanisms in the use of the intervention and explain variation in outcomes by identifying contextual factors [20, 21]. Process evaluations traditionally adopt qualitative research methods such as interviews, focus groups or observations. Qualitative methods can be labour-intensive and time-consuming, because researchers need to spend time collecting the data and even more time transcribing it afterwards [22, 23].

The electronic nature of modern A&F systems creates new opportunities to study the mechanisms of use of A&F interventions, specifically quantitatively through the analysis of log data. A quantitative process evaluation can identify interesting pathways or obstructions, creates a fine-grained picture of actual use in the real world, and could contribute to optimising the effectiveness of A&F interventions [5]. Analysing log data avoids the selection bias that can affect qualitative methods, in which it is not always possible to interrogate the complete relevant population or to obtain a properly randomised sample [24]. It also avoids the response bias that can occur in qualitative methods, caused by, for example, the way questions are asked, the behaviour of the interviewer and the desire of participants to give socially desirable answers [25-27].

A mixed-methods approach combining a quantitative and a qualitative method could provide complementary insights into the interactions of health professionals with an A&F intervention [28]. The quantitative method would reveal that certain events occurred, while the qualitative method would reveal the reasons for these events occurring [5]. Quantitative data could also inspire more specific questions for the qualitative method, making the qualitative method less labour-intensive and time-consuming and its data more structured. For example, if a quantitative evaluation shows that no interaction with the A&F intervention system took place, a follow-up qualitative evaluation may explore the health professionals' reasons for not interacting with the system.


The primary aim of this study is to understand how health professionals use an electronic A&F intervention in clinical practice, with or without a toolbox with predefined improvement actions. We answered the following sub-questions:

1. a) How often do they interact with the electronic A&F intervention system and what is the duration of these interactions?
   b) What are barriers and facilitators to interacting with the system?
2. a) What information do they receive from the A&F intervention system?
   b) What are barriers and facilitators to receiving information from the system?
3. a) How often do they change decisions in their practice after receiving information from the A&F intervention system?
   b) What are barriers and facilitators to changing decisions in their practice?
4. a) How many decisions lead to an altered care process in their practice?
   b) What are barriers and facilitators to altering their care process?
5. How do the answers to the above questions differ between health professionals with access to a toolbox that includes predefined improvement actions and health professionals without access to this toolbox?

This study is part of a larger study evaluating the effectiveness of an A&F intervention in Dutch intensive care units (ICUs) through the addition of an action implementation toolbox [29]. The A&F intervention comprises the use of a web-based quality dashboard that provides ICU teams with feedback on a set of pain management quality indicators. The toolbox provides a set of predefined possible bottlenecks in the care process or organisation, and predefined improvement actions to resolve them in order to improve pain management in the ICU. We analysed log data of the dashboard and conducted semi-structured telephone interviews with each ICU team's contact person.


2. Methods

2.1 Study context

2.1.1 ICUs in the Netherlands

The setting of this study is Dutch ICUs. Dutch ICUs are mixed medical-surgical units and the majority of them have a closed format, which means that intensivists are the patients' primary attending physicians and are available 24 hours a day, 7 days a week. All ICUs (n = 83) in the Netherlands provide data about their ICU admissions to the National Intensive Care Evaluation (NICE) foundation.

The NICE foundation was launched by intensivists in 1996 [30]. NICE provides a registry that enables ICUs to monitor and improve their quality of care [30, 31]. NICE provides every ICU with feedback reports twice a year on indicators such as mortality and length of ICU stay, and with access to an online tool in which the ICUs can perform analyses on their own data [30, 32]. NICE works together with the department of Medical Informatics of the Academic Medical Center (Amsterdam, the Netherlands). This department is responsible for importing the data into a central database, reporting quality indicators back to the ICUs and performing additional analyses on the data [30].

2.1.2 NICE dashboard audit and feedback intervention

NICE has developed a new web-based dashboard through which Dutch ICUs receive feedback on indicators in four domains: pain management, blood transfusions, antibiotic use and mechanical ventilation. Feedback on the indicators is refreshed every time new data are uploaded, which is usually monthly. So far, four indicators in the domain of pain management have been implemented in the dashboard; the other three domains will be added later. All materials in the dashboard were designed after a literature review and in consultation with ICU experts [33].

ICU professionals can monitor how their team performs on the four different pain indicators by examining the score achieved by their own ICU, the median score of all Dutch ICUs and the average score achieved by the top 10% best performing Dutch ICUs, all calculated based on the last three months [15]. The dashboard also shows a performance assessment for every indicator by a coloured icon (green = good performance; yellow = room for improvement; red = improvement strongly recommended). The upper half of the screenshot of the dashboard in Figure 1 shows the four pain indicators, the indicator scores and the coloured icons.

The indicators

Table 1 describes the four indicators in the domain of pain management that were developed by Roos-Blom et al. [33] on commission from NICE and that have been implemented in the dashboard so far. Each indicator has a numerator and a denominator that are used to calculate the indicator score. Indicators 1 and 3 target the process of care, while indicators 2 and 4 target the outcome of care.

Table 1 - Quality indicators in the domain of pain management, developed by Roos-Blom et al. [33]

1. Perform pain measurements each shift (process indicator)
   Numerator: sum of the total number of patients per shift with at least one pain measurement.
   Denominator: sum of the total number of patients per shift.

2. Achieve acceptable pain scores (outcome indicator)
   Numerator: sum of the total number of patients per shift with acceptable scores for all pain measurements.
   Denominator: sum of the total number of patients per shift with at least one pain measurement.

3. Repeat pain measurements in case of unacceptable scores within one hour (process indicator)
   Numerator: sum of the total number of patients per shift with at least one pain measurement per shift and for whom, after a pain measurement with an unacceptable score, pain was re-measured within one hour.
   Denominator: sum of the total number of patients per shift with at least one pain measurement per shift and an observed unacceptable pain score.

4. Normalise unacceptable pain scores within one hour (outcome indicator)
   Numerator: sum of the total number of patients per shift with at least one pain measurement per shift and for whom, after a pain measurement with an unacceptable score, pain was re-measured within one hour and normalised.
   Denominator: sum of the total number of patients per shift with at least one pain measurement per shift, an observed unacceptable pain score and for whom pain was re-measured within one hour.
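
To make the scoring concrete: each score can be read as the percentage of the denominator group that also satisfies the numerator. The formula and the worked numbers below are illustrative only (the 0-100 scale is inferred from the target value of 100 mentioned in section 3.2.2), not figures from the study.

\[
\text{score}_i = 100 \times \frac{\text{numerator}_i}{\text{denominator}_i}
\]

For example, for indicator 1, an ICU with 240 patient-shifts in the feedback period, of which 150 included at least one pain measurement, would score 100 × 150/240 ≈ 63.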


The pages

For each indicator four pages can be viewed: ‘Details’, ‘Patients’, ‘Info’ and ‘Action plan’. Appendix A includes screenshots of the four different pages. The ‘Details’ page shows a graph of the average scores of a specific indicator over time and provides the possibility to view the patient-lists. The 'Patients' page provides data analyses on different patient subgroups (e.g. surgical patients and patients during day, evening or night shifts). The 'Info' page provides background information about the indicator, such as how the score of the indicator was determined (the numerator and denominator and in- and exclusion criteria for patients included in the data).

The ‘Action plan’ page includes the action plan, in which ICU teams can define actions for each indicator (see bottom half of Figure 1). The idea is that the score of the indicators will be improved by performing these actions. When ICU professionals define an action, they can ‘create’ the action by determining a title and a deadline, assigning the action to certain colleagues and adding a short description. After creating the action, they have the possibility to ‘complete’ or ‘cancel’ the action. To develop structured action plans, ICU teams can also define bottlenecks in their care process or organisation for each indicator that are related to the actions.

In our study, ICUs in the control group had access to an empty action plan template from the start of their use of the dashboard, while ICUs in the intervention group had access to an action plan with the addition of an action implementation toolbox. This action implementation toolbox includes 18 unique predefined bottlenecks, 26 unique predefined actions and supporting materials for six actions to facilitate their implementation (e.g. promotional posters, educational PowerPoint presentations and protocols). The predefined bottlenecks and actions are expert-based and literature-driven [33]. On average, the toolbox lists twelve predefined bottlenecks per indicator, and one unique bottleneck can be listed under multiple indicators. When the dashboard is used for the first time, all 18 unique predefined bottlenecks are already selected. ICU teams can deselect potential bottlenecks from the predefined list if they consider them not relevant for their ICU. The predefined actions for all indicators are associated with the predefined bottlenecks within the toolbox. When users decide to start working on a predefined action from the toolbox, they can activate the action and assign a deadline and responsible persons to it. The action is then moved to the action plan.

Before an ICU starts using the dashboard, a kick-off meeting with members of the ICU's quality improvement (QI) team and NICE employees with expertise on the dashboard takes place. During this kick-off meeting, the functionality of the dashboard is explained and a first step in setting up the action plan is taken to ensure that all the QI team members understand the use of the dashboard. After the kick-off meeting they can officially start using the dashboard. Every member of the ICU's QI team has access to the dashboard with their personal login account.

Figure 1 – NICE dashboard (translated from Dutch), the action plan of the fourth indicator. From Roos-Blom et al. [6]

2.2 Theoretical framework

Coiera’s information value chain framework [34] describes the mechanism through which health informatics interventions, such as the NICE dashboard, lead to improved health outcomes. The chain starts with interaction of the user with the system. Some of these interactions might provide the user with information. This information can encourage the user to change decisions in their clinical practice, which can lead to an altered care process. Finally, the altered care process might lead to changes in health outcomes.

Figure 2 describes the information value chain when applied to the NICE dashboard. The ICU professional logs into the dashboard and interacts with the system (Interaction). The dashboard provides the ICU professional with information during these interactions (Information received). For example, the ICU professional receives information about their indicator scores and possible actions they can perform to improve their scores. Ideally, if there is room for improvement, this received information will encourage them to change decisions in their practice by creating actions in the action plan (Decision changed). Performing and completing these actions should lead to an altered care process (Care process altered), which could lead to better care and health outcomes for the patients (Outcome changed) and thereby to higher indicator scores.

Evaluation can take place at each of the five steps in the information value chain [34]. Evaluating the different steps is of interest when, for instance, no change in health outcomes is found after the implementation of an intervention. For example, when the user has not logged into the system (Interaction), the next four steps of the chain are never reached and it is understandable that the use of the system has not affected the health outcomes (Outcome changed).

By analysing log data of the NICE dashboard, we could evaluate the interactions of ICU professionals with the dashboard, the information received from the dashboard, the created and completed actions, and the changes in outcomes. Qualitatively, we could determine why they chose to make certain decisions: what made them interact with the system, and what received information led them to change their decisions in clinical practice and to alter their care process? In this study, we only evaluated the first four steps of the information value chain, because measuring the influence of the first four steps on the fifth step (Outcome changed) would require a longer observation period.
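
As an illustration of how log events can be mapped onto these steps, the sketch below uses a simplified, hypothetical log format; the event names and fields are illustrative only and do not describe the actual NICE dashboard log schema.

```python
from collections import Counter

# Hypothetical, simplified log records; the real NICE dashboard log schema is not
# described here, so the event names and fields below are illustrative only.
log_events = [
    {"icu": "I1", "date": "2017-02-03", "event": "login"},
    {"icu": "I1", "date": "2017-02-03", "event": "view_details", "indicator": 1},
    {"icu": "I1", "date": "2017-02-03", "event": "action_created", "indicator": 1},
    {"icu": "I1", "date": "2017-03-07", "event": "action_completed", "indicator": 1},
]

# Map each event type onto the first four steps of Coiera's information value chain.
VALUE_CHAIN_STEP = {
    "login": "Interaction",
    "view_details": "Information received",
    "view_patients": "Information received",
    "view_info": "Information received",
    "view_action_plan": "Information received",
    "action_created": "Decision changed",
    "action_completed": "Care process altered",
}

step_counts = Counter(VALUE_CHAIN_STEP[e["event"]] for e in log_events)
print(step_counts)  # one event observed at each of the four evaluated steps
```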


2.3 Study design and data collection

As mentioned in section 1, this study is part of a larger RCT evaluating the effectiveness of an A&F intervention through the addition of an action implementation toolbox [29]. Because of the RCT nature of the larger study, our participants were also divided into an intervention and a control group: the intervention group had access to the toolbox with predefined improvement actions and bottlenecks, while the control group had no access to the toolbox (see section 2.1.2).

Our study was performed with mixed methods: a quantitative and a qualitative method. We collected both quantitative and qualitative data to determine the ICU professionals' use of the intervention. To collect quantitative data, we followed the ICUs over time by analysing log data of the dashboard. The log data of the dashboard were recorded in a database and consisted of information about the ICU professionals' interaction with the dashboard, such as logins, mouse clicks and changes made in the action plan.

Two researchers of this study collected the qualitative data by conducting monthly semi-structured telephone interviews with one member of every ICU's QI team. The phone calls lasted about fifteen minutes and started one month after the ICU's kick-off meeting of the RCT. The two researchers used a summary report, extracted from the log data, to prepare for the interview. These summary reports showed the activities of the ICU's QI team in the dashboard since their kick-off meeting and contained information about the frequency and duration of logins, the indicators and pages viewed, and activities in the action plan. The topics discussed during the phone calls were: how they used the dashboard in practice, how the QI team collaborated, their perceived usefulness of the quality indicators and the various dashboard components, and their progress with respect to creating and completing actions. A brief outline of the phone call was sent for review to the contact person with whom the call was held.

2.4 Participants

We included the same participants in our study as the larger RCT [29]. ICUs were eligible if they allocated a QI team with at least one intensivist, one nurse and one contact person for NICE research, and if they submitted data of sufficient quality to the NICE registry every month [29, 35]. The submitted data were of sufficient quality when the retrospective dataset had a case-completeness of at least 95% and an item-completeness of 100%. Every QI team was asked to spend at least four hours per month on the intervention.


2.5 Outcomes of interest

Our outcomes of interest were related to the first four steps of Coiera’s information value chain (Figure 2): interaction, information received, decision changed and care process altered.

1. Interaction

We were interested in the number and duration of login sessions, and in facilitators and barriers to interacting with the dashboard. The topics discussed during the phone calls about the use of the dashboard and the collaboration of the QI team could explain the quantitative data about the frequency and duration of login sessions.

We defined one login session as one day on which a user logged in at least once. Thus, when a user logged in three times on one day, we still counted this as one login session. We also defined the duration of a login session as the total time logged in on that day. The reason for doing so was that users were logged out automatically when no interaction with the dashboard was observed for thirty minutes. It was therefore possible that an ICU professional, busy with other work, could not complete their interaction with the dashboard within those thirty minutes and had to log in multiple times on one day.
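
A minimal sketch of this session definition, assuming a simplified log of individual visits (hypothetical field layout): visits by the same user on the same day are collapsed into one session whose duration is the summed time logged in.

```python
from collections import defaultdict
from datetime import date

# Hypothetical raw login records: (user, day, minutes logged in during that visit).
raw_logins = [
    ("nurse_a", date(2017, 2, 3), 10),
    ("nurse_a", date(2017, 2, 3), 8),      # second visit on the same day
    ("intensivist_b", date(2017, 2, 5), 25),
]

# One login session = one user-day with at least one login;
# its duration = the total time logged in on that day.
sessions = defaultdict(int)
for user, day, minutes in raw_logins:
    sessions[(user, day)] += minutes

print(len(sessions))                              # 2 login sessions
print(sessions[("nurse_a", date(2017, 2, 3))])    # 18 minutes on that day
```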

2. Information received

We were interested in the number of times the indicator scores, the indicator 'Details' pages and the other three indicator pages were viewed, and in facilitators and barriers to receiving information from the dashboard. The topic discussed during the phone calls about the perceived usefulness of the indicators and the various dashboard components could explain the quantitative data on why certain indicators and pages were viewed.

When a certain page was viewed multiple times on one day, we counted it as one view. When the user clicks on an indicator in the dashboard, that indicator's 'Details' page is shown automatically. Therefore, the number of views of an indicator was equal to the number of views of the 'Details' page of that indicator. We also determined how the colours of the indicator icons were distributed over the study period and how often the 'Details' pages were viewed when the indicator icons had a certain colour.

3. Decision changed

We were interested in the number of (predefined and self-defined) actions that were created during the study period and in facilitators and barriers to changing decisions in their clinical practice. The topic discussed during the phone calls about their activities in the action plan could explain the quantitative data on why they did or did not change decisions in their practice.


We determined how many actions were created for each indicator and what colour the indicator icons had when the actions were created. We evaluated which predefined actions from the toolbox were chosen by the intervention ICUs, and to what extent the self-defined actions of the control ICUs were similar to the predefined actions of the toolbox. We also determined the number of selected and deselected bottlenecks.

4. Care process altered

We were interested in the number of completed and cancelled (predefined and self-defined) actions during the study period, the duration between creating and completing actions, and in facilitators and barriers to altering their care process. The topic discussed during the phone calls about their activities in the toolbox could explain the quantitative data on why they did or did not alter their care process.

2.6 Data analysis

2.6.1 Quantitative evaluation

To determine whether there were statistically significant differences between the intervention and control group on the quantitative outcome measures, we performed Mann-Whitney U tests. When more than two groups were compared with each other (e.g. differences between the four indicators), we performed the Kruskal-Wallis test. These tests were chosen because the data were continuous and divided into unpaired groups. We also assumed that the data were not normally distributed, because of the small sample sizes. All analyses were performed using R v.3.3.2 (R Foundation for Statistical Computing; Vienna, Austria). The amount of available data differed per ICU, since it depended on the date on which the ICU started using the dashboard. A difference was considered statistically significant when the p value was ≤ 0.05.
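
The analyses were performed in R; purely as an illustration of the same two tests, the sketch below uses Python with SciPy and made-up numbers, not the study's data.

```python
from scipy import stats

# Made-up monthly login counts per ICU, for illustration only.
intervention = [5.1, 8.2, 2.4, 6.0, 3.9]        # 5 intervention ICUs
control = [3.3, 1.8, 4.0, 2.2, 5.5, 3.0]        # 6 control ICUs

# Two unpaired groups: Mann-Whitney U test.
u_stat, p_two = stats.mannwhitneyu(intervention, control, alternative="two-sided")

# More than two groups (e.g. one value per ICU for each of the four indicators):
# Kruskal-Wallis test.
ind1, ind2, ind3, ind4 = [12, 9, 14], [7, 6, 10], [8, 5, 9], [4, 6, 5]
h_stat, p_four = stats.kruskal(ind1, ind2, ind3, ind4)

print(f"Mann-Whitney U: p = {p_two:.2f}; Kruskal-Wallis: p = {p_four:.2f}")
```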

To determine whether the control group defined actions similar to the predefined actions of the toolbox, two researchers together evaluated all self-defined actions from the control group and compared them to the predefined actions from the toolbox. To determine the exact number of self-defined actions, the two researchers analysed whether an entered action actually consisted of one or more actions, whether an ICU had created multiple actions for one indicator that were in fact the same action, and whether an ICU had created the same action twice for different indicators. We divided the self-defined actions into the same categories of Flottorp et al. [36] into which the predefined actions of the toolbox were divided by Roos-Blom et al. [33].


2.6.2 Qualitative evaluation

To analyse the qualitative data, we used context-mechanism-outcome configurations (CMOCs) to develop causal pathways and to evaluate the mechanisms through which contextual factors influenced our A&F intervention [37]. The context is a circumstance that can make a certain mechanism work, and this mechanism influences the outcomes of the intervention. Because we evaluated the first four steps of Coiera's information value chain, we developed CMOCs for each step, with that step as the outcome of the CMOC. This means that we evaluated contexts and mechanisms that influenced: interaction with the dashboard; receiving information from the dashboard; changing decisions in clinical practice; and altering the care process. The 'outcome' of a CMOC should not be confused with the fifth step of Coiera's information value chain, 'Outcome changed'.

As an example, a meeting with the QI team (context) in which they discuss the feedback of the dashboard (mechanism) may lead to interaction with the dashboard (outcome = Interaction). Conversely, a busy clinical practice, absence of colleagues and competing priorities (contexts) can hamper the planning of a team meeting (mechanism) in which new actions would have been created (outcome = Decision changed). As another example, when there is no good communication between the QI team and other ICU staff (context), this can impede the implementation of an action that requires involvement of all ICU staff (mechanism) and may prevent the care process from being altered (outcome = Care process altered).
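
To make this structure concrete, one of the configurations above could be recorded as a simple structured entry; the sketch below is illustrative only and its field names are hypothetical, not the study's actual coding scheme.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CMOC:
    """One context-mechanism-outcome configuration (illustrative structure)."""
    category: str        # category of the contextual factor, cf. Brown et al. [38]
    context: str
    mechanism: str
    outcome: str         # one of the first four steps of the information value chain
    icus: List[str] = field(default_factory=list)

example = CMOC(
    category="Organisational context",
    context="Team meetings",
    mechanism="The QI team discussed the dashboard feedback together during a meeting.",
    outcome="Interaction",
    icus=["I3", "C1"],   # illustrative subset
)
print(example.outcome)   # Interaction
```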

The findings of the phone calls were evaluated by one researcher and grouped into the four steps of Coiera's information value chain. The same researcher coded each of the CMOCs using a bottom-up approach. Next, the CMOCs were discussed with another researcher. They used an iterative and consensus-based method to generalise the findings and arrive at the final CMOCs. We mapped each CMOC's contextual factor to a category based on the factors that may explain the variability in the effectiveness of A&F, as determined by ongoing work by Brown et al. [38].


3. Results

3.1 Participant characteristics

We analysed data of all eleven ICUs that were included in the RCT at the time of this study (until 15 May 2017). Three (27%) ICUs were located in a teaching hospital and five (45%) ICUs had access to the toolbox. The first ICUs started using the dashboard in January 2017, while the last ICUs started in March 2017. The average study duration was 3.2 (± 0.8 SD) months for intervention ICUs and 2.9 (± 0.7 SD) months for control ICUs (Tables 2 and 3). Short summaries of the phone calls are presented in Appendix B. We did not carry out the same number of phone calls for all ICUs because this depended on the duration of their participation (Table 3). Table 4 shows the role of every QI team member within the ICU.

Table 2 - ICU characteristics (n = 11)

                               Intervention group (n = 5)   Control group (n = 6)   p value
Number of teaching hospitals   1 (20%)                      2 (33%)                 -
Average size of QI teams       4.2 (± 0.8 SD)               3.8 (± 1.2 SD)          0.70


Table 3 - Contextual characteristics of the ICUs

ICU*   Study duration (months)   Number of phone calls   Toolbox   QI team size
I1     4.0                       3                       Yes       5
I2     3.7                       3                       Yes       4
I3     3.5                       3                       Yes       4
I4     3.0                       2                       Yes       5
I5     2.0                       1                       Yes       3
C1     3.7                       3                       No        5
C2     3.6                       3                       No        4
C3     3.2                       3                       No        5
C4     2.9                       2                       No        3
C5     2.5                       2                       No        2
C6     1.7                       1                       No        4

*ICUs I1-I5 are intervention ICUs and C1-C6 are control ICUs.

Table 4 – Roles of QI team members within the ICU

ICU   Medical head   Intensivist   Team leader   IC/HC* nurse   Quality staff   Policy staff   Data manager   ICT   CC*
I1    1              -             1             2              -               1              -              -     -
I2    -              2             -             1              -               -              1              -     -
I3    -              2             -             1              1               -              -              -     -
I4    -              1             1             2              -               -              -              -     1
I5    -              2             -             1              -               -              -              -     -
C1    1              1             -             1              -               -              -              -     2
C2    -              1             -             2              -               -              1              -     -
C3    -              2             1             1              1               -              -              -     -
C4    -              1             -             2              -               -              -              -     -
C5    -              2             -             -              -               -              -              -     -
C6    -              1             -             1              1               -              -              1     -


3.2 Use of intervention

3.2.1 Interaction

In total, all ICUs together logged into the dashboard 144 times: the intervention group 84 times and the control group 60 times. The ICUs logged in an average of 4.1 (± 3.0 SD) times per month: the intervention ICUs an average of 5.1 (± 3.6 SD) times per month and the control ICUs an average of 3.3 (± 2.3 SD) times per month. The average duration of login sessions for both groups was 18 (± 12 SD) minutes: 14 (± 7 SD) minutes in the intervention group and 20 (± 15 SD) minutes in the control group. One control ICU (C6) had one much longer session of 47 minutes, while the remaining sessions of the control ICUs lasted on average 15 (± 9 SD) minutes. Table 5 describes the number and duration of login sessions for both groups. Figure 3 shows the number of login sessions of every ICU during the study period and Figure 4 shows the total duration of login sessions per month for every ICU.

Nine ICUs (82%) had one member in their QI team who was responsible for more than 50% of the login sessions within that team. The team member who interacted most with the dashboard was an intensivist in four ICUs and a data manager in two ICUs; for the remaining ICUs it differed.

Table 5 - Number and duration of login sessions

                                                      Intervention group (n = 5)   Control group (n = 6)   p value
Average number of login sessions, per month per ICU   5.1 (± 3.6 SD)               3.3 (± 2.3 SD)          0.20
Average total session duration, per month per ICU     57 (± 23 SD) minutes         49 (± 30 SD) minutes    0.40


Figure 3 – Total number of login sessions of every ICU per month, during the follow-up time


Table 6 shows the experienced barriers and facilitators for the outcome: interaction with the dashboard. Nine ICUs mentioned being satisfied with the dashboard's usability, which made it easier for them to interact with the dashboard. C5 and C6 did not mention it because they barely interacted with the dashboard. The QI team of C5 had fallen apart, and they lacked a person in their team who was dedicated to pain management and the dashboard: they spent more time finding new QI team members than interacting with the dashboard. ICU C6 already had another dashboard in their ICU for insight into their pain scores and considered their scores already high enough. Six ICUs (I1, I2, I4, C1, C2 and C4) interacted with the dashboard because they were interested in seeing how they scored on the indicators. Five ICUs (I1, I3, I4, C1 and C2) held a meeting with members of the QI team in which they discussed the dashboard and interacted with it together. Only C3 held a team meeting that did not lead to interaction with the dashboard: they spent more time discussing how to improve the data quality in the dashboard, which they perceived as low.

Six different contextual factors were mentioned that hampered the planning of a meeting with the QI team. Five ICUs (I1, C1, C2, C3 and C4) said that the different working shifts of the team members made it difficult to plan a meeting. Three ICUs (I1, C3 and C5) explained that a busy clinical practice made it difficult to plan a meeting and to interact with the dashboard individually. Three ICUs (I3, I4 and I5) perceived holding a meeting as having a low relative advantage: I4 mentioned awaiting the effect of the completed actions before holding the next team meeting.


Table 6 – Contexts (split into facilitators and barriers) and mechanisms for the outcome: interaction with the dashboard

Facilitators
- Usable system (feedback message characteristics): High usability of the dashboard made it easy and fun to interact with the dashboard. ICUs: I1, I2, I3, I4, I5, C1, C2, C3, C4.
- Perceived relative advantage (recipient characteristics): ICU professionals were interested in viewing how they scored on the indicators and therefore interacted with the dashboard. ICUs: I1, I2, I4, C1, C2, C4.
- Team meetings (organisational context): Members of the QI team were present during team meetings, which was necessary to make decisions; therefore, dashboard interactions were (preferably) done during team meetings. ICUs: I1, I3, I4, C1, C2.
- Working individually or with a small team (organisational context): Working individually or with a small team was effective to prepare for an eventual QI team meeting in which bigger decisions could be made. ICU: I1.

Barriers
- Different working shifts (organisational context): It was difficult to plan a QI team meeting when QI team members were working in different shifts. ICUs: I1, C1, C2, C3, C4.
- Busy clinical practice (organisational context): Because of a busy practice, QI team members had less time to interact with the dashboard individually. ICUs: I1, C3, C5.
- Busy clinical practice (organisational context): Because of a busy practice, no QI team meeting could be held or planned yet. ICUs: I1, C3, C5.
- Low (perceived) relative advantage for holding a team meeting (recipient characteristics): ICU professionals deemed it unnecessary to hold a team meeting, for example when they were still waiting to see results from completed actions. ICUs: I3, I4, I5.
- Staff absence (external influence): Individuals did not interact with the dashboard when they were absent from the ICU. ICUs: I2, C1, C3.
- Low (perceived) data quality (audit characteristics): The QI team wanted to make sure the data in the dashboard were correct before they interacted with the dashboard or planned a team meeting. ICUs: I2, C3.
- Lack of pain management expert (organisational context): The QI team lacked a team member with pain management as their field of expertise, and the intervention did not really start until they were able to recruit such a nurse. ICUs: C3, C5.
- Staff absence (external influence): If QI team members were absent, team meetings were postponed. ICUs: I1, C3.
- Low (perceived) relative advantage of the dashboard (recipient characteristics): They already had another source of information about pain and did not need to interact with the dashboard. ICU: C6.
- Pain management expert in team from outside the ICU (organisational context): QI team members who did not work at the ICU were not allowed to have a login account for the dashboard. ICU: C3.
- Lack of QI team involvement and commitment (organisational context): Other QI team members did not interact with the dashboard and were not motivated to hold a team meeting. ICU: I5.

ICUs are listed if they mentioned the context in at least one of the monthly phone calls (phone call 1 included all ICUs; I5 and C6 were not included in phone call 2; I4, I5, C4, C5 and C6 were not included in phone call 3).


3.2.2 Information received

Over the 144 times that users logged into the dashboard, they were presented with 144 × 4 = 576 indicator values. Of these 576 values, 147 (26%) showed a green icon (good performance); 206 (36%) a yellow icon (room for improvement); and 210 (36%) a red icon (improvement strongly recommended). In 13 (2%) cases, no score or coloured icon was shown to the user due to a database error, which was solved within one week. The 'Details' pages were consulted in 61% of the cases in which a green icon was shown, in 50% for a yellow icon and in 57% for a red icon (p value = 0.40).

The average performance score on indicator 1 (pain measurements each shift) was 63 (± 18 SD); on indicator 2 (achieve acceptable pain scores) 83 (± 6 SD); on indicator 3 (repeat pain measurements) 18 (± 15 SD); and on indicator 4 (normalise pain scores) 15 (± 13 SD), while their (theoretical) target value was 100. Figure 5 shows the colour distribution of the four indicator icons for all the ICUs during the study period.

Of the 576 times an indicator value was seen, the indicator 'Details' pages were viewed 334 times (58%) in total: 190 times by the intervention group and 144 times by the control group. Of these 334 'Details' page views, the 'Details' pages of indicators 1, 2, 3 and 4 were viewed 119 (36%), 81 (24%), 72 (22%) and 62 (19%) times, respectively (p value = 0.20). Following these 334 'Details' page views, the 'Info', 'Patients' and 'Action plan' pages were viewed 53 (16%), 83 (25%) and 117 (35%) times, respectively (p value = 0.06). Table 7 shows the numbers of indicators and pages viewed for both groups.


Figure 5 - Colour distribution of the four indicator icons

Table 7 – Indicators and pages viewed in both groups

                                     Intervention group (n = 5)   Control group (n = 6)   Total of both groups
Total of logins                      84                           60                      144
1. Pain measurements each shift      63/84 (75%)                  56/60 (93%)             119/144 (83%)
   Details*                          63/63 (100%)                 56/56 (100%)            119/119 (100%)
   Info                              9/63 (14%)                   10/56 (18%)             19/119 (16%)
   Patients                          22/63 (35%)                  16/56 (29%)             38/119 (32%)
   Action plan                       27/63 (43%)                  20/56 (36%)             47/119 (39%)
2. Achieve acceptable pain scores    49/84 (58%)                  32/60 (53%)             81/144 (56%)
   Details*                          49/49 (100%)                 32/32 (100%)            81/81 (100%)
   Info                              8/49 (16%)                   4/32 (13%)              12/81 (15%)
   Patients                          15/49 (31%)                  3/32 (9%)               18/81 (22%)
   Action plan                       17/49 (35%)                  10/32 (31%)             27/81 (33%)
3. Repeat pain measurements          42/84 (50%)                  30/60 (50%)             72/144 (50%)
   Details*                          42/42 (100%)                 30/30 (100%)            72/72 (100%)
   Info                              9/42 (21%)                   7/30 (23%)              16/72 (22%)
   Patients                          12/42 (29%)                  8/30 (27%)              20/72 (28%)
   Action plan                       19/42 (45%)                  12/30 (40%)             31/72 (43%)
4. Normalise pain scores             36/84 (43%)                  26/60 (43%)             62/144 (43%)
   Details*                          36/36 (100%)                 26/26 (100%)            62/62 (100%)
   Info                              1/36 (3%)                    5/26 (19%)              6/62 (10%)
   Patients                          2/36 (6%)                    5/26 (19%)              7/62 (11%)
   Action plan                       5/36 (14%)                   7/26 (27%)              12/62 (19%)

*When clicking on an indicator, the 'Details' page is automatically viewed.
The percentages for indicators 1-4 were calculated with the total number of logins as denominator. The percentages for the Info, Patients and Action plan pages were calculated with the number of times the 'Details' page of that indicator was viewed as denominator.


Table 8 shows barriers and facilitators for the outcome: information received from the dashboard. Six ICUs (I2, I3, C1, C2, C3 and C4) focussed on a certain indicator because they considered that indicator the most important one to improve in their practice. ICU C6 also mentioned focussing on an indicator because they considered it most important; however, they did not interact with the dashboard and thus did not receive information about this indicator (see section 3.2.1). Five ICUs (I1, I3, I4, C2 and C4) mentioned focussing on an indicator because improving on this indicator would also influence the scores of other indicators. Two ICUs (I1 and I3) focussed on an indicator because it was more actionable and therefore easier to improve on. The indicators mentioned most often as important, actionable or influential on other indicators were indicator 1 (pain measurements each shift) and/or indicator 3 (repeat pain measurements).

Seven ICUs (I1, I2, I4, C1, C2, C3 and C4) were interested in viewing their indicator scores presented on the dashboard's landing page. Five ICUs (I1, I3, I4, C1 and C2) viewed the 'Details' page because they were interested in assessing their performance scores over time. Five ICUs (I1, I2, I3, I4 and C2) mentioned being interested in the action plan, to update or simply view the actions. A lack of knowledge on what to improve (I1, C2) and a low perceived data quality (I2, I3, C3 and C4) were reasons for these ICUs to make an export of the patient-lists. Two ICUs (I5 and C2) said they would have viewed more information from the dashboard if the data had been updated: they already knew what information to expect in the dashboard.


Table 8 – Contexts (split into facilitators and barriers) and mechanisms for the outcome: information received from the dashboard

Facilitators
- Important indicator (audit characteristics): When ICU professionals considered an indicator important, they viewed more information about that indicator. ICUs: I2, I3, C1, C2, C3, C4.
- Improving on one indicator may also affect others (audit characteristics): ICU professionals focussed on this indicator and viewed information about it, because it was more effective to focus on this one first. ICUs: I1, I3, I4, C2, C4.
- Easy to improve on indicator (audit characteristics): The QI team chose to focus on the more actionable indicators, which were easier to improve on, and viewed information about these indicators. ICUs: I1, I3.
- (Perceived) low indicator score (recipient characteristics): ICU professionals wanted to improve on this indicator and viewed information about it. ICU: I5.
- Perceived relative advantage (recipient characteristics): Because ICU professionals were interested in viewing their performance assessment, they viewed the indicator scores on the dashboard's landing page. ICUs: I1, I2, I4, C1, C2, C3, C4.
- Performance trends (feedback message characteristics): ICU professionals wanted to view their performance over time. ICUs: I1, I3, I4, C1, C2.
- Actions were updated or just viewed (action characteristics): ICU professionals wanted to view their action plan and keep it up to date. ICUs: I1, I2, I3, I4, C2.
- Low (perceived) data quality (audit characteristics): ICU professionals wanted to verify the correctness of the feedback in the dashboard by viewing the patient-lists provided by the dashboard. ICUs: I2, I3, C3, C4.
- Lack of knowledge on what to improve (recipient characteristics): ICU professionals wanted to identify potential causes for low performance by identifying patients using the patient-lists or the patient subgroups. ICUs: I1, C2.

Barriers
- Information not provided by the dashboard (audit characteristics): They wanted to view the patient subgroups over a smaller period and over time, which was not possible in the dashboard: they could not receive the desired information. ICU: C2.
- Data not (frequently enough) updated (audit characteristics): Because the data were not updated (as frequently as they wanted), they were not able to view changes in their scores. ICUs: I5, C2.
- Low (perceived) relative advantage (recipient characteristics): The ICU memorised which actions were created or why their score was low and deemed it unnecessary to view the dashboard's information. ICUs: I5, C2.
- Description in dashboard not clear (audit characteristics): The description in the dashboard was not clear and they expected to receive other information than what they received from the dashboard. ICUs: I2, I3.

ICUs are listed if they mentioned the context in at least one of the monthly phone calls (phone call 1 included all ICUs; I5 and C6 were not included in phone call 2; I4, I5, C4, C5 and C6 were not included in phone call 3).


3.2.3 Decision changed

In total, all ICUs together created 54 actions; the intervention group 30 and the control group 24. We took into account that three self-defined actions consisted of multiple actions and that in seven cases an ICU created two similar actions for the same indicator. Seven of the self-defined actions from the control ICUs were similar to the predefined actions of the toolbox. Appendix C shows a list of all the predefined and self-defined actions created by the ICUs.

ICUs created on average 4.9 (± 3.0 SD) actions: intervention ICUs created on average 6.0 (± 2.9 SD) actions, of which 1.8 (± 1.6 SD) were self-defined, whereas control ICUs created on average 4.0 (± 3.0 SD) actions per ICU, all of which were self-defined as they did not have access to the predefined actions. One control ICU (C2) created ten actions, while the other control ICUs created on average three actions. Table 9 shows the activity of both groups in the action plan. Table 10 shows how many actions were created for the four different indicators, and which colours the indicator icons had at the time the actions were created.

Intervention ICUs had 18 unique predefined bottlenecks already selected in the toolbox when they first started using the dashboard (see section 2.1.2). In total, the intervention ICUs unchecked 31 predefined bottlenecks, which was on average 6.2 (± 2.3 SD) unchecked bottlenecks per ICU. Only one intervention ICU self-defined a bottleneck. Control ICUs, which had no access to the predefined bottlenecks, self-defined and selected a total of 15 bottlenecks, with an average of 2.5 (± 2.8 SD) bottlenecks per ICU.

Table 9 - Activity in the action plan

                                       Intervention group (n = 5)   Control group* (n = 6)   p value
Total number of created actions        30                           24                       0.83
   Predefined actions                  21 (70%)                     NA                       -
   Self-defined actions                9 (30%)                      24 (100%)                0.53
Average of created actions per ICU     6.0 (± 2.9 SD)               4.0 (± 3.0 SD)           0.30
   Predefined actions                  4.2 (± 1.6 SD)               NA                       -
   Self-defined actions                1.8 (± 1.6 SD)               4.0 (± 3.0 SD)           0.20

*The control group could only self-define actions.


Table 10 - Actions created in relation to the indicator and the colour of the indicator icon

                                                    Actions created (N = 54)
Indicator
   1. Pain measurements each shift                  26 (48%)
   2. Achieve acceptable pain scores                6 (11%)
   3. Repeat pain measurements                      21 (39%)
   4. Normalise pain scores                         1 (2%)
Colour of indicator icon when action was created
   Red                                              18 (33%)
   Yellow                                           24 (44%)
   Green                                            12 (22%)

Table 11 shows the experienced barriers and facilitators for the outcome: decision changed in clinical practice. All five intervention ICUs found the predefined actions of the toolbox useful and used at least one of these actions. A team meeting was mentioned by three ICUs (I3, C1 and C4) as a facilitator for creating actions, because new decisions were made during the team meetings. ICU I3 said: "When you don't come together with the team, nothing will happen". Two ICUs (C4 and I4) mentioned they had not made new decisions yet, because they wanted to do that during a team meeting that had not been held yet.

A barrier to creating new actions for four ICUs (I1, C1, C2 and C4) was that previously created actions had not yet been completed; they first wanted to complete these actions before creating new ones. Three ICUs (I3, I4 and C2) first wanted to see the effect of the already completed actions before making new decisions in their practice. A low (perceived) data quality was also a barrier to creating actions for three ICUs (I2, C1 and C3); they gave more priority to improving their data quality and attributed the low scores to incorrectness of the data rather than to actual low performance. Three ICUs (I2, I3 and C3) changed decisions in their practice in relation to pain management but had not registered them in the action plan. In the phone calls, we asked them to also register these actions in the action plan.


Table 11 – Contexts and mechanisms (split into facilitators and barriers) for the outcome: decision changed

Facilitators
- Suggested predefined actions provided by the toolbox (co-intervention characteristics): Predefined actions of the toolbox gave ICU professionals useful ideas about what they could do to improve, which would otherwise have been unknown. ICUs: I1, I2, I3, I4, I5.
- Team meetings (organisational context): During team meetings, all QI team members were present, which was necessary to make decisions for their practice, i.e. to plan actions. ICUs: I3, C1, C4.
- Good communication networks amongst team members (organisational context): An action was thought of by one member, who asked for feedback (by e-mail) from other colleagues; next, it was implemented in the dashboard. ICU: I1.
- Usable system (feedback message characteristics): High usability of the dashboard made it easy to plan actions, both self-defined and predefined by the toolbox. ICU: I1.

Barriers
- Previous actions were not yet completed (action characteristics): Participants did not plan new actions until their previous actions were completed. ICUs: I1, C1, C2, C4.
- Low (perceived) data quality (audit characteristics): Participants prioritised improving data quality over planning actions. ICUs: I2, C1, C3.
- Changes of completed actions not yet visible (action characteristics): Participants wanted to assess the impact of their previous actions on performance before planning new actions. ICUs: I3, I4, C2.
- Registration of action planning not well embedded in daily practice (organisational context): Actions were being performed but the action plan was not updated, because updating the action plan was not integrated in the team's workflow. ICUs: I2, I3, C3.
- No team meeting (organisational context): Because no meeting with the team was held, no new decisions for their practice were made. ICUs: I4, C4.
- Staff absence (external influences): If a team member was not present at the team meeting, it was difficult to define actions or to assign actions to specific persons; therefore action planning was postponed. ICU: I3.
- Busy clinical practice (organisational context): Because of a busy practice, they did not yet have the time to plan new actions. ICU: I1.
- Low (perceived) relative advantage (recipient characteristics): This ICU thought their pain scores were already good enough and did not have to change decisions in their practice. ICU: C6.

ICUs are listed if they mentioned the context in at least one of the monthly phone calls (phone call 1 included all ICUs; I5 and C6 were not included in phone call 2; I4, I5, C4, C5 and C6 were not included in phone call 3).


3.2.4 Care process altered

In total, the ICUs completed 24 of the 54 created actions: the intervention group completed 18 out of 30 (60%) and the control group 6 out of 24 (25%). ICUs completed on average 2.2 (± 3.3 SD) actions: the intervention group 3.6 (± 4.2 SD) per ICU and the control group 1.0 (± 2.0 SD) per ICU. Five of the six completed actions in the control group came from one ICU (C2), and four control ICUs had not completed any actions yet. Only one action was cancelled, by an intervention ICU. Table 12 shows the number and duration of completed actions for both groups.

Table 12 - Number and duration of completed actions

                                                   Intervention group (n=5)   Control group* (n=6)   p value
Total number of completed actions                  18 out of 30               6 out of 24            0.26
  Predefined actions                               12 (67%)                   NA                     -
  Self-defined actions                             6 (33%)                    6 (100%)               0.85
Average of completed actions per ICU               3.6 (± 4.2 SD)             1.0 (± 2.0 SD)         0.07
  Predefined actions                               2.4 (± 2.6 SD)             NA                     -
  Self-defined actions                             1.2 (± 1.6 SD)             1.0 (± 2.0 SD)         0.60
Average duration between creating and              36 (± 22 SD) days          20 (± 17 SD) days      0.05
completing actions
  Predefined actions                               44 (± 22 SD) days          NA                     -
  Self-defined actions                             26 (± 18 SD) days          20 (± 17 SD) days      0.30

*Control group could only self-define actions.
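The average durations in Table 12 follow from the time stamps the dashboard logs when an action is created and when it is marked as completed. The sketch below shows how such durations could be derived; the log entries and field names are hypothetical, as the dashboard's actual log format is not reproduced here.

    # Illustrative sketch only: hypothetical log entries with assumed field names.
    from datetime import date
    from statistics import mean, stdev

    # One record per completed action, with creation and completion dates from the log
    completed_actions = [
        {"icu": "I1", "created": date(2017, 1, 10), "completed": date(2017, 2, 20)},
        {"icu": "I2", "created": date(2017, 1, 15), "completed": date(2017, 2, 14)},
        {"icu": "C2", "created": date(2017, 2, 1),  "completed": date(2017, 2, 18)},
    ]

    # Duration in days between creating and completing each action
    durations = [(a["completed"] - a["created"]).days for a in completed_actions]

    print(f"Average duration: {mean(durations):.0f} (± {stdev(durations):.0f} SD) days")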

Table 13 shows the experienced barriers and facilitators for the outcome: care process altered. Six ICUs (I1, I3, I4, C1, C2 and C4) stated that good communication networks amongst all ICU staff contributed to altering their care process (e.g. sending newsletters to all ICU professionals with information about changes made in their care process). Involvement and commitment of all ICU staff members, including the QI team members, was a facilitator to alter the care process for four ICUs (I4, C1, C2 and C6). ICU C6 did not interact with the dashboard (see section 3.2.1), but still completed two actions that they had planned during the kick-off meeting and thereby altered their care process.

Six ICUs (I1, I2, C1, C2, C3 and C4) postponed their deadlines because completing an action took more time than they had anticipated. Four ICUs (I1, C1, C3 and C5) mentioned that completing an action was not within the control of their team: they depended on people from outside the QI team and sometimes had not yet received a response from that person. Another barrier for three ICUs (I5, C2 and C3) was the absence (e.g. due to sickness or holiday) of the team member who was responsible for completing an action. In five ICUs (I3, I5, C1, C2 and C6) an action had been completed but was not yet registered in the action plan. During the phone calls, we asked them to register the completed actions in the dashboard.

Table 13 – Contexts (split into facilitators and barriers) and mechanisms for the outcome: care process altered

Facilitators
- Good communication networks amongst ICU staff (organisational context): awareness of altered care processes was disseminated through newsletters, e-mails and staff meetings. ICUs: I1, I3, I4, C1, C2, C4.
- Staff involvement and commitment (organisational context): ICU professionals with an active attitude towards altering the care process made it easy to implement changes in their practice. ICUs: I4, C1, C2, C6.
- Achievable actions with quick deadlines (action characteristics): actions were divided into short weekly actions that were quickly completed, which motivated the team to create and complete more actions. ICUs: C2.
- Usable system (feedback message characteristics): the structured action plan provided a clear overview of what actions had to be completed and by when. ICUs: C1.

Barriers
- Underestimation of time needed for completing actions (action characteristics): completing actions took more time than expected, which is why the care process could not be altered yet. ICUs: I1, I2, C1, C2, C3, C4.
- Actions not within the control of the team (action characteristics): QI team members had no control over completing actions that were assigned to people outside the team (e.g. the IT department). ICUs: I1, C1, C3, C5.
- Registration of action completion not well embedded in daily practice (organisational context): actions were being executed, but the action plan was not updated because updating the action plan was not integrated in the team's workflow. ICUs: I3, I5, C1, C2, C6.
- Staff absence (external influences): the team member responsible for the action was absent and could not complete the action. ICUs: I5, C2, C3.
- Lack of staff involvement and commitment (organisational context): the action was more difficult to implement because staff lacked an active attitude towards altering the care process. ICUs: I5, C3, C4.
- Busy clinical practice (organisational context): because of competing priorities, actions could not be completed yet. ICUs: I1.
- Low (perceived) data quality (audit characteristics): QI team members prioritised improving data quality over implementing actions. ICUs: I2, C3.
- Repeatable action (action characteristics): the action was going to be repeated over time (e.g. sending newsletters), so it was not marked as completed in the dashboard. ICUs: I1.
- Difficulty of completing the action (action characteristics): the action was difficult to complete, which is why the care process could not be altered yet. ICUs: C6.

ICUs listed are those that mentioned the context in at least one of the three monthly phone calls (phone call 1: all ICUs; phone call 2: all except I5 and C6; phone call 3: all except I4, I5, C4, C5 and C6).

