
University of Groningen

Quality improvement in radiology reporting by imaging informatics and machine learning

Olthof, Allard

DOI: 10.33612/diss.168901920

IMPORTANT NOTE: You are advised to consult the publisher's version (publisher's PDF) if you wish to cite from it. Please check the document version below.

Document Version

Publisher's PDF, also known as Version of record

Publication date:

2021

Link to publication in University of Groningen/UMCG research database

Citation for published version (APA):

Olthof, A. (2021). Quality improvement in radiology reporting by imaging informatics and machine learning.

University of Groningen. https://doi.org/10.33612/diss.168901920

Copyright

Other than for strictly personal use, it is not permitted to download or to forward/distribute the text or part of it without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license (like Creative Commons).

Take-down policy

If you believe that this document breaches copyright please contact us providing details, and we will remove access to the work immediately and investigate your claim.

Downloaded from the University of Groningen/UMCG research database (Pure): http://www.rug.nl/research/portal. For technical reasons the number of authors shown on this cover page is limited to 10 maximum.


IMPROVEMENT OF RADIOLOGY REPORTING IN A CLINICAL CANCER NETWORK: IMPACT OF AN OPTIMISED MULTIDISCIPLINARY WORKFLOW

4

Allard W. Olthof, Jaap Borstlap, Wilfried W. Roeloffzen, Petra M.C. Callenbach, Peter M.A. van Ooijen


Abstract

Purpose

To assess the effectiveness of implementing a quality improvement project in a clinical cancer network directed at the response assessment of oncology patients according to RECIST-criteria.

Methods

Requests and reports of computed tomography (CT) studies from before (n=103) and after (n=112) implementation of interventions were compared. The interventions consisted of: a multidisciplinary working agreement with a clearly described workflow; subspecialisation of radiologists; adoption of the Picture Archiving and Communication System (PACS); and structured reporting.

Results

The essential information included in the requests and the reports improved significantly after implementation of the interventions. In the requests, mentioning Start date increased from 2% to 49%; Date of baseline CT from 7% to 64%; Nadir date from 1% to 41%. In the reports, Structured layout increased from 14% to 86%; mentioning Target lesions from 18% to 80% and Non-target lesions from 11% to 80%; Measurements stored in PACS increased from 76% to 97%; Labelled key images from 38% to 95%; all P values < 0.001.

Conclusion

The combined implementation of an optimised workflow, subspecialisation and structured reporting led to significantly better quality of radiology reporting for oncology patients receiving chemotherapy. The applied multifactorial approach can be used within other radiology subspecialty areas as well.


Introduction

In the Netherlands, multiple hospital mergers have taken place in recent years. Improving patient outcomes should be one of the goals of a hospital merger [1], although an increased hospital size does not guarantee improved quality [2]. For radiologists, a merger offers both opportunities and challenges. The increased size of the group makes it easier to arrange subspecialties, and the increased work volume ensures that subspecialised radiologists see enough cases within their subspecialty to guarantee sufficient quality. Indeed, subspecialisation of radiologists has been reported to improve quality [3, 4]. Challenges include the different workflows between locations, which demand smart solutions, such as integration of the Picture Archiving and Communication System (PACS) at each location and organising a clinical cancer network.

Imaging plays an important role in assessing the tumour burden in patients receiving chemotherapy. Radiology reporting for this patient category can be performed using the Response Evaluation Criteria in Solid Tumours (RECIST) guidelines [5]. These guidelines describe how tumour lesions and lymph nodes can be categorised as measurable or non-measurable and how tumour response assessment can be performed by documentation of 'target' and 'non-target' lesions. A task force proposed this set of standardised criteria in 2000, in response to differences in interpretation and application of the tumour response criteria of the World Health Organisation published in 1979. The revised edition (RECIST 1.1) was published in 2009 to optimise and simplify the original criteria [6].
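For readers unfamiliar with the mechanics, the sketch below shows, in simplified form, how the RECIST 1.1 thresholds classify target-lesion response from the sum of diameters. It is our illustration of the published criteria [5], not code from the study, and it deliberately ignores non-target and new lesions.

```python
# Simplified sketch of RECIST 1.1 response categories for target lesions,
# based on the sum of diameters (SOD) in mm; thresholds per Eisenhauer et al. [5].
# A full assessment also weighs non-target lesions and new lesions.
def target_lesion_response(sod_now: float, sod_baseline: float, sod_nadir: float) -> str:
    if sod_now == 0:
        return "Complete response (CR)"
    # Progression: at least a 20% AND a 5 mm absolute increase from the nadir.
    if sod_now - sod_nadir >= 5 and sod_now >= 1.2 * sod_nadir:
        return "Progressive disease (PD)"
    # Partial response: at least a 30% decrease from the baseline SOD.
    if (sod_baseline - sod_now) / sod_baseline >= 0.30:
        return "Partial response (PR)"
    return "Stable disease (SD)"

print(target_lesion_response(sod_now=25, sod_baseline=30, sod_nadir=18))  # PD
```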

Even though the role of the radiologist is clearly described in the criteria [6, 7], it remains difficult to organise a flawless implementation of the RECIST methodology in general hospitals. In a standard, untailored PACS, it can be difficult for a radiologist to perform accurate comparisons. The reasons for this difficulty include, for example, the lack of relevant information about previous treatments and previous studies to be used for comparison. This relevant information should be provided by referring physicians. Close collaboration between oncologists and radiologists is therefore imperative.

Technical developments facilitate radiologists in viewing and reporting on oncology patients [8]. However, many improvements can still be made, as radiologists tend to underuse the possibilities of a PACS [9]. Adequate information on the imaging request, key images in the PACS, hyperlinks in the report to annotated images and structured reporting are all considered important [10]. The complex interaction of technical and human factors means that a multifactorial intervention has the highest chance of success. This view is in line with human factors engineering [11], where the technical context is optimised to reduce human error and improve quality.

The quality of radiology reporting can be improved by structured reporting. Referring oncologists prefer structured reporting of oncologic measurements, and free-form text reports may not be sufficiently accurate [12, 13].

Improving RECIST reporting can be seen in the broader context of the transition to value-based healthcare. The radiologist not only has to produce a report but must also make sure that the report adds value to the process [14, 15].

Our project describes the impact of optimisation of workflow and technical systems on quality of the response assessment of patients receiving chemotherapy in the setting of a clinical cancer network, consisting of a regional hospital with 3 locations. We sought to optimise the use of the PACS system in order to improve workflow and implemented structured reporting for response assessment.

Furthermore, we evaluate the impact of a set of interventions on the quality of the request and the radiology report, as measured by objective quality criteria. Our hypothesis is that these interventions lead to a significant improvement in the quality of the response assessment.

Methods

Project design and setting

This quality improvement project was initiated in September 2015 in the Radiology Department of Treant Health Care Group, in close collaboration with the Oncology Department. The hospital has three locations and arose from a merger, with medical specialists forming an integrated group since January 2015. Oncology care is organised in a clinical cancer network.

Before the project plan was discussed with all stakeholders (oncologists, radiologists, technicians, administrative staff, manager), a baseline survey on the quality of response reporting was conducted among referring oncologists. The results of this baseline survey were used as input for the design of the quality improvement project. In another survey, all radiologists were asked to choose between subspecialising in RECIST reporting (RECIST radiologists) and stopping RECIST reporting (general radiologists).


The new way of requesting and reporting started in February 2016. First, we describe the implementation of this quality improvement project. Then we describe the study that evaluates the results of this project.

Protocol <###>

Comparison
Baseline: <###>
Nadir: <###>
Previous: <###>

Target lesions
Lesion 1: <###>, now <###> mm, Nadir <###> mm, Baseline <###>
Lesion 2: <###>, now <###> mm, Nadir <###> mm, Baseline <###>
Lesion 3: <###>, now <###> mm, Nadir <###> mm, Baseline <###>
Lesion 4: <###>, now <###> mm, Nadir <###> mm, Baseline <###>
Lesion 5: <###>, now <###> mm, Nadir <###> mm, Baseline <###>
SOD: now <###> mm, Nadir <###> mm, Baseline <###> mm

Non-target lesions <###>

New lesions <###>

Other findings <###>

Conclusion <###>

Figure 1. Structured report.

The radiologist can activate the template by giving a voice command in the voice-recognition speech module. Pushing the insert button of the speech microphone advances the cursor to the next <###>. At the first <###> of every lesion, the anatomical location is described.

SOD = sum of diameters. In the section ‘Other findings’, all relevant other findings are listed.
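To make the template mechanics concrete, the sketch below renders the Figure 1 layout programmatically. This is a hypothetical illustration only: the actual project used a voice-recognition template in which the radiologist jumps between <###> placeholders, and all field names and example values here are invented.

```python
# Hypothetical sketch (not the project's implementation) of the Figure 1
# template as a fill-in data structure.
RECIST_TEMPLATE = """\
Protocol {protocol}
Comparison
Baseline: {baseline_date}
Nadir: {nadir_date}
Previous: {previous_date}
Target lesions
{target_lesions}
SOD: now {sod_now} mm, Nadir {sod_nadir} mm, Baseline {sod_baseline} mm
Non-target lesions {non_target}
New lesions {new_lesions}
Other findings {other_findings}
Conclusion {conclusion}
"""

def target_lesion(n: int, location: str, now: int, nadir: int, baseline: int) -> str:
    # One line per target lesion, mirroring the rows of the template.
    return f"Lesion {n}: {location}, now {now} mm, Nadir {nadir} mm, Baseline {baseline} mm"

report = RECIST_TEMPLATE.format(
    protocol="Chest/Abdominal CT",
    baseline_date="2015-05-01", nadir_date="2015-08-01", previous_date="2015-11-01",
    target_lesions=target_lesion(1, "liver segment 7", 25, 18, 30),
    sod_now=25, sod_nadir=18, sod_baseline=30,
    non_target="stable", new_lesions="none", other_findings="none",
    conclusion="progressive disease (SOD +7 mm, +39% from nadir)",
)
print(report)
```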

Working agreement and subspecialisation

The RECIST radiologists and oncologists were instructed regarding the principles of RECIST in a collaborative meeting just before the start of the project and received relevant literature [5, 16]. During this meeting, a working agreement was discussed and approved in which all steps of the workflow from request to report were described. In January 2016, we developed a structured report template based on the RECIST criteria and the discussion between radiologists and oncologists concerning the minimal requirements for both requests and reports (Figure 1).

Administrative staff were instructed to provide feedback to the Oncology Department in cases of incomplete requests. The oncology departments of all three locations were provided with a laminated copy of the working agreement, which was also available in the digital hospital system.

PACS technicians created new examination labels and a worklist for all RECIST studies to enhance visibility in the PACS (IDS7, Sectra). The order communication system indicated the required information if the RECIST label was used. In the PACS, a function was enabled that automatically (instead of manually) labelled images when measurements were added. Without a label, previous measurements performed in the images can neither be found nor displayed automatically.
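As an illustration of the routing rule this label enables, the sketch below sends labelled studies to the dedicated worklist. It is a hypothetical rendering only: the real behaviour was configured inside the PACS and the order communication system, and the study fields shown are invented.

```python
# Hypothetical sketch of label-based worklist routing; the actual routing was
# configured in the PACS (Sectra IDS7), not written in Python.
from dataclasses import dataclass, field

@dataclass
class CTStudy:
    accession: str
    labels: set = field(default_factory=set)  # examination labels in the PACS

def route(study: CTStudy) -> str:
    # Per the working agreement: studies carrying the RECIST label appear in
    # the dedicated worklist read by the RECIST radiologist of the day; all
    # other studies fall through to the general worklist.
    return "RECIST worklist" if "RECIST" in study.labels else "general worklist"

print(route(CTStudy("A001", {"RECIST"})))  # RECIST worklist
print(route(CTStudy("A002")))              # general worklist
```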

In the baseline situation until January 2016, no special requirements existed for requests or reports, and no RECIST label was used. Radiologists used free-text reporting but could manually create a structured report layout. No standardised report templates were available. CT studies from each location were reported by the radiologist who was responsible for CT on that day at that location. The work schedule was based on modality and not on subspecialty.

In the new situation (from February 2016), every day one of the RECIST radiologists was appointed to report the daily RECIST studies for all three locations, which were available in a special work list in the integrated PACS. The RECIST radiologists were instructed to use the RECIST reporting template for all studies with the RECIST label. The CT studies without the RECIST label were placed in the general worklist, independent of the reason for referral. It was possible that a CT was intended to evaluate a patient receiving chemotherapy, but that the oncologist or administrative staff accidentally omitted the RECIST label. These studies were reported by either RECIST radiologists or general radiologists. RECIST radiologists were allowed to use the RECIST reporting template for these unlabelled studies.

Data collection and outcome measures

Written informed consent was not required because patient data were used anonymously, no interventions took place for the study, and obtaining informed consent would have taken a disproportionate effort. Institutional Review Board approval was not required because of the retrospective nature of the study.


The data collection for the baseline group and the evaluation group was performed by the same author (AO) and checked for consistency by a second author (JB), both fellowship-trained abdominal/oncological radiologists. Discrepancies were resolved by consensus.

The study material consisted of requests and reports of computed tomography (CT) studies from two time periods. All CT studies requested by the oncologists of the hospital group from May to October 2015 (before the improvement project, baseline group) and from May to October 2016 (after the improvement project, evaluation group) were identified in the PACS by running a query. All these requests were screened manually, and all consecutive CT studies for evaluation of chemotherapy for solid tumours in both time periods were included, whereas CT studies requested for other reasons were excluded. All included requests and radiology reports were assessed by scoring the minimal requirements defined in the working agreement. For the requests, the oncologist, tumour type, and presence or absence of the starting date of chemotherapy, the date of the baseline CT, and the date of the nadir CT were assessed. The baseline CT date identifies the CT that should be used for comparison when the lesions decrease in size; the nadir date is the date of the CT with the largest response. For the CT studies and reports, the examination type, structured layout, description of target lesions, description of non-target lesions, measurements stored in PACS, additional key images of measurements in PACS, radiologist, and radiologist group (RECIST radiologists or general radiologists) were scored. Key images are labelled images that can be found and displayed effortlessly.

Statistical analysis

Statistical analysis was performed using IBM SPSS Statistics software version 23.

The type of radiologist (RECIST radiologist or general radiologist), the type of CT examination (Neck, Chest, Abdomen, or a combination), the tumour types, and the quality parameters of the requests and the radiology reports were compared between the baseline group and the evaluation group using the chi-squared test, or Fisher's exact test if one or more cells had expected values of less than 5. The same analyses were done by comparing the requests and reports without RECIST label (from both time periods) with those with RECIST label.

In the baseline group, the radiologists who became RECIST radiologists and those who became general radiologists were compared with each other with the chi-squared test. The comparison between the baseline group and evaluation group was also performed for RECIST radiologists and general radiologists separately using the chi-squared test.


All analyses were performed with a criterion of P < 0.01 signifying statistical significance.
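For illustration, the sketch below reproduces this style of comparison with SciPy rather than SPSS; the counts are taken from Table 4, and the choice of SciPy is ours, not the study's.

```python
# Minimal sketch of the comparisons described above, using SciPy instead of
# IBM SPSS. Counts are from Table 4 (structured layout, reports without vs
# with the RECIST label: 18/129 vs 74/86).
from scipy.stats import chi2_contingency, fisher_exact

table = [[18, 129 - 18],   # no RECIST label: structured, not structured
         [74, 86 - 74]]    # RECIST label: structured, not structured

# Fisher's exact test was used when one or more expected cell counts fell below 5.
_, p_fisher = fisher_exact(table)

# The chi-squared test was used otherwise.
chi2, p_chi2, dof, expected = chi2_contingency(table)

print(f"Fisher's exact: P = {p_fisher:.1e}")
print(f"chi-squared: P = {p_chi2:.1e} (criterion P < 0.01)")
```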

Results

Between June and October 2015, the oncologists requested 653 CT studies (477 patients). In the same period in 2016, 546 CT studies (431 patients) were requested. Of these, 103 (103 patients) were for response assessment of chemotherapy for solid tumours in the baseline group (2015), and 112 (107 patients) in the evaluation group (2016). In the evaluation group, 86 studies had the RECIST label, whereas 26 did not.

All requests in the baseline group were made by the eight oncologists of the three hospital locations. Seven of these oncologists were also part of the evaluation group, and one new oncologist participated. In the baseline group, all 16 radiologists made reports. Of these, 7 became RECIST radiologists. Four of the other 9 radiologists did not make any reports in the evaluation group, and 5 made 15 reports in total.

The radiologists who became RECIST radiologists made a higher percentage of the reports in the evaluation group than in the baseline group (87% vs 56%; P < 0.001).

There was no significant difference between the two groups in the distribution of CT types (Table 1; P = 0.452) or tumour types (Table 2; P = 0.759).

Table 1. Examination types in both groups

Baseline group Evaluation group

Neck/Chest CT 1 (1%) 2 (2%)
Neck/Chest/Abdominal CT 8 (8%) 8 (7%)
Chest CT 2 (2%) 0
Chest/Abdominal CT 77 (75%) 79 (71%)
Abdominal CT 15 (15%) 23 (21%)
Total 103 112

Quality assessment of requests

The quality of the requests improved significantly, as demonstrated by the higher percentage of requests with essential information in the evaluation group compared with the baseline group (Table 3), and especially in the group with the RECIST label.


Table 2. Tumour types in both groups.

Baseline group Evaluation group

Breast cancer 37 (36%) 32 (29%)
Colon cancer 26 (25%) 27 (24%)
Prostate cancer 10 (10%) 15 (13%)
Rectal cancer 6 (6%) 12 (11%)
Ovary cancer 6 (6%) 11 (10%)
Pancreatic cancer 8 (8%) 5 (5%)
Oesophageal cancer 3 (3%) 4 (4%)
Cholangiocarcinoma 2 (2%) 1 (1%)
Endometrial cancer 2 (2%) 1 (1%)

Urothelial cell carcinoma 1 (1%) 2 (2%)

Unknown primary 1 (1%) 1 (1%)

Renal cell carcinoma 1 (1%) 0 (0%)

Neuroendocrine tumour 0 (0%) 1 (1%)

Total 103 112

Table 3. Essential information at request.

Baseline group Evaluation group P-value

Start date 2 / 103 (2%) 43 / 112 (38%) < 0.001 (*)

Date baseline CT 8 / 103 (8%) 56 / 112 (50%) < 0.001 (*)

Nadir date 1 / 103 (1%) 35 / 112 (31%) < 0.001 (**)

No RECIST label RECIST label P-value

Start date 3 / 129 (2%) 42 / 86 (49%) < 0.001 (*)

Date baseline CT 9 / 129 (7%) 55 / 86 (64%) < 0.001 (*)

Nadir date 1 / 129 (1%) 35 / 86 (41%) < 0.001 (*)

Essential information was compared between the baseline group and the evaluation group, and between the groups with and without the RECIST label (* = Fisher’s exact test, ** = chi-squared test). In the second part all requests without RECIST label (baseline group and evaluation group) are compared with all requests with the RECIST label (evaluation group).

Start date is the date when chemotherapy started.

Quality assessment of reports

The quality of the reports improved significantly, especially in the group with the RECIST label compared with the group without RECIST label (Table 4).

In the baseline group, the RECIST radiologists performed better than the general radiologists on three items: Structured layout (P = 0.004), Target lesions in report (P = 0.007) and Non-target lesions in report (P = 0.004). In the baseline group there was no difference between RECIST radiologists and general radiologists for the items Measurements stored in PACS (P = 0.051) and Key images available (P = 0.518).

In the comparison between the baseline group and evaluation group for RECIST radiologists and general radiologists separately, the availability of labelled key images improved significantly for both groups (P < 0.001). Measurements stored in PACS did not improve in either group (RECIST radiologists P = 0.18, general radiologists P = 0.40). The other three items improved significantly for the RECIST radiologists (Structured layout P < 0.001, Target lesions in report P < 0.001, and Non-target lesions in report P < 0.001) but not for the general radiologists (P = 0.081, P = 0.73, P = 0.081).

In the evaluation group, 96.5% of the studies with a RECIST label were reported by a RECIST radiologist.

Table 4. Quality criteria of reports

Baseline group Evaluation group P-value

Structured layout 10 / 103 (10%) 82 / 112 (73%) < 0.001 (*)

Target lesions 16 / 103 (16%) 76 / 112 (68%) < 0.001 (*)

Non-target lesions 7 / 103 (7%) 76 / 112 (68%) < 0.001 (*)

Measurements stored in PACS 80 / 103 (80%) 101 / 112 (90%) 0.041 (ns) (**)

Key images 29 / 103 (29%) 102 / 112 (91%) < 0.001 (**)

No RECIST label RECIST label P-value

Structured layout 18 / 129 (14%) 74 / 86 (86%) < 0.001 (*)

Target lesions 23 / 129 (18%) 69 / 86 (80%) < 0.001 (*)

Non-target lesions 14 / 129 (11%) 69 / 86 (80%) < 0.001 (*)

Measurements stored in PACS 98 / 129 (76%) 83 / 86 (97%) < 0.001 (**)

Key images 49 / 129 (38%) 82 / 86 (95%) < 0.001 (**)

Quality criteria of reports in the baseline group and evaluation group, with and without the RECIST label. (* = Fisher’s exact test; ** = chi-squared test; ns = not significant)

Discussion

Hospital merger and subspecialisation

Even though the RECIST guidelines have been available for a long time, none of the three hospital locations in our study had implemented them in their practice, as demonstrated by the low percentage of requests and reports with essential information in the baseline group. This lack of implementation may have been due to the small group of radiologists at each location, with limited opportunity for subspecialisation as a consequence. This is a recognised and previously reported problem for small radiology departments [17]. The radiologists who joined the subspecialty group demonstrated a significant improvement in the quality of their RECIST reports, whereas the general radiologists did not improve. The higher volume of subspecialty cases read by the RECIST radiologists increased the likelihood that both a baseline scan and a follow-up scan were read by the same radiologist, which is thought to improve reproducibility [18]. All RECIST radiologists have experience in both chest and abdominal radiology, and in practice became 'cancer radiologists' rather than strict organ-system specialists.

The hospital merger provided the opportunity to subspecialise and to work with a multi-location PACS implementation. Similarly, Donnelly et al. described the importance of Information Technology infrastructure, an increased degree of subspecialisation and increased participation in quality improvement projects by radiologists after integration of radiology services at different locations [19].

Working agreement and RECIST label

Our study fits within the long line of successful attempts to improve the quality of reporting by improving the requests of referring physicians [20]. Both the requests and reports of studies with a RECIST label showed significant improvement.

PACS worklist

Part of the working agreement was the special PACS worklist for all RECIST studies. Nearly 100% of these studies were reported by RECIST radiologists, which indicates that this approach is suitable to ensure that the right examination is reported by the right radiologist. The valuable role of optimised worklists has also been described in a project for improved screening mammogram workflow [21]. The small percentage of RECIST reports made by general radiologists reflects the dilemma that occurs when RECIST radiologists are temporarily unavailable. Possibly, these general radiologists chose to finalise RECIST reports on the same day instead of leaving the examinations for a RECIST radiologist until the next day. This can be solved by optimised scheduling.

Report template and structured reporting

The significant improvements in quality that we observed are in agreement with earlier reported significant improvements in report quality after implementation of a structured report for prostate multiparametric magnetic resonance imaging (MRI) [22]. In contrast with our study, however, that project combined structured reports with a computer-aided diagnosis (CAD) tool.

Value-based healthcare

In healthcare, an increasing need for metrics to quantify healthcare quality exists. These metrics can be divided into three categories: structural, process, and outcome metrics [15]. Our study illustrates improvements measured at all three levels. The structural metric "subspecialty availability" improved because of the introduction of RECIST radiologists; the process metric "percentage adherence to practice guidelines" improved because of the adherence to the RECIST guidelines. The outcome metric "provider satisfaction" was not systematically assessed, but the oncologists indicated they were content with the project results.

Outcome measures at different levels can be related. Because we know that physicians and radiologists believe that image-rich radiology reports add value (outcome metric) [23], our improved availability of key images (structural metric) may also have led to improved satisfaction of the referring oncologists (outcome metric). When adapting the workflow, it is therefore important to recognise that certain steps in a process add value, whereas others do not. Towbin et al. have described this by explaining the value stream in a radiology department [24].

Limitations

One limitation of our study was its greater focus on the Radiology Department than on the Oncology Department. This focus likely explains why the radiology reports improved more than the oncologists' requests. More extensive communication between radiologists and oncologists could have yielded even better results.

Even though all steps in the workflow were carefully designed, a considerable percentage of requests in the evaluation group entered the regular worklist because of the absence of the RECIST label. Additional automated and manual checks at different places in the process could have reduced this problem.

Future research

An evaluation meeting where the results of the project were presented allowed mutual discussion and led to further improvements and future plans. This annual reflection is important, as described by Nagy: “In quality improvement, reflection is part of the PDCA cycle process: after the improvement idea is tested, the results are reviewed, and the team reflects on reasons for failure or success and how these lessons can be harvested to strengthen the next intervention towards achieving improvement” [25, 26].

In a future project further improvement can be expected by making the essential items of the request obligatory in the order management workflow.
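As a sketch of what such an obligatory check might look like, the hypothetical validation below blocks a RECIST order until the essential items from the working agreement are present. The field names are invented, and the real check would live in the order management system rather than in Python.

```python
# Hypothetical sketch: reject RECIST requests that miss the essential items
# (chemotherapy start date, baseline CT date, nadir CT date); not part of the
# study's actual order management configuration.
REQUIRED_ITEMS = ("start_date", "baseline_ct_date", "nadir_ct_date")

def missing_items(request: dict) -> list:
    """Return the essential items absent from a RECIST request."""
    return [item for item in REQUIRED_ITEMS if not request.get(item)]

order = {"start_date": "2016-02-01", "nadir_ct_date": "2016-05-01"}
missing = missing_items(order)
if missing:
    # The order form would refuse submission until these fields are completed.
    print("Request blocked; missing:", ", ".join(missing))
```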

Even though the visibility of RECIST CTs was improved, the display of all series at the PACS workstation remained manual. In the future, automated display protocols could improve efficiency.

The same project plan could be applied to the response assessment of haematological malignancies, as well as other fields within radiology where improvements in reporting practice are necessary.

Conclusion

The implemented combination of optimised workflow, subspecialisation and structured reporting led to significantly better quality of radiology reporting for oncology patients receiving chemotherapy. The applied multifactorial approach can be used within other radiology subspecialty areas as well.


References

1. Dafny LS, Lee TH (2015) The Good Merger. N Engl J Med 372:2077–9
2. Frakt AB (2015) Hospital consolidation isn't the key to lowering costs and raising quality. JAMA 313:345
3. Lindgren EA, Patel MD, Wu Q, Melikian J, Hara AK (2014) The clinical impact of subspecialized radiologist reinterpretation of abdominal imaging studies, with analysis of the types and relative frequency of interpretation discrepancies. Abdom Imaging 39:1119–26
4. Bell ME, Patel MD (2014) The degree of abdominal imaging (AI) subspecialization of the reviewing radiologist significantly impacts the number of clinically relevant and incidental discrepancies identified during peer review of emergency after-hours body CT studies. Abdom Imaging 39:1114–8
5. Eisenhauer EA, Therasse P, Bogaerts J, et al (2009) New response evaluation criteria in solid tumours: revised RECIST guideline (version 1.1). Eur J Cancer 45:228–47
6. van Persijn van Meerten EL, Gelderblom H, Bloem JL (2010) RECIST revised: implications for the radiologist. A review article on the modified RECIST guideline. Eur Radiol 20:1456–67
7. Nishino M, Jagannathan JP, Ramaiya NH, Van den Abbeele AD (2010) Revised RECIST guideline version 1.1: What oncologists want to know and what radiologists need to know. AJR Am J Roentgenol 195:281–9
8. Abajian AC, Levy M, Rubin DL (2012) Informatics in radiology: improving clinical work flow through an AIM database: a sample web-based lesion tracking application. Radiographics 32:1543–52
9. Jorritsma W, Cnossen F, Dierckx RA, Oudkerk M, van Ooijen PMA (2015) Pattern mining of user interaction logs for a post-deployment usability evaluation of a radiology PACS client. Int J Med Inform. doi:10.1016/j.ijmedinf.2015.10.007
10. Folio LR, Nelson CJ, Benjamin M, Ran A, Engelhard G, Bluemke DA (2015) Quantitative Radiology Reporting in Oncology: Survey of Oncologists and Radiologists. AJR Am J Roentgenol 205:W233–43
11. Siewert B, Hochman MG (2015) Improving Safety through Human Factors Engineering. Radiographics 35:1694–705
12. Travis AR, Sevenster M, Ganesh R, Peters JF, Chang PJ (2014) Preferences for structured reporting of measurement data: an institutional survey of medical oncologists, oncology registrars, and radiologists. Acad Radiol 21:785–96
13. Marcal LP, Fox PS, Evans DB, Fleming JB, Varadhachary GR, Katz MH, Tamm EP (2015) Analysis of free-form radiology dictations for completeness and clarity for pancreatic cancer staging. Abdom Imaging 40:2391–7
14. Patel BN, Gupta RT, Zani S, Jeffrey RB, Paulson EK, Nelson RC (2015) How the radiologist can add value in the evaluation of the pre- and post-surgical pancreas. Abdom Imaging. doi:10.1007/s00261-015-0549-y
15. Narayan A, Cinelli C, Carrino JA, Nagy P, Coresh J, Riese VG, Durand DJ (2015) Quality Measurements in Radiology: A Systematic Review of the Literature and Survey of Radiology Benefit Management Groups. J Am Coll Radiol 12:1173–1181.e23
16. Tirkes T, Hollar MA, Tann M, Kohli MD, Akisik F, Sandrasegaran K (2013) Response Criteria in Oncologic Imaging: Review of Traditional and New Criteria. RadioGraphics 33:1323–1341
17. Fleishon HB, Itri JN, Boland GW, Duszak R (2016) Academic Medical Centers and Community Hospitals Integration: Trends and Strategies. J Am Coll Radiol. doi:10.1016/j.jacr.2016.07.006
18. Muenzel D, Engels H-P, Bruegel M, Kehl V, Rummeny EJ, Metz S (2012) Intra- and inter-observer variability in measurement of target lesions: implication on response evaluation according to RECIST 1.1. Radiol Oncol 46:8–18
19. Donnelly LF, Merinbaum DJ, Epelman M, Grissom LE, Walters KE, Beasley RA, Gustafson JP, Choudhary AK (2015) Benefits of integration of radiology services across a pediatric health care system with locations in multiple states. Pediatr Radiol 45:736–742
20. Wilson MA (1983) Improvement in referral practices elicited by a redesigned request format. Radiology 146:677–679
21. Pham R, Forsberg D, Plecha D (2017) Improved Screening Mammogram Workflow by Maximizing PACS Streamlining Capabilities in an Academic Breast Center. J Digit Imaging 30:133–140
22. Silveira PC, Dunne R, Sainani NI, Lacson R, Silverman SG, Tempany CM, Khorasani R (2015) Impact of an Information Technology-Enabled Initiative on the Quality of Prostate Multiparametric MRI Reports. Acad Radiol 22:827–833
23. Patel BN, Lopez JM, Jiang BG, Roth CJ, Nelson RC (2016) Image-Rich Radiology Reports: A Value-Based Model to Improve Clinical Workflow. J Am Coll Radiol 14:57–64
24. Towbin AJ, Perry LA, Larson DB (2017) Improving efficiency in the radiology department. Pediatr Radiol 47:783–792
25. Kadom N, Nagy P (2016) Quality Improvement and Leadership Development. J Am Coll Radiol 13:182–3
26. Kelly AM, Cronin P (2015) Practical Approaches to Quality Improvement for Radiologists. Radiographics 35:1630–42
