University of Groningen

Quality improvement in radiology reporting by imaging informatics and machine learning

Olthof, Allard

DOI: 10.33612/diss.168901920


Publication date: 2021

Link to publication in University of Groningen/UMCG research database

Citation for published version (APA):

Olthof, A. (2021). Quality improvement in radiology reporting by imaging informatics and machine learning. University of Groningen. https://doi.org/10.33612/diss.168901920



IMPLEMENTATION AND VALIDATION OF PACS INTEGRATED PEER REVIEW FOR RADIOLOGY REPORTING

Chapter 3

Allard W. Olthof, Peter M.A. van Ooijen


Abstract

Purpose

The purpose of this work is to demonstrate the feasibility of implementing a PACS-integrated peer feedback system based on the RADPEER™ classification, providing a stepwise implementation plan that utilizes features already present in the standard PACS implementation and does not require additional software development. Furthermore, we show the usage and effects of the system during its first 30 months of use.

Methods

To allow fast and easy implementation into the daily workflow, the keyword feature of the PACS was used. This feature allows a keyword to be added to an imaging examination for easy searching in the PACS database (e.g. by entering keywords for different kinds of pathology). For peer review we implemented a keyword structure comprising a code for each of the existing RADPEER™ scoring language terms and a keyword with the phrase "second reading" followed by the name of the individual radiologist.

Results

The short-keys used to enter the peer review codes proved a simple-to-use solution. During the study 599 reports were peer reviewed. The active participation of the radiologists varied, ranging from 3 to 327 reviews per radiologist. The number of peer reviews was highest in CT and CR.

Conclusion

There are no significant technical obstacles to implementing a PACS-integrated RADPEER™ system based on keywords, allowing easy integration of peer review into the daily routine without requiring additional software. Peer review implemented in a non-random setting based on relevant priors can already help to increase the quality of radiological reporting and serve as continuing education among peers. Decisiveness, tact and trust are needed to promote use of the system and collaborative discussion of the results by radiologists.


Introduction

Quality assurance in radiology involves many issues that have been reported on previously [1–5], including the peer review of radiology reports. However, the quality of the core business of radiology, consisting of the analysis of the images and subsequent formulation of the report, is difficult to assess even by peers. Furthermore, integration of peer review into the normal workflow has been attempted before [6] but is still not readily available in many (commercial) digital reporting environments. Because of these issues, systematic assessment and peer review of reports is not yet widespread, although it is an important concept that has already demonstrated its value in improving patient safety [7–9].

In the UK the importance of peer review for increasing the quality of radiology reporting has been recognized and guidelines have been provided. The aim in the UK is to have 5% of reports systematically peer reviewed by December 2018. To achieve this, the Royal College of Radiologists (RCR) has designed guidelines on how to set up the peer feedback process, covered in three documents publicly available on the web [10–12]. The claim is that IT systems such as the Radiological Information System (RIS) should support the review, and one of the key recommendations of the RCR is:

All RIS in the UK should have a package that allows peer feedback in a time-efficient fashion. All new RIS/Picture Archiving and Communication Systems (PACS)/reporting systems installed after 2015 should include an integrated QA module, or provide a facility for electronic integration to a bespoke system.[12]

The RCR defines a discrepancy as follows:

Reporting discrepancy occurs when a retrospective review, or subsequent information about patient outcome, leads to an opinion different from that expressed in the original report [11].

Significant discrepancy rates lower the perceived quality of the radiology report by clinicians and undermine the mutual confidence between radiologists and between radiologists and clinicians. The decisions made during the diagnosis, treatment and follow-up of a patient are based on this confidence; therefore an unacceptable discrepancy rate is a risk for patient safety.

Despite the clear negative effect of discrepancies, the question remains how to act upon a detected discrepancy. This can be a difficult decision for many radiologists, since a delicate balance exists between blame and constructive feedback. For a radiologist it would be very beneficial to have adequate tools to give constructive and approachable feedback to his or her peers. According to the RCR, peer feedback includes a number of processes in a structured framework. This framework includes multidisciplinary team meetings (MDTM), dedicated discrepancy meetings and clinical audit.

The aim is to include all radiologists in the peer review process as part of their daily workflow. This can be achieved through different options, such as ad hoc routine second review, second review during MDT meetings, second review of randomly selected reports, or implementation of formal discrepancy meetings.

Another issue in the implementation of peer review is the technical and procedural requirements. Peer review should be possible in a time- and cost-effective way, but it is currently hampered by the lack of systems and tools that allow integration of peer review into routine practice.

In his series of papers on strategies for radiology reporting and communication, Reiner also deals with quality assurance and education [1]. He states that, because of the decentralized working environment of radiologists, routine interpersonal (face-to-face) communication for peer-to-peer education, quality assurance and case review has decreased. He proposes that by correctly implementing digital reading environments it should be possible to replace this interpersonal communication with a digital communication platform. Opportunities lie in peer review, real-time consultation, and post-reporting feedback, with the major requirement that the procedure is integrated in the normal workflow and non-disruptive.

In a joint discussion of all attending radiologists of a regional hospital in the Netherlands in 2012 concerning discrepancies in oncological radiology, it was decided to implement a peer review system based on the RADPEER™ protocol as proposed by Jackson [13]. This peer review protocol is published and already implemented in multiple institutions in the USA. Our goal was to avoid blaming and to use peer review to give each other feedback and the opportunity to improve quality. Peer review is the assessment of a radiology report by another radiologist; peer feedback is the communication of the peer review back to the original author of the report [9].

The purpose of this work is to demonstrate the feasibility of implementing a PACS-integrated peer review system based on the RADPEER™ classification, providing a stepwise implementation plan that utilizes features already present in the standard PACS implementation and does not require additional software development. Furthermore, we show the usage and effects of the system during its first 30 months of use and its relation with peer feedback.


Methods and Materials

To allow fast and easy implementation into the daily workflow, the keyword feature of the PACS (IMPAX 5.x, Agfa, Mortsel, Belgium) was used. This feature allows a keyword to be added to an imaging examination for easy searching in the PACS database (e.g. by entering keywords for different kinds of pathology). For peer review we implemented a keyword structure comprising a code for each of the existing RADPEER™ scoring language terms (table 1) and a keyword with the phrase "second reading" followed by the name of the individual radiologist.

Table 1. RADPEER™ keyword codes and shortkeys.

Keyword code | RADPEER™ term | Shortkey
1  | Concur with interpretation | [alt] [1]
2A | Discrepancy in interpretation/not ordinarily expected to be made (understandable miss); unlikely to be clinically significant | [alt] [2]
2B | Discrepancy in interpretation/not ordinarily expected to be made (understandable miss); likely to be clinically significant | [alt] [w]
3A | Discrepancy in interpretation/should be made most of the time; unlikely to be clinically significant | [alt] [3]
3B | Discrepancy in interpretation/should be made most of the time; likely to be clinically significant | [alt] [e]
4A | Discrepancy in interpretation/should be made almost every time (misinterpretation of finding); unlikely to be clinically significant | [alt] [4]
4B | Discrepancy in interpretation/should be made almost every time (misinterpretation of finding); likely to be clinically significant | [alt] [r]

For easy implementation, the two sets of keywords were assigned to shortkeys (table 1, figure 1).

Figure 1. Image from the radiologists' instructions showing the keyboard with the relevant shortkeys. First, the radiologist uses the combination of the purple key (alt) and the blue key (the first letter of his initials) to assign the label "second reading" and his name. Then he uses the purple key (alt) and one of the other colours: green for the label "RadPeer 1", and yellow, orange and red for "RadPeer 2", "RadPeer 3" and "RadPeer 4", respectively. The upper row is for the "A" category (discrepancy, unlikely to be clinically significant), the lower row for the "B" category (discrepancy, likely to be clinically significant).
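As an illustration of the keyword structure, the minimal sketch below models the RADPEER™ code-to-shortkey mapping of table 1 as a small Python lookup. The dictionary, the helper function and its name are our own illustration; the actual keyword set is configured in the PACS client, not in code.

```python
# Illustrative mapping of RADPEER(TM) codes to meanings and PACS shortkeys (table 1).
# This is a sketch only; the real keyword structure lives in the IMPAX configuration.
RADPEER_KEYWORDS = {
    "1":  ("Concur with interpretation", "alt+1"),
    "2A": ("Understandable miss; unlikely to be clinically significant", "alt+2"),
    "2B": ("Understandable miss; likely to be clinically significant", "alt+w"),
    "3A": ("Should be made most of the time; unlikely to be clinically significant", "alt+3"),
    "3B": ("Should be made most of the time; likely to be clinically significant", "alt+e"),
    "4A": ("Should be made almost every time; unlikely to be clinically significant", "alt+4"),
    "4B": ("Should be made almost every time; likely to be clinically significant", "alt+r"),
}

def describe(code: str) -> str:
    """Return a human-readable description of a RADPEER(TM) keyword code."""
    term, shortkey = RADPEER_KEYWORDS[code]
    return f"RadPeer {code}: {term} (shortkey {shortkey})"

if __name__ == "__main__":
    print(describe("3B"))
```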


All five radiologists working at the department were instructed to assign both the RADPEER™ keyword and their own name keyword to cases they encountered in daily practice, both for discrepant and concurrent reviews. Paper instructions were placed beside every workstation. Reports were reviewed in two possible situations. First, when a case was scheduled for a multidisciplinary team meeting, the radiologist who participated in that meeting reviewed the involved reports that were originally dictated by one of his colleagues. In this situation the timing of the review was between 1 day and 2 weeks after the first report, and before the clinicians started therapy.

The second possible situation was reviewing the report of the previous examination when a patient returned for a follow-up examination. In that case the timing of the review could be any time between 1 day and several months. With this methodology, reports were only reviewed when they were clinically relevant in the routine clinical practice of a colleague radiologist, and when that radiologist decided to perform a review of the report.

The instruction to the radiologists contained information about how to add a comment to the numerical score. This was done using the Study Comments function of the PACS.

The peer review registration was performed in addition to the regular informal incidental peer feedback. Peer review registration data were collected from July 2012 until December 2014. The total number of registered peer reviews was collected from the PACS at variable time intervals and stored in a spreadsheet.

To improve the comparability of peer review implementations, both the date of the original report on which peer review was performed and the date of the actual review were registered. The number and characteristics of the examinations involved in the peer reviews were retrospectively analyzed.
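As a sketch of how such a registration export could be tallied, the snippet below assumes a hypothetical CSV file (peer_reviews.csv) with one row per registered review and columns radiologist, modality, category, report_date and review_date; the file name and column names are illustrative and not the actual PACS export format.

```python
import csv
from collections import Counter

# Hypothetical export of keyword registrations retrieved via the PACS search function.
# Assumed columns: radiologist, modality, category, report_date, review_date.
per_radiologist = Counter()
per_modality_category = Counter()

with open("peer_reviews.csv", newline="") as f:
    for row in csv.DictReader(f):
        per_radiologist[row["radiologist"]] += 1
        per_modality_category[(row["modality"], row["category"])] += 1

print("Reviews per radiologist:", dict(per_radiologist))
print("CT reviews scored 3B:", per_modality_category[("CT", "3B")])
```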

Results

The use of the short-keys to enter the peer review codes proved a simple-to-use solution. Using the short-keys, the adjudication of a RADPEER™ category can be done within seconds. Retrieval of the peer review data using the search function of the PACS takes some minutes. Adding free text comments to the reviewed studies was not easy, because it took some additional steps for the radiologist; this functionality was barely used. Selective data retrieval of free text comments was not possible. In daily practice, registration of the peer review went along with face-to-face or email communication between radiologists for the majority of discrepant reviews.

Figure 2. Cumulative number of peer reviews performed each month. Report: original date of the reports that were reviewed. Review: date when the reviews were performed.

During the study 599 reports were peer reviewed. In figure 2 the retrospective nature of the applied method is illustrated by the distance between the two lines. On starting a retrospective RADPEER™ review system, discrepancies will be found involving the preceding years. Over time, both lines approach each other. The distance between the lines depends on the number of reports that are retrospectively reviewed from the period before implementation and on the average time between reporting and reviewing. With a method in which every report is peer reviewed the same day, both lines would be identical. In 2012 – the year with the most reviews – 58678 examinations were performed, of which 269 (0.46%) were reviewed. Of these examinations, 4944 were CT scans, of which 128 were reviewed (3%). The active participation of the radiologists varied, ranging from 3 to 327 reviews per radiologist (table 3). Radiologists C and D worked in this department until March 2013 and January 2014, respectively. "Unknown" means that a RADPEER™ keyword was assigned, but no radiologist keyword. Table 4 shows the number of RADPEER™ reviews per category and per imaging modality, clearly showing that the number of peer reviews was highest in CT and CR.
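To illustrate how the two curves of figure 2 can be derived from the registered dates, the sketch below counts cumulative reviews per month from (report_date, review_date) pairs. The example data and the month-bucketing approach are our own illustration, not the original spreadsheet analysis.

```python
from datetime import date
from collections import Counter
from itertools import accumulate

# Example (report_date, review_date) pairs; in practice these come from the PACS export.
reviews = [
    (date(2011, 5, 3), date(2012, 8, 14)),
    (date(2012, 7, 20), date(2012, 8, 15)),
    (date(2013, 1, 9), date(2013, 2, 2)),
]

def cumulative_per_month(dates):
    """Return sorted (year, month) buckets and the cumulative count up to each bucket."""
    counts = Counter((d.year, d.month) for d in dates)
    months = sorted(counts)
    return months, list(accumulate(counts[m] for m in months))

report_months, report_cum = cumulative_per_month(d for d, _ in reviews)
review_months, review_cum = cumulative_per_month(d for _, d in reviews)
print(report_months, report_cum)   # "Report" curve of figure 2
print(review_months, review_cum)   # "Review" curve of figure 2
```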


Table 3. Number of RADPEER™ reviews per radiologist.

Radiologist | Number of reviews
A | 7
B | 327
C | 3
D | 168
E | 79
Unknown | 15
Total | 599

Table 4. RADPEER™ reviews per imaging modality and total.

Category | CR | MG | US | CT | MR | RF | Total | %
RADPEER™ 1 | 31 | 2 | 6 | 121 | 65 | 0 | 225 | 38%
2A | 30 | 2 | 6 | 24 | 9 | 0 | 71 | 12%
2B | 56 | 6 | 6 | 63 | 12 | 0 | 143 | 24%
3A | 7 | 0 | 1 | 9 | 3 | 1 | 21 | 4%
3B | 33 | 3 | 14 | 50 | 15 | 2 | 117 | 20%
4A | 1 | 0 | 1 | 0 | 0 | 0 | 2 | 0%
4B | 6 | 0 | 1 | 11 | 2 | 0 | 20 | 3%
Total | 164 | 13 | 35 | 278 | 106 | 3 | 599

CR=conventional radiology, MG=mammography, US=ultrasound, CT=Computed Tomography, MR=Magnetic Resonance Imaging, RF=Fluoroscopy.
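As a check on the distribution reported in table 4 (and on the CT discrepancy rate discussed later), the sketch below recomputes the category percentages and the per-modality discrepancy rate (the share of reviews scored 2–4) from the table values; the variable names are illustrative.

```python
# Counts from table 4: reviews per RADPEER(TM) category and modality.
counts = {
    "1":  {"CR": 31, "MG": 2, "US": 6,  "CT": 121, "MR": 65, "RF": 0},
    "2A": {"CR": 30, "MG": 2, "US": 6,  "CT": 24,  "MR": 9,  "RF": 0},
    "2B": {"CR": 56, "MG": 6, "US": 6,  "CT": 63,  "MR": 12, "RF": 0},
    "3A": {"CR": 7,  "MG": 0, "US": 1,  "CT": 9,   "MR": 3,  "RF": 1},
    "3B": {"CR": 33, "MG": 3, "US": 14, "CT": 50,  "MR": 15, "RF": 2},
    "4A": {"CR": 1,  "MG": 0, "US": 1,  "CT": 0,   "MR": 0,  "RF": 0},
    "4B": {"CR": 6,  "MG": 0, "US": 1,  "CT": 11,  "MR": 2,  "RF": 0},
}

total = sum(sum(row.values()) for row in counts.values())            # 599 reviews in total
for category, row in counts.items():
    print(f"RadPeer {category}: {sum(row.values()) / total:.0%}")    # e.g. category 1 -> 38%

ct_total = sum(row["CT"] for row in counts.values())                 # 278 CT reviews
ct_discrepant = ct_total - counts["1"]["CT"]                         # categories 2-4
print(f"CT discrepancy rate: {ct_discrepant / ct_total:.0%}")        # about 56%
```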

The identification and analysis of discrepancies may lead to a rehearsal of previously learned lessons or to direct adaptations of the daily workflow in a radiology department. Illustrative cases of category RadPeer 4B are shown in figures 3 to 6.

One of the first lessons of a radiology resident is to look at previous images during reporting. Figures 3 and 4 demonstrate the value of this lesson. The evaluation of the case in figure 5 contributed to a working agreement that allowed the CT radiologist to work without interruption, while another radiologist was appointed as the first contact for phone calls from referring physicians and questions from technicians. Figure 6 demonstrates the importance of using all information that is available in the images.


Figure 3a. Abdominal CT in portal phase after contrast medium. Hypodense lesion medially in segment 8, initially explicitly diagnosed as metastasis on this single-phase examination. However, the clinical course and later imaging confirmed the benign nature of the lesion.

Figure 3b. Retrospective analysis of a prior ultrasound three years before the initial CT examination revealed an echogenic lesion of the same size at the same location. All imaging data were consistent with the diagnosis of hemangioma. Using prior examinations is mandatory to avoid misclassification of lesions. If no prior imaging had been available in this case to make the diagnosis of hemangioma, then four-phase CT or MRI should have been advised to characterize this lesion.


Figure 4a. Abdominal CT in bone setting. Sclerotic lesions were described and interpreted as metastases, but as unchanged compared with prior examinations.

Figure 4b. Comparison with a prior examination at exactly the same level revealed that the lesions in the iliac wing were new. Comparison with previous examinations should be done carefully to differentiate stable disease from progressive disease.


Figure 5. Coronal multiplanar reformation of abdominal CT. Large pelvic mass, not mentioned in the initial report, even though all anatomic areas were systematically reviewed and described. Retrospective analysis of the radiology information system showed that the radiologist made three reports for conventional radiographs between the starting and finishing time of the CT report. Interruption is a major risk factor for errors.

Figure 6a. Chest CT in lung window/level setting. Pulmonary mass in the left lower lobe, suspicious for tumor.


Figure 6b and 6c. Same CT as figure 6a, now in bone window/level setting. Lytic vertebral lesion, highly suspicious for bone metastasis, not described in the initial report. Focusing on an obvious lesion carries the risk of missing other relevant findings. It is therefore recommended to evaluate chest CT examinations not only in lung and soft-tissue window/level settings, but also in bone setting to detect bone lesions.

Discussion

The proposed RADPEER™ system is easy to implement using keywords. Radiologists can use this system without disrupting the daily workflow. The selection of cases for group discussion is quite easy, using the search function of the PACS.

A similar discrepancy registration implementation, also based on keywords in the PACS, was previously reported by Issa et al. [14]. They report on the discrepancy rate between preliminary and official reports of emergency radiology studies. Compared with this system, the advantage of our method is the direct adjudication of the grade of discrepancy by using several different keywords.

A useful framework for comparing goals and methods of peer review is provided by Fotenos et al. [15]. We will use their terminology on review richness, review adjudication, review case selection, and review timing in the discussion below.

Review Richness

In our study the review richness was a "score": the reviewer assigns a numerical score reflecting agreement or discrepancy with a report. The advantage of a score is that agreement is also recorded. A disadvantage is that a score is subjective and variable [16].

In calculating the discrepancy rate, it is questionable whether the effect of unregistered discrepancies can be neglected. Unlabeled cases may be cases without discrepancy, but also discrepant cases in which the label was omitted. The radiologists had the opportunity to add free text comments to the score. However, this free text function was barely used because of the limited functionality of the PACS. Recently a new PACS from a different vendor was implemented at our department. In this new PACS, free text comments can be added directly linked to the keyword function, possibly lowering the threshold for radiologists to use this function.

Review Adjudication

The review adjudication in our study was "closed", which means that it was based on an administrative system. Fotenos et al. contrast this with an "open" review, where discrepancies are discussed in a conference. Of course, different combinations of open and closed adjudication are possible. In our study the closed adjudication was planned to go along with a meaningful open discussion.

Review Case Selection

Our review case selection was "non-random": radiologists were involved in case selection. In the study of Issa et al. all preliminary reports of residents were reviewed, which is also non-random. Because of selection bias, studies with non-random selection should only be compared with caution. The advantage of non-random selection is the opportunity to increase efficiency by performing peer review on cases that are most likely to benefit from it. Random assignment of peer review cases leads to lower discrepancy rates, as illustrated by the 0.69% discrepancy rate in the paper of O'Keeffe [17]. This can be explained by the inclusion of large numbers of easy cases, which are unlikely to lead to discrepancies in peer review. Future studies are needed to clarify possible differences in effect between random and non-random peer review case selection.

Sheu et al. [18] advocate a mathematical model for case selection to improve the efficiency and effectiveness of peer review. They present a peer review system in which cases are selected based on relevant factors such as previous errors and the costs and likelihood of morbidity from an error. In comparing studies on peer review and peer feedback, it is important to realise that an unbiased assessment of the original report can only be made when the scorer has exactly the same clinical information and imaging studies as the original reporter. In our study the purpose was not to guarantee unbiased peer review, but to use a peer review methodology to give feedback in a structured way, and to have the opportunity to identify cases that can make an educational contribution to improving quality.
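As a loose illustration of such risk-weighted case selection (not the actual model of Sheu et al.), the sketch below scores candidate cases by combining an estimated prior error rate with the likelihood and cost of harm from an error, and picks the highest-scoring cases for review. All factor names, values and weights are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    study_id: str
    prior_error_rate: float         # estimated error rate for this study type/reader (0-1)
    likelihood_of_morbidity: float  # chance that an error would harm the patient (0-1)
    cost_of_error: float            # relative severity if harm occurs

def review_priority(c: Candidate) -> float:
    # Expected harm if this study were misread: higher values are reviewed first.
    return c.prior_error_rate * c.likelihood_of_morbidity * c.cost_of_error

candidates = [
    Candidate("CT-abdomen-001", 0.05, 0.6, 3.0),
    Candidate("CR-chest-042", 0.02, 0.2, 1.0),
    Candidate("MR-brain-007", 0.03, 0.8, 4.0),
]

# Select the top cases for peer review instead of sampling at random.
for c in sorted(candidates, key=review_priority, reverse=True)[:2]:
    print(c.study_id, round(review_priority(c), 3))
```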

Review Timing

The review timing in our study was "relevant prior", compared with "near sign-out" in the study of Issa et al., where reports were reviewed soon after preliminary interpretation. The timing of the peer review depends on goals and logistic possibilities. In case of expected high discrepancy rates and a major impact on patient management, the need for near-sign-out peer review is obvious. When near-sign-out peer review puts a major strain on the daily work routine without benefits for patient outcome, low radiologist compliance can be expected.

An issue that greatly affects the number of category 1 cases is that, when selecting cases based on a relevant prior, very few normal examinations are included in the peer review.

Finding registration

In the data of Jackson et al. [13], 97.1% of RADPEER™ reviews were category 1 (no discrepancy), compared to only 38% in our data. Furthermore, the meta-analysis of Wu et al. [19] demonstrates a pooled total discrepancy rate of 7.7% for CT, whereas our discrepancy rate for CT is 56%.

An explanation for this difference could be that in our institution the RADPEER™ review was performed and registered mainly when discrepancies were found, and otherwise omitted. This would result in a major under-registration of category 1 peer reviews.

The under-registration of A categories (discrepancies not likely to be clinically significant) also stands out in our results. This indicates that the RADPEER™ registration system was mainly used in case of clinically significant discrepancies and did not become routine every time a previous examination was encountered. Another explanation could be our use of relevant-prior review timing with non-random case selection, since this method leads to the selection of more difficult cases and diagnoses for the peer review process. Random assignment from all examinations performed would include more easy examinations and examinations without pathology, leading to a higher number of both category 1 and A-category reviews.

As a demonstration of this, CT is most commonly involved in our peer review because of the habit of reviewing the previous CT when reviewing a CT examination, and because CT is the modality most frequently discussed in multidisciplinary oncology meetings, thereby introducing a bias towards the inclusion of more complicated diagnoses. Obviously, the distribution of the modalities in which peer review is performed depends on the selection of cases within the radiology department, and the case mix within the radiology department depends on the referring medical departments. For example, the data of Brook et al. [2], in a publication on peer review in obstetric and gynaecologic imaging, demonstrate a much higher percentage of peer review in ultrasound cases (72%) compared with our data (table 4).

In the period after implementation of the system, the slope of the lines in figure 2 is steep. The lines flatten over time. A reason for the declining number of reviews per month could be declining motivation after a certain time and the fact that no broad-based arrangements were made in advance about the form and frequency of periodical group discussions of the results. This is also demonstrated by the striking difference in RADPEER™ reviews per radiologist. Lack of regulation could have contributed to the uneven use of the RADPEER™ system among the radiologists. During the project the radiologists did not receive structural periodical feedback on the number of peer reviews they performed. Based on our results, we advocate making structural feedback of the results to the participating radiologists an integral part of peer review implementation. Iyer et al. describe their positive experience with providing comment-enhanced, monthly peer-review feedback scorecards to enhance radiologists' participation and to use this feedback as an educational tool to enhance future performance [20].

One of the reasons for the disappointing involvement of some radiologists in peer review could be a lack of perceived added value of peer review. Therefore, when starting peer review, it is recommended to formulate goals in advance. As mentioned in the introduction, those targets may be imposed from outside by health insurance companies or professional associations. To ensure quality improvement, it is also important to combine peer review with peer feedback.

In the years after starting our project in 2012, several authors published their results of peer review methodologies, including user surveys.

Eisenberg et al. report that 44% of respondents agreed that the RADPEER-like system they used is a waste of time, and 58% believed it is done to meet hospital or regulatory requirements [21]. The majority of respondents of a survey distributed before the implementation of a non-anonymous peer review system agreed that peer review is important for improving patient care (86%), but most favoured anonymous peer review (92%) [22]. O'Keeffe et al. report that 94% of participating radiologists felt that the quality assurance programme was worthwhile, and 90% that the feedback they received was valuable [17]. These results support our findings that feedback is valuable, but that careful implementation is needed.

Shortcomings

One of the shortcomings of our study is that, compared to the daily workload, the RADPEER™ review was applied in only a very small percentage of cases. To reach the 5% peer review target proposed by the RCR [12], more attention should be paid to peer review. Despite the availability of paper instructions next to the radiologists' workstations, a lack of formal training in the system might have contributed to the uneven distribution of participation. Indeed, there is an association between training and the satisfaction of users of an electronic medical record system [23]. Analysis of barriers to the implementation of digital systems facilitates a targeted approach [24]. In our case an early evaluation of the system would have contributed to improving its usage.

Furthermore, a more random assignment for peer review, rather than relying solely on the willingness of the radiologists to perform the peer review, could help to increase the number of peer reviews. However, this incomplete registration is no reason to stop performing peer review. Enough 3B and 4B categories were reported and discussed to justify the efforts of this project and to increase the quality level of radiological reporting.

Another shortcoming of our system is that data retrieval was manual and not performed at regular time intervals. However, future implementations should allow automation of these tasks, depending on the abilities of the PACS [25–27].

Despite the unsuccessful implementation of group discussions in daily practice, individual feedback became more approachable during this period in the hospital of this study. General acceptance of the existence and possibility of feedback, and familiarity with the RadPeer jargon, facilitate this individual feedback.

It is recommended to secure the group discussion by making arrangements about it at the same time as implementing the technical part of the PACS integration. Depending on the local situation, RADPEER peer review can be combined with elements of the consensus-oriented group review (COGR) method [28, 29].

A possible shortcoming is the low usage of our implementation. Although the ease of use of our implementation was apparent from our experience, the low usage numbers do not allow us to provide hard evidence. The reasons for discontinuation of the current implementation in December 2014 were two external factors: a hospital merger with two other hospitals and a change of PACS. In 2016 all three hospital locations were connected to the new PACS. This, together with the increased quality awareness of the merged group of radiologists, increased scientific evidence, and increased pressure from external organizations, paved the road to a new peer feedback implementation.

Conclusion

There are no significant technical obstacles to implementing a PACS-integrated RADPEER™ system based on keywords, allowing easy integration of peer review into the daily routine without requiring additional software. Peer review implemented in a non-random setting based on relevant priors can already help to increase the quality of radiological reporting and serve as continuing education among peers.

Decisiveness, tact, and trust are needed to promote use of the system and collaborative discussion of the results by radiologists.


References

1. Reiner BI (2014) Strategies for radiology reporting and communication: part 4: quality assurance and education. J Digit Imaging 27:1–6

2. Brook OR, Romero J, Brook A, Kruskal JB, Yam CS, Levine D (2015) The complementary nature of peer review and quality assurance data collection. Radiology 274:221–9

3. Kelly AM, Cronin P (2015) Practical Approaches to Quality Improvement for Radiologists. Radiographics 35:1630–42

4. van Tubergen A, Heuft-Dorenbosch L, Schulpen G, Landewé R, Wijers R, van der Heijde D, van Engelshoven J, van der Linden S (2003) Radiographic assessment of sacroiliitis by radiologists and rheumatologists: does training improve quality? Ann Rheum Dis 62:519–25

5. Larson DB, Kruskal JB, Krecke KN, Donnelly LF (2015) Key Concepts of Patient Safety in Radiology. Radiographics 35:1677–93

6. McEnery KW, Suitor CT, Hildebrand S, Downs RL (2000) Integration of radiologist peer review into clinical review workstation. J Digit Imaging 13:101–104

7. Mahgerefteh S, Kruskal JB, Yam CS, Blachar A, Sosna J (2009) Peer review in diagnostic radiology: current state and a vision for the future. Radiographics 29:1221–31

8. Lauritzen PM, Andersen JG, Stokke MV, Tennstrand AL, Aamodt R, Heggelund T, Dahl FA, Sandbæk G, Hurlen P, Gulbrandsen P (2016) Radiologist-initiated double reading of abdominal CT: retrospective analysis of the clinical importance of changes to radiology reports. BMJ Qual Saf. doi: 10.1136/bmjqs-2015-004536

9. Strickland NH (2015) Quality assurance in radiology: peer review and peer feedback. Clin Radiol 70:1158–64

10. The Royal College of Radiologists (2014) Cancer Multidisciplinary Team Meeting – Standards for Clinical Radiologists. https://www.rcr.ac.uk/sites/default/files/publication/BFCR(14)15_MDTMs.pdf. Accessed 11 Jun 2015

11. The Royal College of Radiologists (2014) Standards for Learning from Discrepancies Meetings. https://www.rcr.ac.uk/sites/default/files/publication/BFCR(14)11_LDMs.pdf. Accessed 11 Jun 2015

12. The Royal College of Radiologists (2014) Quality Assurance in Radiology Reporting: Peer Feedback. https://www.rcr.ac.uk/sites/default/files/publication/BFCR(14)10_Peer_feedback.pdf. Accessed 11 Jun 2015

13. Jackson VP, Cushing T, Abujudeh HH, Borgstede JP, Chin KW, Grimes CK, Larson DB, Larson PA, Pyatt RS, Thorwarth WT (2009) RADPEER scoring white paper. J Am Coll Radiol 6:21–5

14. Issa G, Taslakian B, Itani M, Hitti E, Batley N, Saliba M, El-Merhi F (2015) The discrepancy rate between preliminary and official reports of emergency radiology studies: a performance indicator and quality improvement method. Acta Radiol 56:598–604

15. Fotenos A, Nagy P (2012) What are your goals for peer review? A framework for understanding differing methods. J Am Coll Radiol 9:929–30

16. Bender LC, Linnau KF, Meier EN, Anzai Y, Gunn ML (2012) Interrater Agreement in the Evaluation of Discrepant Imaging Findings With the Radpeer System. Am J Roentgenol 199:1320–1327

17. O'Keeffe MM, Davis TM, Siminoski K (2016) Performance results for a workstation-integrated radiology peer review quality assurance program. Int J Qual Health Care 7:425–430

18. Sheu YR, Feder E, Balsim I, Levin VF, Bleicher AG, Branstetter BF (2010) Optimizing radiology peer review: a mathematical model for selecting future cases based on prior errors. J Am Coll Radiol 7:431–8

19. Wu MZ, McInnes MDF, Macdonald DB, Kielar AZ, Duigenan S (2014) CT in adults: systematic review and meta-analysis of interpretation discrepancy rates. Radiology 270:717–35

20. Iyer RS, Munsell A, Weinberger E (2014) Radiology Peer-Review Feedback Scorecards: Optimizing Transparency, Accessibility, and Education in a Children's Hospital. Curr Probl Diagn Radiol 43:169–174

21. Eisenberg RL, Cunningham ML, Siewert B, Kruskal JB (2014) Survey of Faculty Perceptions Regarding a Peer Review System. J Am Coll Radiol 11:397–401

22. Loreto M, Kahn D, Glanc P (2014) Survey of radiologist attitudes and perceptions regarding the incorporation of a departmental peer review system. J Am Coll Radiol 11:1034–7

23. Alasmary M, El Metwally A, Househ M (2014) The Association between Computer Literacy and Training on Clinical Productivity and User Satisfaction in Using the Electronic Medical Record in Saudi Arabia. J Med Syst 38:69

24. Ahmadian L, Khajouei R, Nejad SS, Ebrahimzadeh M, Nikkar SE (2014) Prioritizing Barriers to Successful Implementation of Hospital Information Systems. J Med Syst 38:151

25. Forsberg D, Rosipko B, Sunshine JL (2016) Factors Affecting Radiologist's PACS Usage. J Digit Imaging 1–7

26. Forsberg D, Rosipko B, Sunshine JL, Ros PR (2016) State of Integration Between PACS and Other IT Systems: A National Survey of Academic Radiology Departments. J Am Coll Radiol. doi: 10.1016/j.jacr.2016.01.018

27. Forsberg D, Rosipko B, Sunshine JL (2016) Analyzing PACS Usage Patterns by Means of Process Mining: Steps Toward a More Detailed Workflow Analysis in Radiology. J Digit Imaging 29:47–58

28. Alkasab TK, Harvey HB, Gowda V, Thrall JH, Rosenthal DI, Gazelle GS (2014) Consensus-oriented group peer review: a new process to review radiologist work output. J Am Coll Radiol 11:131–8

29. Harvey HB, Alkasab TK, Prabhakar AM, Halpern EF, Rosenthal DI, Pandharipande P V., Gazelle GS (2016) Radiologist Peer Review by Group Consensus. J Am Coll Radiol 13:656–662


PART II
