Original article: Health Services Research and Policy
Radiologist Peer Review by Group Consensus
Introduction
Physician peer review is widely recognized as a fundamental component of health care quality assurance [1]. Experts believe that physician peer review improves clinical outcomes by monitoring the quality of care, increasing adherence to recognized standards, and creating a culture of transparency around issues of patient safety [2,3,4]. Measuring the impact of peer review programs on clinical practice and patient outcomes is difficult, however, and studies to date have been limited in scope, with mixed findings [5,6,7,8,9]. Nonetheless, a large Cochrane review found that audit and feedback interventions, including peer review, can drive quality improvement if physician feedback remains a primary focus [5].
The radiology community was an early adopter of physician peer review, with workflow-integrated peer review systems in widespread use as early as 2006 [10,11]. Since then, many accrediting bodies and third-party payers have mandated the adoption of radiologist peer review by diagnostic imaging groups [12]. RADPEER, a workstation-integrated peer review system developed by the ACR, is the earliest and most widely used peer review system in diagnostic imaging [10]. Modeled on the traditional process of double reading, RADPEER owes its widespread adoption to its ease and convenience. Yet critics argue that RADPEER is too narrow in focus and fails to address many important aspects of quality in diagnostic imaging, such as report clarity and length and the interpretation's adherence to national guidelines and standards [13]. RADPEER also has relatively weak feedback mechanisms, an essential element of effective peer review [5].
To harness the strengths of RADPEER while making peer review more robust, our department developed a novel peer review process for radiologists, known as consensus-oriented group review (COGR) [13]. The COGR process has been described in detail previously; in brief, it is a software-enabled peer review process in which groups of radiologists meet regularly to review randomly selected cases and record consensus on the acceptability of the issued reports [13]. The purpose of this study is to evaluate the feasibility of the COGR method of radiologist peer review within a large subspecialty radiology department.
Human Subjects Compliance
This retrospective, HIPAA-compliant study was approved by our institutional review board. The need to obtain patient consent was waived.
Peer Review Data Collection
The study was performed in the radiology department of a 950-bed tertiary care academic center. The department has more than 100 staff radiologists, more than 85% of whom are subspecialized by organ system, and more than 500,000 diagnostic imaging studies are performed and interpreted in the department annually. All data were collected
Peer Review Participation Metrics
Over a two-year period from October 2011 to September 2013, a total of 11,222 cases underwent COGR at 2,027 conferences attended by 83 radiologists. There was an average of 3.3 ± 1.5 conferences per subspecialty division per week, with an average of 4.3 ± 1.8 radiologists participating in each conference and 5.7 ± 3.5 cases reviewed per conference (Table 1). Individual radiologists participated in an average of 1.2 ± 0.6 conferences per week and had an average of 3.3 ± 1.7% of their available
Discussion
This study presents the first peer review data collected using COGR and demonstrates that COGR can be effectively deployed in an academic department with a sufficient caseload reviewed to meet regulatory mandates.
The discordance rate in our study was 2.7%. This is similar to diagnostic imaging discordance rates reported in the literature, most of which range from 2% to 7%, although rates as low as 0.4% and as high as 26% have been reported [14,15,16,17]. In general, we believe that the level
Take-Home Points
- The average radiologist participated in 1.2 COGR peer review conferences per week and had 3.3% of his or her available CT, MRI, and ultrasound studies peer reviewed over the two-year period, a level of case review that exceeds regulatory mandates.
- Discordance rates for COGR peer review averaged 2.7% (95% CI, 2.4%-3.0%), comparable with discordance rates for other radiology peer review models reported in the literature.
- Discordant cases identified by COGR peer review are associated with
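As a rough consistency check on the reported interval, the 95% CI can be reproduced with a standard normal-approximation (Wald) binomial confidence interval over the 11,222 reviewed cases. This is a sketch, not the authors' statistical method; the discordant-case count below is back-calculated from the published 2.7% rate rather than taken from the paper.

```python
import math

# Assumption: discordant-case count inferred from the reported 2.7% rate,
# not stated in the source.
n = 11222                      # total cases reviewed by COGR
discordant = round(0.027 * n)  # ~303 cases (back-calculated)

p = discordant / n
se = math.sqrt(p * (1 - p) / n)        # standard error of the proportion
lo, hi = p - 1.96 * se, p + 1.96 * se  # 95% Wald interval

print(f"{p:.1%} (95% CI, {lo:.1%}-{hi:.1%})")
```

Run as written, this reproduces the published figure of 2.7% (95% CI, 2.4%-3.0%), suggesting the interval is consistent with a simple binomial proportion calculation.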
References (21)
- et al. Peer review in diagnostic radiology: current state and a vision for the future. Radiographics (2009)
- et al. Quality and variability in diagnostic radiology. J Am Coll Radiol (2004)
- et al. Peer review in clinical radiology practice. AJR Am J Roentgenol (2012)
- Radiology peer review as an opportunity to reduce errors and improve patient care. J Am Coll Radiol (2004)
- et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev (2012)
- et al. A randomized trial of peer review: the UK National Chronic Obstructive Pulmonary Disease Resources and Outcomes Project: three-year evaluation. J Eval Clin Pract (2012)
- The objective impact of clinical peer review on hospital quality and safety. Am J Med Qual (2011)
- Quality improvement by peer review in primary care: a practical guide. Qual Health Care (1994)
- et al. Does telling people what they have been doing change what they do? A systematic review of the effects of audit and feedback. Qual Saf Health Care (2006)
- et al. RADPEER peer review: relevance, use, concerns, challenges, and direction forward. J Am Coll Radiol (2014)
Outside the submitted work, Dr Gazelle has received personal fees from GE Healthcare. Dr Pandharipande has received research funding from the Medical Imaging and Technology Alliance, for unrelated research.