Original article
Health services research and policy
Radiologist Peer Review by Group Consensus

https://doi.org/10.1016/j.jacr.2015.11.013

Abstract

Purpose

The objective of this study was to evaluate the feasibility of the consensus-oriented group review (COGR) method of radiologist peer review within a large subspecialty imaging department.

Methods

This study was institutional review board approved and HIPAA compliant. Radiologist interpretations of CT, MRI, and ultrasound examinations at a large academic radiology department were subject to peer review using the COGR method from October 2011 through September 2013. Discordance rates and sources of discordance were evaluated on the basis of modality and division, with group differences compared using a χ2 test. Potential associations between peer review outcomes and the time after the initiation of peer review or the number of radiologists participating in peer review were tested by linear regression analysis and the t test, respectively.
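As a rough illustration of these comparisons (the study's analysis code and underlying data are not published), the tests named above could be run along the following lines in Python; all counts, group sizes, and variable names below are hypothetical placeholders, not the study's data.

    # Illustrative sketch of the analyses described above: chi-square test of
    # discordance across modalities, Welch t test of the number of participating
    # radiologists for discordant vs concordant cases, and linear regression of
    # the discordance rate on time since peer review began.
    # All numbers are hypothetical placeholders.
    import numpy as np
    from scipy import stats

    # Hypothetical discordant/concordant counts by modality (CT, MRI, US).
    counts = np.array([
        [110, 4450],   # CT:  discordant, concordant
        [120, 3380],   # MRI
        [ 75, 2700],   # US
    ])
    chi2, p_chi2, dof, _ = stats.chi2_contingency(counts)
    print(f"chi-square={chi2:.2f}, dof={dof}, p={p_chi2:.4f}")

    # Hypothetical reviewer counts per case for discordant vs concordant cases.
    reviewers_discordant = np.random.default_rng(0).poisson(5.9, 300)
    reviewers_concordant = np.random.default_rng(1).poisson(4.7, 10000)
    t, p_t = stats.ttest_ind(reviewers_discordant, reviewers_concordant, equal_var=False)
    print(f"t={t:.2f}, p={p_t:.4g}")

    # Hypothetical monthly discordance rates regressed on months since launch.
    months = np.arange(24)
    monthly_rate = 0.027 + np.random.default_rng(2).normal(0, 0.003, 24)
    slope, intercept, r, p_lr, se = stats.linregress(months, monthly_rate)
    print(f"slope={slope:.5f} per month, p={p_lr:.3f}")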

Results

A total of 11,222 studies reported by 83 radiologists were peer reviewed using COGR during the two-year study period. The average radiologist participated in 112 peer review conferences and had 3.3% of his or her available CT, MRI, and ultrasound studies peer reviewed. The rate of discordance was 2.7% (95% confidence interval [CI], 2.4%-3.0%), with significant differences in discordance rates on the basis of division and modality. Discordance rates were highest for MRI (3.4%; 95% CI, 2.8%-4.1%), followed by ultrasound (2.7%; 95% CI, 2.0%-3.4%) and CT (2.4%; 95% CI, 2.0%-2.8%). Missed findings were the most common overall cause for discordance (43.8%; 95% CI, 38.2%-49.4%), followed by interpretive errors (23.5%; 95% CI, 18.8%-28.3%), dictation errors (19.0%; 95% CI, 14.6%-23.4%), and recommendation errors (10.8%; 95% CI, 7.3%-14.3%). Discordant cases, compared with concordant cases, were associated with a significantly greater number of radiologists participating in the peer review process (5.9 vs 4.7 participating radiologists, P < .001) and were significantly more likely to lead to an addendum (62.9% vs 2.7%, P < .0001).
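As a minimal sketch of how the reported interval can be reproduced, a normal-approximation (Wald) 95% CI on the overall discordance proportion recovers the published 2.4%-3.0% range; the discordant-case count used below (about 303) is inferred from 2.7% of 11,222 studies and is not stated explicitly in the article.

    # Minimal sketch: normal-approximation (Wald) 95% CI for the overall
    # discordance proportion. The discordant-case count (~303) is assumed,
    # inferred from 2.7% of 11,222 reviewed studies.
    import math

    n = 11222            # studies peer reviewed
    discordant = 303     # assumed: ~2.7% of n
    p = discordant / n
    se = math.sqrt(p * (1 - p) / n)
    lo, hi = p - 1.96 * se, p + 1.96 * se
    print(f"rate={p:.1%}, 95% CI {lo:.1%}-{hi:.1%}")   # ~2.7% (2.4%-3.0%)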

Conclusions

COGR permits departments to collect highly contextualized peer review data to better elucidate sources of error in diagnostic imaging reports, while reviewing a sufficient case volume to comply with external standards for ongoing performance review.

Introduction

Physician peer review is widely recognized as a fundamental component of health care quality assurance [1]. Experts believe that physician peer review will result in better clinical outcomes by monitoring the quality of care, increasing adherence to recognized standards, and creating a culture of transparency around issues of patient safety [2-4]. Measuring the impact of peer review programs on clinical practice and patient outcomes is fraught with difficulty, and studies to date have been limited in scope, with mixed findings [5-9]. Nonetheless, a large Cochrane review found that audit and feedback interventions, including peer review, can drive quality improvement if physician feedback remains a primary focus [5].

The radiology community was an early adopter of physician peer review, with workflow-integrated peer review systems in widespread use as early as 2006 [10, 11]. Since then, many accrediting bodies and third-party payers have mandated the adoption of radiologist peer review by diagnostic imaging groups [12]. RADPEER, a workstation-integrated peer review system developed by the ACR, is the earliest and most widely used peer review system in diagnostic imaging [10]. Modeled on the traditional process of double reading, RADPEER-style peer review owes its widespread adoption to its ease and convenience. Critics, however, argue that RADPEER is too limited in its focus and fails to address many important aspects of quality in diagnostic imaging, such as report clarity and length and adherence of the interpretation to national guidelines and standards [13]. RADPEER also has relatively weak feedback mechanisms, even though robust feedback is essential for effective peer review [5].

Attempting to harness the strengths of RADPEER while increasing the robustness of peer review, our department developed a novel peer review process for radiologists, known as consensus-oriented group review (COGR) [13]. The COGR process has been previously described in detail, but in brief, it is a software-enabled peer review process in which groups of radiologists meet regularly to review randomly selected cases and record consensus on the acceptability of the issued reports [13]. The purpose of this study is to evaluate the feasibility of the COGR method of radiologist peer review within a large subspecialty radiology department.
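To make the workflow concrete, the sketch below shows one way a COGR-style tool might sample recently issued reports for a divisional conference and record the group's consensus on each case. The data structures, field names, and discordance categories are hypothetical illustrations and are not drawn from the authors' software.

    # Hypothetical sketch of a COGR-style workflow: randomly sample a division's
    # finalized reports for one group review conference, then record the group's
    # consensus on each. Field names and categories are illustrative only.
    import random
    from dataclasses import dataclass

    @dataclass
    class Report:
        accession: str
        modality: str        # e.g., "CT", "MRI", "US"
        radiologist: str

    @dataclass
    class ConsensusRecord:
        accession: str
        concordant: bool
        discordance_type: str | None = None   # e.g., "missed finding", "dictation error"
        addendum_requested: bool = False

    def select_conference_cases(reports: list[Report], n_cases: int = 6) -> list[Report]:
        """Randomly sample finalized reports for one peer review conference."""
        return random.sample(reports, k=min(n_cases, len(reports)))

    def record_consensus(case: Report, concordant: bool,
                         discordance_type: str | None = None) -> ConsensusRecord:
        """Store the group's consensus; here, discordant cases trigger an addendum."""
        return ConsensusRecord(
            accession=case.accession,
            concordant=concordant,
            discordance_type=None if concordant else discordance_type,
            addendum_requested=not concordant,
        )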

Section snippets

Human Subjects Compliance

This retrospective, HIPAA-compliant study was approved by our institutional review board. The need to obtain patient consent was waived.

Peer Review Data Collection

The study was performed in the radiology department of a 950-bed tertiary care academic center. The department has more than 100 staff radiologists, more than 85% of whom are subspecialized by organ system, and performs and interprets more than 500,000 diagnostic imaging studies annually. All data were collected

Peer Review Participation Metrics

Over a two-year period from October 2011 to September 2013, a total of 11,222 cases underwent COGR at 2,027 conferences attended by 83 radiologists. There was an average of 3.3 ± 1.5 conferences per subspecialty division per week, with an average of 4.3 ± 1.8 radiologists participating in each conference and 5.7 ± 3.5 cases reviewed per conference (Table 1). Individual radiologists participated in an average of 1.2 ± 0.6 conferences per week and had an average of 3.3 ± 1.7% of their available

Discussion

This study presents the first peer review data collected using COGR and demonstrates that COGR can be effectively deployed in an academic department with a sufficient caseload reviewed to meet regulatory mandates.

The discordance rate in our study was 2.7%. This is similar to diagnostic imaging discordance rates reported in the literature, the majority ranging from 2% to 7%, although rates have been reported as low as 0.4% and as high as 26% [14-17]. In general, we believe that the level

Take-Home Points

  • The average radiologist participated in 1.2 COGR peer review conferences per week and had 3.3% of his or her available CT, MRI, and ultrasound studies peer reviewed over the two-year period, a level of case review that exceeds regulatory mandates.

  • Discordance rates for COGR peer review averaged 2.7% (95% CI, 2.4%-3.0%), comparable with discordance rates for other radiology peer review models reported in the literature.

  • Discordant cases identified by COGR peer review are associated with

References (21)

  • S. Mahgerefteh et al. Peer review in diagnostic radiology: current state and a vision for the future. Radiographics (2009)
  • H.R. Alpert et al. Quality and variability in diagnostic radiology. J Am Coll Radiol (2004)
  • R. Kaewlai et al. Peer review in clinical radiology practice. AJR Am J Roentgenol (2012)
  • M.J. Halsted. Radiology peer review as an opportunity to reduce errors and improve patient care. J Am Coll Radiol (2004)
  • N. Ivers et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev (2012)
  • C.M. Roberts et al. A randomized trial of peer review: the UK National Chronic Obstructive Pulmonary Disease Resources and Outcomes Project: three-year evaluation. J Eval Clin Pract (2012)
  • M.T. Edwards. The objective impact of clinical peer review on hospital quality and safety. Am J Med Qual (2011)
  • R. Grol. Quality improvement by peer review in primary care: a practical guide. Qual Health Care (1994)
  • G. Jamtvedt et al. Does telling people what they have been doing change what they do? A systematic review of the effects of audit and feedback. Qual Saf Health Care (2006)
  • H. Abujudeh et al. RADPEER peer review: relevance, use, concerns, challenges, and direction forward. J Am Coll Radiol (2014)


Outside the submitted work, Dr Gazelle has received personal fees from GE Healthcare. Dr Pandharipande has received research funding from the Medical Imaging and Technology Alliance, for unrelated research.
