The Practice of Emergency Medicine/Brief Research Report
Same Physician, Different Location, Different Patient Satisfaction Scores

Presented at the Society for Academic Emergency Medicine annual meeting, May 2015, San Diego, CA.
https://doi.org/10.1016/j.annemergmed.2015.12.021

Study objective

We assess whether patient satisfaction scores differ for individual emergency physicians according to the clinical setting in which patients are treated.

Methods

We obtained Press Ganey satisfaction survey results from June 2013 to August 2014 for patients treated in either an urban hospital emergency department (ED) or 2 affiliated suburban urgent care centers; the same physicians work in all 3 facilities. Physicians with survey results available from at least 10 patients in both settings were included. Survey scores range from 1 (very poor) to 5 (very good). Survey questions directly assessed physician courtesy, ability to keep patients informed about their treatment, concern for patient comfort, and listening ability, as well as the overall rating of care at the facility. For each physician, we calculated the difference between the mean urgent care score and the mean ED score, and we then calculated the mean of these per-physician differences. Our primary outcome was the mean difference between urgent care and ED scores for physician courtesy.
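The calculation described above can be illustrated with a short numerical sketch. This is not the study's analysis code: the per-physician score values below are hypothetical, and the use of a t-based 95% confidence interval for the mean of the per-physician differences is an assumption made for illustration only.

import numpy as np
from scipy import stats

# Hypothetical per-physician mean courtesy scores (1 = very poor, 5 = very good)
urgent_care_means = np.array([4.8, 4.6, 4.9, 4.7, 4.5])
ed_means = np.array([4.4, 4.3, 4.5, 4.2, 4.3])

# Per-physician difference: urgent care minus ED
differences = urgent_care_means - ed_means

# Mean of the per-physician differences with a two-sided 95% t-based confidence interval
n = len(differences)
mean_diff = differences.mean()
sem = stats.sem(differences)  # standard error of the mean difference
ci_low, ci_high = stats.t.interval(0.95, df=n - 1, loc=mean_diff, scale=sem)

print(f"Mean difference (urgent care - ED): {mean_diff:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")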

Results

Seventeen physicians met inclusion criteria. For all 17 physicians, the point estimate for the mean urgent care courtesy score was higher than the point estimate for the mean ED courtesy score. The mean difference in courtesy scores between urgent care and the ED was 0.35 (95% confidence interval 0.22 to 0.49). ED scores were also consistently lower than urgent care scores for keeping patients informed about their treatment, concern for patient comfort, listening ability, and overall care rating.

Conclusion

Although these results are limited by small sample size, we found that physicians consistently received lower satisfaction ratings from ED patients than from urgent care patients. This challenges the validity of using satisfaction scores to compare providers in different practice settings.

Introduction

Patient satisfaction surveys are commonly used by government agencies, hospital administrators, and the public to gauge satisfaction with the health care experience. Increasingly, payers are also using these scores to modify provider reimbursement by rewarding high-performing providers and punishing low performers.1 In 2016, 25% of the adjustments made to hospital Diagnosis-Related Group payments under the Value-Based Purchasing program will depend on how hospitals perform on satisfaction surveys administered to admitted patients. Currently, the Centers for Medicare & Medicaid Services is field testing the Emergency Department Patient Experience of Care survey instrument, which will specifically assess the patient experience in the emergency department (ED).2

Editor’s Capsule Summary

What is already known on this topic

Patient satisfaction surveys are increasingly used to compare provider performance and often linked to reimbursement.

What questions this study addressed

This study compared Press Ganey survey responses for the same physician in 2 locations, an urban tertiary care emergency department and 2 suburban urgent care centers in the same care delivery organization.

What this study adds to our knowledge

Physician scores were systematically higher in the urgent care centers. The effects of differences in insurance status and waiting times between the locations, as well as of other variables such as changes in physician behavior by location, could not be assessed in this small sample.

How this is relevant to clinical practice

Physician ratings are not simply stable properties of individual physicians but are systematically affected by other unknown factors, challenging their validity for interphysician comparisons.

Despite increasing scrutiny of patient satisfaction data and the use of these data to drive medical reimbursement, the effect that these trends will have on resource use and health outcomes is uncertain. For example, linking financial incentives to patient satisfaction scores has been shown to increase rates of advanced imaging without first attempting conservative measures for patients with low back pain.3 In general, satisfaction scores also do not consistently correlate with the technical quality of care delivered, and higher patient satisfaction has been associated with increased health care spending, including prescription drug expenditures.4, 5

It has also been demonstrated that ED patient satisfaction scores depend on both ED and patient characteristics, including wait times, age, and race.6 Less research has focused on satisfaction among patients treated in urgent care clinics, though there is evidence that patients view the care provided in urgent care centers favorably with respect to both quality and value.7 It is unknown whether different facilities and practice settings carry site-specific responder biases that confound attempts to compare survey results across care settings.8

Our objective was to determine whether patient satisfaction scores for individual emergency physicians differed according to the clinical setting in which patients were treated.

Section snippets

Study Design and Setting

This was a retrospective comparison of satisfaction scores obtained from patients treated at an urban tertiary care center with those obtained from patients treated at 2 nearby suburban urgent care centers. Cooper University Hospital is an academic center located in a culturally diverse urban environment; its ED is staffed by board-certified or board-prepared emergency medicine attending physicians. These emergency physicians also staff 2 urgent care sites, both within 10 miles of the hospital but situated in

Characteristics of Study Subjects

Seventeen physicians met study inclusion criteria and were included in the analysis. A median of 17 ED surveys (interquartile range 13 to 22) and 79 urgent care surveys (interquartile range 29 to 124) were available for each physician for the selected study period. Survey respondents treated in the ED were more likely than those treated in urgent care to be uninsured or covered by Medicaid (Table 1).

Main Results

Point estimates for mean ED courtesy scores were lower than for mean urgent care courtesy scores

Limitations

This analysis has several limitations that should be considered when the results are interpreted. First, we describe patient satisfaction scores from emergency physicians in a single health system; these results may differ for providers in other systems. Second, the attending providers included in this study routinely work with residents and nurse practitioners or physician assistants at both the ED and urgent care sites. In some cases, survey responses are likely to be influenced by patient

Discussion

This retrospective study compared patient satisfaction scores from an urban tertiary care center with those from 2 suburban urgent care clinics for physicians who work in both settings. We observed that patients treated in the urban hospital environment consistently gave physicians lower satisfaction ratings compared with patients treated in the urgent care clinics. This pattern was consistent across survey questions assessing multiple distinct aspects of the patient-physician interaction. It


Cited by (25)

  • The impact of interprofessional education interventions in health professional student clinical training: A systematic review

    2023, Journal of Interprofessional Education and Practice
    Citation Excerpt:

    When interpreting these positive patient satisfaction and experience outcomes, confounding factors and outcome measure design need to be considered. Multiple confounding factors have been identified in the measurement of patient satisfaction such as wait times, patient confidence in care providers, level of pain control, age, gender, ethnicity, patient expectations, self-rated health scores, level of health insurance coverage and location and type of health service setting.104–108 Lack of use of comparator groups further contributes to the uncertainty considering these confounding factors.

  • Anchoring Vignettes as a Method to Address Implicit Gender Bias in Patient Experience Scores

    2021, Annals of Emergency Medicine
    Citation Excerpt:

    Many of these criticisms relate to the concern that the objectivity and interinstitutional reliability of a national survey instrument are potentially compromised by differences in both characteristics of survey respondents and physicians being evaluated. Prior work has noted that the same physician may receive different scores when working at different hospital sites,6-8 and multiple prior studies have shown either higher9 or lower3-5 satisfaction scores on the basis of female gender.

  • Same provider, different location: Variation in patient satisfaction scores between freestanding and hospital-based emergency departments

    2020, American Journal of Emergency Medicine
    Citation Excerpt:

    Hwang et al. showed that the development of an ED Fast Track improved patient satisfaction by increasing ED capacity while providing quicker service [22]. Bendesky et al. demonstrated that the same physician consistently received lower scores when practicing in an ED setting compared to an urgent care setting, suggesting that scores may be more about the venue than the provider [23]. To date, no published research has compared patient satisfaction scores of physicians or physician assistants practicing at freestanding emergency departments (FEDs) with hospital-based emergency departments (HBEDs).

  • The effect of practice settings on individual Doctor Press Ganey scores: A retrospective cohort review

    2019, American Journal of Emergency Medicine
    Citation Excerpt:

    Since the goal of the study was to evaluate the attending physician, the response from “overall rating of doctors in training (residents, interns)”, which is the 5th doctor specific question included in the PG survey, was excluded. The questions which were included were; [1] “courtesy of the doctor”, [2] “degree to which the doctor took the time to listen to you”, [3] “doctor's concern to keep you informed about your treatment”, and [4] “doctor's concern for your comfort while treating you”. Each question is graded on a continuous scale ranging from 1 (very poor) to 5 (very good).



Supervising editor: Robert L. Wears, MD, PhD

Author contributions: BSB conceived the study. BSB, MAK, and CWJ designed the study and collected the data. KH provided statistical advice. KH and CWJ performed statistical analyses. BSB and CWJ drafted the article, and all authors contributed to its revision. CWJ takes responsibility for the paper as a whole.

Funding and support: By Annals policy, all authors are required to disclose any and all commercial, financial, and other relationships in any way related to the subject of this article as per ICMJE conflict of interest guidelines (see www.icmje.org). Dr. Jones has received research grants from Roche Diagnostics, Inc, and AstraZeneca.

A podcast for this article is available at www.annemergmed.com.
