Background
Physician rating websites (PRWs) have become a popular tool for increasing transparency regarding the quality of care of physicians in the outpatient sector [1-5]. One intention of PRWs is to provide information regarding patient satisfaction to enable patients to make an informed choice when selecting a physician. Besides a scaled survey, most PRWs implement a free commentary field [6] so that patients can report on their experience without any constraint other than a word limit. So far, studies have documented the increasing popularity of such websites in terms of the number of ratings [2,7,8], traffic rank [8,9], and public awareness [10], while others have addressed the content and nature of narrative comments [3,11,12] or analyzed the patient satisfaction surveys applied [6]. One consistent result is that a large proportion of online ratings is positive in the countries analyzed, such as the USA [2,5,12-15], the UK [16], Germany [8,9], and Canada [17].
However, it remains unclear whether patients should rely on the ratings displayed on such sites when choosing a physician [1]. If online ratings indeed reflected the quality of care, selecting a physician on the basis of these ratings would increase the likelihood of receiving better healthcare provision. In a pioneering study, Greaves and colleagues showed a relationship between online hospital ratings and objective measures of clinical quality in the UK. They concluded that patients who base their decision on this information can be assured that the ratings are not entirely misleading and may provide relevant information about health care [18]. However, whether patients should base their selection of a physician in the outpatient sector on online ratings remains less clear [16].
In this context, the present study aims to add further knowledge on whether patient satisfaction results displayed on PRWs are associated with structural and quality of care measures of healthcare providers. This will allow an analysis of the value of online ratings for patients searching for a physician online.
Discussion
The question of whether there is an association between patient satisfaction in general and the quality of care of the healthcare provider is not new. For example, a recently published systematic review explored the evidence on the links between patient satisfaction and clinical safety and effectiveness outcomes and included 55 studies in the analysis. The authors demonstrated positive associations between patient experience and self-rated and objectively measured health outcomes, adherence to recommended clinical practice and medication, preventative care, and resource use. However, this result is based on studies that met several inclusion criteria, such as a minimum of 50 subjects and the use of validated survey instruments (e.g., Picker surveys, the Hospital Consumer Assessment of Healthcare Providers and Systems survey) [30]. In contrast, these preconditions are not fulfilled in the case of PRWs, since the number of ratings for the majority of physicians on such sites is still far below this threshold [2,7,8] and most instruments measuring satisfaction have not been validated. Furthermore, whereas traditional surveys have the advantage of random sampling, respondents on PRWs offer unsolicited opinions, particularly when they have experienced extremes of care [31]. It thus remains questionable whether the above-mentioned association applies to online ratings and, consequently, whether patients should rely on these ratings when selecting a physician.
We assessed the association between online ratings and several structural and quality of care measures for a sample of 65 physician practices. Compared with previously published results for Germany [8,9], the relatively high average number of ratings per physician could be attributed to Weisse Liste advertisements in QuE reports, discussion of PRWs in quality circles, and other QuE events that might have raised awareness and use [20]. Some studies with a similar research question were identified by means of our systematic search procedure (see above) [16,18,31,32]. First, one study from the UK examined hospital-level associations between 10,274 web-based patient ratings displayed on the NHS Choices website and indicators of clinical outcomes as well as healthcare-acquired infections for all NHS acute hospital trusts in England [18]. Positive recommendations of hospitals on NHS Choices were significantly associated with lower standardized mortality ratios, lower mortality from high-risk conditions, and lower readmission rates. Both healthcare-acquired infection measures were significantly associated with the online rating of hospital cleanliness. In another study, Greaves and colleagues analyzed the associations between internet-based patient ratings and conventional surveys of patient experience in England [31]. Web-based ratings for 146 hospitals displayed on NHS Choices (N = 9997) were compared with five similar questions from a national paper-based survey. Statistically significant associations were demonstrated for all questions (ρ = 0.31-0.49, p < 0.001 for all). The third study assessed the relationship between website ratings from Yelp.com and traditional hospital performance measures in the USA [32]. The latter included patient experience (Hospital Consumer Assessment of Healthcare Providers and Systems [HCAHPS]) and outcomes for myocardial infarction, heart failure, and pneumonia. The authors showed a significant correlation of the Yelp scores with five of six outcome measures, indicating that better ratings are associated with better medical outcomes. In addition, the study demonstrated a significant correlation between high ratings on Yelp and HCAHPS scores (ρ = 0.49; p < 0.001) as well as its domains (p ≤ 0.001 for all domains). Even though these results apply to the hospital sector, they demonstrate that online ratings in general may be more useful than is often thought [18].
We found only one study which focused on the association between online physician ratings and both measures of clinical quality and conventional measures of patient experience [16]. The data comprised 16,952 ratings of family practices from NHS Choices. These were compared with the results of the mail-based National General Practice Patient Survey, which contained approximately 2.1 million responses. The clinical data encompassed seven measures. The authors showed significant associations between online ratings and the mail-based patient experience survey for all five assessed questions (ρ = 0.36-0.48, p < 0.001 for all) but only weak associations with measures of clinical care (Spearman ρ less than ±0.18, p < 0.001 for six of seven variables). Significant associations were shown for measures such as the proportion of patients with diabetes receiving flu vaccinations, controlled HbA1c in patients with diabetes, cervical screening rates, and admission rates for ambulatory care conditions.
These findings are partly in line with our results. We also demonstrated a strong association between online and conventional patient satisfaction survey results for both German PRWs. However, regarding preventative services, results from the UK indicate a weak but significant association for cervical screening rates and for diabetic patients receiving flu vaccinations, whereas our results indicate an association with the preventative measure for only one of ten rating measures. Nevertheless, these differences might be due to the different measures used in the studies, since we assessed the general preventative examination for patients aged 35 years or older. A similar conclusion can be drawn when we compare the results for the clinical indicators addressing the three chronic conditions type 2 diabetes, coronary heart disease, and asthma. In contrast to the UK results, we found strong associations for two diabetes measures and one asthma measure, whereas no associations were found for the four coronary heart disease measures. One possible explanation might be the age of the patients, since the literature has shown declining Internet [33-36] and PRW [10,37] use with increasing age. The fact that the mean ages of patients in our study with coronary heart disease (72.83 years, SD 3.86) and diabetes (69.22 years, SD 3.31) were relatively high compared with that of patients with asthma (58.37 years, SD 10.33) might thus explain, at least to some extent, why none of the coronary heart disease measures was associated with the online ratings.
Regarding the cost-targeting measures, both studies detected meaningful associations. In the UK study, a very weak negative association was determined between ratings and low-cost statin prescriptions [16]. In contrast, we found a strong association between the online ratings and the medication cost per case for three of four measures (ρ = 0.297-0.384, p < 0.05 for all), indicating that higher costs were related to better ratings. We further differentiated between general practitioners and specialists and found this association to hold only for specialists and not for general practitioners (data not shown here). This finding might be explained to some extent by the long-term relationship of general practitioners with their patients [38] and the fact that specialists are consulted for more specialized interventions or because patients are suffering from more serious diseases [8,14]. Patients might thus have a greater desire to have medication prescribed when seeing a specialist.
In another study, analyzing 386,000 national ratings from the US PRW RateMDs, Gao and colleagues investigated the association of online ratings with structural measures [2]. The authors found that online ratings were more positive for physicians who graduated in more recent years, are board-certified, graduated from highly rated medical schools, and have no malpractice claims. This might suggest a positive correlation between online ratings and physician quality, even though the magnitude was shown to be small. Our study can partly confirm those findings, since a significant association between the age of the physician and the online ratings was found on one PRW. More recently, another study measured the association between online ratings from eight US PRWs and traditional measures of clinical quality and patient experience for a sample of 1299 physicians who had completed an American Board of Internal Medicine Hypertension or Diabetes Practice Improvement Module [39]. In line with the results shown above, the authors also found small and statistically insignificant associations between online ratings and clinical quality measures as well as small but statistically significant associations with patient experience measures.
The results of this study extend the findings of previous studies in that the patient-per-doctor ratio in a practice was strongly associated with all ten included measures; that is, the more patients physicians treat in a practice, the lower the ratings. This finding is not surprising, but it highlights the importance of good physician-patient communication. Physicians should plan to spend sufficient time with each patient rather than treating as many patients as possible. Of course, it is questionable whether physicians can dedicate more time to each patient in practice, since most reimbursement systems do not include financial incentives for “talking medicine” [40]. Furthermore, we could not detect any significant correlations with clinical care measures for the elderly (e.g., medication therapy). This might indicate a limited usefulness of online ratings for older patients; however, this should be assessed in more detail in further studies.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
ME and TA contributed to the conception, design, analysis and interpretation of data. ME, TA, US, VW, and JL discussed the results and implications and commented on the manuscript at all stages. ME and US drafted and revised the manuscript. ME, TA, US, VW, and JL approved the final manuscript.