Do the changes really indicate improving quality of care?
We have found a decline in the SHMI of 24% over the 5 year period. We have previously suggested that effects like this should be put to a number of tests before they are accepted as indicating real changes in performance [12]. These tests include:
i. Is any change in the SHMI the result of a change in the observed death rate or the expected death rate?
ii. Is a difference in the SHMI sensitive to the methods used? For example, is it sensitive to how the standardisation is carried out or the weightings used?
iii. Is there any corroborating evidence from related quality of care indicators?
Determining all of the individual factors that have influenced the change in SHMI would be extremely challenging. More broadly, we have looked at changes in the observed death rate and found that deaths up to 30 days post-discharge have fallen by 15%, from 4.7 to 4.0 per 100 admissions, over this 5 year period. Explanations include improved clinical care, more patients dying in the community without accessing secondary care, and improving population health.
The number of expected deaths has increased by 15%, from 3.9 per 100 admissions in Q2 2005/06 to 4.5 per 100 admissions in Q2 2010/11. Changes in SHMI variables that drive the increase in expected deaths include a small increase in the average age of patients (50.5 to 51.6), an increase in the proportion admitted as emergencies (75% to 78%), and a large increase in the proportion of patients recorded with comorbidities (26% to 35%) (see Table 1), all of which are assigned greater risk of death in the SHMI model. The effect of the 15% fall in the observed death rate on the SHMI is amplified by the increasing age of patients and the increase in the proportion of patients admitted as emergencies, patient groups more likely to die than their younger, elective counterparts. Whilst the changes in age and method of admission may reflect the characteristics of the population or admission policies/thresholds, the change in comorbidities may simply reflect a change in coding practice.
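As a rough check, since the SHMI is the ratio of observed to expected deaths, the rounded rates quoted above can be combined directly. The published 24% figure comes from the fitted trend model, so this back-of-envelope version differs slightly:

```python
# Illustrative SHMI arithmetic from the rounded quarterly rates quoted
# in the text (deaths per 100 admissions). The fitted-trend estimate of
# a 24% decline differs slightly from this crude two-point comparison.

observed_start, expected_start = 4.7, 3.9   # Q2 2005/06
observed_end, expected_end = 4.0, 4.5       # Q2 2010/11

shmi_start = observed_start / expected_start  # approx 1.21
shmi_end = observed_end / expected_end        # approx 0.89

decline = 1 - shmi_end / shmi_start           # approx 26% fall

print(f"SHMI: {shmi_start:.2f} -> {shmi_end:.2f} ({decline:.0%} fall)")
```

The crude fall in the ratio (roughly 26%) is broadly consistent with the 24% modelled decline.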
Population age should only increase the expected number of deaths if the age-specific risk is constant over time. Indirect standardisation models used to produce standardised mortality ratios (SMRs), like the SHMI, assume that the risk associated with a risk factor such as age is constant between places and over time [19]. So, for example, the model assumes that the risk associated with a particular age is the same at the beginning of the five year period as at the end. Population mortality rates have improved by 10% over the same period [20], suggesting that, due to improving population health, the risk at a particular age is declining, and this will result in a fall in the SHMI.
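The constant-risk assumption can be illustrated with a minimal sketch of indirect standardisation; the age strata and risk values below are hypothetical, not those of the actual SHMI model:

```python
# Minimal sketch of indirect standardisation with fixed age-specific
# risks, as assumed by SHMI-style models. The strata and risk values
# are hypothetical illustrations, not the real SHMI coefficients.

reference_risk = {"<65": 0.01, "65-79": 0.05, "80+": 0.15}  # fixed over time

def expected_deaths(admissions_by_stratum):
    """Expected deaths = sum over strata of admissions x fixed stratum risk."""
    return sum(n * reference_risk[s] for s, n in admissions_by_stratum.items())

# If the admitted population ages, expected deaths rise even though
# care and true age-specific risk are unchanged in the model:
early = {"<65": 700, "65-79": 200, "80+": 100}
late = {"<65": 600, "65-79": 250, "80+": 150}

print(expected_deaths(early))  # approx 32 deaths
print(expected_deaths(late))   # approx 41 deaths
```

If the true risk at each age is in fact falling, fixed reference risks overstate the later expected deaths, which mechanically lowers the SHMI.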
An increase in the number of admissions coded as emergency over this period has been reported elsewhere as a result of a growth in admissions lasting a day or less, predominantly in people aged 25 to 60 years [2]. A likely explanation is that some emergencies previously managed out of hospital are now being admitted, leading to the growth of short length-of-stay admissions. It is possible, therefore, that the reduction in the SHMI is due to an increase in less severe cases who are more likely to survive. However, the concurrent decrease in elective admission mortality and improvements across all bands of comorbidity (Figures 4 and 5) suggest that a difference in admission case-mix is not responsible for the improvements in the SHMI.
Our finding that the model without comorbidities estimated an annual change in the SHMI of −3.6, compared with −4.9 for the model including comorbidities, indicates that changes in the coding of comorbidities do not explain the majority of the reduction in the SHMI over this 5 year period. The change in comorbidity over this period may reflect a genuine increase in underlying comorbidity in admitted patients, but it more likely reflects an improvement in the hospital’s capacity to record underlying comorbidity.
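Using the annual changes quoted above, the share of the modelled decline that persists once comorbidities are removed can be computed directly:

```python
# Share of the modelled annual SHMI decline that remains when the
# comorbidity terms are excluded (annual changes quoted in the text).

annual_change_with_comorbidities = -4.9
annual_change_without_comorbidities = -3.6

share_persisting = (annual_change_without_comorbidities
                    / annual_change_with_comorbidities)

print(f"{share_persisting:.0%} of the decline remains without comorbidities")
# prints "73% of the decline remains without comorbidities"
```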
It appears, therefore, that part of the improvement in the SHMI is due to a reduction in the numbers and rate of death brought about by improvements in care; part is an artificial effect caused by changes in the coded comorbidities over time; and the remainder may be due to other real or artificial effects arising from changes in case-mix or non-constant risk.
There is some corroborating evidence of real improvements in care from more detailed audits of outcomes in specific clinical conditions such as acute myocardial infarction and stroke [21], chronic obstructive pulmonary disease [22], head injury [23], and hip fracture [24], which have found a fall in in-hospital mortality during this period. These reductions have been ascribed to improvements in care brought about for many reasons, such as advances in medical technologies and the introduction and implementation of evidence-based guidelines. It should also be remembered that during this period NHS net expenditure in England increased by 34% to £99.8bn per annum [8], increased competition between hospitals was created, payment by results was introduced, and a number of programmes focusing specifically on the quality and safety of hospital care were launched, which have resulted, for example, in a 64% reduction in C. difficile and a 78% reduction in MRSA infections reported in hospitals over this period [25].
A more direct comparison with a mortality measure such as the Dr Foster HSMR was not performed because publicly available data are recalibrated annually and would mask changes in the expected death rate over time. Theoretically, the SHMI should be more robust than the HSMR to changes in discharge and community care policy because it incorporates deaths up to 30 days after discharge.
Variation between hospitals
We have also examined variation between hospitals in this trend. The results show that improvements have been widespread, but there are some hospitals where almost no improvement has been seen and others where large improvements have been recorded. One hospital, the Mid-Staffordshire Hospital Trust, has shown an exceptional improvement, reducing its SHMI by about 4.4 percentage points each quarter, well outside the 99.9% control limit. Whilst the SHMI is described as being used by the DH to monitor hospital performance [9], in reality, because the weights are recalculated every quarter, the expected values change and it is actually only being used to compare hospital performance. We think that the Department of Health should monitor trends in order to identify any hospitals where the SHMI is moving in the wrong direction, or which are changing their coding practice so that hospital comparisons become unreliable. We do not think this requires analysis over a five year period, as we have done here. A sensible approach would be a rolling analysis comparing two consecutive years, using funnel plots to identify year-on-year differences between hospitals.
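Such a funnel-plot comparison could be sketched as follows, using a normal approximation to Poisson variation for the 99.9% control limits; real analyses often use exact Poisson limits and over-dispersion adjustments, and the numbers here are purely illustrative:

```python
# Sketch of 99.9% funnel-plot control limits for a SHMI comparison,
# using a normal approximation to the Poisson distribution: under the
# null hypothesis SHMI = 1, Var(O/E) is roughly 1/E. Published methods
# typically refine this with exact limits or over-dispersion terms.

from math import sqrt
from statistics import NormalDist

z = NormalDist().inv_cdf(0.9995)  # two-sided 99.9% limit, approx 3.29

def control_limits(expected_deaths):
    """Upper and lower 99.9% limits around SHMI = 1 for a given E."""
    half_width = z * sqrt(1.0 / expected_deaths)
    return 1.0 - half_width, 1.0 + half_width

def flag_outlier(observed, expected):
    """Flag a hospital whose SHMI falls outside the 99.9% limits."""
    lo, hi = control_limits(expected)
    shmi = observed / expected
    return shmi < lo or shmi > hi

# A hospital with 400 expected deaths needs a SHMI outside roughly
# (0.84, 1.16) to be flagged; larger hospitals have tighter funnels.
print(control_limits(400))
print(flag_outlier(300, 400))  # SHMI = 0.75, outside the lower limit
```

Plotting each hospital's year-on-year SHMI against its expected deaths, with these limits drawn as the funnel, would make divergent trusts visible without a full five-year analysis.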