Background
Effective cancer care relies on timely access to treatment. For patients, delays in accessing needed treatment may worsen health outcomes, impair quality of life, induce anxiety and distress, and prolong sick leave and loss of income [1, 2]. For the public, delays in access can exacerbate health inequities and lead to media and political pressure to address barriers to care [3, 4]. To improve the timeliness of elective surgery for cancer and other conditions, many countries have introduced healthcare reforms involving national targets and performance measures [5]. Previous studies have shown that patients with cancer support public reporting of care quality, timeliness, and patient satisfaction/experience [6, 7].
Waiting time for elective surgery is one of the most common measures of healthcare system performance. To improve the quality of hospital services, several countries publish national waiting times on websites, by specialty and procedure, using hospital administrative data [8]. Some countries, such as England, report at the hospital and surgeon level and publish patient narratives [9, 10]. Public performance reporting (PPR) of hospital data is thought to improve quality of care by informing patients’ choice of provider, or by prompting organisations to address areas of underperformance [11]. Waiting times are generally measured as the time between the specialist placing the patient on the waiting list and the date of surgery. The wait between a general practitioner’s (GP) referral and the initial specialist appointment is not included, giving rise to a ‘hidden’ waiting list [12]. To better capture waiting times across the whole patient journey, some Nordic countries seek to measure the time between a GP’s referral and the date of surgery [13].
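The two measurement conventions described above can be illustrated with a minimal sketch. The milestone dates below are hypothetical, purely for illustration of where the ‘hidden’ component of the wait sits:

```python
from datetime import date

# Hypothetical milestone dates for one patient (illustrative only).
gp_referral = date(2024, 1, 10)        # GP refers the patient to a specialist
listed_for_surgery = date(2024, 3, 4)  # specialist places the patient on the waiting list
surgery = date(2024, 4, 15)            # date of the procedure

# The commonly reported waiting time starts at listing...
reported_wait = (surgery - listed_for_surgery).days

# ...while the full patient journey starts at GP referral, so the
# referral-to-first-appointment interval forms a 'hidden' wait that
# conventional waiting list statistics do not capture.
full_journey_wait = (surgery - gp_referral).days
hidden_wait = full_journey_wait - reported_wait

print(reported_wait, full_journey_wait, hidden_wait)  # 42 96 54
```

The gap between the two measures is exactly the interval the Nordic referral-to-surgery approach is designed to capture.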
International evidence suggests that elective surgery targets can reduce waiting times and improve access over time [14]. The evidence is stronger in England [15, 16], where ambitious waiting time targets backed by rewards and sanctions, increased funding, and robust performance management were cited as key factors contributing to success [14, 17]. Despite the overall reduction in waiting times, variation remained between specialties, operative procedures and hospitals [18]. For PPR, there was a lack of evidence that publishing elective surgery waiting times motivated changes in providers’ behaviour to reduce waiting times or encouraged patients to choose providers based on the published data [19]. Targets and PPR also had unintended consequences, such as increased pressure on clinicians [20] and gaming of performance data [19, 21].
Previous research on waiting times has focused on particular specialties or on all elective surgery [22–24]. Few studies have addressed delays in accessing cancer care [1, 2], so the impact of targets and published waiting times on elective surgery for cancer remains uncertain. Given the significant burden of morbidity and mortality associated with cancer [25], and concerns about the fragmentation of cancer care, some countries have developed specific strategies for improving access to cancer care [26]. These include collecting and monitoring waiting times from referral to first treatment. For example, England introduced targets for maximum waiting times across the cancer care pathway: a specialist appointment within 14 days of GP referral (93% target); first treatment within 31 days of diagnosis (96% target); and first treatment within 62 days of GP referral (85% target) [27]. There has been limited exploration of these issues in other international contexts, including Australia.
In 2011, the Australian government introduced national elective surgery targets and PPR of hospital data as part of a suite of national healthcare reforms to improve health outcomes. The objectives of the reforms were to optimise the quality and timeliness of hospital care for Australians [28]. State and Territory governments were offered financial rewards if certain national elective surgery targets were reached [29]. Implementation of the 2011 reforms was the responsibility of the States and Territories, with the Commonwealth government responsible for PPR. The national elective surgery targets, underpinned by the Performance and Accountability Framework [30], included elective surgery waiting times by urgency category (urgent: within 30 days; semi-urgent: within 90 days; non-urgent: within 12 months) and waiting times for cancer care. The MyHospitals website was established in 2010, and all public hospitals were mandated to report their performance on it [31]; PPR is voluntary for private hospitals. The indicators publicly reported for breast, bowel and lung cancer at the time of the study included elective surgery waiting times by urgency and the number of patients treated within recommended elective surgery waiting times by urgency.
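The second indicator mentioned above, the share of patients treated within the recommended time for their urgency category, can be sketched as follows. The records and field names are hypothetical, chosen only to illustrate the computation against the national category thresholds:

```python
# Recommended maximum waits per national urgency category (days).
THRESHOLDS = {"urgent": 30, "semi-urgent": 90, "non-urgent": 365}

# Hypothetical (urgency category, days waited) records for illustration.
records = [
    ("urgent", 12), ("urgent", 35), ("urgent", 28),
    ("semi-urgent", 40), ("semi-urgent", 95),
    ("non-urgent", 200), ("non-urgent", 400),
]

def share_within_target(records, thresholds):
    """Return, per urgency category, the share of patients treated
    within the recommended maximum waiting time."""
    totals, within = {}, {}
    for category, days in records:
        totals[category] = totals.get(category, 0) + 1
        if days <= thresholds[category]:
            within[category] = within.get(category, 0) + 1
    return {c: within.get(c, 0) / totals[c] for c in totals}

print(share_within_target(records, THRESHOLDS))
```

A real computation would run over administrative waiting list extracts rather than an in-memory list, but the indicator logic is the same.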
Victoria is the second most populous state in Australia; during the study period its population grew from 5.13 to 6.24 million, an increase of over 17% [32]. Australia has a universal healthcare system publicly funded through the Medicare scheme [33], including free access to treatment in public hospitals. A typical treatment pathway for cancer involves initial assessment by a GP or emergency department (ED) physician, followed by referral to a specialist in either the public or private healthcare sector. Acute conditions (such as a bowel cancer causing bowel obstruction) are treated immediately; otherwise, patients are booked for ‘elective’ treatment or placed on a waiting list. Waiting times for publicly funded healthcare services arise when the demand for services exceeds the available supply. Population growth, an ageing population, the increasing prevalence of chronic disease, funding constraints, and workforce shortages all contribute to this imbalance [34]. In the absence of price-rationing within the public system, waiting lists based on clinical need act as a form of non-price rationing to balance the demand for, and the supply of, health services. Around 40% of Australians choose to purchase private healthcare, or self-fund private care, which enables them to “skip the queue” in the public system by accessing elective surgery in private hospitals [35].
To our knowledge, this study is the first to examine the impact of the introduced targets and of PPR on cancer surgery waiting times in Victoria, Australia, principally in the public hospital system. Linkage of hospital admissions and elective surgery waiting time data provided an opportunity to examine whether the introduction of targets and PPR improved access to cancer elective surgery during the 2006–2016 period, as measured by the cancer waiting time indicators reported on the MyHospitals website. We sought to address the following research question: did the introduction of PPR reduce waiting times for cancer types whose elective surgery waiting times are publicly reported, compared with cancer types whose waiting times are not publicly reported?
Discussion
There are several possible explanations for the limited change in waiting times following PPR. First, lack of public awareness that waiting times are publicly reported may have prevented the use of PPR for quality improvement. Previous studies have shown that patients and providers, including medical officers and GPs, are generally not aware of PPR data and are unclear about what it contains [41–44]. Second, the MyHospitals website does not provide “real-time” data, and the data are not reported by diagnosis. The most recent data published on the MyHospitals website covered 2012–13 for cancer elective surgery waiting times and 2017–18 for elective surgery waiting times. Although the latest elective surgery waiting times were reported, the data are provided by urgency category, specialty of surgeon, and intended procedure, not by diagnosis. This means that neither patients nor providers could access timely information about cancer-specific waiting times on the performance reporting website, leaving limited opportunity for either group to change their behaviour in response to such data.
Third, patients with breast, bowel or lung cancer waited on average 13 days for urgent care and 40 days for semi-urgent care, well below the recommended 30 and 90 days, respectively. The availability of relatively timely care within the Victorian jurisdiction means that pressure for change following PPR may have been weaker than in jurisdictions with unacceptably long waiting times. Notably, there was a slight increase in waiting times for patients with lung cancer following PPR, which likely reflected the marked increase in the lung cancer incidence rate among females in Victoria [45] and Australia [46]. This is likely attributable to smoking patterns in past decades (the prevalence of smoking among females peaked in the mid-1970s). Other factors that may explain the continued increase in lung cancer incidence include the changing composition of modern cigarettes and changes in the age structure and size of the population.
Fourth, the population of Victoria grew by more than 1.1 million people during the study period. It is possible that PPR led to improvements in efficiency within the healthcare system that were masked by the significant increase in demand associated with the rapidly growing population. As of 30 July 2020, there were over 56,000 patients on the waiting list across all urgency categories [47]. It was not possible to ascertain how many cases were cancer-related, as ESIS records the intended procedure and not the patient’s diagnosis; establishing the diagnosis requires linkage to the VAED following an admission to hospital [36]. As such, the data included only patients with cancer who were admitted to hospital for surgery, and excluded those who may still have been on the waiting list at the end of the study period or who dropped off the waiting list during the study period (e.g. because surgery was no longer required or they passed away). Further research is warranted to investigate the demand for cancer elective surgery and the capacity of the healthcare system to deliver timely cancer services.
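The linkage constraint described above can be sketched in miniature. The record layouts, field names and IDs below are illustrative stand-ins, not the real ESIS or VAED schemas; the point is that diagnosis only becomes available for patients with an admission record, so anyone still waiting (or removed from the list) drops out of the linked cohort:

```python
# ESIS-style waiting-list records: intended procedure, but no diagnosis.
esis = [
    {"patient_id": "A1", "intended_procedure": "colectomy", "days_waited": 35},
    {"patient_id": "B2", "intended_procedure": "mastectomy", "days_waited": 18},
    {"patient_id": "C3", "intended_procedure": "lobectomy", "days_waited": 50},
]

# VAED-style admission records keyed by patient ID: diagnosis lives here.
vaed = {
    "A1": {"diagnosis": "bowel cancer"},
    "B2": {"diagnosis": "breast cancer"},
    # "C3" has no admission record (still waiting, or dropped off the list),
    # so that patient cannot appear in the linked analysis cohort.
}

# Inner join on patient ID: only admitted patients survive the linkage.
linked = [
    {**wait, "diagnosis": vaed[wait["patient_id"]]["diagnosis"]}
    for wait in esis
    if wait["patient_id"] in vaed
]
print(len(linked))  # 2 of the 3 waiting-list records are linkable
```

This is why the analysed cohort is conditioned on admission, and why counts of cancer-related cases still on the waiting list cannot be recovered from the linked data.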
Despite achievement of the elective surgery waiting time targets, it is unclear whether the waiting times for cancer treatment are clinically appropriate, given that cancer surgeries are prioritised using the same waiting list system as other elective surgeries such as hip replacement. Setting a generic waiting time for all cancer surgeries has been criticised for not considering the complexity of the disease, the phases of cancer care, and the different treatment modalities of the various cancer types [48]. UK guidelines for acceptable waiting times focus on the following phases of cancer care: days between GP referral and first specialist appointment; days between GP referral and diagnosis; days between decision to treat and start of treatment; and days between GP referral and start of treatment [27]. In Australia, Cancer Council Australia has introduced optimal cancer care pathways for 15 cancer types, with variation in recommended waiting times across cancer care phases [49]. There is an opportunity to capture and report waiting time indicators by diagnosis across the entire patient journey by using linked primary care and hospital data. This would help identify areas for improvement in care co-ordination, clinical practice and health services delivery at each stage of the cancer journey. Linkage of primary care and hospital data for cancer care is currently underway in Victoria, Australia, but collection of primary care data for linkage is not yet widespread [50–52].
Waiting time for elective surgery is one of the most common generic health service ‘quality’ measures publicly reported, alongside length of stay, complications and mortality. These measures are relatively simple to collect from administrative data but provide limited insight into the quality of healthcare delivery. De-identified linkage of cancer registries with administrative data and electronic medical records could provide a more complete picture of the quality of cancer care and health system performance [53]. Using such data, Spinks et al. [48] proposed collecting and reporting the following meaningful quality indicators across the cancer care continuum: outcomes (e.g. recovery, functional restoration, and survival); structure (e.g. physical facilities, nurse-to-patient ratios); process (e.g. screening, prevention, diagnosis and staging); efficiency (e.g. adherence to guidelines); cost of care (e.g. direct and indirect costs); and patients’ perceptions of care (e.g. satisfaction).
Strengths and limitations
The study included state-wide population coverage of cancer elective surgery admissions over a period of 10 years. Despite the long study period and a suitable comparator group, the findings should be interpreted in the context of several limitations. A prerequisite of difference-in-differences (DID) analysis is finding a control group that satisfies the parallel trends assumption, i.e. that pre-intervention trends in outcomes are the same in the treatment and control groups. Ideally, the only difference between the two groups would be exposure to the policy; in practice, such a group may be difficult to find. The characteristics of the control group differed slightly from those of each intervention group, but their pre-intervention outcome trends followed a similar trajectory. As such, selection bias may exist, and the results should be interpreted with caution.
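The DID logic can be sketched with made-up group means. The numbers below are purely illustrative; a real analysis would regress waiting times on group, period, and their interaction, with covariates, but the estimand is the same difference of differences:

```python
def mean(xs):
    return sum(xs) / len(xs)

# Illustrative waiting times (days) by group and period. Under parallel
# trends, the control group's pre-to-post change estimates what would have
# happened to the reported cancers absent PPR.
waits = {
    ("reported", "pre"):  [30, 32, 31],  # publicly reported cancers, before PPR
    ("reported", "post"): [27, 28, 29],  # publicly reported cancers, after PPR
    ("control",  "pre"):  [40, 41, 42],  # non-reported cancers, before PPR
    ("control",  "post"): [39, 40, 38],  # non-reported cancers, after PPR
}

# DID estimate = (treated post - treated pre) - (control post - control pre).
did = (mean(waits[("reported", "post")]) - mean(waits[("reported", "pre")])) \
    - (mean(waits[("control", "post")]) - mean(waits[("control", "pre")]))
print(round(did, 2))  # -1.0: a one-day extra reduction attributed to PPR
```

The estimate is only interpretable causally if the pre-intervention trajectories of the two groups really are parallel, which is exactly the assumption discussed above.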
The Australian healthcare reforms included several hospital and related care policies, not limited to reducing waiting times for elective surgery and making service performance information publicly available [28]. As such, we were unable to disentangle confounding influences on cancer elective surgery waiting times, making it difficult to attribute the observed changes to a specific policy. Furthermore, it was unclear what quality improvement initiatives (if any) were implemented in hospitals following the introduction of PPR that would drive or impede improvement in cancer elective surgery waiting times. Further research is required to investigate the causal pathways through which PPR influences waiting times via quality improvement processes.
There has been some evidence of manipulation of the elective surgery waiting list in Victoria, Australia [54, 55]. Patients classified as urgent or semi-urgent whose waiting times for surgery were approaching the target for their category were reclassified as “not ready for care – patient initiated”. This ensured that category waiting time targets were not exceeded and that the hospital met elective surgery key performance indicators [56, 57]. It is unclear how prevalent data manipulation is and whether this influenced our results.
The total elective surgery waiting time does not include the wait to see a specialist following a GP’s referral; this underestimates total waiting times across the cancer care continuum, masking the true demands on the public hospital sector and the impact on public patients. There is no national administrative system in Australia that captures how many patients are waiting for these appointments, nor how long they have waited, although individual hospitals may collect such information in their internal databases, as referrals and appointments are dated and documented. Further research is warranted to capture the complete picture of waiting times for cancer elective surgery so that policy makers can make fully informed decisions about public hospital service planning, delivery and resourcing.
Improved elective surgery waiting times would not necessarily represent an improvement in patient care; we did not have information on tumour stage or the clinical outcomes of patients. Future research is warranted to better understand the relationship between timeliness of care delivery and clinical outcomes. The study included public hospitals only, and generalisability to the private sector is unknown; however, private patients treated in private hospitals would be expected to have shorter waiting times [58–60]. During the study period, the number of Australians with private health insurance increased from 9 million (44% of the population) to 11 million (47%) [61]. Access to the Medicare Benefits Schedule data [62] or the National Hospital Morbidity Database [63] could provide insights into the number of cancer patients who underwent surgical treatment in the private sector. The Medicare Benefits Schedule is a list of Medicare services subsidised by the Australian government, covering private patients in public or private hospitals who made a Medicare claim. The National Hospital Morbidity Database includes episode-level records from admitted patient morbidity data collection systems in Australian public and private hospitals. Further research is warranted to explore differences in waiting times between public and private patients, and whether publicly reported waiting times in public hospitals influence cancer patients’ decisions to seek treatment in the private sector.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.