Background
Breast cancer is a major public health issue. It has the highest incidence among cancers in women (52,000 new cases in 2010) and is the leading cause of death in women aged 35–65 years in France (11,300 deaths in 2008) [1]. Measuring the quality of care delivered to breast cancer patients is, however, a challenging issue. In 2004, the United States federal Agency for Healthcare Research and Quality (AHRQ) highlighted the paucity of validated quality measures for assessing breast cancer care and the need to develop them [2]. This need for “reliable, validated quality measures […] to afford accountability, improvement, and research” has since been reiterated many times in the USA and in Europe.
Since then, European guidelines for quality assurance, produced under the auspices of the European Commission, have listed 39 performance indicators for screening and diagnosis [3]. A 2010 position paper from the European Society of Breast Cancer Specialists (EUSOMA) proposed 17 main quality indicators (QIs) covering diagnosis, staging, surgery and loco-regional treatment, systemic treatment, counselling, follow-up and rehabilitation [4]. In France, the development of QIs in breast cancer care was flagged as a high priority in 2007. The treatment plan for each cancer patient must now be discussed in a multidisciplinary team meeting (MDTM) held according to the rules laid down by the French National Cancer Institute (Institut National du Cancer - INCa) and the French National Authority for Health (Haute Autorité de Santé - HAS) [5, 6].
Many of the QIs developed in the wake of the 2004 AHRQ report have been quality-of-life and patient-satisfaction indicators [2]. More recently, however, in view of the importance that patients and many health care organisations attach to waiting times, emphasis has also been placed on QIs measuring the timeliness of care [7-11]. The EUSOMA position paper proposes, for instance, the time to obtain mammography results, the time between mammography results and the first consultation, and the time between core biopsy and surgical excision [4]. The second French national Cancer Plan (2009–2013) urged that more be learnt about waiting times in order to reduce inequalities in access to care that may arise from undue delay [12]. Deviations from guidelines on timeliness can adversely affect 5-year survival rates [13-15], and patients who receive their test results promptly are less prone to anxiety [16-19].
To respond to this request from the French health authorities, there is a need for simple, validated QIs that can be used to measure and compare quality of care across hospitals in order to identify room for improvement. Key methodological concerns, on which QI validity depends, are standardisation of data collection, reduction of the collection workload, and monitoring of inter-hospital variability in QI scores. Only validated QIs can be implemented nationally or internationally, for instance in quality improvement programmes or pay-for-quality schemes, or used for public reporting.
The objective of this study was to establish the validity, for comparing hospitals, of a set of 8 simple, easy-to-use QIs that assess the timeliness of key steps in the care of patients with infiltrating, non-inflammatory and metastasis-free breast cancer undergoing surgery.
Discussion
Having defined quality as compliance with the care process, which has been shown to be associated with patient outcomes, we developed 7 process QIs relating to the timeliness and organisation of breast cancer care. All 7 QIs proved robust, as indicated by their metrological properties and feasibility. In addition, all 7 revealed considerable inter-hospital variability, showing that there is substantial room for improvement in the quality of care.
Three of the 7 QIs are ready for nationwide implementation, namely, QI 2 (time to surgery), QI 3 (time to postoperative MDTM), and QI 8 (conformity of postoperative MDTM). Although some hospitals misunderstood the wording of QI 2 in the feasibility test, no change was made in the performance assessment test. The meaning has, however, since been clarified with a view to nationwide implementation of this QI. QI 2 now refers unambiguously to the date of the appointment when the decision to perform surgery is taken and not to the date of the appointment when the surgeon diagnoses suspected cancer and orders tests before deciding to operate.
The four other validated QIs (QI 6 – patient information, QI 4 – waiting time to first appointment after surgery, QI 5 – time to first postoperative treatment, and QI 7 – traceability of information relating to prognosis) are applicable only to hospitals that offer both surgery and postoperative radio- or chemotherapy. Comparing all hospitals on these QIs is hazardous because data were often missing (11 %–40 % missing data). QI 6 had a very low mean conformity score (12.8 %) because the information given to patients was poorly traced in the medical records.
The 8th QI we developed (QI 1 – waiting time to first appointment with the surgeon) proved too ambiguous to be used for comparisons among hospitals.
The external validity of our results may be considered satisfactory because (i) our patient sample was fairly representative as the 70 volunteer hospitals were a good reflection of available facilities for breast cancer care in France, (ii) it was homogeneous as we focussed on a subset of breast cancer patients, (iii) the number of audited and analysed medical records was large, (iv) results were insensitive to the reactive effects of testing and reactive settings because of the retrospective nature of the audit.
Our results reflect real-life conditions, i.e. the technical and organisational constraints encountered when implementing QIs in hospitals. We anticipated these problems: given the absence of validated quality measures of breast cancer care, we defined quality as compliance with the process of care, which has been shown to correlate with patient outcomes. A systematic review published in 2006 underlined the paucity of validated quality measures in breast cancer care and the need to develop “reliable, validated quality measures […] to afford accountability, improvement, and research” [32]. Several health care facilities have emphasized the importance of measuring the timeliness of care from screening to pathology results, so that institutional performances can be compared; moreover, in this era of patient-centred care, when patients were asked which aspects of care they would improve if they could, aspects relating to waiting times were most frequently mentioned [7]. We therefore decided to concentrate on timely access as a good proxy for “quality care”.
Although we tried to forestall many of the problems that might arise when designing our QIs, we nevertheless had to contend with several hurdles.
The first hurdle was the absence of all the required information in the French PMSI database, which was used to randomly select the 80 medical records. We used restrictive inclusion criteria (“infiltrating, non-inflammatory breast cancer”) to obtain a homogeneous population, and excluded patients with carcinoma in situ and patients previously treated for breast cancer. This selection, however, had to be done manually and represented a fairly heavy workload. The extra 20 records selected from the database to compensate for exclusions did not always make up for the recorded 28 % exclusion rate.
A second hurdle was that, in the French health care system, no single hospital has access to all the data on a given patient. For example, QI 4 and QI 5 could not be calculated when follow-up or the whole of care did not take place within the same hospital (e.g. an appointment in private practice (QI 4), or an appointment in one hospital with treatment in another (QI 5)). The situation was even more complex when the hospitals involved had different statuses (public, private, or not-for-profit).
A third hurdle, which also represents a limitation of our results, is that the criteria on waiting times and delays are based on consensus among experts and not on standards derived from practice guidelines with a high level of evidence, which are normally used to construct QIs. Each country has its own standards [33]. The good practice guide produced in 2009 by the National Collaborating Centre for Cancer for NICE (National Institute for Health and Clinical Excellence) recommends no more than a 4-week delay from diagnosis to treatment, and starting chemotherapy or radiotherapy within 31 days of surgery [34]. In contrast, French guidelines published back in 2002 recommend a 21-day delay from the first appointment with the surgeon to surgery (in line with the National Initiative on Cancer Care Quality (NICCQ) recommendation in the US [33]), a 30-day delay from surgery to chemotherapy, and a 56-day delay from surgery to radiotherapy [25].
This hurdle could be partly overcome by using as targets the proportion of patients treated within set times. Such targets are more acceptable to health professionals, for whom delays should reflect organisational constraints and not include patient-related causes (e.g. the patient not turning up for an appointment, or treatment postponed at the patient’s request). According to European guidelines, a threshold of 90 % is acceptable for ≤15 working days between the decision to operate and surgery, and 70 % for ≤10 days [3]. According to EUSOMA, the minimum standard is >75 % and the target is >90 % for surgery performed within 6 weeks of the first diagnostic examination in the breast unit [4]. The Dutch auditing system has established a 90 % standard for 5 QIs [35]. However, comparison with our results is difficult because of differences in QI definitions. Should the French health authorities take 90 % of patients treated within each set time as the standard, there is much room for improvement in many hospitals, as shown in Table 2.
Recent experiences in Europe and the USA have shown that QI implementation at a local [33] or national level, using a variety of methodologies, can improve the quality of care of breast cancer patients, but that this takes time [35-37]. In the Dutch experience, none of 9 QIs met standards in 2002 whereas 4 did in 2008, with a significant improvement in all 9 QIs. Because hospitals simply perform better when they know they are being evaluated (the Hawthorne effect), and because comparison can promote better registration processes and compliance with best clinical practice, improvements can be expected in France as well.
Whether QI scores should qualify hospitals for certification as breast cancer centres is a moot point. One option would be to follow the example of the National Quality Measures for Breast Care (NQMBC), which uses the degree of participation in on-line registration of answers to a set of quality questions to grant 3 levels of certification for quality breast health care [17, 36].
Competing interests
The authors declare they have no competing interests.
Authors’ contributions
MF participated in the acquisition of the data and was primarily involved in the conception, design, screening of all levels, and drafting of the manuscript. MC assisted with data abstraction and was responsible for the interpretation and statistical analysis of the data. GN made a substantial contribution to the design of the manuscript, was involved in the drafting, and gave final approval of the version to be published. SM was involved in the conception and design of the manuscript and in the acquisition of the data. DS revised the manuscript and contributed important intellectual content. EM critically revised the manuscript, collaborated in conceptualizing the article elements, and gave final approval of the version to be published. All authors read and approved the final manuscript.