Discussion
Reducing the gap between best evidence and clinical practice in cancer care will reduce patient morbidity and mortality [6‐8] and reduce healthcare costs [9]. This review identified a small volume of research in cancer care addressing this important issue. Of the 176 publications identified, the majority (91%) provided new data; only 9% were reviews, commentaries, or summaries of the existing evidence base. This is an encouraging finding and suggests that a substantial proportion of research effort is directed toward empirical work. However, further examination revealed that 94% of data-based publications were descriptive studies, with only ten intervention studies identified. The number of intervention studies did not increase over time. Studies most commonly focused on breast cancer care.
While descriptive research provides important information about current practice, it does not maximize the benefits of research effort, which come from comparing the effectiveness of different approaches to improving care. The small number of intervention studies, and in particular the small number that used rigorous evaluation designs, may be explained by the difficulty of carrying out well-controlled intervention trials in this field. Changes to clinical practice may require changes to processes of care, clinician knowledge, and organizational culture, so the unit of analysis for such studies is often the hospital, ward, or clinic. These types of system-focused interventions are often incompatible with randomized controlled designs where the unit of randomization is the individual [29]. An alternative to the traditional randomized controlled trial that is suitable for evaluating interventions involving organizational change is the cluster randomized controlled trial. However, this design poses complex logistical challenges, such as the need to obtain agreement for implementation from all clinicians within the cluster, difficulties obtaining a large enough sample of hospitals or clinics, and the substantial costs associated with evaluating the intervention across many sites [30]. Intervention studies also require multi-disciplinary collaboration and a specific repertoire of research skills, whereas descriptive research requires relatively less time and fewer resources. In a professional environment where both the volume and the impact of research output are valued, researchers may be more motivated to undertake descriptive work than complex and time-intensive intervention studies, or to focus their effort on randomized controlled trials of individual-level interventions. These design challenges may be overcome by the use of alternative research designs, such as interrupted time series or multiple baseline designs, and by commitment from funding bodies to finance robust intervention trials. Targeted funding rounds that prioritize research on strategies to close the evidence-practice gap may also be important to increasing productivity in this area.
The small number of intervention trials may also be the result of a perception in clinical oncology that optimal cancer care is already being delivered to patients and, therefore, that research to ensure translation of clinical research into practice is not needed. It is simplistic to presume, however, that new evidence is routinely integrated into clinical practice guidelines, then policy, and then clinical care [31, 32]. Changing established patterns of care is difficult, and often necessitates the involvement of cancer patients and their advocacy groups, individual practitioners, senior administrators of healthcare organizations, and policy makers [32]. Passive dissemination of evidence via clinical practice guidelines and publications produces only small changes in clinical practice, and there is insufficient evidence to show that such strategies have any effect on patient outcomes [33]. Further, uptake of new evidence is inconsistent even when active implementation strategies are undertaken [4, 34]. For example, one descriptive study examined the impact of the American Society of Clinical Oncology (ASCO) guidelines on the use of hematopoietic colony-stimulating factors (CSF) in cancer care: six months after active dissemination and implementation of the guidelines, only 61% of CSF prescriptions complied with ASCO recommendations [35]. In addition, the finding that 94% of the data-based papers identified in this review described evidence-practice gaps in cancer care highlights the importance of increasing implementation research effort in the field.
The finding that a large proportion of research effort has been directed toward breast cancer care is in accordance with previous findings that breast cancer research dominates the quality-of-life research field [36]. Prostate cancer is similar to breast cancer in terms of incidence [37], and other high-incidence cancers such as lung cancer and bowel cancer have higher mortality rates than breast cancer [37]. The focus on breast cancer is therefore unlikely to reflect disease burden, but rather the high profile of this disease and the availability of specific funding for breast cancer research. This suggests a need for greater effort toward examining and addressing evidence-practice gaps in the care of other cancers with poor outcomes.
Limitations
These results should be considered in light of several limitations. First, grey literature such as reports, policy documents, and dissertations was not included, nor were protocol papers. While this information may be relevant, grey literature is not peer-reviewed and therefore may not meet the quality standards associated with peer-reviewed publication. Including grey literature may also have biased the review, given that papers related to work known to the authors and their networks would have been more likely to be identified than other work. Second, only one author screened the retrieved citations and abstracts. Although a random 10% of included studies was double-coded by a second reviewer with perfect agreement, single screening remains a limitation of the search execution. Third, there are limitations to using volume of research output as a measure of research effort [36, 38]. Due to publication bias, studies with unfavorable results may not be published, leading to under-representation of the true amount of work carried out in the field [39, 40]. Finally, only studies that self-identified as addressing the evidence-practice gap in cancer care were included. While this may have resulted in some relevant studies being missed, it was not feasible to determine retrospectively whether studies addressed evidence-practice gaps. The use of a large number of search terms covering evidence-based practice, guideline development, and the evidence-practice gap is likely to have limited the number of relevant studies missed. Nonetheless, the many different ways in which papers relating to evidence-practice gaps or implementation science are indexed is itself a limitation, compromising the ability of the field to advance in an optimal fashion.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
JB and RSF conceived of and designed the review. JB, AB, KJ, and MC undertook data extraction. All authors contributed to drafting of the manuscript and have read and approved the final manuscript.