Background
Clinical practice guidelines are widely developed to inform and augment health care policy, planning, delivery, evaluation, and quality improvement. Guidelines offer many potential benefits to patients, health care professionals, and health systems by supporting decision-making and enhancing the efficiency and quality of health services, while reducing practice variation [
1]. However, numerous population-based studies have shown that guideline implementation is complex and challenging [
2,
3] due to the influence of numerous, multilevel (patient, provider, team, organization, system), often competing factors [
4]. Considerable research has been undertaken over the last three decades to identify effective single or multifaceted interventions for implementing guidelines [
5‐
7]. While many of those interventions are promising, their impact on health care delivery and outcomes has been modest and inconsistent [
8,
9]. Given the important role of guidelines in translating scientific evidence to practice, further research is needed to generate knowledge on how to optimize guideline implementation and use.
To address this need, the field of implementation science has pursued research in two predominant themes. One theme focused on elaborating and refining the guideline implementation planning process so that the most promising interventions for a given context are employed [
10]. Another theme focused on identifying the interventions and active components of interventions that are associated with effective guideline implementation so that, when employed, they result in beneficial impact. In 2005, a Cochrane systematic review by Shaw et al. found that guideline implementation interventions selected and tailored to barriers of guideline use identified in advance were more likely to improve professional practice compared with either no intervention or the dissemination of guidelines [
11], and this was supported by subsequent research [
12]. As a result, implementation scientists advocated for choosing and adapting interventions based on mapping pre-identified determinants, including barriers and facilitators of guideline use, with theoretical constructs [
12]. Theories (or models or frameworks) suggest how determinants influence the association between processes and outcomes and provide insight on interventions (approaches, strategies) that overcome determinants and/or support processes associated with desirable outcomes [
13].
However, guideline developers, implementers, researchers, or others may not be using theory when planning or undertaking guideline implementation. A systematic review published in 2010 by Davies et al., based on controlled trials, before-after studies, and interrupted time series that evaluated any guideline dissemination or implementation strategy targeting physicians and reported objective measures of provider behavior and/or patient outcomes, found that only 6.0% (14/235) of studies of guideline implementation planning or evaluation were explicitly based on theory [
14]. Those studies were published from 1976 to 1998, well before evidence was published on the need to tailor interventions to identified determinants of guideline use [
11]. Hence, guideline developers, implementers, researchers, or others may not be aware of, or may not be employing, the most relevant theories from among the multitude that are available [
15], or may not understand how to use theory when interpreting identified determinants, or choosing or tailoring interventions [
16]. Theory may be more commonly used in recently published research, although this is unknown. Furthermore, recent rigorous studies that evaluated tailored, theory-based interventions in various health care contexts have failed to consistently demonstrate a beneficial impact on physician use of guidelines or patient outcomes [
17,
18]. Theory use may be sub-optimal, but without further analysis of such studies, this too is unknown.
Considerable resources continue to be invested in the development of guidelines that are not achieving their full potential benefit, possibly due to limitations in how they are implemented. Further research is needed to understand if theory is being used to plan, undertake, and evaluate guideline implementation. In particular, greater insight is needed on how theory was employed in guideline implementation research. This may reveal the need for interventions to promote awareness, knowledge, and skill in the use of theories for guideline implementation, or for further research on how to use theory in guideline implementation. The purpose of this scoping review was to summarize current research in the field of guideline implementation to describe if and how theory has been used to plan or evaluate the implementation and use of guidelines among physicians, who are frequently the target users of guidelines.
Methods
Approach
This review sought to identify theories that were used to plan or evaluate guideline implementation targeted to physicians, reveal rationale for choice of theory as is required by the Workgroup for Intervention Development and Evaluation Research (WIDER) reporting standards for studies that evaluate behavioral interventions [
19] and describe how theory was used. Therefore, rather than a traditional systematic review that seeks to describe outcomes, a five-step scoping review was chosen: scoping, searching, screening, data extraction, and data analysis [
20]. This approach was employed to acquire an understanding of the extent, range, and nature of research on this topic, and to describe if and how theory has been used to plan or evaluate guideline implementation. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) criteria guided the conduct and reporting of this review [
21]. Data were publicly available so institutional review board approval was not necessary. A protocol for this review was not registered.
Scoping
As per scoping methods standards, this step involved becoming familiar with the literature on this topic and generating eligibility criteria through consultation with the members of the international research team, who are experienced guideline developers and implementers. A preliminary search was conducted in MEDLINE using Medical Subject Headings (MeSH) including, but not limited to, diffusion of innovation or information dissemination and practice guidelines as topic or guideline adherence and a range of available terms that referred to theory (for example, models, theoretical). LL, ARG, SB, and RV screened titles and abstracts of the preliminary search results, which were used to plan a more comprehensive search strategy and to generate final eligibility criteria based on the PICO (population, intervention, comparisons, outcomes) framework. All members of the research team reviewed the eligibility criteria and provided feedback, which was used to refine them. The research team comprised guideline developers, implementation scientists, and systematic review methodologists.
Populations referred to practicing physicians of any type based in health care settings (e.g., hospitals, ambulatory clinics, community-based physician offices) who were target users of specifically named and referenced guidelines upon which implementation efforts were focused, whether to implement a newly developed guideline or to improve compliance with an existing one. Studies were included if the target users were of various professions provided that the majority were physicians. Guidelines referred to clinical practice guidelines, defined as statements that include recommendations intended to optimize patient care based on systematic review of evidence, for any form of test, procedure, or treatment for any condition or disease, that were specifically named and cited in eligible primary studies [
22].
Interventions referred to guideline implementation processes or interventions in which theory was explicitly named, referenced, and used. Theory was defined as a set of analytical principles or statements including defined variables, a domain to which the theory applies, and a set of relationships between the variables and specific predictions [
23]. Theory-informed frameworks or models were also considered. Interventions also referred to the strategies or processes that were chosen, tailored, implemented, or evaluated to promote or improve guideline use. Similar to the 2010 systematic review of the use of theory in guideline implementation [
14], eligible studies used theory to (1) identify potential determinants of guideline use among target users with questionnaires or qualitative methods or analyze the findings of such studies; (2) choose or tailor interventions that would address identified determinants, either among the research team or through structured consultation with experts or stakeholders (Delphi, modified Delphi, nominal group, etc.); or (3) evaluate the impact of interventions, including studies that examined implementation (fidelity), guideline use (practice/behavior), or impact (patient, provider, organizational, or system-level outcomes).
Comparisons varied depending on the type of study. For example, in studies of type 1, identified determinants may have been similar or varied by specialty or practice setting. In type 2 studies, the choice of intervention or tailoring strategy may have been dependent on characteristics of the involved participants. Type 3 studies may have evaluated physicians with and without exposure to theory-based interventions, or before or after exposure to theory-based interventions, or receiving different types of theory-based interventions, or compared any type of theory-based intervention with any type of non-theory based intervention or control condition, where interventions may have been single or multifaceted.
Outcomes were those reported in eligible studies and were also relevant to study type: (1) perceived or experienced determinants of guideline use including recommended facilitators or interventions to promote guideline use; (2) recommended interventions or strategies to tailor interventions; and (3) any reported impact of guideline implementation.
Eligible study designs included English language qualitative (interviews, focus groups, qualitative case studies), quantitative (questionnaires, randomized controlled trials, time series, before-after studies, prospective or retrospective cohort studies, case control studies), or mixed methods studies (studies that integrated quantitative and qualitative data). Systematic reviews were not eligible but were used to identify additional eligible primary studies. Studies were not eligible if they:
- involved trainee physicians, health care managers or policy-makers, or consumers (patients, families, caregivers, public) as target guideline users
- were based on infection control, quality improvement, patient safety, client-centeredness, or organizational “best practices” but did not explicitly name and reference a guideline
- were based on grounded theory technique (a qualitative approach) and did not analyze the findings using an explicitly named and referenced theory, model, or framework
- searched for, described, developed, or synthesized theory without applying it to plan or evaluate guideline implementation
- referenced a theory, model, or framework but made no further mention of it
- were based in a school or sports setting
- concluded that an intervention was needed to address lack of compliance with a guideline
- consisted of policy directives or strategies, consensus statements, guidelines, conference abstracts or proceedings, protocols, letters, editorials, or commentaries.
Searching
A final comprehensive search strategy compliant with the Peer Review of Electronic Search Strategies statement [
24] was developed in consultation with a medical librarian (Additional file
1). MEDLINE, EMBASE, and The Cochrane Library were searched on April 26, 2016 for articles published from January 2006 to that date. The year 2006 was chosen because evidence supporting the need to choose and tailor interventions for implementing guidelines based on identified determinants [
11] and relevant theories [
12] was published by Shaw et al. in 2005. As noted, the references of systematic reviews were scanned to identify additional eligible articles.
Screening
For the final search results, the screening of titles and abstracts according to specified eligibility criteria was independently piloted by LL, ARG, SB, and RV. LL and RV then independently screened titles and abstracts of all non-duplicate items. All items selected by at least one reviewer were retrieved for full-text screening. Full-text items were screened by LL, in consultation with ARG, just prior to data extraction. If more than one publication described a single study and reported different data, they were all included but counted as a single study.
A data extraction form was developed with input from the research team to collect information on study characteristics (author, publication year, country, design), condition/disease, physician specialty, guideline topic, aim (implement new guideline or improve compliance with existing guideline), theory (or model or framework) used, rationale for choice of theory as specified in individual studies (describes implementation or its determinants, used by others, validated, integrates many theories and/or constructs), and how theory was used (identify determinants, select/tailor interventions, evaluate impact of intervention) [
14]. Qualitative details were extracted about how theory was used if provided. We did not access related studies previously published by the same authors that may have reported these details because reporting standards specify that theory underlying the development or evaluation of interventions should be specified when reporting research [
19]. Details extracted about the intervention were based on the Workgroup for Intervention Development and Evaluation Research (WIDER) reporting checklist and included content (information/knowledge conveyed), format (mode of delivery), timing (duration and/or frequency), participants (number, type, setting), and personnel (who delivered the intervention) [
19]. Interventions were classified as single or multifaceted. Outcomes varied by study type: (1) identified determinants, (2) interventions chosen and the strategies or processes used for tailoring, and (3) impact of the intervention. LL and ARG independently pilot-tested data extraction on two articles and compared findings by discussion to refine the data extraction form. This was repeated two more times. The refined data extraction form was independently pilot-tested by LL, SB, and RV on five articles. LL extracted data from all articles, which was independently checked by ARG, SB, and RV so that all were reviewed in duplicate.
Data analysis
Summary statistics (i.e., frequency, proportion) were used to describe the number of studies by year published, country, research design, guideline topic, and type of target user. Each unique theory (or model/framework) was listed, and summary statistics were used to report the frequency of use, rationale for use, and how theories were used. Theories were classified as classic (originating from fields external to implementation science such as psychology, sociology, organizational management) or implementation theories (developed de novo by implementation researchers or by adapting existing classic theories) [
23]. Details about interventions that were evaluated and associated outcomes were charted (collated, synthesized, and interpreted) and summarized in a narrative format [
25]. The quality of individual studies was not assessed because that is not customary for a scoping review. All co-authors reviewed the summary of findings, and their feedback was incorporated in the final version.
Discussion
This study was conducted to understand if and how theory (or models/frameworks) was used to plan or evaluate guideline implementation, as has been advocated [
11,
12]. Of 89 studies that planned or evaluated guideline implementation targeted to physicians, nearly half (42, 47.2%) were based on theory and included in this scoping review. This compares favorably with the Davies et al. 2010 review that reported explicit theory use in 6.0% of guideline implementation studies published from 1976 to 1998 [
14]. There does appear to be an upward trend because, in our review, the number of published studies meeting our inclusion criteria increased almost yearly and represented a wide array of countries, guideline topics, and types of target physicians. However, many studies were excluded because they did not use theory, several studies cited theory but made no further mention of it, and not all studies of each type (identify determinants, select/tailor interventions, evaluate interventions) employed theory or explicitly linked pre-identified determinants of guideline use with theory. Overall, theory was not used consistently and transparently in guideline implementation.
A few issues may limit the interpretation and use of these findings. Although we searched the most relevant databases of medical literature pertaining to physicians with a search that complied with standards [
24] and employed rigorous searching and screening processes, we may not have identified all relevant studies. We focused on physicians, who are key target users of guidelines, as did the prior Davies et al. study of theory use in guideline implementation [
14] and therefore potentially excluded relevant studies in which non-physicians were target users. Whether these results transfer to other health professionals requires investigation. Publication bias, or the tendency for journals to publish trials with positive results or surveys with high response rates, may have influenced the number and type of studies that were retrieved and the largely positive impact of eligible studies that evaluated interventions. We did not thoroughly discuss the impact of interventions because the purpose of this scoping review was not to assess the effectiveness of interventions, but to describe how theory was used when planning or evaluating guideline implementation.
Several notable findings emerged from this review. While use of theory to plan or evaluate guideline implementation increased after the review published in 2010 by Davies et al. [
14], many studies were not based on theory, or cited theory but made no further mention of how it was used. A previous systematic review of 32 studies published from 2004 to 2013 that evaluated the implementation of guidelines for arthritis, diabetes, colorectal cancer, and heart failure also found that few studies rationalized intervention choice by referring to models, frameworks, or theories (6/32, 18.8%) [
68]. Another systematic review of 57 studies published from 1990 to 2000 evaluated the adoption of health care innovations, including guidelines and reported that none of the studies employed theory [
69]. Other researchers have also found limited use of theory to design or evaluate public health interventions [
70], to inform the implementation of guidelines targeted to community pharmacists [
71] or to plan interventions targeted to the allied health professions [
72]. Theory-driven implementation is considered a required standard, yet many intervention developers are not using theory. Further research is needed to establish whether those who implement guidelines are familiar with theories and how to apply them. Such research could also examine whether the education or discipline of implementers is associated with use of theory, and whether including health services researchers who are familiar with theory on guideline teams improves theory-informed implementation.
Another key finding is that, while most studies justified the selection of theories, the rationales provided were lacking in specificity and failed to explicitly link identified context-specific determinants with particular theoretical constructs. This too has been identified by others in various health care contexts. For example, in a systematic review of 62 studies that used theory to design or evaluate public health interventions, descriptions of theory development and use in intervention design and evaluation lacked detail [
70]. Another systematic review of 32 studies that evaluated interventions targeted to allied health professionals reported that most studies did not describe how the interventions were developed or their underlying mechanism of action [
72]. The Workgroup for Intervention Development and Evaluation Research (WIDER) criteria for reporting of knowledge translation interventions [
19] and the Template for Intervention Description and Replication (TIDieR) checklist for better reporting of interventions [
73] both recommend the inclusion of the rationale, theory, or change process that underpins an intervention as this can help others to know which elements are essential, rather than optional or incidental [
73]. Intervention mapping, used by only two studies included in this review [
53,
58], is an increasingly used process for engaging stakeholders in choosing and designing theory-based interventions [
74,
75]. It offers a systematic and explicit process for mapping identified determinants to program objectives, and selecting evidence- and theory-informed interventions likely to achieve those objectives, which may help intervention developers to better report how theory was used. Further research is needed to understand whether guideline implementers are aware of reporting criteria such as WIDER and TIDieR, or processes such as intervention mapping. However, while all of these resources specify that the use of theory and its rationale are needed and should be explicit, they do not actually provide guidance on how to choose and apply theory. Thus, the development of more detailed reporting guidance specific to the use of theory is needed.
This review found that all eight studies that evaluated the impact of interventions achieved positive impact on the outcomes reported. Yet, many others have not [
15,
16]. A recent process evaluation of five failed trials that evaluated theory-based tailored interventions for guideline implementation found that only some of the determinants targeted by the intervention were relevant to participants who identified many new barriers that were not addressed by the intervention [
76]. It may be that pre-determined barriers do not cover all factors that potentially affect implementation outcomes; that other strategies for identifying barriers and facilitators are necessary; that the removal of one barrier may create another one; and that the complex interplay among various barriers and facilitators cannot always be predicted despite best efforts to do so [
77]. These issues raise several questions: Is the use of a standardized theory-driven approach to implementation planning problematic because it cannot accommodate the fluidity of barriers and facilitators? Are theories from a variety of disciplines, differing from those historically used, needed to account for the complexity of implementation? Or is there a limit to what we can expect from theory? Further research is needed to more fully understand why theory-informed interventions fail to consistently achieve desired outcomes and to address these related questions.
Based on the studies in our review that proposed or evaluated an intervention, it appears that the use of theory commonly gives rise to multifaceted interventions. In other research, a meta-review of 25 systematic reviews that compared direct and indirect effect size and dose-response of single and multifaceted strategies showed no benefit of multifaceted over single strategies [
78]. Similar findings emerged from a systematic review of studies that evaluated the implementation of neck and/or back pain guidelines [
79]. These findings contrast with the prevalent use of theory-informed multifaceted guideline implementation interventions by studies included in this review. In comparison with single interventions, multifaceted interventions may be more expensive, may place a higher burden on those delivering and receiving the intervention, and may not be easily replicable outside of the context of scientific investigation. Therefore, further research is needed to better understand how to employ theory when designing or evaluating guideline implementation so that interventions are feasible and can be more readily scaled up if found to be effective, which is the ultimate goal of implementation science.
The Theory of Planned Behaviour (TPB) and the Theoretical Domains Framework (TDF) emerged as the most commonly used theories giving rise to multifaceted interventions. These were typically chosen because they had been used by others and because they addressed a broad range of determinants, rather than through explicit matching of pre-identified determinants to a specific theory chosen from among the many that are available. Therefore, guideline implementers may benefit from information about theories and how to use them. To supply this information, research is needed to more firmly establish which theories and how many theories (or models/frameworks) result in effective interventions. In addition, educational interventions may be needed to enhance guideline implementer awareness of existing compilations of theories from various disciplines [
15,
80‐
83].
Acknowledgements
Members of the Guidelines International Network Implementation Working Group (g-i-n.net/working-groups/implementation) reviewed this manuscript to further refine the interpretation and communication of the findings. They included:
Samia Alhabib, Department of Family Medicine, King Abdullah Bin Abdulaziz University Hospital, Saudi Arabia
Margot Fleuren, Organisation for Applied Scientific Research TNO, The Netherlands
Margie Fortino, Thomas Jefferson University Hospitals, United States
Danielle Mazza, Monash University, Australia
Niamh O’Rourke, Clinical Effectiveness Unit, Department of Health, Ireland
Melina Willson, Systematic Reviews and Health Technology Assessments, NHMRC Clinical Trials Centre, University of Sydney, Australia
The Guidelines International Network (G-I-N;
www.g-i-n.net) is a Scottish Charity (SC034047). The views expressed in this article are those of the authors and G-I-N is not liable for any use that may be made of the information presented.