Contributions to the literature
- We describe three studies that illustrate the hybrid effectiveness-implementation continuum framework and recommend adding two new domains: program design and program delivery.
- The Hybrid 1 study evaluated the effectiveness of directly mailing fecal tests to patients to increase colorectal cancer screening. The Hybrid 2 study adapted the program to community settings, evaluating both implementation and effectiveness. The Hybrid 3 study is evaluating strategies designed to address barriers identified in the Hybrid 2 study.
- Program design and delivery became increasingly pragmatic, adaptable, and informed by context: researcher-led in Hybrid 1; collaborative in Hybrid 2; and organization-led in Hybrid 3.
- Consideration of these two new hybrid domains could speed the transition from research to practice.
Background
Methods
Study characteristic | Hybrid trial Type 1 | Hybrid trial Type 2 | Hybrid trial Type 3 |
---|---|---|---|
Research aims | Primary aim: determine effectiveness of a clinical intervention. Secondary aim: better understand context for implementation | Co-primary aim: determine effectiveness of a clinical intervention. Co-primary aim: determine feasibility and potential utility of an implementation intervention/strategy | Primary aim: determine utility of an implementation intervention/strategy. Secondary aim: assess clinical outcomes associated with the implementation trial |
Research questions (examples) | Primary question: will a clinical treatment work in this setting among these patients? Secondary question: what are potential barriers/facilitators to a treatment’s widespread implementation? | Co-primary question: will a clinical treatment work in this setting/these patients? Co-primary question: does the implementation method show promise (either alone or in comparison with another method) in facilitating implementation of a clinical treatment? | Primary question: which method works better in facilitating implementation of a clinical treatment? Secondary question: are clinical outcomes acceptable? |
Units of randomization | Patient, clinical unit | Clinical effectiveness: see Type 1. Implementation: see Type 3, although may be nonrandomized (for example, case study) | Provider, clinical unit, facility, system |
Comparison conditions | Placebo, treatment as usual, competing treatment | Clinical effectiveness: see Type 1. Implementation: see Type 3, although may be nonrandomized (for example, case study) | Provider, clinical unit, facility, or system: implementation as usual, competing implementation strategy |
Sampling frames | Patient: limited restrictions, but some inclusion/exclusion criteria. Provider, clinical unit, facility, system: choose subsample from relevant participants | Patient: limited restrictions, but some inclusion/exclusion criteria. Providers/clinics/facilities/systems: consider “optimal” cases | Provider/clinic/facility/system: either “optimal” cases or a more heterogeneous group. Secondary: all or selected patients included in study locations |
Program design | Program is largely designed by the research team, but can be informed by program sites | Program is designed collaboratively by researchers and the program site and adapted to the setting and populations | Implementation strategies may be designed by the research team, collaboratively by the research team and program site, or by the program site with the research team as consultants and/or evaluators |
Program delivery | Program is largely delivered by the research team with attention to fidelity | Delivered by the program site, with support from the research team. Fidelity may be variable, with reasons for variability explored | Delivered by the program site with variable support from the research team. Fidelity may be variable, with reasons for variability explored |
Evaluation methods | Primary aim: quantitative, summative. Secondary aim: mixed methods, qualitative, process-oriented | Clinical effectiveness aim: quantitative, summative. Implementation aim: mixed methods; quantitative, qualitative; formative and summative | Primary aim: mixed methods; quantitative, qualitative; formative and summative. Secondary aim: quantitative, summative |
Measures | Primary aim: patient symptoms and functioning, possibly cost. Secondary aim: feasibility and acceptability of implementing the clinical treatment, sustainability potential, barriers and facilitators to implementation | Clinical effectiveness aim: patient symptoms and functioning, possibly cost-effectiveness. Implementation aim: adoption of the clinical treatment and fidelity to it, as well as related factors | Primary aim: adoption of the clinical treatment and fidelity to it, as well as related factors. Secondary aim: patient symptoms, functioning, services use |
Potential design challenges | Generating “buy-in” among clinical researchers for implementation aims. Ensuring appropriate expertise on the study team to rigorously conduct the secondary aim. These studies will likely require more research expertise and personnel, and larger budgets, than non-hybrid studies | Generating “buy-in” among implementation researchers for clinical intervention aims. These studies will require more research expertise and personnel, as well as larger budgets, than non-hybrid studies. Ensuring appropriate expertise on the study team to rigorously conduct both aims. “Creep” of the clinical treatment away from the fidelity needed for optimal effectiveness. IRB complexities with multiple types of participants | Primary data collection with patients in large, multisite implementation trials can be unfeasible, and studies might need to rely on subsamples of patients, medical record review, and/or administrative data. Patient outcomes data will not be as extensive as in traditional effectiveness trials or even other hybrid types and might be insufficient to answer some questions. “Creep” of the clinical treatment away from the fidelity needed for optimal effectiveness. Institutional review board (IRB) complexities with multiple types of participants |
Description of the Hybrid 1, 2, and 3 study examples
Results
Hybrid design model characteristics of the 3 study examples
Study characteristics | Hybrid study 1 | Hybrid study 2 | Hybrid study 3 |
---|---|---|---|
Name | Systems of Support to Increase Colorectal Cancer Screening and Follow-up (SOS) | Strategies and Opportunities to STOP Colon Cancer in Priority Populations (STOP) | Benefit for Increasing Colorectal Cancer Screening in Priority Populations (BeneFIT) |
Research aims | To compare the effectiveness of an EHR-linked, automated mailed and stepped-intensity CRC screening program with usual care | To compare the adoption, reach, implementation, and maintenance of a clinic-based, EHR-embedded mailed program with usual care | To evaluate implementation of mailed FIT programs by two health insurance plans |
Research questions | Primary question: Will an automated mailed CRC screening program increase screening uptake? Implementation questions: What is the incremental cost-effectiveness of the stepped-intensity program? Is the program acceptable to patients, and how could it be improved? | Primary question: Can a mailed fecal testing program be adapted to and implemented in safety net clinics, and will it increase CRC screening uptake in eligible vulnerable adults compared to usual care? Co-primary question: What contextual factors influence adoption, reach, implementation, and maintenance of the program? | Primary question: Can different implementation strategies be used to increase the adoption, reach, effectiveness, and maintenance of a mailed FIT program? What do the programs cost? What contextual factors (inner and outer setting) influence program implementation? Secondary question: What is the FIT return rate among eligible health plan enrollees? |
Unit of randomization (or comparison) | Patient (patient-level randomized trial) | Clinic (clinic-level randomized trial) | Naturalistic experiment (no randomization) with comparative outcomes |
Comparison condition | Usual care | Usual care | Two different implementation models, no usual care comparator |
Sampling frames | Patients age-eligible and not current for CRC screening, limited number of exclusions | Clinics with sufficient numbers of age-eligible patients not current for CRC screening, almost no exclusions | Two Medicaid/Medicare health plans and their enrollees within two states (age-eligible and not current for CRC screening), almost no exclusions |
Program design | Research team designed | Designed through collaboration between the EHR vendor, clinics, and research team | Health plan designed, with the researchers serving as consultants |
Program delivery | Research team delivered the mailings. The research team closely supervised clinical staff, and the study paid for their time. Program fidelity was monitored and variation minimized | Delivered by clinics with training assistance and implementation facilitation (plan-do-study-act cycles) from the research team. Program delivery fidelity varied | Delivered by the health plans with the research team providing consultative assistance. The health plans managed program components and program fidelity |
Evaluation method | Quantitative and summative | Mixed methods: quantitative and qualitative, formative and summative | Mixed methods: quantitative and qualitative, formative and summative |
Measures | Comparative colorectal cancer screening rates and cost-effectiveness | FIT screening rates overall and variation by clinic implementation and patient characteristics, RE-AIM measures, variation in costs by clinic, qualitative assessments of factors related to clinic success and challenges | RE-AIM measures comparing the two implementation models. Costs of implementation. Comparisons of FIT test return rates across the two implementation models |
Potential design challenges | Verbal consent required, leading to decreased Reach and external validity | Variation in implementation fidelity. IRB was not a challenge: single institutional review board, waiver of consent | Quasi-experimental design might lead to bias. Implementation and effectiveness outcomes were not directly comparable between the two health plans (no randomization, no control comparison) |
Program design and program delivery
Discussion
Program design
Program delivery
Conclusion
Acknowledgements
Funding
Availability of data and materials
Ethics approval and consent to participate
Consent for publication
Competing interests
- National Cancer Institute (NCI): R01CA121125
- National Institutes of Health (NIH): Common Fund, through a cooperative agreement (U54 AT007748, UH2AT007782 and 4UH3CA18864002) from the Office of Strategic Coordination within the Office of the NIH Director.
- Centers for Disease Control and Prevention (CDC): Health Promotion and Disease Prevention Research Center grant supported by cooperative agreement U48DP005013.