Background
While experts estimate that one-third of annual cancer cases could be prevented through lifestyle modifications related to nutrition, physical activity, and weight management [1], only approximately 10% of US adults consume the recommended amount of fruits and vegetables [2], 25% meet physical activity guidelines [3], and only 22% are at a healthy weight [4]. Furthermore, although the benefits of early detection through cancer screening are well documented [5–7], many people do not receive recommended screenings [8]. This is particularly true of those without insurance: only 35% of uninsured women are regularly screened for breast cancer and 64% for cervical cancer [8], and colorectal cancer screening is also lower among men and women without insurance (25%) [8]. Thus, strategies are needed to motivate individuals to change screening, nutrition, and physical activity behaviors to reduce cancer incidence and mortality, particularly among those without insurance.
In response to these needs, the National Breast and Cervical Cancer Early Detection Program (NBCCEDP) was created to provide access to screening for uninsured women 21–64 years of age who are never or rarely screened for breast and cervical cancer [9]. Recognizing that access to services alone does not guarantee utilization, the NBCCEDP provides support to state and local health agencies for targeted education and outreach and promotes collaboration with partner organizations, such as the American Cancer Society, to increase program participation [10].
Several interventions [10] have been recognized for their educational approach and positive screening and behavioral outcomes, including the Cooking for a Lifetime of Cancer Prevention (C4L) program [10]. C4L is a community-based educational program with a cooking school format designed to educate women on primary prevention through lifestyle behaviors, motivate all women to be screened for breast, cervical, and colorectal cancer, and provide access to breast and cervical cancer screenings for eligible women through the Georgia Breast and Cervical Cancer Control Program (GBCCP, the state branch of the NBCCEDP). C4L has reached more than 3500 women over the last 10 years and has developed a history of practice-based evidence. For example, program evaluation data suggest significant (P < 0.05) improvements in intention to implement nutrition and physical activity guidelines for cancer prevention (unpublished) and that women who are eligible for GBCCP screening services go on to be screened for breast and cervical cancer [10].
While interventions like C4L may initially demonstrate effectiveness for motivating behavior change, a number of other implementation science factors may impact individual- and systems-level changes [11]. Contextual factors, staff turnover, program participant characteristics, and the translation of an intervention from training to actual practice may influence the effectiveness of interventions immediately after initial implementation and over time [11, 12]. Furthermore, it cannot be assumed that an intervention will be implemented consistently over time, or that high-quality program delivery will continue to produce the same positive outcomes [13, 14]. Accordingly, it is important to periodically evaluate both the implementation and the outcomes of interventions that have a long history of practice-based evidence. Implementation science provides an approach through which such long-standing interventions may be evaluated, as not all interventions are translated from efficacy trials to large-scale dissemination trials.
Taken together, it is necessary to understand not only whether a program is effective but also the contextual factors that impact its delivery [15, 16]. Concurrent evaluation of both implementation and outcomes has multiple benefits, including assessment of internal validity, the influence of program drift on participant outcomes, and the translation of interventions to different settings or contexts [11, 14, 15]. Despite these benefits, relatively few studies assess both implementation and outcomes, and challenges exist in comparing implementation across health promotion practice settings [17]. The Consolidated Framework for Implementation Research (CFIR) was designed to overcome some of these challenges and move implementation science forward by synthesizing constructs from several theories into a robust implementation meta-framework [12]. While designed to be used in a variety of contexts and settings, the CFIR has primarily been used to evaluate interventions implemented in clinical care settings, including cancer control programs [18], but has rarely been used to evaluate cancer prevention programs in community settings.
This study was designed to fill a critical gap in the literature by using the CFIR to characterize implementation and to evaluate the relationship between implementation and outcomes in a community-based cancer prevention program with a long history of positive outcomes. The study has three aims: 1) determine the degree of program implementation, 2) determine whether implementation is related to outcomes, and 3) use the CFIR to identify barriers and facilitators associated with degree of implementation that may be addressed to improve outcomes.
Methods
Study design
This study utilized a mixed methods design that included quantitative participant program evaluations and both quantitative and qualitative semi-structured interviews with the instructors who implemented the program. All methods and procedures were approved by The University of Georgia Institutional Review Board on Human Subjects. Program participants provided written informed consent and instructors provided verbal informed consent.
Conceptual framework
The CFIR [12] consists of 39 constructs, drawn from various implementation and change theories, organized into 5 domains believed to influence implementation. For this study, the CFIR was used to develop the instructor interview guide [19] and served as the foundation for qualitative data coding and analysis [20]. The analysis methodology was similar to that of Damschroder and Lowery in their evaluation of a weight management program in Veterans Affairs (VA) hospitals [21] and Liang and colleagues’ study of implementing evidence-based cancer control practices in safety net health systems [18].
Setting
Cooking for a Lifetime of Cancer Prevention (C4L) is a cancer prevention educational program that is disseminated through the framework of the Cooperative Extension System (CES) in Georgia in collaboration with American Cancer Society (ACS) client navigators in Georgia. The purpose of CES is to translate research and resources from land-grant universities in the United States to communities through educational outreach delivered by community-based Extension professionals [22]. CES is the largest adult educational organization in the US and, for over 100 years, has focused on outreach and service, balancing the collection of empirically meaningful data with program evaluation that communicates public value, does not overburden program participants or Extension professionals, and fits within the resource constraints of the Extension system [22].
C4L has been implemented through Georgia Extension for over 10 years in a variety of community-based settings such as churches, technical schools, and non-profit clinics. The program is funded by the ACS, administered by state Extension faculty, and implemented by county Extension professionals in collaboration with ACS navigators.
Program description
C4L is a 2.5-h program that includes three core components: 1) a presentation about breast, cervical, and colorectal cancer screening guidelines given by an ACS client navigator (ACS presentation), 2) a presentation of the ACS guidelines on cancer preventive lifestyle behaviors [23] given by the Extension professional (Extension presentation), and 3) a cooking demonstration with recipe sampling given by the Extension professional (recipe demonstration). ACS lifestyle recommendations discussed include eating a plant-based diet, maintaining a healthy weight, being physically active, and limiting alcohol intake [2]. Extension professionals and ACS navigators work together to recruit for and implement the program. Following the program, ACS navigators assist uninsured female program attendees (21–64 years of age) in applying for breast and cervical cancer screenings through the GBCCP. All participants receive a cookbook as an incentive for program participation. Table 1 presents a logic model of the program.
Table 1
Cooking for a Lifetime of Cancer Prevention (C4L) Logic Model
Inputs: American Cancer Society grant funding; USDA GEO00805; NBCCEDP and GBCCP; ACS Guidelines on Nutrition and Physical Activity for Cancer Prevention; American Cancer Society personnel (director of community outreach, client navigators); Extension personnel (state specialist, administrative assistant, graduate research assistants, county professionals)
Activities: Training (develop training materials; train client navigators and Extension professionals); Implementation (distribute program materials; distribute funding; market program and recruit target audience; implement 2.5-h program); Evaluation (collect data from each Extension professional; analyze program-level and state-level outcomes; prepare reports)
Outputs: 2 organizations involved (American Cancer Society and Extension); 6 ACS navigators and 13 Extension professionals trained; 24 C4L programs implemented; 237 C4L program participants; 1 report prepared for funders (ACS) and 9 reports prepared for Extension professionals
Short-term outcomes: Increased awareness of cancer screening recommendations; increased intention to be screened for breast, cervical, and colorectal cancer; increased intention to follow ACS guidelines on nutrition and physical activity for cancer prevention; increased access to breast and cervical cancer screenings for uninsured women
Intermediate outcomes: Increased breast, cervical, and colorectal cancer screenings; increased number of individuals who follow ACS guidelines on nutrition and physical activity for cancer prevention; increased number of participants in GBCCP
Long-term outcomes: Decreased breast, cervical, and colorectal cancer incidence and mortality
Contextual factors (underlying all stages): e.g., Affordable Care Act, geographic location, resources, healthcare access, cultural beliefs
Study participants
C4L instructors (Extension professionals)
In June 2016, the state Extension program administrator invited Extension professionals to apply to be C4L instructors via email to an organizational listserv. Extension professionals (C4L instructors) who applied to implement the program during the 2016–2017 grant year and attended program training conducted by Extension state staff were eligible to participate in the study. There were no exclusion criteria for instructors. Thirteen C4L instructors were eligible for study participation and were recruited during a scheduled one-on-one telephone call with the research team prior to program implementation. All 13 eligible instructors agreed to participate, were scheduled for an interview, and received a $30 credit for work supplies upon completion of the interview. Verbal consent was obtained before each interview began.
C4L program participants
C4L instructors, ACS navigators, and/or staff at the location where the program was delivered (i.e., church staff, non-profit clinic staff) recruited C4L program participants through print media, social media advertisements, email listservs, and word of mouth. The goal of recruitment was to reach the target audience, women eligible for GBCCP screening services (uninsured, ages 21 to 64 years), but women and men of all ages were invited to attend the program. Inclusion criteria for the present analyses were participation in one of the 13 programs evaluated and age 21 years or older. There were 139 participants in the 13 programs included in the present study. Eleven participants were missing information on age and sex and three were younger than 21 years and thus were excluded from the sample. Because the program focuses primarily on female cancer screening and male participation was low (n = 10), males were also excluded. Thus, the final analytical sample (n = 115) included only women aged 21 and older who attended the 13 programs evaluated in the interviews. All program participants included in the analysis provided written informed consent for research.
Data collection
Interviews with C4L instructors
One-on-one, semi-structured interviews were conducted with C4L instructors using Zoom Web Conferencing (Zoom Video Communications, Inc., San Jose, California) within 0–4 weeks of program implementation (mean 9.5 [6.7] days from implementation, range 0–22 days) and lasted an average of 61.5 [16.1] minutes. Instructors implemented from one to five programs in the 2016–2017 grant year, with the majority (62%) implementing only one program. To make cases comparable, instructors were interviewed only about the first program they completed. Participants were told they could refuse to answer any question or stop the interview at any time. Audio files of the interviews were transcribed by a third-party transcriptionist (Rev.com, San Francisco, California). A research team member reviewed transcripts for clarity and accuracy and coded identifying information.
The interview guide included two types of questions. Closed-ended interview questions (quantitative data) assessed completion of critical program components (e.g., Was the Extension presentation given? Were recipes demonstrated?). Open-ended interview questions (qualitative data) were developed using the CFIR website resources [19] and explored perceptions of training and implementation. The interview guide is available as Additional file 2.
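The closed-ended fidelity items lend themselves to a simple completion score. The sketch below is a hypothetical illustration only: the item names, equal weighting, and the all-components cutoff for a "high" rating are assumptions, not necessarily the study's actual scoring rules.

```python
# Hypothetical fidelity checklist built from closed-ended interview items;
# component names, scoring, and the high/low cutoff are illustrative only.
components = {
    "acs_presentation_given": True,
    "extension_presentation_given": True,
    "recipes_demonstrated": False,
}

def implementation_level(checklist, cutoff=1.0):
    """Score = fraction of critical components completed; classify as
    'high' only if every component was delivered (assumed rule)."""
    score = sum(checklist.values()) / len(checklist)
    return score, ("high" if score >= cutoff else "low")

score, level = implementation_level(components)
print(f"score={score:.2f}, level={level}")  # score=0.67, level=low
```

Under this toy rule, a single omitted core component is enough to classify a program as low implementation; a graded cutoff would instead treat fidelity as continuous.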
C4L participant program evaluation
C4L program participants completed a retrospective, researcher-designed questionnaire at the conclusion of the program, while recipes were sampled and before incentives were provided. The questionnaire included demographic and health insurance information and items assessing intention to engage in cancer preventive screening, nutrition, and physical activity behaviors before and after the program, and is discussed further in the next sections.
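A retrospective pre/post questionnaire of this kind yields a per-participant change score for each behavior. A minimal pandas sketch of that computation follows; the item names and the 1–5 intention scale are hypothetical, not the study's actual instrument.

```python
import pandas as pd

# Hypothetical retrospective pre/post responses (1 = low intention,
# 5 = high intention): participants rate their intention "before the
# program" and "now" on the same end-of-program questionnaire.
df = pd.DataFrame({
    "intend_pa_before": [2, 3, 1, 4, 2],
    "intend_pa_after":  [4, 4, 3, 5, 2],
})
# Change in intention: the kind of outcome later modeled against
# implementation level.
df["intend_pa_change"] = df["intend_pa_after"] - df["intend_pa_before"]
print(df["intend_pa_change"].tolist())  # [2, 1, 2, 1, 0]
```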
Measures
Program implementation measures
Statistical analysis
Relationships among degree of implementation (high/low), demographic variables, and change variables for nutrition and physical activity behaviors were explored using linear mixed effects models to incorporate potential variability at the program level (implementation) along with the participant-level variables (demographics and behavior change intention). Event number was used as a random effect in the models. Independent variables included in each model were implementation level, age, race, ethnicity, education, and insurance. Type III F tests were conducted, and denominator degrees of freedom were determined using the Satterthwaite method for mixed effects models. Post-hoc analyses were conducted for significant independent variables in the models, including calculation of estimated marginal means or beta coefficients and pairwise comparisons using Least Significant Differences where appropriate. All model residuals were checked for normality through visual inspection of histograms.
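The analyses were run in SPSS; as a rough analogue, the random-intercept structure described above can be sketched in Python with statsmodels on synthetic data. The variable names and effect sizes below are invented, race/ethnicity/education are omitted for brevity, and statsmodels does not implement Satterthwaite degrees of freedom.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: 115 participants nested in 13 programs (events).
rng = np.random.default_rng(1)
n = 115
df = pd.DataFrame({
    "event": rng.integers(0, 13, n),          # program ID -> random intercept
    "implementation": rng.integers(0, 2, n),  # 0 = low, 1 = high
    "age": rng.integers(21, 80, n),
    "insured": rng.integers(0, 2, n),
})
event_effect = {e: rng.normal(0, 0.3) for e in range(13)}
df["change_pa"] = (0.5 * df["implementation"]       # assumed fixed effect
                   + df["event"].map(event_effect)  # program-level variance
                   + rng.normal(0, 1, n))           # residual noise

# Random intercept for event; implementation and participant
# characteristics enter as fixed effects.
fit = smf.mixedlm("change_pa ~ implementation + age + insured",
                  data=df, groups=df["event"]).fit()
print(fit.params["implementation"])
```

The `groups` argument is what captures the program-level clustering the authors describe; dropping it would reduce the model to ordinary least squares and ignore within-program correlation.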
Ordinal logistic regression mixed effects models were used to explore relationships among implementation level, demographics, and intention to be screened for cancer (cervical, breast, and colorectal). Only participants who were in the appropriate age range for each test were included in the statistical analysis (Pap test: aged 21 to 64 [n = 59], mammogram: aged 40 and older [n = 76], FOBT/FIT and colonoscopy: aged 50 and older [n = 49 and 59, respectively]). Independent variables included in each model were implementation level, age, race, ethnicity, education, insurance, implementation of ACS presentation, and participant screening history. Models were adjusted as necessary to accommodate low variability in responses.
All analyses were conducted using IBM SPSS Statistics version 25 (Armonk, New York). Results are presented as means and standard errors (M [SE]), and 95% confidence intervals (95% CI) as appropriate.
Interview analysis
Data coding
The methods for interview data coding and analysis were adapted from Damschroder and colleagues [12, 21, 25, 26]. Transcript coding was largely deductive, based on CFIR constructs [12], but inductive coding was used when a CFIR code did not adequately explain the aspect of implementation described. All CFIR domains were considered during coding. Analyst triangulation occurred through the use of a consensual qualitative research approach: three members of the research team coded all transcripts independently and then met to reach consensus on any differences [27]. The first three transcripts were coded by the entire team to operationalize construct definitions. Subsequent transcripts were independently coded by balanced pairs of analysts to reduce the impact of individual bias. Between each round of coding, the codebook was updated by the lead analyst and approved by the team. After coding was completed, two analysts audited the final codes together to ensure consistent and accurate use of the constructs. ATLAS.ti version 8 (Scientific Software Development GmbH, Berlin, Germany) was used as a tool for coding and analysis.
Construct ratings
Following coding, the research team rated the constructs on a scale of −2 to +2 to indicate their influence on implementation [21, 25, 26]. The sign indicated valence (positive or negative influence on implementation) and the number indicated magnitude (strength within an interview). Zero indicated no influence on implementation (neutral), and an “X” indicated both positive and negative influences (mixed) [19, 25, 26]. The average frequency of construct use across all interviews was used to approximate magnitude; that is, above-average frequency of a construct was rated 2 rather than 1. A variable-oriented approach (rating one construct at a time) was used to maintain consistent application of ratings across a construct [21]. Ratings were assigned individually by the analysts before the group met to reach consensus.
Analysis and interpretation
The final ratings for each construct present in each interview were summarized in a ratings matrix by the lead analyst, with programs grouped according to degree of implementation (high or low). Using this matrix, the research team visually identified patterns in the construct ratings that distinguished between high and low implementation programs [18, 21]. Based on this assessment, each construct was determined to be either not distinguishing between programs with high and low implementation (no discernible pattern) or distinguishing, with distinguishing constructs further classified as weakly distinguishing (a pattern was observed but mixed positive and negative valence ratings were present, or there was only a slight difference in magnitude) or strongly distinguishing (a clear difference in valence and/or magnitude), through a consensual qualitative approach. Distinguishing constructs were then determined to be barriers to or facilitators of implementation, or descriptive only, using valence and interview text.
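The pattern-identification step can be illustrated numerically. The toy matrix below is hypothetical (invented ratings for two constructs across 13 programs), and the mean-gap thresholds are an illustrative heuristic only; in the study, classification was reached through consensual qualitative review rather than a numeric rule.

```python
import pandas as pd

# Hypothetical construct ratings (-2..+2) for 13 programs, ordered so the
# first 6 are high-implementation and the last 7 are low-implementation.
ratings = pd.DataFrame({
    "compatibility":          [2, 2, 1, 2, 1, 1, 2, 2, 1, 1, 2, 2, 1],
    "external_change_agents": [2, 2, 2, 2, 2, 1, -1, -1, 0, -1, -2, 0, -1],
}, index=[f"P{i}" for i in range(1, 14)])
high = ratings.iloc[:6]   # high-implementation programs
low = ratings.iloc[6:]    # low-implementation programs

# Simple numeric screen: a clear gap between group means flags a construct
# as potentially distinguishing, to be confirmed by qualitative review.
for construct in ratings.columns:
    gap = high[construct].mean() - low[construct].mean()
    label = "strongly" if abs(gap) >= 1.5 else ("weakly" if abs(gap) >= 0.5 else "not")
    print(f"{construct}: mean gap = {gap:+.2f} -> {label} distinguishing")
```

With these invented numbers, external change agents shows a large positive gap (a facilitator concentrated in high-implementation programs), while compatibility shows almost none, mirroring how a visual scan of the matrix would separate the two.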
Discussion
This study provides new information about the relationships between implementation and program participant outcomes, as well as rich descriptions of CFIR constructs that manifested as barriers to and facilitators of implementation of a community-based cancer prevention program with a long history of generally positive participant outcomes. Despite all instructors receiving the same program training and implementation materials, and having access to web-based implementation instructions and to program administrators for support, our study found variation in program implementation.
Notably, a higher degree of implementation fidelity was related to participants’ intentions to be physically active, achieve a healthy weight, and limit alcohol. It is uncertain why an effect was not found for the other behaviors discussed in the program (i.e., avoiding processed meat, filling half of the plate with fruits and vegetables). As desired outcomes may be achieved by reaching a threshold of implementation or may improve as implementation improves [14, 28], it is possible that some aspect of implementation relevant to these outcomes was not captured in the implementation measures. In contrast, implementation of the cancer screening presentation was inversely related to intention to get an FOBT/FIT. The cancer screening presentation provides information about the FOBT/FIT, a stool sample test done at home, that may deter individuals. Low variability in responses for intention to get the other cancer screenings likely limited statistical power to detect any relationship between implementation and intention to get a Pap test, mammogram, or colonoscopy.
Although the primary focus of the analysis was implementation, it is important to note that the results suggest the program is more effective for individuals with insurance. Participants with health insurance had significantly greater changes in intention to implement several lifestyle behaviors than those without insurance. Appropriateness and acceptability are key implementation outcomes, so evaluating participant characteristics and outcomes can ensure that an intervention is effective for its intended audience [17]. These results suggest that the C4L program may need revisions to better reach its target audience: women without insurance.
The C4L instructor interviews added depth to the study by describing the specific barriers and facilitators that contributed to the varied degree of implementation. The CFIR constructs that manifested as barriers and facilitators to implementation were similar to those in other studies, but some nuanced differences are of note. As in other studies, access to knowledge and information, compatibility, and design quality and packaging were found to be facilitators of implementation [21, 26, 29, 30]. However, these constructs distinguished between high and low implementation programs in our study, whereas they were not distinguishing in others [21, 25, 26, 29, 30]. One possible explanation for this difference is related to another distinguishing construct, experience. The C4L program has been in place for over a decade, whereas other interventions evaluated using the CFIR had been implemented for only a few months to a few years [18, 21, 26, 29, 31]. Therefore, the instructors in our study had varying levels of program experience; in general, those with more experience had high implementation and a different perception of implementation than those with less experience. For example, while all instructors spoke of compatibility as a facilitator of implementation, those with low implementation (less experience) had more positive comments about compatibility than those with high implementation, which made the construct distinguishing. It is possible that new instructors emphasized compatibility more because they were justifying their reasons for being involved. More senior instructors mentioned compatibility positively, yet very briefly, as a reason for sustained implementation. The low implementation group also had more positive comments about design quality and packaging than those with high implementation, who viewed design quality and packaging as a facilitator of implementation but provided more constructive feedback. Having more experience may have contributed to a deeper understanding of the program materials and how they could be improved. This suggests that administrators of longstanding programs should consider the effect of the interventionist’s experience when assessing implementation or designing an implementation study. Moreover, using the CFIR as the framework to evaluate more established programs may yield different results than when it is used to evaluate newer interventions.
Another key finding that differed from previous studies [18, 21, 26, 29] was the impact of collaboration with other organizations on implementation. While in previous literature the external change agents construct was completely absent from analysis [18, 21, 26] or had a mixed influence on implementation [29], external change agents was a strongly distinguishing construct in this study and facilitated implementation, specifically recruitment. This study also had formally appointed external implementation leaders (ACS navigators), who were described as barriers to implementation. While the effect of formally appointed internal implementation leaders on implementation has been mixed [21, 26, 29], it was not surprising that the external implementation leader construct was distinguishing in this study, given these leaders’ roles in recruitment and implementation. In a clinical setting, where an intervention is contained within a department or hospital unit and program participants may be referred by a health care provider, collaboration with outside organizations may not be as influential. In a community context, however, where implementation takes place in a variety of settings and targets the general public, creating partnerships with external organizations is often a key to success [32]. Thus, program administrators should facilitate collaborations where possible and emphasize the importance of community networks in program training. Researchers using the CFIR to evaluate community-based programs should explore possible external influences on implementation.
Other personal attributes and executing were also distinguishing constructs in our study; Kegler and colleagues similarly found them to be salient [30], but other studies have not had the same result [18, 21, 26, 29]. Practically, personal attributes such as tolerance for stress and resourcefulness cannot be controlled or translated into program design to improve implementation. Still, these attributes should be considered when developing training materials. The executing construct matched implementation scores in that low implementation programs had more negative ratings for executing and high implementation programs had positive ratings. While this coding method created overlap in the data, it captured implementation details not used in the implementation scoring. As suggested by Damschroder, we found this construct to be more descriptive than explanatory [12].
Strengths and limitations
This study design has several strengths. First, it used an integrated mixed methods design to evaluate program implementation and outcomes, with each method enhancing the other. For example, the degree of implementation score (quantitative) was a better measure because the qualitative data helped determine whether each program component was fully implemented. Further, “hybrid” research designs [15] that assess both implementation and outcomes accelerate research translation, improve public health impact, and ensure accurate interpretation of program outcomes [14, 15, 17]. Using the CFIR throughout the research process, from developing the research methodology and interview guide to analyzing the data, is another notable strength [20]. The CFIR is a comprehensive framework [12], making it easier to draw conclusions across studies about which constructs may be most important for implementation and under what circumstances, a key question for community health educators. A final strength of the study is that all 13 eligible C4L instructors implementing the program were interviewed.
While the study has several strengths, some limitations should be noted. One limitation was the use of instructor self-report rather than direct observation for the implementation scoring measures, which could have introduced social desirability bias [33]. Program observation was not feasible with the resources available for this study, but using the interview narrative helped the research team mitigate this bias. Another limitation was interviewing only Extension professionals. Interviewing ACS navigators (formally appointed external implementation leaders) and individuals from partnering organizations (external change agents) would have provided additional implementation perspectives. Future studies should include other stakeholders to provide a comprehensive picture of implementation. Finally, this study may not be widely generalizable, since it details a unique community-based cancer prevention program. Still, our results contribute to the growing implementation science literature, including the CFIR literature, and may offer considerations for community health promotion program administrators that could enhance implementation and contribute to greater public health impact.
Conclusions
This study provides insight into the implementation of a community-based health promotion program and how implementation is related to participant outcomes, and it outlines barriers and facilitators to implementation that can be addressed to improve outcomes. This evaluation revealed variation in the degree of implementation of a long-standing program in which all instructors are provided the same training and resources. Therefore, it is important to monitor implementation of ongoing programs, as doing so informs the efforts needed for continued effectiveness and identifies sources of program drift. Analyses showed that implementation and insurance status were significantly associated with improved intention to change some cancer preventive behaviors. Exploring program fidelity and participant characteristics can help program administrators understand for whom and under what conditions an intervention may be successful in improving behaviors. Other intervention characteristics should also be considered to improve implementation and participant outcomes, including program training, the availability of training material, interventionist experience, and program material quality and design. Ensuring a program is compatible with workflow and organizational values is also beneficial. In addition, programs that incorporate external organizations should foster supportive communication and collaboration.
Lastly, this study adds to the CFIR literature by using the framework to evaluate a well-established community intervention. The CFIR was found to be appropriate for the themes identified in the qualitative data analysis. However, our analysis revealed distinguishing constructs, highlighting the roles of program experience and relationships with external organizations, that differed from those found in studies of newly implemented interventions, clinical interventions, and interventions being translated from clinical trials to community contexts. As the CFIR is intended to be used across multiple contexts, future research using the CFIR is needed to determine whether these constructs are context-specific or relevant only to the C4L program, and thus whether use of the CFIR should be limited to programs that are newly implemented, implemented in clinical settings, or developed in the clinic and translated to communities.